Introduction to Evidence-Based Practice part 2: Framework for Understanding

This text is the second part of my attempt at introducing evidence-based practice and making the process better understood. If you haven’t already, read part 1 before this one to understand why we can’t just rely on our own thought processes to gather knowledge about the world and make decisions in healthcare. Our minds, even in their most rational moments, are flawed things, and this statement is devoid of any moral judgment. It’s just how we are, myself included. 

Throughout the centuries, a lot of smart people have been thinking about this problem: how to overcome the limitations of our cognitive processes so that we can better understand both the world and ourselves, and decide on the best way of doing things. Let’s have a look at what has been done. 

Evidence-based medicine is not just for academics or scholars; from the beginning it was created with the aim of educating front-line clinicians (Djulbegovic and Guyatt, 2017). This is reinforced by the notion that one of the characteristics defining a group of people as a profession is the opportunity to develop knowledge through research activity that is then used to inform practice (Finch, 2007). 

Part of the reason Evidence-based medicine was created was the recognition, which it itself makes, that understanding research and its results, and applying them to individuals who often don’t share the characteristics of those selected for studies, is quite difficult (Djulbegovic and Guyatt, 2017). The best way to talk about a difficult topic is to start by defining what we are actually talking about. In part 1 I presented the initial definition of Evidence-based medicine proposed by Guyatt et al (1992), which I recognise may point us more towards what not to do than towards what we should be doing in order to have an evidence-based practice. But worry not: the definition has been clarified over time, and since 1996 it has been described as integrating the best available research evidence, the individual clinician’s experience and expertise, and the patient’s values, preferences and circumstances (Sackett et al, 1996; Finch, 2007; Meira, 2020).

But let’s not stop here; let’s define what is meant by each of these three elements. The best available research evidence means looking at all available evidence relating to our clinical question in a systematic manner (Meira, 2020). This sounds very laborious, but it needs to be done because not all pieces of evidence obtained from research have the same quality, and presently it is very easy to find a research article that supports or favours our preferences (Djulbegovic and Guyatt, 2017; Meira, 2020). It is also arguable that the higher the quality of the evidence, the closer our conclusions regarding diagnosis, prognosis, and the effects of interventions are to the truth (Djulbegovic and Guyatt, 2017). Thus, we come to the first epistemological principle of Evidence-based medicine: not all evidence is created equal, and clinical practice should be based on the best available evidence (Djulbegovic and Guyatt, 2017). 

Previously, this was associated with the methodological design of the study: randomised controlled trials were considered the best studies, and every other type of design was considered inferior. But science and research aren’t perfect, as many critics are ready to point out. However, this is not in itself an argument to discredit the use of the scientific method or evidence-based practice. Interestingly, these critics forget to mention that these flaws were identified by the same researchers who partake in the scientific method (Finch, 2007; Djulbegovic and Guyatt, 2017). 

At present, a study’s quality is judged by how well the study design fits the question being asked, how well it fits the particular clinical setting and patient circumstances, and the quality, not the category, of its methodological design (Finch, 2007; Djulbegovic and Guyatt, 2017). To help critically and systematically appraise research, several tools and initiatives have been developed, such as the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system published in 2004 (Oxman, 2004), the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement (Liberati et al, 2009), and the PROSPERO international prospective register of systematic reviews (Booth et al, 2012). 

Next, clinical expertise refers to being skilled in critical thinking. Clinical expertise is more complex than supporting a claim by saying “in my professional experience”. It is a nuanced process that requires synthesising possible contradictions during moments of uncertainty, rather than simply declaring something ‘right’ or ‘wrong’ (Meira, 2020). The scientific literature cannot deny what we see in clinic, but it can refute our explanations of those events. In other words, even if something appears to work in practice, after scrutinising it through current evidence we may find that neither our explanation nor the perceived benefit of the treatment approach is accurate (Meira, 2020). This is exactly why science works not from the results of individual studies or observations, but from the knowledge obtained by critically analysing the totality of existing research on a topic (Djulbegovic and Guyatt, 2017). Following this thought process, we arrive at the second epistemological principle of Evidence-based medicine: the quest for truth is best accomplished by analysing the totality of the evidence, instead of selecting just the evidence that favours a particular argument (Djulbegovic and Guyatt, 2017). 

Lastly, but still highly important, we have the patient’s preferences and circumstances. The patient is always the final decision maker (Meira, 2020). I’ll confess this is a view I have not always shared. Throughout my training I was taught that clinicians are the experts with all the knowledge, so at the beginning of my career I often considered that a patient’s opinion was not worth evaluating when deciding on care. Patients don’t have any clinical knowledge, so how could their opinion be useful in healthcare? I’m sharing this to show that I’m not above making mistakes and that I continue to make an effort to learn and improve; this website and these texts are part of that effort. 

We need to consider the patient’s circumstances because they may be a barrier to the direct application of research findings: the patient’s presentation may not match the sample used in clinical studies, and neither we nor the patient may have the same equipment or time available as in the literature (Djulbegovic and Guyatt, 2017; Meira, 2020). We need to consider the patient’s preferences because patients will be the ones living with the consequences, positive or negative, of their decisions (Djulbegovic and Guyatt, 2017; Meira, 2020). Moreover, sometimes the evidence is not yet clear about which interventions are more beneficial and gives us no clear path to follow; in those cases we may follow what the patient thinks will be most suitable for them, increasing adherence and developing the therapeutic relationship. This can be neatly summarised in the third epistemological principle of Evidence-based medicine: we need to consider patients’ values and preferences during clinical decision-making. Evidence is necessary, but not sufficient, for effective decision making, which has to consider the relevant consequences to the patient within their context and circumstances (Djulbegovic and Guyatt, 2017). 

We cannot call ourselves evidence-based clinicians if we don’t follow these principles and integrate these elements into our daily practice. However, it is important to point out that although the best available evidence, clinical expertise, and patient circumstances and preferences are all necessary for evidence-based practice, they don’t necessarily all carry the same weight in informing clinical decisions. In the past, some authors and clinicians have described these three elements as “three legs of a stool”, which has led to mistakes: a citation that supports our bias being confused for the best available evidence, choosing an intervention that suits our bias being confused for clinical expertise, and the patient agreeing to the intervention we prefer, supported by a single article, being confused for integrating the patient’s values into decision making (Meira, 2020). 

Instead, Evidence-based medicine is better seen as a funnel: a framework that guides our practice by first narrowing the options down to those supported by the highest-quality evidence; clinical expertise then narrows these further and explains them to the patient, who, based on their preferences and circumstances, narrows the available options down to a plan that suits them. This framework has been described and illustrated excellently by Meira (2020). 

Also worth noting is that the steps of this framework can only be taken in the narrowing direction. In other words, it is not evidence-based to select an intervention because it fits the clinician’s or patient’s preference if it is not supported by the totality of the best-quality evidence. By practising in this way, we risk selecting interventions that have not been shown to be effective, wasting both the patient’s time and money, as well as possibly endangering the patient’s safety and well-being. 

Considering this, it is arguable that developing our research literacy is as important as developing any other knowledge or skill related to patient assessment (Finch, 2007). Thus, universities should prioritise guiding students towards competence in understanding research methodology, critically evaluating evidence, and systematically applying it to their clinical setting (Finch, 2007). 

This text is in no way an exhaustive description or explanation of what Evidence-based medicine is and how it is informed by the scientific process. However, I hope it has helped you understand that Evidence-based medicine is not a dogmatic, faith-like cult, but instead a critical and systematic process that aims to challenge our own biases through the pursuit of a humanly achievable approximation of the truth, one that promotes patient safety and better healthcare outcomes. 

Don’t forget to share if you found this text interesting and comment if you disagree or have questions about any part of it. Hopefully, I’ll see you in the next text. 

References: 

Booth, A., Clarke, M., Dooley, G., Ghersi, D., Moher, D., Petticrew, M., & Stewart, L. (2012). The nuts and bolts of PROSPERO: An international prospective register of systematic reviews. Systematic Reviews, 1(1). https://doi.org/10.1186/2046-4053-1-2 

Djulbegovic, B., & Guyatt, G. H. (2017). Progress in evidence-based medicine: a quarter century on. In The Lancet (Vol. 390, Issue 10092, pp. 415–423). Lancet Publishing Group. https://doi.org/10.1016/S0140-6736(16)31592-6 

Finch, P. M. (2007). The evidence funnel: Highlighting the importance of research literacy in the delivery of evidence informed complementary health care. Journal of Bodywork and Movement Therapies, 11(1), 78–81. https://doi.org/10.1016/j.jbmt.2006.09.001 

Guyatt, G., Cairns, J., Churchill, D., Cook, D., Haynes, B., Hirsh, J., Irvine, J., Levine, M., Levine, M., Nishikawa, J., Sackett, D., Brill-Edwards, P., Gerstein, H., Gibson, J., Jaeschke, R., Kerigan, A., Neville, A., Panju, A., Detsky, A., … Tugwell, P. (1992). Evidence-Based Medicine – A New Approach to Teaching the Practice of Medicine. JAMA, 268(17), 2420–2425. 

Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P. A., Clarke, M., Devereaux, P. J., Kleijnen, J., & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Journal of Clinical Epidemiology, 62(10), e1–e34. https://doi.org/10.1016/j.jclinepi.2009.06.006 

Meira, E. (2020). Understanding evidence-based medicine using a funnel analogy. Society for Transparency, Openness and Replication in Kinesiology. https://doi.org/10.31236/osf.io/kr6aq 

Oxman, A. D. (2004). Grading quality of evidence and strength of recommendations. In British Medical Journal (Vol. 328, Issue 7454, pp. 1490–1494). https://doi.org/10.1136/bmj.328.7454.1490 

Sackett, D. L., Rosenberg, W. M., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn’t. BMJ, 312, 71–72. 
