Dr. Onil Bhattacharyya, Associate Professor, Evaluation Lead for the Institute for Health System Solutions and Virtual Care and Frigon Blau Chair in Family Medicine Research, both at Women's College Hospital in Toronto, draws on his personal experience as a guideline development panelist, family physician and information scientist to offer his ideas on how we can leverage technology and individual patient experiences to bring EBM into the 21st century.
Dr. Jamie Meuser: You're listening to "Clinically Speaking," a podcast that explores the past, present and future of evidence based medicine in primary care. Brought to you by the Centre for Effective Practice. I'm Jamie Meuser, a family physician practicing home palliative care in Toronto.
Christine Papoushek: I'm Christine Papoushek, a pharmacist practicing at a family health team in Toronto.
Dr. Meuser: As we've established, evidence based medicine has been seen as the route through which conscientious practitioners could achieve optimal practice: the application of the best available evidence to their clinical decision making. The idea of "best" has been seen to include notions of quality of evidence, comprehensiveness, absence of bias, and clarity around to whom and in what circumstances the evidence applied.
Christine: Best hasn't been that simple. As we've touched on in previous episodes, there are a number of critiques, limitations and qualifications when it comes to practicing EBM.
Dr. Meuser: Today we speak with Dr. Onil Bhattacharyya, the Frigon Blau Chair in Family Medicine Research at Women's College Hospital in Toronto. Onil's an associate professor in the Department of Family and Community Medicine at the University of Toronto. He's also the evaluation lead for the Institute for Health System Solutions and Virtual Care at Women's College.
Christine: He advocates for some radical solutions to make evidence more meaningful to practice, including less rigid reliance on published studies and more attention to the values, preferences, and experiences of individual patients.
Dr. Meuser: Is Onil's approach a harbinger of EBM 2.0? Only time will tell, but it's certainly an exciting option.
Onil, as you know, as we discussed, the general theme for this is around use of evidence in clinical decision making in medicine. This is an area that you're well versed in and well familiar with. We'll probably get into some very specific aspects of that in a bit.
I wonder if we could spend a little bit, at the beginning, talking about where you think we are with evidence based medicine. Maybe starting with what forces you think have driven us towards clinical practice guidelines as they currently exist.
Dr. Onil Bhattacharyya: Going back to the early days of evidence based medicine, I think it was originally framed as a clinical skill. Critical appraisal is a clinical skill. If you remember, there was a JAMA series on how to critically appraise articles. That was taught; when I was a medical student, we learned about critical appraisal.
But I think at some point in the process of training people to do critical appraisal we realized it's extraordinarily time consuming and extremely difficult. There's a ton of nuance that is hard to capture. When is a clinician going to review enough clinical articles to answer every question they might have?
I think that was the first transition: saying that some learned group needs to summarize all that is known on a specific topic and then lay it out in a usable format.
I think that really motivated some of the clinical practice guidelines as a concept. The idea that somebody is going to filter the information, condense it, and put it into a usable format.
Then there was the Institute of Medicine's Quality Chasm report and this focus on quality. Measured against what, right? What is quality? The gap is really between what is known to be effective and what is routinely done. How do you codify that? It has to be summarized in a document somewhere. I think that gave another use for clinical practice guidelines.
The other piece is around, how do we use this in a clinical context? This body of information has to be presented in a way that's usable. You're summarizing all the published evidence in a series of steps versus answering questions that clinicians might have.
Guidelines are currently structured to summarize what's known. Many questions that people will have are not covered by guidelines because they're too specific, they relate to patients that haven't been well studied, etc. You have decision support tools like UpToDate and DynaMed that serve that purpose, but they're distinct from guidelines.
I would say that's some of the broad forces.
Dr. Meuser: I think what you've described is two pieces of the guideline construction and dissemination puzzle. One is what content goes into these evidence packages that we hope that people will use for making decisions about what is the next right thing to do with their patients. Then there's the content, what's in it, and then there's the package.
Can you talk a little bit about how packaging affects usefulness and what opportunities exist in that context for making guidelines more useful?
Dr. Bhattacharyya: Sure. Making guidelines that accurately represent the evidence generally results in paragraphs of text that read like a legal document, tons of hedging, deliberate ambiguity, because they're trying to be faithful to the evidence.
To go from being evidence based to information that is actionable requires a departure from the evidence to some extent: extrapolation to different groups, or simplification to make it easier to communicate. The basic tension is that actionable information is more likely to be acted upon, but actionable information is often less evidence based. So, there's one compromise.
The other thing is the use of visual information. Communicating risk, understanding risk benefit, is challenging for doctors, but even more challenging for patients. If we think back to the original definitions of evidence based medicine, it was about information that is incorporated with patient preferences to inform decisions.

If incorporating patient preference is an important part of evidence based decision making, that evidence has to be transparent, or easily understood by the patient. I think using visual communication tools is something that we're seeing more of, and it's an important part of guidelines.
The other piece is algorithmic decision making. Many guidelines will have decision trees, but having worked on some of these decision trees, they get really complicated.
Just think of the 2013 diabetes guideline, and the more recent one, on pharmacotherapy. Now, you can input data into a very simple decision support system that orders and organizes the information, and you'll see what's most relevant to your patient. You've got actionable language, you've got visual communication that is clear, and then you have ways to embed algorithmic thinking in an interface that's easier to engage with.
Dr. Meuser: I guess at each of those stages there's potential for doing much better, but there's also potential for going very wrong. If you stray too far from the science, the guideline is less of a conveyor of truth. However, if you do it right, you'll take it in a direction that allows more people to use guidelines more of the time.
Can you give some examples? I'm intrigued by the engagement of patients in the decision process, and especially, developing a guideline that reflects values and preferences of individual patients. Can you give some examples of how that could be done within the context of a guideline?
Dr. Bhattacharyya: Sure. I did a whole literature review on decision aids, and decision aids, I think, put a lot of energy into making things clear and eliciting preferences, but they're often focused on just one decision.
A guideline could have a bunch of decision aids embedded in it, which I haven't seen, actually. I often feel that decision aids are built for one thing, and that's it. The systematic incorporation of decision aids into guidelines would be great progress, and at least decision aids are an established thing.
The other option is something called an option grid, and that is like a table with a bunch of questions and key information on each question. Should I screen for PSA? Should I take statins for prevention of cardiovascular disease? The risks and benefits, the number needed to treat, the number needed to harm, all these things are presented in a grid that frames the conversation between a physician and a patient.
As a provider, when I use those tools, I'm often learning in the moment, especially if it's a condition that I don't see very often. Who remembers all the numbers needed to treat for every condition?

It's an opportunity to structure the conversation, it creates transparency, and it also allows the physician to learn in that same moment. I think we overestimate how much physicians can retain and process in a clinical encounter. Building tools for patients will also create tools that are useful for doctors.
Dr. Meuser: It sounds like this is also tailor made for an application of EMRs' capacities, if only we had EMRs with that capacity, of course. Is that the kind of thing you're talking about? That you could share the patient record with the patient, along with the clinical practice guideline that applied to their particular condition, and then learn together what the options are and what the basics of an informed choice would look like?
Dr. Bhattacharyya: Absolutely. A simple application would be, you have the Framingham risk score, that's automatically generated in the electronic medical record, so that stratifies your risk, and then you present a series of options, right?
Dr. Meuser: Yes. So, that's certainly one way that it's easy to imagine, would make guidelines more useful for the primary care audience. Can you think of any other ways that guidelines could be formatted or constructed differently to allow for them to be more useful to people in making patient care decisions in primary care?
Dr. Bhattacharyya: We've just talked about risk stratifying patients, but embedding decision support in an EMR would be another obvious application, which hasn't been that well developed.
Aside from a couple of alerts, like you've got an interaction or this patient needs a Pap test because they haven't had one for three years, that's not a very robust function in most EMRs.
There'd be a lot of work to be done there. Just as a caution: on the back end, in order to program the algorithm, you're moving away from the evidence. You're making a lot of assumptions and decisions.
Dr. Meuser: The assumptions would have to be as transparent as they could be. How many assumptions would apply to all patients? Would the assumptions have to be different depending on the patient?
Can you think of any other ways in which technology could be harnessed to serve the interests that we're talking about?
Dr. Bhattacharyya: Yeah. There are many applications. First of all, we've talked about decision support for clinicians, but there's also decision support for patients. Guidelines could be made accessible to patients for basic decisions around triage.

Is this important? You can imagine a simple model that already exists: an action plan for asthma or for COPD. It's based on a guideline, it's individualized to a patient, and it supports decision making during an episode of acute illness.
The other application, which I think is just not part of guidelines, is this. We've got population averages generated through trials that we apply to an individual.
For an individual in primary care, we are applying sequential treatments to people over time to try and manage their condition. Many of the drugs that we prescribe in family medicine have very small effects. We're talking about 10 percent reductions in some parameter that we actually have difficulty capturing.
An obvious application of technology would be measuring, if you've got migraines, your headache frequency, intensity, duration, and what medication you used. This would allow us to know, if you tried a particular approach to migraine prophylaxis, is it working? Do we need to try something else? Applying the evidence to the individual is not something that we do that well.
Dr. Meuser: If I understand you correctly, then it would allow the patient to be the generator of his or her own evidence for his or her own treatment.
Dr. Bhattacharyya: Yeah.
Dr. Meuser: In fact, you wouldn't have to use population norms to decide what the next right thing to do would be, but use the actual experience of that patient with whatever combination of treatments and results of treatments that will apply to that patient.
Dr. Bhattacharyya: That's right.
Dr. Meuser: Fascinating. Do you know of any place where this is actually happening?
Dr. Bhattacharyya: I don't know of groups that are systematically applying it. To do this in a reliable way, you would need...You have five time points to establish a baseline for some symptom, then you apply a new treatment and you look for a change in the slope of that symptom in the next five time points.
For blood pressure, that would be relatively easy to do, but I'm not aware of people doing it systematically.
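The n-of-1 approach Dr. Bhattacharyya describes (five time points to establish a baseline, then a new treatment, then a look for a change in the slope over the next five time points) can be illustrated with a minimal sketch. This is a hypothetical illustration, not a tool discussed in the episode; the blood pressure readings and the ordinary least-squares comparison are invented for demonstration:

```python
# Hypothetical n-of-1 tracking sketch: compare the symptom trend over five
# baseline visits with the trend over five visits after starting a treatment.
# Measurements are (time, value) pairs, e.g. weekly systolic blood pressure.

def slope(points):
    """Ordinary least-squares slope of (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

def slope_change(baseline, on_treatment):
    """Difference in trend after starting treatment. For a measure where
    lower is better (like systolic BP), negative suggests improvement."""
    return slope(on_treatment) - slope(baseline)

# Invented example: five baseline weeks of systolic BP, then five on a new agent.
baseline = [(0, 152), (1, 150), (2, 153), (3, 151), (4, 152)]
on_treatment = [(5, 148), (6, 145), (7, 143), (8, 140), (9, 138)]

print(slope_change(baseline, on_treatment))
```

A real implementation would also need to handle measurement noise and decide what size of slope change counts as a meaningful response, which is exactly where this gets harder than the sketch suggests.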
Dr. Meuser: This is something that an app on somebody's phone could be easily applied to. In fact, it wouldn't even necessarily have to be specific to a condition. You specify the treatments that you're using, you specify the symptoms that you're tracking, and then the report spits itself out essentially from visit to visit. Very interesting.
This is n-of-1 stuff that we're talking about. N-of-1 tracking, and planning based on the result. In a slightly different direction, we've spent a fair bit of time in our survey of clinical practice guidelines and how they're useful and not useful in 2018.

Looking at the degree to which clinical practice guidelines these days are aimed at specific audiences, especially family physicians and other care providers in family medicine. Can you talk a little bit about your experience on these panels?
There must be something about that process that is able to produce a result, but it also seems to produce a result that many family docs, at least, wouldn't recognize as reflecting their world very often.
Dr. Bhattacharyya: Having worked on the previous diabetes guideline, there were 20 people on the steering committee. There were three family doctors, but two of them only saw people with diabetes. I was the only family doctor with a general practice on the committee.
They had methodologists, they had other folks, but the vast majority were cardiologists, endocrinologists, there were dietitians. People whose whole world for the most part is vascular health or diabetes. They value outcomes in their area above other outcomes.
They have a huge amount of knowledge of every trial, and every subgroup, and every P value. To some extent, it takes them away from the big picture. I was the one of 20 people who would say, "I'm not sure how this would fit with the competing demands of the average patient in my practice."
I think that was listened to, but it didn't drive the conversation. [inaudible 15:01] make a big issue about conflict of interest. At that panel, there was definitely a fair amount, but it was declared. In recent panels, this is addressed more.
If you think about it, if you had a guideline on CABG, the appropriate surgical management of cardiovascular disease, and most of the panel were surgeons, you would get a certain [laughs] result.
There are conflicts that are just so deeply embedded that they can't be overcome. If I think, in contrast, of Holland, where the vast majority of people constructing a guideline for a common primary care condition would be primary care providers, the result would be very different.
Dr. Meuser: Is it? To your knowledge, for those guidelines that are produced by family doctors, presumably fewer of whom would have overt conflicts, at least monetary conflicts. We all bring our conflicts based on the nature of the practice that we have. Do you think the outcome is that much different?
Dr. Bhattacharyya: That I'm not sure. I haven't seen a head to head comparison. I would say the [inaudible 15:58] these guidelines, and in independent reviews of the quality of the evidence and transparency they have scored quite well. It's like you said, you're in the room. You're going to make a decision. There are tons of compromises made.
The direction of the compromise, sometimes there's an explicit process, sometimes it's just the way the conversation is going and who's driving it and what's the mood of the room. My sense is that it would be different.
Dr. Meuser: It's hard to believe that any guideline that's constructed by family doctors would come out with 160 recommendations.
Dr. Bhattacharyya: [laughs]
Dr. Meuser: We'd get tired of it long before that [laughs] and would have raised the question of the usefulness of any document that tells us there are 160 things we need to do differently. That's it.
Dr. Bhattacharyya: Keep in mind, if you had to read that document and retain it, it's impossible. If you have in the course of a year 2,000 questions, you might want 160 points of information that you may want to refer to.
Dr. Meuser: Comprehensiveness on the one hand and usability on the other. Tell us what else you're working on here at Women's College that might be useful to the practice of evidence based medicine.
Dr. Bhattacharyya: I'm at the Institute for Health System Solutions and Virtual Care. We are evaluating a wide range of virtual tools. I think of them as the building blocks of modern ambulatory care. Family medicine and primary care, as we say, are the cornerstone of health services, but as a service, it's somewhat underpowered.
There's a wide range of things that are available but not used. These modern building blocks could include virtual visits or digital on demand services, remote monitoring, care coordination platforms, specialist decision support for primary care, and point of care diagnostics.
These are all tools that we are trialing in a range of settings, and every one of our studies has a vendor, a clinical site, a payer, and we are the third party evaluator.
I think there are a wide range of things that could transform primary care, but for which there's not a ton of appetite at the front lines.
Dr. Meuser: Interesting.
Dr. Bhattacharyya: It's just that the value propositions don't align. Often the patients love them: increased convenience for patients, but increased workload for providers, increased liability for institutions, and increased costs in the short term for someone who doesn't have a line item called "patient facing digital tool."
There is enormous potential, but under the current system with the current incentives, it's a bit of a slog to sort it out.
Dr. Meuser: Taking us back to the evidence, so there's a usefulness presumed with this. Where does evidence fit in the assessment of what tools you would trial, at least? Is there a phase of evaluating whether the assumptions behind it are evidence based, for instance?
Dr. Bhattacharyya: Yes. Our approach in the early stages is something we call value proposition design. What is the target user, the clinical model, the technology, and the outcome that you're proposing? Is that feasible? Is that appropriate? Then we look at a series of different stakeholders: payer, provider, patient, caregiver, institution. How do we get those interests to align?
We often do very rapid implementations where we iterate on the target user, on the clinical features, on the technology or features of the technology to find a use case where there's a fit between a problem and a solution and then we trial it.
Dr. Meuser: Cool.
Christine: Thank you for listening to this episode of Clinically Speaking. It was produced and edited by Pippy Scott Meuser at the Centre for Effective Practice.
Dr. Meuser: Special thanks to our guest this episode, Onil Bhattacharyya, for taking time out of his busy schedule to talk with us.
If you like what you heard today, we encourage you to share with your friends and colleagues. Your support goes a long way.
Christine: Don't forget to subscribe on iTunes, our website, or wherever else you get your podcasts.