Heidi M. Crane, MD MPH  University of Washington School of Medicine

AUDIO AT 11:20 [Michael Stirratt] Hello, and good afternoon.
Thank you for joining us on our webinar today. This is our NIH Adherence Network webinar.
My name is Mike Stirratt, I work at the National Institute of Mental Health, and I’m a part
of the NIH Adherence Network, and I want to thank you for joining us for this – our
next talk in our Distinguished Speaker webinar series, Dr. Heidi Crane, from the University
of Washington. I think you’ve picked a great talk to tune into. You know, there’s persistent
questions out there about, you know, what are the best ways that we can help to assess
medication adherence through self-report questions? And are there ways to integrate those kinds
of questions into our routine clinical care? And these are the kinds of things that Heidi
is going to help us think about a little bit today. As I noted, Heidi is at the University
of Washington. She is a physician with a master’s degree in epidemiology, and she’s a recognized
leader in the area of assessment of patient-reported outcomes, or PROs, in the context of medical
care. Heidi wears many different hats; among her many endeavors, she has helped to develop
and integrate a clinical assessment of socio-behavioral measures that are now systematically collected
at multiple HIV clinics across the United States, as a part of the Centers for AIDS
Research Network of Integrated Clinical Systems, or CNICS. And you’ll be hearing
some more about that here, but this has resulted in over, I think now, 22,000 completed clinical
assessments to date. So the scope of that work is really staggering. Heidi is also leading
an NIMH R01 project, which is helping to improve our understanding of how best to assess medication
adherence; and she’s working as a co-PI on an NIH roadmap initiative under the PROMIS
initiative, which is also focused on measurement of patient-reported outcomes; and we’ll
be hearing much more about these activities here in the course of her talk. Among the
other roles that Dr. Crane is currently undertaking, she is the Associate Director of the UW CFAR
in the Clinical Epidemiology and Health Services Research Corps, which pursues research comparing
the effectiveness of management strategies for HIV-infected patients in routine clinical
care. And she also sees patients in clinical care, at the Harborview Madison HIV Clinic
in Seattle, where she leads the metabolic clinic. So, all good endeavors, and we’re
very fortunate to have Dr. Crane with us today, so thank you, Heidi, for joining us, and we’ll
turn it over to you. [Dr. Heidi Crane] Thank you Michael so very
much, for giving me this opportunity to present some of these findings, I greatly appreciate
it. One of the first things I want to talk about, in terms of patient-reported outcomes
assessment, such as measuring medication adherence in clinical care, is: why do we do that? Why
do we try and measure depression, substance use, adherence, etcetera, in clinical care?
And there are 3 reasons that are recurring themes of why this is important. The first
is to improve or enhance patient-provider communication. The second is really improving
clinical care and outcomes, and finally, a lovely side benefit of these kinds of efforts
is how well it facilitates clinical research. So, digging into these ideas a little bit
further, focusing first on the ‘enhancing patient-provider communication,’ it allows
clinic session time to be devoted to the elements that patients and providers deem important,
rather than an actual assessment of these domains themselves. It’s a way of clarifying
patient concerns. And a key point is that patients are more honest when they’re asked
about adherence, substances, etcetera, when it’s done by a computerized assessment,
like a CASI or other tablet or touchscreen assessment method. There’s
less social desirability bias, which can be a concern, particularly with domains such
as adherence, substance use, and depression. And it addresses some of the limitations of
other methods of collecting this information, which we’ll talk about more later, but providers
have repeatedly been demonstrated to be unreliable estimators of patients’ medication
adherence. So this is one of the ways we can address this. It improves clinical care, and the idea behind
this is really, to some extent, based on the chronic care model, which has been shown to
be a way of thinking about or visualizing care improvements for a number of chronic
disease conditions. There was an IOM report that emphasized the importance of these kinds
of approaches to chronic diseases. It’s not enough just to give providers educational
sessions or educational interventions to improve these kinds of outcomes.
It has been demonstrated over and over again that educational interventions alone are not sufficient. To really have an impact on clinical care requires a system-based approach, such as
that advocated through the chronic care model, where you have clinical information systems,
delivery system design, decision support – all of these are key elements of the chronic care
model, and come together when you think about PRO assessment in one sort of systematic intervention.
So, it could be sort of thought of as a way of listening to the patient’s voice, but
doing it in a systematic, standardized way, so that that information is usable. And then
feeding the data back to the provider so that they’re using 21st century informatic tools,
and they’re using them in real time to impact care. The ultimate goal – you could
sort of imagine where this goes – is tailored, personalized, evidence-based recommendations for clinical action, delivered to the provider in real time to impact any clinical care visit. So, following up on this idea just a little
bit more, this is focusing on the chronic care model, but really specifically, if you
think about medication adherence as the example here. What you see in the top half of this
figure is a not uncommon situation, unfortunately, in many clinical care settings. There’s
any number of reasons why a patient might not have ideal adherence, and I suspect many
of us could list some of these – substance use, depression, etcetera – all contribute
to poor adherence among our patients. Now, if there’s no systematic
assessment of that, then what happens is the patient continues to have poor adherence,
until ultimately they develop poor HIV outcomes, such as detectable viral loads. So that’s
the top half of the figure. Now, think about it again in a clinical care setting, where
PROs or other systematic adherence assessments have been integrated into clinical care, such
that, while there are a number of barriers to this – structural barriers, provider
barriers, etcetera – once you’ve addressed those, the figure changes, and what you see
is that adherence is assessed. This leads to a system that is aware
– and I don’t just mean the provider here. This is bigger than that, this is the system,
the clinical care system, which can include social workers, pharmacists, etcetera.
It’s not just about the medical provider – but the system is aware, which allows
the opportunity to intervene before poor HIV outcomes have occurred. And regardless of
what intervention that is, whether it’s pillboxes, pharmacy interventions, case manager
interventions, assisting with financial issues, whatever – we can come up with 92 different
interventions, I think, that we can list in our heads that could possibly be useful here,
but without knowing that adherence is an issue, none of these are applied. And if you follow
through the figure, what you see next is that if you apply the intervention, and adherence
is then reassessed. You therefore know either that there’s still poor adherence and you go through
the cycle again, because the system is aware, or that adherence is great, in
which case you’re much less likely to have a detectable viral load, increased mortality
rates, and all of the other outcomes that go with poor adherence, using HIV as an example. And then, the final reason I have listed on
that initial slide about why to do patient-reported outcomes such as adherence, is the way it
facilitates clinical research. Incorporating PROs, such as adherence, into clinical care
settings facilitates research that represents all of the patients in
care, not just those willing to sign up for trials. You have much better generalizability,
you really understand the patients in your clinical care setting and what the barriers
are and what the issues are. It makes for much better studies, in terms of really applying
to everyone and not just those who meet trial or other clinical study criteria. It
includes people with depression, and substance use, and other issues, who are often not included
in trials. So, these are just an example of some of the
studies that we have done, based on incorporating patient-reported outcomes, and
we’ll talk about some of these more later, but one of them I wanted to show you, just
to emphasize that clinical research – the third reason to do these
PROs – is not only fun, but it’s also incredibly important. What I’m demonstrating
here, in the figure on the left: the outcome for that study was patients
who are at risk for transmitting HIV. And we defined that as patients who were reporting
sexual risk behavior currently, and incomplete condom use, and at the same time they also
had a detectable viral load. We used the patient-reported outcomes to look at the odds ratios for this
particular outcome based on a number of different behaviors, in adjusted analyses that
included all of these behaviors, as well as a number of clinical and demographic characteristics,
and what you can see there is a much greater impact, for example, of amphetamine use on
this outcome than some of the others, but all of these really had important contributions
to this very important clinical outcome, and it’s important not only for the patient,
but also – as we’ve been learning over the last years – as a public health outcome
as well. And you can see that inadequate adherence is the far column on the right of the first
figure, and it’s also one of the factors associated with this outcome. Now, if you look at the second figure,
here we’ve switched the outcome, and specifically what we’re focusing on is in fact inadequate
adherence, so there is, of course, one less column in the figure. But if you’ll notice,
although the scale is slightly different, the pattern of findings is very, very similar. So this
is just a reminder that as we understand and address some of the barriers and contributors
to inadequate adherence, we are likely also contributing to and facilitating understanding
and improvement in a number of other key outcomes, including some that have pretty important
public health implications. Now, I hope I’ve painted a picture of why
this is incredibly important, but unfortunately, it is not without difficulty or barriers.
There are actually a number of barriers to collecting patient-reported outcomes, such
as adherence within clinical care settings, and I’m sure that you can sort of think,
in your mind, of several of these steps – burden, data collection, data entry, how do we get
the results to the provider in real time, patient burden, language
issues, space issues, budget issues, etcetera – so that there are a number of decisions
and factors that have to be considered and addressed to be able to implement
patient-reported outcome collection in a clinical care setting, and I’ll dig deeper into some
of these in just a minute on the next slide. But what you should consider is that,
for PROs to be able to benefit outpatient clinical care, what you really
need is instruments that are relevant, valid, reliable, and easy to interpret. You want
something you can put in the provider’s hand, and the rest of the health care team’s
hands, and the system’s awareness, and have that be meaningful to them. And it has to
be something that you can integrate into your clinical workflow; if you blow clinic
flow, you’re not going to be able to continue to do this for any length of time in a clinical
care setting. And finally, I think we’ve seen a number of different studies focused
on PROs, not just in HIV but in other chronic disease settings, that show you really want
this information at the point of care in real time. It’s not useful to get on the day
of your clinic visit an assessment that was done three months earlier; it’s no longer
going to help you or improve the patient’s clinical care. So, focusing on some of these practical considerations
– I’ve mentioned a few of these already, but there’s a number of decisions to consider
– and I think one of the most important, so I’ll talk about it more in a minute,
is generating clinic personnel buy-in. But also the space, the work flow, there’s some
decisions to be made about eligibility time period – do you do it every day? Every time
they come in? Or windowed? – These are all very addressable issues, but these are decisions
that need to be made. So I’m going to talk about a couple of them in a little bit more
detail here. The first one I mentioned was generating clinic personnel buy-in, and I
have to say, I can’t emphasize enough how important this one is, because it helps you address
many of the others. So getting support from the clinic leadership, the medical director,
and the Heads of Nursing and Program Operations, whoever it is that has management oversight
in the clinic is tremendously helpful in ensuring that you can address space issues, scheduling
issues, appointment issues, etcetera, and ensure a seamless integration into clinical
care. But I don’t think it stops just at leadership. You clearly need leadership buy-in
to be able to implement PROs such as adherence in clinical care, but I think we should be
reminded that the provider support – even though you often don’t need the permission
of providers to be able to implement in clinical care – it’s incredibly important when
you’re changing clinic flow patterns, etcetera, so I highly advocate doing things like educating
providers on the goals of the project, and including and emphasizing that the goals are
improving patient care, and that the information is provided to them to help them with their
patients… these are the sorts of things that really result in buy-in from providers,
and much less pushback when clinic flow patterns and other procedures end up being modified.
You really need to consider, in the early stages, buy-in from all levels,
because pushback from any level can cause tremendous, tremendous chaos. And then I mentioned this one before as well,
the space/location/privacy issues. These actually have gotten much easier. Privacy is
obviously easiest with the paper-based kinds of patient-reported outcome collection. A private
space is mandatory for interviewer-based collection, which is just one of a number of reasons why that
is often not the recommended approach, but we’ll talk about that more in just a minute.
The electronic collection has gotten so much easier in this era of touchscreen tablets,
where you don’t need to have a kiosk, you don’t need to have a special space built
up for it, you can lock it to any wall, any chair, any eyehook anywhere in the clinic,
and so there’s no dedicated space needed; it has just become so much easier than studies
trying to do this electronically 5 or 10 years ago. And there’s a number
of ways that you can put on privacy screens and other approaches, so that even if they’re
doing the patient-reported assessment in the waiting room, you don’t have to be concerned
about some of the privacy issues that were so important 10 years ago. Managing
clinic workflow is, I think, probably the most important issue in terms of considering how to do this,
and we’ve worked with a number of clinics across the US and taken several different
approaches. The clinic I work in, we actually schedule patients 15-20 minutes early for
their appointment, and we use this as an opportunity to do a number of things, including our clinical
assessment. It’s also when the nurses do a fall assessment if the patient is over 50,
and there’s a number of things – we use this as our quality improvement window, and so
we do a number of different things in it, including this clinical assessment. Other
clinics that have longer wait times in the waiting room have found that they didn’t
have to schedule patients earlier to complete these kinds of assessments. It really depends
on the clinical flow of your clinic whether those sorts of approaches are needed. Clearly,
just capturing people as they walk in is much easier, logistically, to initiate, but in
the end, when you have large clinics and short waiting times, it can result in an impact
on clinic flow that needs to be avoided. And then, determining the follow-up
eligibility time period is, I actually think, kind of an interesting decision, but think
with me about what the implications are, if you will. So, visit frequency varies dramatically.
Some patients, maybe getting wound care, or having acute issues, may come into the clinic
4 times in two weeks, while other patients only come in every six months. If you implement
a system where they have to complete an assessment every single visit, the benefit of that is
the patient expects to complete it, the expectations are clear, and it maximizes the
availability of the data. However, for the patient who’s coming in for wound care,
it’s incredibly unlikely that having the assessment on days 2, 4, and 8 is going to
provide you with additional meaningful information beyond days 1, 3, and 5. It adds to the patient
burden and induces additional assessment fatigue, if you will, not to mention crankiness. So
there are some definite disadvantages to that approach. A preferable approach, which I
would highly advocate, is having a predetermined eligibility
period. For example, if it’s been two to three months, you do the assessment,
but if it’s been less than that, you don’t have to. The strength of this is it greatly
decreases the impact on clinic flow, while still providing the maximum amount of clinically
useful information. However, this requires the eligibility period to be tracked. We’re
fortunate in this era of electronic collection — rather than, sort of, the older, much more
cumbersome, paper-based and interviewer-based collection — that this can often, and should,
be tracked within the PRO assessment platform. This is not rocket science, and
it doesn’t need a person to do it. These are the kinds of tasks that computers do oh
so well, oh so easily. And so, these are the kinds of things that can be done to decrease
the impact and burden on staff. So, you know, I’ve mentioned a few of these
points. There are several different approaches to collecting patient-reported
outcomes and, ten years ago, these were probably roughly equally divided among them – you
saw paper-based, you saw electronic collection, and you saw interviewer collection. Interviewer
collection has clearly become less and less commonly used, and appropriately so. It’s
incredibly time-consuming and expensive, there are always privacy concerns, and finally, there are
social desirability bias issues. It’s much harder to tell an interviewer that you haven’t taken
your meds for three weeks, or that you were using crack last weekend, than it is to tell
the computer or the paper. The interviewer is probably by far the worst approach
for implementing routine patient-reported outcomes such as medication adherence in a
clinical care setting. This leaves us with two reasonable options: electronic collection
and paper. Now, paper-based collection is clearly the cheapest to initiate, and the
easiest to start. However, in the long run, it results in much more patient burden, and
the reason for that is paper doesn’t ever allow you the opportunity to use skip patterns.
It doesn’t ever let you focus an assessment on just what’s clinically relevant. So I’m
going to take an example from substance use; if the patient has already told the assessment
that they’ve never used cocaine or crack, you don’t then want to ask them if they’ve
used cocaine or crack in the last three months, you want to skip the rest of the cocaine/crack
items. And paper-based doesn’t give you that flexibility. When you try and do those
kinds of skip patterns on a paper-based format, you always end up with missing data. Patients
don’t understand, they go to the wrong place, there’s always data quality issues. And
I think the most important downside to mention for the paper is that it isn’t scored in
real time. It’s much more difficult and cumbersome to get paper-based results into
the hands of the provider as they’re going in to see the patient. So, while paper-based
is cheap and easy to implement and can be used for clinical research despite some data
quality concerns, it is much less useful if one of your target goals is improving clinical
care… which leaves us with electronic collection. Clearly, electronic collection is more expensive
in terms of start-up costs; however, it’s already been demonstrated that, although the
initial costs are higher, if you’re doing any kind of numbers at all, it becomes
just much cheaper over time than paper-based, because again it requires less staff time and there’s
no one who has to score it. There’s much less impact on flow, it’s much more efficient
– even without the skip patterns, it takes less time for patients to complete an electronic
assessment than a paper one – and finally, and I would say not unimportantly, patient preference studies
have repeatedly demonstrated that patients prefer it. So I’ve mentioned
many of these points, but I think a key point to remember is how much more feasible
electronic collection has become in the last few years, not only because the cost of the
hardware has dropped dramatically in the last ten years, and the use of touchscreens
has addressed concerns from early studies about older patients who
were uncomfortable using a mouse, but because we have all been using computers more in our
everyday life – checking out at the grocery store, at the ATM, etcetera – so this
is just becoming more and more routine, and for that reason, patients prefer this over
other modes. They do, at least in part and at least in some studies, because it’s faster,
and patients like that; there’s less patient burden. And certainly there’s
advantages over interviewer, both in terms of cost and in terms of reporting potentially
stigmatizing behaviors. And there have now been a number of studies that have demonstrated
the feasibility of this type of approach – that even in a number of mentally ill or other
difficult-to-reach patient populations this works – and it eliminates all of the problems
with the data entry step, in terms of scoring, delays, cost, errors, etcetera; it just takes
the human error out of many of those aspects. And it allows conditional branching and complex
skip patterns that tremendously reduce patient burden. So I think those are
some of the key points, the key advantages of electronic collection. And I would actually
add one more that we have found that our providers really like, and that’s the safeguards that
can be added. So, for example, when a patient reports that, over
the last two weeks, they have been feeling like hurting themselves or others most
days – which is an item on the PHQ-9 depression inventory – my pager goes off in real time
and – I would argue, more importantly – the head case manager’s pager goes off in real
time, and that patient is assessed for safety before they ever make it out of the clinic.
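The kind of real-time safeguard described here can be sketched in a few lines. This is a hypothetical illustration rather than the actual CNICS implementation; the item name, threshold, and pager recipients are all assumptions:

```python
# Hypothetical sketch of a real-time safety alert on the PHQ-9 self-harm item.
# Item name, threshold, and pager recipients are illustrative assumptions,
# not the actual CNICS implementation.

PHQ9_ITEM9 = "phq9_self_harm"  # the "thoughts of hurting yourself" item
ALERT_THRESHOLD = 2            # PHQ-9 scores run 0-3; 2 = "more than half the days"

def check_safety(responses, notify):
    """Page the provider and head case manager if the self-harm item
    meets the alert threshold; return True if an alert was sent.

    responses -- dict mapping item name to its 0-3 score (absent = skipped)
    notify    -- callable(recipient, message) that sends the page
    """
    score = responses.get(PHQ9_ITEM9)
    if score is not None and score >= ALERT_THRESHOLD:
        for recipient in ("provider_pager", "case_manager_pager"):
            notify(recipient, f"Safety alert: item score {score}, assess before discharge")
        return True
    return False
```

Because the assessment is electronic and scored in real time, the alert can fire while the patient is still in the clinic, which is the whole point of the safeguard.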
And we’ve found that those sorts of interventions are incredibly valuable to providers, and
do a great deal to engender support from clinical care teams and, I would argue, really have
an impact on improving care in our clinics. So, one question about this is, what is it
we should measure? We were interested in what some of our key stakeholders felt
we should be measuring. So we had both patients and providers do a number of ranking exercises –
about 75 providers and about 65 patients from clinical sites in eight states across the
United States. And we asked, you know, “What is it important for your provider to know?
What is it that we should be telling the provider that isn’t already known or that’s
getting missed?” And one of the points I’d like to point
out is that both patients and providers consistently – and this was across a number of subgroups
– consistently ranked medication adherence and depression as key priorities for improving
clinical care. Interestingly, I thought, some of the barriers to adherence were ranked more
highly by providers. Some of my patients didn’t think I needed to know whether or
not they were using crystal, and I would disagree with them on that point – I would argue that
I do. But that said, providers consistently ranked many of the adherence barriers,
such as substance use, quite highly as well. So, one question we’re
going to move into for just a minute is, then, how do we measure adherence in clinical care
settings? And so we’ve actually been doing a little bit of work that is focused on how
adherence has been measured across a number of chronic diseases, and this is an interesting
area, in that adherence measurement work has often been siloed. There’s systematic reviews,
there’s studies that focus on this disease or that disease, but there’s
not as much cross-disease work as I would have expected, and one of
the findings is that, well, HIV has the widest array of instruments, items, and formats. It
also has the greatest evidence supporting the use of self-reported adherence items,
but it is definitely not the only chronic disease that has demonstrated the potential
clinical benefits. So, we’ve been looking at the various ways
adherence behavior items have been addressed in a number of chronic diseases, and there’s
really a vast array, a complex array, of items that have been used. And incredible complexity
was noted in some of the chronic diseases in how it’s been measured, and a real lack
of consistency across diseases in terms of how measurement is being approached.
So I think that’s interesting in that it is in contrast to HIV, which has been taking
more and more simple, brief approaches to measuring adherence, so I think when we’re
thinking about clinical care, we really should emphasize some of these briefer, more focused
measures. And so what I’m showing you here is a slide demonstrating some analyses
with viral load as the outcome, looking at different brief adherence items. And so,
if you look at the top one, that’s a single-item measure of adherence called “The Self-Rating
Scale,” and if you look across – let’s take the “adjusted analysis” column, this
is adjusted for clinical and demographic characteristics – what you see is that the odds ratios march
nicely up with adherence severity, based on that single item. It behaves exactly as you
might want it to. And we did similar analyses for a single four-day item that has been commonly
used by the ACTG in a number of studies; a last-weekend item that, similarly, has often been used
in ACTG studies; as well as missed-dose items and a visual analog scale. And, you know,
I think one of the key findings from this study was that the single self-rating item
worked incredibly, incredibly well, in terms of measuring and predicting viral load – current
and future viral load. But even that said, some of the less well-performing items still worked quite
well. Now, while I really like this very brief single-item self-rating adherence
item, I think it’s fabulous – I do point out that the disadvantage of that, to be fair,
is that it can miss some of the complex patterns of adherence behavior. It is not the end-all-be-all;
on the other hand, it is so incredibly brief that it is often easy to facilitate
implementing that kind of a measure in a clinical care setting. And the reason you
do that, the goal here is really to focus on a more targeted discussion of patient needs
with real-time adherence interventions, before the patient has virologic failure and, therefore,
you know, potentially develops resistance and all of the complications that follow. And
I would argue that virtually any of these measures can help you reach that
goal, but we repeatedly demonstrated that the self-rating item really
was a single item that was able to help do that in a very nice way, consistently across
a number of disease populations. Another commonly used measure of assessing
adherence is asking patients about the number of doses they missed. The four-day item
is, again, frequently used in older studies, and recent studies have really suggested that
maybe some of the longer time frames – the two-week and the one-month items – might
be a better way to go. So we took a look among a thousand patients who answered all
of them on the same day, in randomized order. We were curious how this looked. And
what we see is that there may be greater over-reporting with the four-day item; if you take
a look at the figure on the left, the four-day item really appears to be above where the
fitted line is, relative to all the others. We think that’s because there is some over-reporting
of adherence with the four-day item that we’re not necessarily seeing with all of the other items,
which I think is a fabulous thing. I think there’s a lot of concern about social desirability bias and
other issues, and I suspect we’re putting just a little too much emphasis on that: when
we give patients a longer window, they feel the freedom to report that they’ve
missed doses. So we ran nested regressions where the outcome, again, was undetectable
viral load, and we looked at this a number of different ways, and in adjusted analyses – adjusting
for age and race and a number of clinical and demographic factors – the fourteen-day
item had the highest r-squared, or was the most highly correlated with our outcome of
undetectable viral load. We did this several different ways, including starting with
other measures and adding the fourteen-day item, and it consistently ended up that the
fourteen-day item was the most strongly correlated with our outcome of undetectable
viral load. So, I think I’ve presented a lot of information
and a lot of studies, most of which were not mine. So I thought maybe I’d spend just
a minute talking about what we’ve been doing in CNICS, which is a cohort collaboration
of a number of clinical sites across the United States. We implemented a patient-reported
outcome or clinical assessment into clinical care at a number of different sites across
the US, including adherence, depression, anxiety – a number of the key domains that we thought
were either important for clinical care, clinical research, or, ideally, both. The
PRO platform we use is a web-based survey software application that’s open-source,
so there’s no cost to use this. Patients complete the assessment on touchscreen. Patient
results are typically given to the provider before they enter the clinic appointment on
the same day. We do that in several different formats, depending on the clinic flow and
the electronic health record of that individual clinic. We do that at some sites on paper
and some sites electronically, again based on whatever the clinic flow is at that site.
We do this in English and Spanish, and the platform tracks the time since patients last did
it and whether they're eligible or not, as well as notifying case
managers of suicidality, flagging eligibility for studies, and a number of other
very helpful electronic features that have been built into it. This is what an item might look like.
Patients see one item at a time, there are large radio buttons that they can use, if
they don't want to answer an item they can click “Next,” and if they mark an answer
incorrectly they can easily change it. And we work very hard and have really focused
on avoiding disrupting clinic flow, by using “pockets of visit time” – time when the patient
is waiting. And, in a number of our clinics, we’re able to implement it without changing clinic
flow, because there was that much time in the waiting room. In others, we had to rethink
the appointment scheduling and make some changes, but it really depends on the clinical flow
setting; it has to be individualized for the clinic. No two sites have exactly the same
clinical flow approach. Even with sort of thinking this out, this
was definitely a moving target. This is an example from one site of the modifications
we made. This is from my site, the University of Washington. Four months
after we implemented it, we did some reordering of the items, because we wanted to make sure
that the most clinically relevant were completed soonest, so that the results were
in the providers' hands. And after we made that change, we tracked how often we
had the feedback in the provider's hand before they see the patient, and we're consistently
over 95% of the clinic visits at this point. By the time we'd been doing this 5
months, we were being harangued by the case managers, who said, “this is our terrain,
we want this information.” This particular clinic had a paper-based feedback system, because
they weren't using the EHR in appointments as often as we'd like, so it automatically
prints a second feedback sheet for any patient who is at-risk on the domains the case managers
or social workers are interested in, and we give that to the patient's social worker
on the same day as the clinic visit before the patient leaves, so if there’s an action
item that the provider can take, they have that opportunity while the patient is still
in clinic. And this is—we’re in the realm of 70% of our patients have at least one of
these at-risk criteria, including inadequate medication adherence. And we continually expand
our options – adding Spanish and other things as appropriate – for our clinical care settings. I think I talked about many of these barriers
in terms of technical issues, clinic flow, etcetera, so I think I’ll just skip this
one. One of the points I made earlier was really emphasizing how important it is to
get clinic buy-in. So one of the studies we did early on, with our very first clinical
site where we implemented adherence, started as a waiting room study, where
we had the patients complete it. Before we implemented it in clinical care, we had 500
patients complete the assessment. At that point, we didn't give the feedback to the providers;
we were just working out the kinks in the system. However, of those 500 patients, 62
reported incredibly poor adherence, missing multiple doses in the prior four days
– clearly, poor adherence. And so we reviewed those charts to determine if the providers
knew about that, or what the providers said about adherence. And among those 62 patients,
for only 17 of them did the provider document in any way that the patient didn't have ideal
adherence. 25% of those charts had no mention of adherence at all, and probably
most concerning of all, at least for me, a third of the time there was documentation
from the provider, from the same day the patient reported poor adherence, that the patient's
adherence was good – missed no doses, 100% adherence. There was documentation in the
chart from that same day that the provider thought they were adhering perfectly. In addition,
among those patients for whom the provider noted the inadequate adherence, we also found that
a huge number of them had barriers to adherence that were not acknowledged by the providers.
So even when the provider caught that the adherence wasn’t ideal, they were often
missing some of the key barriers, such as substance abuse, relapses, and depression.
So we implemented this in CNICS at a number
of sites, and then we did a great deal of qualitative and observational workflow studies
to determine the impact, and to determine that we could do this well and sustainably, without
interfering with clinic flow. And this included interviews with providers and staff
about the system, and how well it worked, and we were heartened by how often we heard
certain themes from the providers, such as “it really serves as an icebreaker,” that
“it was not disruptive to clinic flow,” and that “it promotes an awareness of these
under-recognized or unrecognized issues.” That came up recurrently, consistently, from
providers, and particularly for both medication adherence and at-risk alcohol use; providers
just didn’t know how often patients weren’t taking their meds, or weren’t taking them
after drinking, or were drinking much more than they thought. Those were common recurring
themes from these interviews. Here are some of the quotes that you can take a
look at if you're interested, and again, it's that idea of the recurring
theme, “he drank a lot more than I realized,” “I didn’t have any idea he was not taking
his meds,” “It was a real nice way to start a conversation, it was a nice tool to
engage around real issues.” We did—similarly, we did patient usability
tests, and the three recurring themes from those qualitative interviews was that it was
useful, relevant, and important. It was a valuable experience by the patients to have
this implemented in clinical care. Now that said, it’s not without challenges, and some
of the key challenges are completion time variability. The patient on the tablet—the
same patient may – Patient A – may take much, much longer than Patient B, and that
has to do with inherent speed of the patient as well as skip patterns and risk behavior;
some patients get asked more items based on the skip patterns. That is difficult to manage
in terms of clinical flow, and is a key issue that's hard to address. Other ongoing challenges
are that every clinic we work with has its own look and its own electronic health record,
and we're really working to implement in seamless ways where the information gets
fed into their medical record in a way that's most useful to providers, gets put in the
provider's hand before they see the patient, and we're in the early stages of some
interesting pilot work on patient web portals and other
electronic implementation systems. So the next study we conducted was at
the same clinic. I presented some results from before we implemented feedback, which
we used to get buy-in from the providers. We therefore had PRO data from patients both before
and after the provider feedback was started.
And we, again, reviewed the charts from those patients – the chart reviewers didn't
know whether or not the provider received feedback – to look at whether
providers were documenting these issues and/or whether they documented any action in regard
to these issues. And what you can see on this slide – and there’s a lot of information
on here, but I’m just going to highlight a couple of columns – the white columns
are before feedback, and the purple columns are the data after feedback, and this is among
patients who have each of these behaviors, so among the patients who have moderate to
severe depression, etcetera. And what I want to point out first is that, in the top figure,
what you see is that providers were in some way acknowledging adherence pretty often,
even before feedback. But if you look at the third pair of columns, what you see is that, among
patients with clearly crummy medication adherence, providers were documenting it inaccurately.
So, the third set of columns is how often providers were documenting excellent
adherence or missed no doses, among patients who weren't taking most of their meds. And
so you can see that this is a place that the feedback had a really substantial impact,
essentially greatly reducing how often providers get it wrong – how often they think the patient
is doing great when, in fact, the patient is taking very few, if any, of their medications. And
similarly, as came out in the qualitative work, at-risk alcohol use was another
place where the feedback had a real impact on provider awareness and documentation – the
provider just wasn’t aware of how often the patients were drinking large amounts of
alcohol. And so, you know, in summary what we found is that—that implementing the feedback
for PROs in clinical care in an HIV clinic really improved the accuracy of providers’
assessment of adherence, and identified a lot of at-risk alcohol use that was being
missed. It had some impact on actions to address alcohol use and adherence, and identified
a larger amount of substance use. Now that said, I guess I would argue that this is a
necessary, but not necessarily sufficient, approach to improving clinical care, in that
many of those columns I’d like to see much higher, much closer to 100%. And so I would
argue that in some ways, this is a first step. It improves the provider's awareness, but
doesn't necessarily tell them what actions to take. For some domains, it was more effective
than others – for example, we had very little impact on provider responses, actions,
or documentation for sexual risk behavior. The feedback really wasn't a good intervention there
– but it really was an important and useful intervention for some domains, such as adherence and depression
and alcohol use. And one last point I wanted to make here.
Although the data I’m presenting is really about provider behavior and action, this should
in no way minimize the importance of case managers and other members of the health care
team. There are other studies and other aspects of that that we find very interesting and
are working on, but the fact that I’m presenting the provider behavior should not minimize
the importance of those other health care team members. We found that we could collect
the instruments and domains that I mentioned in the earlier slide in CNICS in a median
completion time of 12 minutes. Now, that 12 minutes not only includes all of the domains
and instruments we’ve already talked about, but it also includes additional instruments
for three extra studies. So I think it’s entirely feasible to do these sort of comprehensive-type
clinical assessments, that not only includes medication adherence, but also many of the
key barriers to medication adherence, such as substance use, depression, etcetera, and
it’s reasonable to do that in an 8 or 9 minute sort of -type assessment. The fact
that we’re longer than that is, again, because we have not only clinical care, but clinical
research pool is included in this assessment. So I think, in terms of thinking about
some of the lessons learned… we’ve demonstrated that we can implement patient-reported outcomes
in HIV clinics; these are large, busy, multi-provider clinics. And we've
now had 22,000 assessments completed by patients with HIV at the time of routine clinical care
appointments. So these are not patients coming in for studies; this is part of care, implemented
as part of clinical care. We found a huge prevalence of poor medication adherence, depression,
active substance use, and high symptom burden, which suggests there's a real need for measuring
these kinds of outcomes, and then addressing them. We found that providers
like this, and particularly, they liked some of the additional features of the electronic collection, such as the pager notification of suicidality – making sure, when our patients
are reporting those sorts of things, that they get assessed in real time, and that they
get resources, and it gets addressed. I think it was incredibly valuable
that the qualitative approaches really validated some of the quantitative findings, and that
was very useful in clinical care settings to make sure we understood the impact we were
having on clinic flow, and that we were taking the best approaches we could. And what
we see is dramatic differences in adherence rates when we compare the results of the
PRO assessments to provider progress notes when providers don't get the feedback. Again, we
were able to initiate this project because of—of very supportive administration across
a number of clinical sites, but also because we demonstrated repeatedly the clinical need
at each step of the way. And once we integrated into each of these clinics, we worked really
hard to maintain provider support. Working very hard to make sure we didn't disrupt
flow was a key point, but also, when providers and staff were
conducting other studies or wanted to be notified for enrollment, any additional feature we
could add to make this not just ‘another thing in the clinic,’ but a thing that everyone
valued, was a key target. And then, maybe most importantly, we frequently update
our clinical sites as to the findings. How much depression did we detect after the first
100 patients? The first 500 patients? Etcetera… making sure that the providers and clinic
staff feel like involved, integrated parts of these approaches. And so this leaves, if you will, sort of,
recommendations/next steps. We’ve demonstrated the feasibility and, I would argue, the value
of incorporating these kinds of clinical assessments, including adherence, into HIV clinics. I think
that there's enough available data that, to some extent, you can measure adherence
in any number of ways and it will be of value to you, and that you don't have to have
long, complex measures incorporated in your clinical assessment to get real value.
I think there's enough data on these single-item and very brief adherence measures to be comfortable
with self-report, with the self-rating item; these very simple, low-patient-burden
instruments provide a whole lot of really valuable information that can be used
to improve clinical care. And I think one of the very fun things we’re doing right
now is, we’re taking what we’ve done at these academic health centers and we’re
integrating clinical assessments into some community health centers. Many of these are
rural clinical care settings, with not a lot of resources, and—and we’re implementing
these clinical assessments for both their HIV-infected and uninfected patients, and
again, sort of replicating and demonstrating that, again, we’re having an impact and
that it can be done in these low-resource, non-academic, non-specialty clinic
settings, and have an important impact on both clinical care – most importantly – and
clinical research. And what we're also currently doing is a lot
of quantitative studies – evaluating or examining different ways of giving the feedback.
So how do we provide this information to providers, as well as the rest of the health care team,
to maximize impact? To maximize the likelihood of an action or an intervention? How do we
have the greatest impact with the feedback, and where do we go from there? And again,
I think I said this already, but I would argue that what these
studies show together is that these kinds of clinical assessments are incredibly important
to understanding where your clinic is, and improving clinical care, but in many cases
they're not sufficient; they're a starting point, and you then have the information
necessary to determine where you need additional interventions and additional
efforts to improve clinical care for your patients. And then, I didn't really talk about
some of the next steps, but some of the work we’re doing right now through the PROMIS
network is computer-adaptive testing. So, ways of shortening various instruments, ways
of really focusing on what you need to know, without losing the information available on
those instruments. And we can talk more about that if people have questions, I’d be happy
to explain it. Many of these studies were not done by me; this is the work of a
fabulous group of colleagues. We have incredibly supportive clinics that we’ve worked with,
and incredibly supportive networks – CNICS, CHARN, and PROMIS have all been lovely – and
we've had amazing support from the NIH, in terms of NIMH as well as OBSSR. We are
just incredibly fortunate. I think that’s more than enough. Can I take any questions? [Stirratt] Yeah, just have one more—there
we go. [Crane] Oh, there we go. [Stirratt] Thank you so much, Heidi, for that excellent
talk, and for sharing this very impressive and very exciting work, you know, with our
group. For our webinar viewers, as you can see here on the screen, we do invite your
questions, and if you do have a question, feel free to send it to our email address
here, or you can tweet a question to us as well on Twitter, using the hashtag “#NIHAdherence.”
We’ll do our best to funnel those questions here to Dr. Crane. And so, please, you know,
feel free to contact us. I think while we’re waiting for the questions to come in, I’ll
go ahead and, you know, take this opportunity to get our questions started. And so, Heidi, just
to be clear, this system is remarkable in the sense that you're using this downtime
that patients might have in the waiting room to systematically collect these patient-reported
outcomes around substance use, adherence to medications, and then feeding that in real
time into these electronic health records. So that’s available for not just the clinicians,
but the entire comprehensive care team to access and then, to utilize. And you know,
I guess I’d follow up on just one point that you were making at the end there. So
it seems like, from the work today, we’re seeing good evidence that this system is changing
some of the documentation of, you know, in terms of the clinic providers, and whether
they’re recognizing that there might be a problem with substance use, or there might
be a problem with medication adherence. But in terms of that “next step,” you know
– seeing if the information, which is actionable, is producing action that then affects things
like adherence, or reduces substance use, or improves treatment outcomes – that's work
that is still coming? [Crane] That's right. So, at this point,
we have demonstrated that we have and can change provider action. So we know for alcohol use, for example, that providers who get the feedback for patients with at-risk
alcohol use are more likely to refer the patient to the case manager, health educator, more
likely—there's more likely to be a referral to AA… We know that,
with the exception of sexual risk behavior, where we have no impact,
for most of the other domains we have demonstrated provider action.
They are referring more; there are, you know, different actions for different domains, with
alcohol actually being a really nice example of one where we’ve demonstrated an increase
in referrals—an important increase in referrals. So the next step, next studies are, can we
then demonstrate that having providers more involved, having case managers more engaged,
having them do some of these interventions, does that then change patient outcomes? [Stirratt] Yes. Yeah, I know the—so you
already have interest here, from the provider teams, the care teams. It seems the patients
are responding well to this? [Crane] Yes. [Stirratt] Changing the way that people are
responding? It’ll be fascinating to see if you can affect those actual patient outcomes,
those clinical and treatment outcomes. [Crane] Right. [Stirratt] Also wondering, if you have any
sense or evidence to date about how patient use of this system might, you know, if there’s
any kind of reactivity to this system, in the sense that—you know, so there’s an
idea that computer-assisted assessments are superior to an in-person interview – you
talked a little bit about that. But here, you know, these patients must realize that they’re
reporting data that’s then going to get seen by their doctor, going to be seen by
their social worker. And in some cases, it sounds like that’s prompting a response.
You know, a mention of suicidality brings a swift response from the care team, right?
[Crane] Sure. [Stirratt] So I’m just wondering if, you know, those kinds of things have the
potential to affect how people then self-report, you know, things like substance use or suicidality,
you know, down the line, you know. [Crane] Right. [Stirratt] Do people learn, “oh gosh,
I shouldn’t report this, because they’re going to come and get me”? [Crane] Right. Well, it’s an incredibly
insightful question. So I have two answers to that. One is, when we started it at the
first site, we didn’t give the feedback to the providers. And so we compared rates
of reporting in the eight-month period before feedback and the eight-month period after,
among people who took it in both windows, and we got similar rates. So, the first thing
is: when we changed the system so that providers
were getting feedback – and we told patients, you know, “providers are going to get this
information” – we did not see a dramatic drop in sexual risk behavior, in substance
use, etcetera. So I don't think just the idea that “the provider may get this”
had a big impact on our reporting rates. [Stirratt] Okay. [Crane] So that's the first answer.
I think your question about the suicide is incredibly insightful though, because, you
know, I presented our qualitative work and our findings, and it’s been incredibly positive,
but the one place I have gotten pushback from patients, the only place I have
ever gotten complaints or crankiness, is really about the suicide intervention – the,
you know, the immediate “Are you safe at home? Do you know the numbers? Do you have a plan?”
You know, those sorts of interventions. And I guess I think this is important enough
in making sure that the patient knows they
have resources, etcetera, that I'm willing to live with a little crankiness factor over
that one. And I don't get it from anyone else; even when we are in their face asking
them to think aloud – “Tell me what you're thinking” – after every item, etcetera, we
don't get a lot of pushback, it's really well received. And so that's the one place
we get any, and I think I’m just willing to live with that, because it’s such an
important thing to know. I really, really want to know, and so if it annoys them a little,
I may be okay with that. [Stirratt] Yeah, now you're erring on the
side of safety there—[Crane] Yeah. [Stirratt] –and looking out for people's safety and
well-being. We do have a question here from Twitter, and of course you were talking earlier about different approaches towards assessments—[Crane] Yeah. [Stirratt] — paper, electronic, and then
interviewer-administered. And our question is about the potential to use mobile devices
to make some of these inquiries. And here you have a clinic-based system—[Crane] Right.
[Stirratt] You got the tablets in the clinics. But suppose that one could do this, you know,
maybe in the real world, through people’s text messages, or some kind of app…? What
are you thinking—what’s your thinking about that? [Crane] So, I think that’s a fabulous idea,
and we’ve played with various versions of it. We chose—it was a very purposeful decision
to do this as clinic-based, and the reason for that is that many of the patients who
don't have access to a computer, who don't have a smartphone, are the patients
we're most trying to make sure we capture. These are the patients who don't make it
into the trials; these are the patients that fall through the cracks, if you will. So we
purposefully did it as a clinic-based system, to make sure we were all-inclusive; everybody
in our clinic eventually shows up in clinic. Now that said, I actually think one of the
future directions, if you will, is to have a combination. So if you have a clinic-based
system as your starting point, you're generalizable, you're comprehensive. But if you then add
some of these mobile technologies, if you then add the opportunity for patients to complete
it at home, if they wish, before they come in, etcetera, that has the ability to
decrease the impact on clinic flow; but as long as you also have a clinic-based system,
you don’t then start excluding those who might otherwise fall through the cracks. So
it’s—I think it’s fabulous as an add-on; I don’t think it should be the primary. [Stirratt] Okay. Interesting. Here, I mean,
so you’re primarily, then, working in the context of HIV treatments and these comprehensive
clinics, and I just wonder if you could comment a little about—I mean, you don’t just
work in HIV. And I wonder if you just could make a special comment about the relevance
of this for other care settings. [Crane] Right. Well, so, we started with HIV
– and so these were academic-specialized clinics that focused on chronic disease patients
– but one of the things we are currently doing, through this fabulous
opportunity from OBSSR, is implementing clinical assessments in a number of community
health centers. And so, this is occurring right now in South Carolina, in very rural
clinics in South Carolina, and Baltimore, and, you know, and a number of sites. And
that’s going incredibly well. I don’t have a great deal of data on impact because
this is a more recent project, but I can tell you that we have gotten a lot of patient and
provider support and buy-in, that they think this is clinically relevant, clinically important,
and going to improve clinical care for their HIV-infected and uninfected patients. Substance
use isn't restricted to those with HIV; many of these key factors aren't restricted to
those with HIV, and there's no reason that it has to be sort of segregated that way.
And in fact, you know, from a platform standpoint, it's actually very easy to have the very
same platform, the very same system, and it does a skip pattern – so if a patient is HIV-uninfected,
it skips certain domains that are only relevant to those with HIV, and if a patient
is HIV-infected, it can add additional domains. So we can actually target instruments to those
that are most relevant to them, and we can do it at the level of the platform, again,
to make things as seamless as possible for these community health centers, which have
been so gracious to allow us this opportunity. [Stirratt] Nice to see, and to hear. There’s
another question here, and it’s about your self-report approach. You know, it’s all
about collecting data from the patients, you know, it—although it’s done electronically
over the tablets, you know, it's asking them this question. And there certainly have been,
you know, questions raised about the validity of that approach, relative to other approaches
for assessing adherence – might be drug levels, or some kind of, you
know, biological outcome. You’ve got pretty good evidence here, the utility of self-reports,
right? [Crane] Yeah. [Stirratt] What—just, remind us what you think about the utility
of that. [Crane] Right. So, I think there’s a couple
of issues. One is feasibility: while there are objective measures of adherence,
many of them have their own issues, and most of them are not feasible at wide scale across
clinical care settings. So that's Issue 1. I think Issue 2 is that self-report
– of medication adherence, for example – has really gotten a bad rap. Yes, there is some over-reporting,
but I think there are things we can do to minimize that, such as not doing it interviewer-based,
but letting patients tell the tablet that they missed doses. So while there is a little bit
of over-reporting, I actually think when a patient tells you they're not taking their
medications, that is incredibly useful information. And it is information that Dr. Bangsberg and
other incredibly brilliant investigators have demonstrated repeatedly that providers
don't get, that they don't have, that they don't know, unless you take a systematic
approach. So I would say that's the second thing. And then I guess the third thing is,
we have another study that I didn’t present that actually looked at self-report in clinical
care, and actually we did some unannounced pill counts within a clinical care setting,
which is a feasibility nightmare that I would probably never do again. But I think the take-home
point from that work is that we sort of demonstrated, again, that the over-reporting
of adherence has really gotten too much of a bad rap, and that self-report is an incredibly useful
measure. It's so much cheaper and more feasible than these pill counts, with far fewer data
nightmares, and it's available in real time. But more than that, they really are incredibly
highly correlated, so you don't have to go through some of those incredibly cumbersome
and infeasible measures to get incredibly useful information. [Stirratt] Well, I think that we will have
to close there. And I know that all of our viewers will join me in thanking you, Dr.
Crane, for sharing your work with us today. Very interesting, very important work. And
if anyone has a request for the slides, you can send that to the email address here. You’ll
also find the webinar posted in time on our website – that's if you Google the NIH
Adherence Network, you’ll find that – and, otherwise, we will look forward to seeing
you when we have our next webinar, which is being scheduled as we speak. Anyways, alright. Thank you, and thanks
for joining us. Take care, bye.
