Advanced Topics for Implementation Science Research: SMART/Adaptive Designs

>>Thank you very much on behalf of the National
Cancer Institute. I wish to welcome everyone to the October Advanced Topics in Implementation
Science webinar. Today we are delighted to welcome our presenter, Dr. Amy Kilbourne, who will be joined by our own Dr. David Chambers, who will moderate the session. A very brief word
about logistics and we’ll be off. We ask that if you are not already on mute to please keep
your phone on mute for the duration of today’s presentation. As mentioned this session is
being recorded and muting all lines will help us to avoid any background noise. We encourage
questions, they can be submitted using the Q&A feature on the right hand side of your
screen. Type your question in the provided Q&A field and hit submit. Feel free to submit
your question at any time but we’ll be opening the session for questions when the presentation
is finished. Without any further ado, I’m going to turn it over to Dr. David Chambers.>>Thanks Sarah and thanks to everyone for
joining in. We’re pleased to welcome you to another wonderful session in our series of webinars about different research designs for dissemination and implementation research, so thank you all for being a part of that. And we really do see these as
much as possible as a chance to hear from experts but also to engage in questions, comments,
experiences that you have. So echoing what Sarah said, Amy will kick us off in a minute or so, and then we would absolutely love to have your thinking on this. In addition, after the webinar, this links nicely to our online learning community, our community of practice around implementation research, Research to Reality, and we would love to have you there
as well. So you’ll see, if you have access to the slides, there’s Amy right there, likely in her office at the University of Michigan. Just a couple of notes:
I’ve known Amy since we met in 2002 at an AHRQ-organized meeting on organizational change and organizational factors in healthcare. At that point, I know Amy was working on
a career development award and we were even talking about a potential submission to the
National Institute of Mental Health without actually any help whatsoever from us at NIH.
Amy was well on her way, she got a VA career development award. She started to make such
incredible strides and incredible contributions, not only to mental health but to the broader
implementation science field. In just a few years after that, we saw her rising to even more well-deserved prominence, heading the VA Quality Enhancement Research Initiative. And so it’s wonderful to have her sharing what has been a really unique, or at least early-adopter, part of dissemination and implementation research, and that’s thinking about adaptive designs and how, while they’ve risen within clinical trials, they can be used to great effect as we think about testing implementation strategies. And even more broadly, how they may fit very nicely with our challenges of complexity, of sequential rollout of implementation, and of trying to figure out the optimal design. So
with great pleasure we introduce Amy Kilbourne and turn it over to her. You’ll see the title
slide right in front of you and away we go, so thanks Amy take it away.>>Great, wonderful, I hope everyone can hear
me. Thanks so much David for the really kind introduction and I’m really delighted to be
able to present on the sequential multiple assignment randomized trials and adaptive
designs for implementation studies. Like anything, you work with great colleagues, and I really want to thank Danny Almirall and Susan Murphy in particular for their contributions to much of the science I’m presenting today. This really dovetails with a workshop we did at the Society for [inaudible] Research meeting last week as well, so this has been a great opportunity to talk about some really fun designs. So just a couple of brief
disclosures: what I present are my views, and my funding sources are NIH and the VA, as you can see listed here. So I’m going to talk about the role of SMART and adaptive designs, their applications to implementation studies, and how to actually use these types of designs to test different implementation strategies.
And what I mean by implementation strategies are really interventions that you would be
delivering to providers or healthcare systems to help enhance the uptake of effective programs
or practices. So, what are SMART designs? I’m going to give a brief background with some examples from the clinical science literature, so the outline is based on what’s been done in clinical trials research. SMARTs are essentially multiphase trials in which you use the same subjects throughout the duration of the trial. Each stage corresponds to a critical decision point with prespecified measures of responsiveness. In addition, the treatment options at each randomization are restricted depending on the history of responsiveness, so you basically predetermine what someone could get next. And then, finally, subjects are randomized to a set of treatment options. In essence, the key difference between a SMART and an adaptive intervention design is that SMART designs are used to inform the development of an adaptive intervention strategy. SMART designs are often used when you don’t know which sequence of treatments works best, whereas adaptive interventions are often used with well-established treatments and augmentations. I’ll go into more detail on how these are done in a second. In a SMART design, we really just have two or three critical decisions to address: sequencing decisions, such as which treatment to try first and which treatment to try if there’s a sign of nonresponse, that is, if someone’s not doing well; and then the timing of that treatment. For example, it’s critical to figure out at what point you declare nonresponse, because based on that point in time you would offer an [inaudible] treatment. In asking why you want to do a SMART trial, you would want to think about which of these decisions are most controversial or most in need of investigation, and which decisions would have the biggest impact on outcomes. So here is a basic example of a SMART trial where
you have randomization to treatment A and treatment B. Then, at a certain point in time, usually a matter of weeks or maybe a couple of months, you look at responsiveness. Essentially, you take the subjects who are nonresponsive and randomize them again, either to switching to a different treatment or to augmentation with an additional treatment. So you would take away treatment A and switch to treatment C, for example, or you would keep treatment A and augment it with another treatment, and the same goes for treatment B. This is a very basic way of thinking about a SMART design; a simple sketch of this two-stage structure follows below.
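To make the structure concrete, here is a minimal simulation sketch in Python. The treatment labels, the response probabilities, and the decision point are illustrative assumptions, not values from any particular trial.

```python
# Minimal sketch of a basic two-stage SMART: randomize to A or B, assess
# responsiveness at a prespecified decision point, then re-randomize only the
# nonresponders to switch or augment. All labels and probabilities are made up.
import random

def run_basic_smart(n_subjects=200, response_prob={"A": 0.4, "B": 0.5}):
    subjects = []
    for i in range(n_subjects):
        first = random.choice(["A", "B"])                    # stage 1 randomization
        responder = random.random() < response_prob[first]   # e.g., assessed at 8 weeks
        if responder:
            second = None                                    # responders stay the course
        else:
            second = random.choice(["switch to C", f"augment {first} + another treatment"])
        subjects.append({"id": i, "stage1": first,
                         "responder": responder, "stage2": second})
    return subjects

if __name__ == "__main__":
    trial = run_basic_smart()
    nonresponders = [s for s in trial if not s["responder"]]
    print(f"{len(nonresponders)} nonresponders were re-randomized at stage 2")
```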
One of the things, though, is that it can get complicated when you draw out all of the arrows, and you think, oh my gosh, you have these different [inaudible] of analyzing it. You really have to keep the process simple in terms of what you’re trying to look for. So in designing a SMART trial you want to power for a simple, important
primary hypothesis. Oftentimes you’re just asking the question: does augmentation with an additional treatment work better than switching treatments? That may be the one question you want to ask, or it might just be a comparison of, let’s say, treatment A or treatment B augmented versus switching to treatment C. So you really want to make sure not to get too bombarded by all the different options, but to make sure that you’re actually powering your study to the key questions you have. For implementation strategies, we’re often thinking about an outcome that is under the provider’s control. For example, if we’re going to design a SMART trial based on an implementation strategy, you want to design the question around responsiveness based on a clinical process measure, and we’ll get to that as well. The key is to specify at the very beginning what your responsiveness measures are versus what your outcome measures are, and not to use the same measure for both. So for SMART primary hypotheses, a couple of examples: if you have a [inaudible] sample size, let’s say you have only a certain number of patients to work with, you may just want to hypothesize that initial treatment A results in better outcomes than initial treatment B. If you have a larger sample size to work with and some more options, then you might want to hypothesize that switching to a different treatment when you originally started with treatment A results in better outcomes than augmenting treatment A with an additional treatment such as treatment D. Again, the larger the sample size, the more ability you have to answer key questions, but for the sake of answering one main question you want to be careful about what you’re ultimately trying to accomplish in the design; a rough power-by-simulation sketch for a stage-two comparison like this is shown below.
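As a hedged illustration of powering to a single stage-two question (augment versus switch among nonresponders), here is a simulation-based power sketch. The effect size, nonresponse rate, and alpha level are assumptions chosen only for illustration.

```python
# Simulation-based power estimate for the stage-two comparison among
# nonresponders (augment vs. switch). All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

def simulate_power(n_total=300, p_nonresponse=0.5, effect=0.35,
                   alpha=0.05, n_sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        n_nonresp = rng.binomial(n_total, p_nonresponse)  # only nonresponders are re-randomized
        n_switch = n_nonresp // 2
        n_augment = n_nonresp - n_switch
        switch = rng.normal(0.0, 1.0, n_switch)            # hypothetical continuous outcome
        augment = rng.normal(effect, 1.0, n_augment)       # augmentation assumed better by `effect` SDs
        _, p = stats.ttest_ind(augment, switch)
        rejections += p < alpha
    return rejections / n_sims

if __name__ == "__main__":
    print(f"Estimated power: {simulate_power():.2f}")
```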
So here’s another way of thinking about the same example presented earlier, outlined a bit further. Again, you have treatment A and treatment B, and at some point after randomization you take the ones who are nonresponsive and look at whether switching versus augmentation would be more effective for treatment A, and then ask the same question for treatment B among the nonresponders, where you switch to treatment C or treatment D. You can also include a question about switching versus augmentation overall, so you can combine the two arms that involve switching, from treatment A to treatment C and from treatment B to treatment C. What that does is answer the question, or test the hypothesis, that switching a treatment works better or worse than augmenting a treatment, regardless of the original treatment you started with. That’s also a very interesting way of using a SMART design to address a really important question: essentially, from an efficiency standpoint, would it be better to just take away one treatment and switch someone to a new treatment, or is it better to pile on additional treatments? That question again depends
on what your clinical goals are and what your research goals are for that. So let me talk
a little bit about adaptive interventions and then really talk about some examples from
a standpoint of implementation research. Adaptive interventions are sequences of individually tailored decision rules that specify whether, how, and when to alter the intensity, type, dosage, or delivery of a treatment at critical decision points in a medical care process. They are often referred to as dynamic treatment regimens, adaptive treatment strategies, or treatment algorithms. In psychiatry in particular, the idea of stepped care is really one way of thinking about an adaptive intervention: you start with one type of intervention and, if there’s not much of a response, you add on an additional intervention or you change it, as in the simple sketch that follows.
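A minimal sketch of a stepped-care style decision rule is shown below. The treatment names, the 8-week decision point, and the improvement thresholds are hypothetical placeholders rather than an actual clinical protocol.

```python
# Stepped-care style decision rule for an adaptive intervention (illustrative only).
from dataclasses import dataclass

@dataclass
class PatientStatus:
    weeks_on_treatment: int
    symptom_improvement: float  # fraction improvement on some rating scale

def next_step(current_treatment: str, status: PatientStatus) -> str:
    """Decide whether to continue, augment, or switch at the decision point."""
    if status.weeks_on_treatment < 8:
        return f"continue {current_treatment}"   # too early to judge response
    if status.symptom_improvement >= 0.5:
        return f"continue {current_treatment}"   # responder: stay the course
    if status.symptom_improvement >= 0.25:
        return f"augment {current_treatment} with psychotherapy"  # partial response
    return "switch to an alternative treatment"  # nonresponder: change course

# Example: a patient 8 weeks in with only 10% improvement would be switched.
print(next_step("medication A", PatientStatus(weeks_on_treatment=8,
                                              symptom_improvement=0.10)))
```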
Now I’m going to talk about the application of these designs to implementation studies, and this is really going to be the bulk of the discussion, in part because it really feels as though SMART and adaptive designs were meant to be used in implementation science, given how complex healthcare is and how many moving parts there are to what can work in certain situations. So we wanted to really think about the application of these types of designs to the implementation science world. The reason we would do SMART designs, and use them to inform adaptive interventions, is that we know there’s a persistent, wide gap between research and practice. We
also know that adoption of evidence-based practices is often slow. If you think about the unit of implementation being a clinic or a healthcare practice, or even individual providers, not everyone or every site is going to be ready to adopt a new practice or program. That could be anything from a new treatment for mental health, or a new psychosocial treatment in cancer, to something a little more complicated, such as a collaborative care model for diabetes or chronic illness. So it can be practices or programs, but not all places are necessarily ready. In addition, the adoption of complex interventions may require a stepped implementation approach, because you can turn some sites off if you give them a lot of implementation support early on when they feel they can take the ball and run with it, while other places may feel they need more support because they’re struggling for different reasons. The reason we want to think about implementation strategies, and about adaptive designs to actually test them, is that the translation of research findings into clinical practice and improved health outcomes gets blocked by a number of barriers.
If you think about these barriers, they often fall into three or four different buckets. First, a key barrier we often see is that we have efficacy trials of new interventions and treatments, but they’ve been tested in highly selected patient populations, so limited external validity is often a big barrier to further implementing them in routine practice. There are also barriers in terms of the quality gap across systems and the variation across sites in their ability to adopt evidence-based practices. Oftentimes providers will say they have other demands on their plate that preclude them from adopting or using new evidence-based practices. And then, and we especially see this in the VA, we have lots of wonderful interventions and practices, but they’re often not the priorities at the moment. There are a lot of shifting priorities in the VA, and the same can be said for any healthcare system depending on what they’re focused on in terms of their bottom line, but there’s always this question of what needs to be done right away versus watch and wait. So there are also opportunities in terms of closing this research-to-practice gap. There are a number of studies out there trying to classify and test different implementation strategies designed to address these barriers and, in many ways, to facilitate uptake. Those could be interventions focused on helping and supporting providers’ needs in, let’s say, practices or national program mandates or evaluations, or implementation strategies for what we determine later are lower-resource sites that might need extra help. So this is really just a general framework of the why, in terms
of why SMART and adaptive designs could be particularly useful because there are many
ways in which sites or providers may not be able to adopt evidence based practices. So
I’m going to talk a little bit about the notion of implementation strategies, and again this is really the bulk of what we would test in a SMART or adaptive design. Implementation strategies are highly specified, systematic processes used to help promote the uptake of treatments and practices, often at the clinic or provider level, into usual care settings. There’s a wonderful review paper by Byron Powell that came out this past year, I believe, in Implementation Science, that sought to classify different types of implementation strategies, to specify what they are and whether or how they’ve been tested. This is a very simplified diagram of examples of current implementation strategies that are used primarily in large healthcare systems such as the VA, but that are also increasingly being used in community practices. We have the traditional, initial types of implementation strategies such as audit and feedback and training, which focus mainly on encouraging and helping providers to implement evidence-based practices. There have also been more intensive, or I would say more expensive, types of implementation strategies that require more people power, such as learning collaboratives across providers and teams, and facilitation or coaching. More recently there is unlearning, in terms of changing provider behavior to de-implement low-value practices, and, if you’re thinking about major changes to the way the system works, there’s also [inaudible] engineering as a potential implementation strategy to take on larger issues in an overall organization’s function. These strategies fall along a spectrum, not only of relative cost but also of the skillset that someone helping with the implementation process would need. That could range from technical skills, or content expertise in the particular clinical intervention you’re trying to implement, to the more interpersonal or adaptive types of coaching that might be required for some of the more intensive implementation strategies that rely on improving interpersonal or leadership skills. So these are again just examples and a general way of thinking about implementation strategies and the relative cost and level of expertise needed to use them. So practices and
providers are going to differ in their barriers to implementing evidence-based practices, which is why we have a set of implementation strategies to potentially choose from and maybe test in an adaptive design. Sites are often heterogeneous in terms of their culture and climate. Further, the needs of sites may evolve over time, so they may face new challenges that were not known at the outset. There are some wonderful organizational surveys that look at organizational barriers and facilitators, or organizational readiness to change, but collecting those data across many sites can often be difficult, especially at baseline, and especially given an opportunity to rapidly implement evidence-based practices. So sometimes rolling out in sequence, using a SMART design, might be the most efficient way of actually engaging the sites and implementing evidence-based practices. Sites may differ in whether, how, and how quickly they will adopt an evidence-based practice, and that may be based on features that are not apparent in the organizational surveys you collected at the beginning. Even sites that are successful may need maintenance interventions to sustain the implementation of an evidence-based practice, depending on the situation. So a sequence of implementation interventions is often needed for those purposes, to address the variation that you see; in addition, not all of these barriers and facilitators are observable. It’s often more efficient to deliver the more intensive implementation strategies only where needed, and to think of this as a way of reducing implementation burden and cost so that you can use your resources more efficiently for the places that need them. Maybe some places just need a Chevy to get from point A to point B to adopt a new evidence-based practice, versus a place that might need a bigger car to make the same trip. These are really the different reasons for conducting these types of implementation studies using a SMART or adaptive design. So we’re going to walk through some key examples of how SMART and adaptive designs
are used to test different implementation strategies. So again a working definition
of an adaptive implementation intervention, thinking about this broadly from the standpoint of conducting SMART trials to figure out what your adaptive implementation intervention would be: it is a sequence of decision rules that specify whether, how, or when, and based on which provider or organizational measures, to alter the intensity, type, or delivery of implementation interventions to improve the adoption or uptake of an evidence-based practice. So, just as with SMART designs and adaptive interventions for clinical treatments, you’re looking at responsiveness, but here responsiveness means: is this site, practice, or group of providers using your evidence-based program or practice? If they’re not using it, what other types of implementation strategies should be deployed to engage those providers and improve their uptake of the particular clinical intervention you’re trying to implement? A rough site-level decision-rule sketch is shown below.
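Here is a hedged sketch of what such a site-level decision rule could look like in code. The strategy labels, the 6-month review interval, and the uptake threshold are placeholders loosely inspired by the examples in this talk, not an actual study protocol.

```python
# Site-level adaptive implementation decision rule (illustrative placeholders only).
def next_implementation_step(site_uptake: float, months_since_start: int,
                             current_strategy: str) -> str:
    """Return the implementation strategy for the next review period."""
    if months_since_start < 6:
        return current_strategy                              # too early to judge site response
    if site_uptake >= 0.5:                                   # responder: scale support back
        return "standard package (monitor only)"
    if current_strategy == "standard package":
        return "standard package + external facilitation"    # first step-up for nonresponse
    return "standard package + external + internal facilitation"  # further step-up

# Example: a site at 30% uptake after 6 months on the standard package gets facilitation added.
print(next_implementation_step(site_uptake=0.30, months_since_start=6,
                               current_strategy="standard package"))
```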
What this does is provide a guide for sequencing implementation interventions, perhaps from the cheapest to the most expensive versions, according to the evolving status and needs of providers. The ultimate goal is to improve the adoption of evidence-based practices by greater numbers of providers, and to design the approach in a way that can be replicated, so that other organizations can use it and other researchers can improve upon it. And this is key: this is really where there is very much a positive match between this type of study design and implementation science. If you think about it this way, formative evaluation is a tool often used in implementation science, and these designs are one way of quantifying, or in many respects operationalizing, the key steps of formative evaluation used to improve the implementation of an evidence-based practice. So in many respects the use of these adaptive implementation designs and SMART designs is a way of making sure that what you’re doing can be replicated and used elsewhere, and can ultimately contribute to the science of implementation. The examples I’m going to go over are
really 3 studies that involve the use of a particular implementation strategy called
Replicating Effective Programs, or REP. I’ll talk a little bit about the origins of REP, how it was used in a cluster randomized adaptive implementation trial, and how it’s currently being deployed in an NIH-funded cluster randomized SMART trial. So we went from a traditional study that used cluster randomization, to an adaptive implementation trial, because we weren’t certain of what we wanted to test; then we had new questions about new types of implementation strategies, so we designed a SMART trial. We went in that order, essentially from the simplest design to the more complicated design, and you’ll see shortly what I mean by that. Replicating Effective Programs
was originally developed by the Centers for Disease Control and Prevention to rapidly translate HIV prevention interventions into community-based settings. It was based on social learning theory and Rogers’ diffusion model, and what it emphasized was treatment fidelity and rollout. Most of the resources were around the technical aspects of the intervention itself and were front-loaded to make sure that whatever was produced to help providers use the intervention maximized the chances of the program actually being used. REP is really a series of three main phases. The first phase is pre-implementation, where there is work with community providers to figure out their priority goals and needs, an identification step to determine which sites should be involved, and then the selection of an evidence-based practice and its adaptation for those sites based on feedback from frontline providers and consumers. Using that feedback process, you produce not just a manual but a package, and the package includes not only instructions on how to implement the intervention, and these were mainly behavioral interventions involving group sessions, active discussions, counseling, and so forth, but also a step-by-step guide with talking points on how to sell the intervention in routine practice, what was needed to set up the group sessions, what was needed to publicize the intervention, and so forth. So it was a self-contained package of everything you needed to know to run the intervention at your clinic. The implementation phase included dissemination of the package to the frontline providers responsible for delivering it, the training of those providers, and some brief technical assistance that varied in intensity but was not meant to address anything beyond the technical aspects of implementing the intervention; it was more about how to get it used right. There was also an evaluation phase and a further dissemination phase that included evaluating how to further spread these interventions. So this is a very technically heavy implementation strategy that mainly focused on provider education and training in using a packaged intervention appropriately.
Back in 2000, a study was published in the American Journal of Public Health that compared the different components of REP, alone and in combination, on the actual uptake of HIV prevention interventions in AIDS service organizations. What was measured was the percentage of clients actually receiving the HIV prevention intervention, and the comparison was essentially manual only, versus manual plus REP-style training, versus manual plus training plus REP technical assistance. You can see a dose-response pattern: the more support you provided for the implementation of these HIV prevention interventions, the more likely people were to actually receive them, not surprisingly. But one of the issues we found is that it’s probably relatively straightforward to implement a single intervention in these AIDS service organizations, especially when those organizations’ federal AIDS program funding was tied to implementing evidence-based practices; they had a strong [inaudible], actually a strong incentive, to implement these types of interventions and evidence-based practices. But what’s revealing is what happens in a complex health services system, where what you’re trying to do is incorporate additional components into a clinical program. It’s not just an individual program; you’re trying to implement a clinical evidence-based practice, let’s say, for example, the collaborative care model. REP again has its strength in maximizing fidelity, but it had to be enhanced. In a study that we recently completed with NIMH, we enhanced REP by adding what we would call more interpersonal coaching,
which is known as facilitation. The goal of facilitation was to work more closely with providers, help them set goals, and use techniques very much like motivational enhancement to help those individual providers implement a collaborative care model approach to treating bipolar disorder in routine care settings. So we added this facilitation component, this interpersonal provider coaching, on top of REP and compared that with REP alone. We found that it actually improved the uptake of the collaborative care model, based on the number of care manager contacts as well as the number of self-management sessions offered. It almost tripled the number of self-management sessions offered, and there was roughly a 30% increase in the number of care management and self-management contacts overall. So overall the facilitation helped; this was essentially a study of enhancing provider uptake of evidence-based practices. We were relying on those providers to deliver the evidence-based practice, this collaborative care model, but the question remained: do all sites need additional facilitation? We spent a lot of time and effort providing this coaching and facilitation implementation strategy, but whether or not all the sites needed it was really unclear.
So again, what we ended up doing was enhancing the initial implementation strategy with this external strategy, external facilitation. It really was a provider coaching model, and we wanted to be clear about what exactly we were delivering with this added facilitation component. It involved a lot more time and intensity from our standpoint as the researchers conducting the implementation: more time because of the barrier assessment in the facilitation model, the provider coaching and problem solving, and the motivational enhancement techniques to promote success and small-step changes in the way providers worked in day-to-day practice. So it was a more intensive implementation strategy that we embedded into REP, and we weren’t really sure which sites needed it. So we designed an adaptive implementation intervention. This is an example of a type of study that we could have designed, though we designed it slightly differently, which I’ll explain later. But just to walk you through, this is an example of a type
of adaptive intervention where you give a number of sites REP, the package, the training, and the brief technical assistance, for 6 months. For sites that are early responders, that’s great; you can discontinue REP and monitor progress over time. For nonresponders, meaning the providers were essentially not using the clinical intervention provided in the package, you would augment REP with external facilitation and do that for 6 months. After 6 months, among the responders you would discontinue facilitation, and for nonresponders you would augment even further with internal facilitation. So this is ultimately an adaptive intervention that we think would probably end up being something that could be done in the real world, where you augment over time, in sequence, with the more expensive, more intensive implementation interventions. Well, let’s walk through how we ended up getting to this particular
point. So we did a study called Re-Engage in the VA, which was largely based on the work we had done with REP and facilitation. We were asked to design the study to determine, among VA sites not initially responding to a standard implementation strategy (REP), the effect of adding facilitation, what we would call enhanced REP, immediately versus delayed, on Re-Engage patient uptake. Re-Engage was a brief care management program operationalized and designed to have mental health providers identify veterans with severe mental illness who had dropped out of care, and then have those providers contact those veterans and bring them back into care if they needed healthcare services. Given that these are veterans with severe mental illness, many of them needed to be seen at least every few months, and we were targeting a patient population that had not been seen for at least a year. Many of them had risk factors for conditions such as cardiovascular disease, or risk factors for psychotic symptoms if left untreated, and so forth. So this was considered a highly vulnerable population, more likely than the general population to die of unforeseen causes, and therefore considered a priority by the VA to do something about. We designed this adaptive implementation strategy to answer the question: what would it take to help and support providers to better implement this Re-Engage brief care management program? So we designed it as a two-arm cluster randomized
trial, taking advantage of a natural experiment, a national program rollout. We had 158 sites at which REP was initially used to implement the Re-Engage brief care management program: we provided a package to providers, training, and a toolkit to actually implement Re-Engage. After 6 months, there were 89 sites where providers were not utilizing the Re-Engage program to capacity (I will define that nonresponse shortly), and those sites were then randomized to receive added external facilitation or continued standard REP. The Re-Engage program, briefly, was a brief care management program built around local recovery coordinators, mental health providers who received a list of veterans with severe mental illness who had dropped out of care, along with their last known contact information. The providers were supposed to contact the veterans to assess their clinical status and relative need, schedule appointments if the veteran needed to be seen and wanted an appointment, and then document those efforts in a web-based registry. That web-based registry was the tool used to monitor response and nonresponse. If no documentation was happening, we assumed that the providers were not utilizing the Re-Engage program, that they were essentially ignoring it or doing it in a minimal fashion. If they were documenting updated clinical status, updated contact information, or any information they could get from the internet, from existing medical records, and so forth, and populating it in the web-based registry, then we would know they were actively seeking out information on those patients in order to eventually reach out to them. So this was our marker of response or nonresponse, and a good proxy, because it essentially monitored active work, providers actively working to figure out what had happened to these veterans; that’s why we used the web-based registry. A rough sketch of deriving such a responsiveness flag from registry data appears below.
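Here is a hedged sketch of how a site-level responsiveness flag might be derived from registry documentation, in the spirit of the monitoring just described. The record structure, field names, and documentation threshold are all hypothetical.

```python
# Derive a site-level responder/nonresponder flag from registry documentation
# counts (illustrative data structure and threshold).
from collections import defaultdict

def site_responsiveness(registry_records, min_documented_fraction=0.5):
    """registry_records: list of dicts like
    {"site": "A", "patient_id": 1, "documented": True}"""
    per_site = defaultdict(lambda: {"patients": 0, "documented": 0})
    for rec in registry_records:
        per_site[rec["site"]]["patients"] += 1
        per_site[rec["site"]]["documented"] += int(rec["documented"])
    flags = {}
    for site, counts in per_site.items():
        fraction = counts["documented"] / counts["patients"]
        flags[site] = "responder" if fraction >= min_documented_fraction else "nonresponder"
    return flags

# Example with made-up records: site B has documented nothing and is flagged.
records = [{"site": "A", "patient_id": 1, "documented": True},
           {"site": "A", "patient_id": 2, "documented": True},
           {"site": "B", "patient_id": 3, "documented": False}]
print(site_responsiveness(records))
```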
The implementation strategies that were tested were standard and enhanced REP. Enhanced REP included, in addition to the standard REP package, training, and technical assistance, monitoring reports and a needs assessment that initially covered what the providers were up against, the barriers they were facing in implementing the Re-Engage program. It also provided local support and encouragement in garnering local support, and the facilitators delivering the facilitation sometimes called leadership as well. It also helped the frontline providers implementing Re-Engage identify problems and barriers and problem-solve around them, whether a lack of space, lack of cooperation, or maybe interpersonal issues with other providers, things like that. The facilitators really tried to work out what the problems were, helping and empowering that provider to lead from the middle and work to implement the program, and they also provided feedback and encouragement. So this is the basic design of what
we did for the Re-Engage study. We started out with 158 sites. The sites where providers were essentially not entering information into the web-based registry were deemed nonresponsive; those were 89 sites, and those sites were randomized. We did randomization stratified by region, to 40 and 49 sites, and then we continued to follow them at 6-month intervals. We provided enhanced REP for an additional 6 months, and the sites that were nonresponsive and had been randomized to standard REP were, after 6 months, then given enhanced REP. Essentially, because our operations partners and collaborators in the VA really wanted all sites to get some form of enhanced REP via facilitation, we chose a design that allowed all the sites to eventually get facilitation. So the study is really an adaptive design where we’re essentially comparing early facilitation versus later facilitation. These are results at 12 months, and a rough sketch of the stratified site randomization step is shown below.
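Below is a hedged sketch of a stratified site (cluster) randomization step of the kind described here. The site IDs, regions, and seed are made up, and the alternating assignment is just one simple way to keep arms balanced within strata.

```python
# Stratified cluster (site-level) randomization: shuffle sites within each
# region, then alternate arm assignments to keep the split roughly balanced.
import random

def randomize_sites_by_region(sites, arms=("enhanced REP", "standard REP"), seed=42):
    """sites: list of dicts like {"site_id": "S01", "region": "Midwest"}."""
    rng = random.Random(seed)
    assignment = {}
    for region in sorted({s["region"] for s in sites}):
        in_region = [s["site_id"] for s in sites if s["region"] == region]
        rng.shuffle(in_region)
        for i, site_id in enumerate(in_region):
            assignment[site_id] = arms[i % len(arms)]
    return assignment

sites = [{"site_id": f"S{i:02d}", "region": region}
         for i, region in enumerate(["Midwest", "South", "West", "Midwest", "South"], 1)]
print(randomize_sites_by_region(sites))
```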
You can see in the first phase, where approximately half the sites were randomized to additional facilitation, that the facilitation really kicked in, in terms of effectiveness, after 3 months. The y-axis depicts the patient-level percentage of attempted contacts; that was our outcome measure at the time because we were able to monitor whether patients had an attempted contact, whether the provider actually tried to reach them. In phase 2, when we switched and gave the sites originally randomized to standard REP the enhanced REP with facilitation, you can see those sites quickly catching up in terms of uptake, at least based on the measure of attempted patient contacts. One of the key questions from the study, though, was whether facilitation was strong enough. Despite the progress it made, we still had at least half the sites not really responding even to external facilitation, and it wasn’t consistent across all sites. We also did not find any impact on patient-level utilization; those downstream effects are often really difficult to gauge, at least with administrative data, and we weren’t able to find any changes in patient-level utilization comparing enhanced versus standard REP. External facilitation is relatively low cost; it was done virtually, by phone, and one dose of 6 months of facilitation cost about 7.3 hours per site of a facilitator’s, essentially an implementationist’s, time.
But at the same time, we also realized from the literature, from the work by JoAnn Kirchner and others, that you often need an internal agent who can help facilitate the process on the ground, who knows the local politics, knows how to get things done, and can really help those providers embed the intervention in routine clinical practice. She coined the term internal facilitator for that internal agent. This was also something we learned while we were rolling out the Re-Engage program: we thought, gee, there’s another type of implementation strategy that we can add to the mix and test. So then what we wanted to do was design a SMART trial on facilitation and really get at to what extent added internal facilitation, even though it’s more expensive, makes a difference for sites that are
nonresponsive. So we had our existing external facilitator role, which provided coaching on the technical aspects of the clinical treatment and some interpersonal coaching, but from a distance. What we really wanted to test was whether the more expensive internal facilitator, on site, would improve uptake even more. Based on the work by JoAnn Kirchner and others, an internal facilitator was defined as someone with a direct reporting line to leadership, so they had to have some influence up the chain, and some protected time to do this, meaning it was part of their job duties or something they knew they had to spend time on. They were there to address unobservable organizational barriers that were not apparent at the time of implementing the new intervention, and to help develop a sustainability plan with leadership. They were really the conduit by which the clinical intervention you are trying to implement moves toward sustainability, the person who can get things done in a clinic or practice and make sure the intervention can be sustained over time; that was the idea. So we designed a SMART trial
called the Adaptive Implementation of Effective Programs Trial, which conveniently spells out ADEPT. Our primary aim was to determine, among sites not initially responding to REP in implementing a collaborative care program, whether sites receiving external and internal facilitation versus sites receiving an external facilitator alone had improved 12-month patient outcomes and improved intervention uptake, which was based on the number of collaborative care visits recorded. For the secondary aim, and this is where you really want to parse out all the types of questions you could possibly address in a SMART trial, we wanted to look at the effect of continuing REP and external facilitation versus adding internal facilitation: to what extent, if you just wait long enough, does external facilitation actually improve sites over the longer term, such that you just need to give some sites more time? And to what extent do you hit a ceiling in the ultimate effect of REP and external facilitation over a longer time period? Oftentimes when we design SMART trials, the primary hypothesis is usually whether augmentation versus switching works, yes or no. The secondary aims and hypotheses often address duration, so timing, which I think raises important questions, because it can get quite expensive to continue implementation strategies over time, and it would really be interesting to figure out how long it takes before you need to pull back. So again, we looked at further enhancing
the REP model into the enhanced REP model: we added facilitation, and the facilitation was based in part on the work by JoAnn Kirchner, but also on the PARiHS implementation framework, which focuses a lot more on health system change. Again, the external facilitator was offsite, did phone calls with the providers, walked through the technical aspects of implementation, and also did some interpersonal coaching, as in the Re-Engage study. The internal facilitator is the onsite provider who is doing more of the day-to-day politicking and the operationalization of getting the program embedded into the clinic or site. That person is there to build relationships, address barriers to implementation, and develop a sustainability plan. They are not so much the champions, because you have your provider champions for the particular intervention; they’re really the go-to people, well respected as clinicians and providers who know how to get things done, who know leadership well, and, and this is important, who know how to link the actual program that the providers are trying to implement, like collaborative care, to the overarching goals of leadership, really making this a win-win situation. So the ADEPT design currently includes 60 community clinics from Michigan
and Colorado, using a SMART design. Nonresponse, our initial trigger for randomization, is defined as less than 50% of the patients whom providers voluntarily enrolled in the collaborative care model completing the initial collaborative care sessions. We set an initial bar of having at least 3 or 4 collaborative care self-management sessions completed, and if patients completed fewer than 3, the site would be deemed nonresponsive, as in the rough sketch below.
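A hedged sketch of this kind of site nonresponse rule is shown below. The data structure and field names follow the description above but are illustrative rather than the study’s actual data handling.

```python
# ADEPT-style site nonresponse flag: a site is nonresponsive if fewer than 50%
# of its enrolled patients complete the initial sessions (3 as the completion bar).
def site_is_nonresponsive(patients, completion_bar=3, response_threshold=0.5):
    """patients: list of dicts like {"id": 1, "sessions_completed": 4}."""
    if not patients:
        return True  # no enrolled patients counts as nonresponse in this sketch
    completers = sum(1 for p in patients if p["sessions_completed"] >= completion_bar)
    return (completers / len(patients)) < response_threshold

site_patients = [{"id": 1, "sessions_completed": 4},
                 {"id": 2, "sessions_completed": 1},
                 {"id": 3, "sessions_completed": 2}]
print(site_is_nonresponsive(site_patients))  # True: only 1 of 3 patients completed >= 3 sessions
```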
So, to recap the strategies: we wanted to spend some time on the differentiation, because one is more expensive and more intensive than the other. In essence, walking through the design, you have this run-in phase; we always have a run-in phase for a SMART trial. You have these initial sites; we planned for up to 80 at one point, and we basically identified about 60 of them as nonresponders. In essence, about 50 are in the process of being randomized to either the addition of external facilitation or the addition of both external and internal facilitation. One thing to point out is that all the sites that advanced had to identify a potential internal facilitator, so we were able to hit the ground running once we randomized and actually start the facilitation process. What that meant was that our external facilitator could immediately contact that person and establish who the internal facilitator would be. After another 6 months, in phase 2, sites that were continuing to nonrespond would either be randomized to continue external facilitation, in the top part here, or have internal facilitation added. Sites that were already receiving external and internal facilitation were randomized either to stop external facilitation or to continue it over time. The follow-up assessments are being done
by independent assessors. At the end, at months 18 and 24, which is really the focal time, you continue to collect information on utilization and patient outcomes. We really have 6 groups of sites: sites that never get anything beyond REP; sites that got REP plus external facilitation; and sites that got external and internal facilitation either immediately or much later. We are then able to do different comparisons across these sites. For example, our primary hypothesis, if you go to the boxes labelled with the letters A, B, C, D, E, and F, seems very complicated because you have these 6 groups of sites, but it is essentially a comparison of the combination of C and D against box B, because what you’re really trying to answer is the question: does added internal facilitation improve patient outcomes and uptake compared to added external facilitation alone? A simple sketch of that pooled comparison follows.
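As a rough illustration of pooling arms for that primary comparison, here is a minimal sketch. The group labels and uptake values are invented, and a real SMART analysis would typically use appropriate weighting and model-based estimation rather than a plain two-sample test.

```python
# Pool the two "added internal facilitation" groups (labelled C and D in the talk)
# and compare them with group B (external facilitation alone) on a site-level
# uptake outcome. All values are made up for illustration.
import numpy as np
from scipy import stats

site_uptake = {
    "B": [0.30, 0.42, 0.35, 0.28, 0.40],   # external facilitation alone
    "C": [0.48, 0.55, 0.50, 0.44],         # external + internal (immediate)
    "D": [0.47, 0.52, 0.39, 0.58],         # external + internal (later)
}

pooled_internal = np.concatenate([site_uptake["C"], site_uptake["D"]])
external_only = np.array(site_uptake["B"])
t_stat, p_value = stats.ttest_ind(pooled_internal, external_only)
print(f"pooled C+D mean={pooled_internal.mean():.2f}, "
      f"B mean={external_only.mean():.2f}, p={p_value:.3f}")
```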
There are a lot of challenges and opportunities in doing these studies. It’s complicated enough to do them in patient samples; imagine trying to do them when you need multiple sites, because the unit of randomization is so key to answering the questions you want to address. You really want to make sure you have adequate numbers of sites. You also need valid, feasible, and regularly available measures of nonresponse as well as outcome measures, and those often need to be differentiated. A measure of responsiveness really ought to be something like a clinical process measure that providers have the most control over, so that it’s a fairer process; you wouldn’t benchmark responsiveness on patient outcomes, because you’re not really measuring patient responsiveness. What you’re measuring in order to run the SMART trial is provider responsiveness, which could be use of the actual program, fidelity measures, or quality measures. Obviously, the simpler and more readily available the measure of nonresponsiveness, the better, but you need to be sure that it’s validated and really hones in on real behaviors in terms
of responsiveness. In addition, the timing of the responsiveness measures is going to be key as well, because we found in our initial Re-Engage study that it really took sites about 3 months to get their minds wrapped around the intervention and the facilitation, and to really start taking off around month 3. So we designed the ADEPT SMART trial to monitor every 6 months, knowing that there’s always this period in which it just takes a bit for sites to come around. And finally, in designing these types of interventions, you are often faced with opportunities when a new program or practice is rolled out. These are wonderful opportunities to test different implementation strategies, but you often have to be right there, then and now, to actually put these implementation strategy designs together. The beauty of the ADEPT trial, which was very interesting, was that it was essentially deemed quality improvement by our local IRBs. The reason was that what was being implemented was being implemented by frontline providers, that we weren’t hiring new teams of providers to implement something new, and secondly, the intervention being implemented, collaborative care, was seen as effective clinical practice and something that should be part of routine clinical care. So many IRBs deemed the actual study QI. Why? Because they understood that this was more of a system-level randomization of sites, not of patients in particular, and it really was seen as more of a study of systems than a clinical trial, even though it had elements of randomization and important elements of design that made it rigorous in many respects, compared to a demonstration study or trial that you might otherwise see. And then
finally, in terms of the role of SMART and adaptive designs, we’re at a point at which we have an opportunity to understand what it really takes to implement effective treatments in practice, and it’s going to take some rigorous study of the provider- and system-level levers that can be used to facilitate the uptake of these evidence-based practices. In hopes of replicating what is being learned in current implementation studies, the hope is that we can move beyond simply doing the essential elements of formative evaluation and instead systematize or operationalize those elements in the form of a SMART or adaptive implementation trial, to really learn and see whether the findings can be further replicated in different settings. So I think that was it; I’ll stop there, and hopefully we have a few minutes for questions. Thanks so much
for your time.>>Thank you so much Amy, we really appreciate
your presentation. We’re going to open up for questions now. As a quick reminder, they can be submitted using the Q&A feature on the right-hand side of your screen: just type your question in the provided field and hit submit. And I think David has a question
we received via email, so we’ll start there.>>Yeah, so definitely keep them coming. So
a general question, I guess, in terms of advice. I think you very nicely walked through some of the challenges. Just thinking about the group that’s assembled on this webinar, what advice do you have for investigators who are considering the use of adaptive designs, and are there any key resources or supports for those who are interested in applying them
to their work.>>Yes, absolutely I think one of the best
resources is, I think, the resource website that Kent State has, which I think is also being led by Danny Almirall and Susan Murphy at the Institute for Social Research. I think they have published a lot of their designs, and they’ve also published, I believe, some toolkits based on those designs. I would go to those individuals first, but I think we definitely need to get more of a toolkit up there more formally in terms of how to actually implement these types of designs. Hopefully we can do that
through QUERI as well.>>Okay wonderful so we’re looking to see
while people are still typing in, just a follow-up to that: from your experience, it seems like adaptive designs are certainly becoming the rage, but are there cases in which you would say adaptive designs really don’t fit well?>>Yeah, no that’s a really good question.
It’s probably not a good idea to use adaptive designs if you know that the set of practices or sites in, let’s say, an implementation study are all coming in on the same level playing field. If you’re not sure how the sites may differ in terms of their responsiveness to a new clinical practice or program, then it would be appropriate to use a SMART or adaptive design; but if you have a general sense that the sites pretty much all have the same resources, or the same lack of resources, it’s probably not the right type of design to use. And just for the science, you don’t want to do a study for the sake of doing the method; you want to do a study to answer a question. The second issue is that if you are testing an implementation strategy that hasn’t really been empirically tested before, then a standard type of clinical trial would probably be the best starting point. We chose adaptive designs for REP and facilitation because we had a number of studies on those types of implementation strategies in the past, and we felt there was a scientific question around whether or not augmentation was necessary to improve upon the implementation strategies themselves.>>Sure, great example. I see we only have
about probably time for one question. I see one that Marian O’Brien has put in, what are
potential power issues given the multiple groups.>>Yes, that’s a great question. I think in
general you really want to make sure you have enough sites to do this, to power for your primary hypothesis. At the very beginning you’d want to draw out your study design and figure out, based on what you’d expect, how many sites will be nonresponsive and what the minimum number of sites is to answer your question, and then I would augment that number of sites, assuming that some sites will respond and some sites will drop out. So there is no magic number, but there was a paper recently published that compared sample sizes for sites between stepped wedge designs and cluster randomized designs, which gives a very good primer on what to look for in terms of sample size.>>Great, wonderful.>>And so with that, just to be cognizant of
time, I want to say thank you so much to David, Amy and to all of you for joining us for today’s
session. Your feedback is important to us and we encourage you to complete our online
evaluation. A link to the survey will be sent to you shortly in an email. Finally, as mentioned
we’d like to continue this discussion from the webinar online at researchedreality.cancer.gov, where you can engage with the speaker, [inaudible] post in the forums, and also find the session in the archives. Thank you again for joining us, and you may disconnect
at this time.>>Thank you.
