TIDIRC: Study Designs in Implementation Science

>>Okay, so hello, this is Greg Aarons. I am a professor in the Department of Psychiatry
at UC San Diego on the West Coast of the US, and also director of the Child and Adolescent
Services Research Center. I have been involved in implementation science
and implementation research for about the past 20 years with funding and support from
our National Institutes of Health, the Centers for Disease Control, as well as some other
nonfederal funding to support my work in implementation science. And I have worked with different study designs
in different settings, including here in the US in public sector, behavioral health, mental
health, [inaudible] systems. I’ve done work in implementation studies in
women's reproductive health clinics and HIV prevention in Mexico. I've also been involved in some projects in sub-Saharan Africa with some wonderful colleagues doing great work in both HIV prevention and implementing evidence-based trauma care for adolescents. So enough about me. The main point I wanted to share is that my experience with implementation research is rather broad, across different settings, different interventions, and different kinds of delivery platforms.

So what we're going to talk about today, and we only have a short time to go through this information, is study design considerations, broadly. We'll talk about observational designs, experimental research, and quasi-experimental designs; the idea of combining effectiveness research with implementation research, the hybrid designs; and we'll spend a little bit of time on mixed methods, bridging quantitative and qualitative designs. So on this slide in particular, I just want
to note that there is a recent, rather comprehensive overview of research and evaluation design
for dissemination and implementation research, published in the Annual Review of Public Health, and
led by Hendricks Brown who is director of the Center for Prevention Implementation Methodology
at Northwestern University. And Hendricks really does have a wonderful,
broad perspective on the field, so while we’ll be touching on just a few of the issues in
research design for implementation science, I really do highly recommend this publication
in the Annual Review of Public Health. So you know, when we think about implementation
studies, it really does go beyond the scope of our typical efficacy or effectiveness trials
that we would be focusing on in more traditional intervention development research and intervention
testing. And when I say intervention here, I’m talking
about clinical intervention. But when we’re talking about implementation
designs, we really need to think about what's the best study design to use when you're testing an implementation approach or an implementation strategy. And of course this is going to depend on your particular research question and where you're working. So if you are trying to, you know, engender physician behavior change, for example, you may use a particular strategy. If you are testing an implementation strategy to change the way a surgical treatment team functions, then you may have a different implementation question that has to do not with an individual's behavior, but with how a team works together to implement best practices or evidence-based practices in surgical care, for example. If you're working with team-based organizations
or nongovernmental organizations in low-resource countries, you may have other questions where
the larger context comes into play. So as implementation researchers and implementation
scientists, we have a lot to think about that goes beyond what we deal with in many efficacy and effectiveness trials. Fortunately, you have lots of options. There are many study designs that you can
use in implementation research, and these can be anywhere from observational to really
highly controlled, experimental to quasi-experimental designs. And often these have to do with you know where
the implementation process is. If you’re coming in on a, for example, a policy
change that’s in place and you don’t have a lot of control over the conditions, then
you may have an observational or a quasi-experimental approach. For quasi-experimental designs you may be able to use tools that capitalize on things like propensity score matching. So you can have your target groups where implementation is taking place, but you may be able to observe what's going on in other [inaudible] and compare to other [inaudible] design. But most of these designs come from the larger purview of research design, both in medicine and in psychosocial and social research. So there's a lot of literature on these, and while some of these issues may seem new to implementation science, there has been a long history of study of some of these different designs. But as we note here at the bottom, a few of these designs have really come to the fore in relation to implementation research, and we'll be talking about one example, the effectiveness-implementation hybrid design,
a little later on. So you know when you’re thinking about, you
know, selecting the study designs, there are lots of considerations, not the least of which
is, you know-how much funding and support do we have to conduct our research? So we, you know, we have the overall research
question, but then we have to think about the feasibility of a design. You know, can we get it done? And often when we’re seeking funding we have
to deal with these issues and convince reviewers about the feasibility of our design. We may have our perfect design that we would like to conduct, but that may be constrained by feasibility, by things like the cost of conducting the research, the setting in which it's taking place, and the infrastructure that's available. Something as simple as having broadband connectivity at a study site; if you plan to collect data in the field using web resources,
that may be a factor. The funding opportunities that are available
often guide and direct the types of designs we may be able to use. The type of implementation strategy and the
type of clinical or public health intervention also can affect the choice of study design,
and also the target population. So if, as I mentioned, the target population
is individual behavior change, that’s one type of design. If it’s team-based or multi-level, kind of
a more complex context, that can affect your design as well. The timeline also becomes important, so how
long do you have to bring in your implementation approach and test your implementation strategy, and when might you expect to see change? Is that going to be immediate, or do you expect to see it three months down the road, a year down the road, two years down the road? And this is, I think, a great place to bring up the issue of sustainment. While we're thinking about implementation, we should be thinking about sustainment of those interventions, and the optimal timeline may be the long run. So how do we implement in a way that we really
get to that sustainability and we can test whether these interventions are sustained
as well as implemented with fidelity and implemented in the manner that we think they should be? And then of course another concern is the
ethical issues in implementation research. And for ethics, you know, we go beyond I think
considering ethics in efficacy and effectiveness research, because now we’re thinking also
more broadly, often at the context level. We're looking at how people are implementing or using innovations or evidence-based interventions in a particular work context. Do we have an ethical consideration at that point to consider how this affects their work life, their day-to-day life in their work context, and what that means for them personally and professionally? So it's beyond thinking about ethics for our
patient population and those who would benefit from a public health innovation, and we now
think more broadly about the ethics of the work we do in the workplace, in clinical health
settings, in public health settings, and what that means for how we go forward. And finally, as we think about this data that
we obtain, we’re going to talk a little bit about mixed methods as part of this talk that
I’m giving today. But there are, you know, many types of data,
but there’s also things like how do we house our data, do we share our data? There’s more and more demands for data sharing
and making our data sets publicly available. So I think we need to be thinking about how we design our studies in a way that we have transparency, that we're able to really clearly discuss the quality of our data, how our data are housed, how our
data are collected, and what that means for the work that we do. So I’m going to go right into some questions
about different designs now. And so observational study designs can be
really useful in looking at phenomena or changes as they occur naturally. So in this case there's often no ability to randomize or control how things roll out in an implementation process. These studies often focus on one cohort, or they may be cross-sectional. I think a good example of this is where you
have a policy change. So for example, here in the US when the Affordable
Care Act came into play, it had a lot of implications for behavioral health care for alcohol and
drug treatment along with changes in laws that preceded ACA on [inaudible] for mental
health and substance abuse services. So those are examples of large policy changes. And the National Institute on Drug Abuse funded
some studies to look at the impact of policy changes such as ACA on substance abuse treatment
services. So they funded a number of studies, some that
looked broadly across the US through large survey designs and others that looked at more
focal areas. So for example, some studies looked in large urban areas at the impact on different aspects of service delivery. When this occurs, our designs really have to bend or flex to consider how we can get at specific
issues. So for example, there might be issues of how
substance abuse treatment organizations respond to and deliver services in the face of these
policy changes, so it could be at the organizational level. Other studies might look at how service systems or large service sectors respond. So if you take a state-wide service system, or a county-wide service system in states that have county-operated systems, you can look at how they respond to those changes and how they use both federal and local dollars to provide services. So the observational study designs really can give us some good information about what's taking place, how it's taking place, and also the potential impact on service delivery, the types of services that are delivered under mandate. And we can even build in, for example, if we have good administrative data, some assessment of what that means for engagement in services for patients and clients.

Another example comes from Marisa Sklar. For her doctoral work, she looked at the implementation
of the patient-centered medical homes in San Diego County, California, where the county
was developing the patient-centered medical home for adults with severe mental illness. As capacity was building, some clients were able to be placed into the patient-centered medical home services, while many of the existing clients remained in usual care services for those with severe mental illness. And she used what's called "propensity score matching," which involves identifying characteristics of those who went into the patient-centered medical homes, identifying individuals with similar characteristics who were not in the patient-centered medical homes, and then comparing the outcomes in terms of their recovery over the process. And Marisa published that study in the American Journal of Public Health, showing that there were benefits in recovery for those in the patient-centered medical homes. So this is an example where a county, I believe the sixth largest county in the United States or thereabouts, made a policy decision, made a shift in services, and implemented a model for which evidence was still developing, and we could compare the impact of that model for patients in this observational study design, but with some control. We'll talk about this again when we get down to quasi-experimental designs. But these are a couple of examples of how we can look at changes with observational study designs where large policy changes or system changes are taking place, but we don't have control, we can't randomize, we can't really control what's going on.
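To make the propensity score matching idea a bit more concrete, here is a minimal sketch in Python of how propensity scores might be estimated and a matched comparison group formed. This is not the analysis from Marisa's study; the data, the covariates (age, severity), and the simple 1:1 nearest-neighbor matching are illustrative assumptions.

# Minimal propensity score matching sketch (illustrative only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
# Hypothetical client-level data: covariates plus an indicator for whether
# the client entered the new service model (not randomized).
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "severity": rng.normal(0, 1, n),
})
df["enrolled"] = (rng.random(n) < 1 / (1 + np.exp(-(0.02 * (df["age"] - 45) + 0.5 * df["severity"])))).astype(int)

# 1. Estimate propensity scores: probability of enrollment given covariates.
ps_model = LogisticRegression().fit(df[["age", "severity"]], df["enrolled"])
df["pscore"] = ps_model.predict_proba(df[["age", "severity"]])[:, 1]

# 2. For each enrolled client, find the nearest non-enrolled client on the propensity score.
treated = df[df["enrolled"] == 1]
control = df[df["enrolled"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Outcomes (for example, a recovery score) would then be compared between
#    `treated` and `matched_control` rather than the full unmatched groups.
print(len(treated), "treated clients matched to", len(matched_control), "comparison clients")

In practice you would also check covariate balance between the matched groups before comparing outcomes.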
In terms of experimental study designs, they're really characterized by randomization, or, as we say here, "manipulation," but it's really control. So you have the randomized controlled trial,
a cluster randomized controlled trial, in which you would randomize a cluster or a clinic
or a team, some other unit of analysis to a certain condition. Whereas in the randomized controlled trial,
you're typically randomizing individuals, whereas in the cluster randomized trial, you're randomizing units. And in the pragmatic trial, you're randomizing in a way designed to inform decisions about practice. I'll get a little bit more into pragmatic trials in just a moment, and then we're going to talk a bit about the stepped-wedge cluster randomized controlled trial, which allows you to randomize when different units or individuals receive your implementation strategy or clinical intervention. But then it allows you to provide that intervention
or strategy for everyone in the study, whereas in a typical randomized controlled trial,
you have your intervention group, or your implementation strategy group, and your control
group that doesn’t get that strategy. So there are methods and approaches to be
more inclusive in terms of, you know, providing either clinical or implementation strategy
intervention.

So as I mentioned, a pragmatic trial is broadly defined as a randomized controlled trial whose purpose is to inform decisions about practice, whereas explanatory trials really measure efficacy, that is, the benefit of a treatment produced under ideal conditions. Pragmatic trials focus on effectiveness, the benefit of the treatment produced in routine clinical practice, and there's been a lot of talk about this. You know, efficacy trials typically are done with a very highly selected, highly controlled sample, with a focus on minimizing comorbidities, whereas pragmatic and effectiveness trials really focus on individuals in the community, those who are receiving care. So as in the example I gave of the propensity
score matching and the patient-centered medical homes, these are individuals receiving public-sector
services. It’s not a highly selected sample, but even
in a pragmatic randomized controlled trial, you can randomize in a way where you’re including
those individuals that have co-occurring conditions and that are more representative of people in usual care treatment. So what we're trying to do is maximize internal validity, but really focus even more on external validity while maintaining the rigor of randomized controlled trials.

So an example of this is PRECIS, the Pragmatic-Explanatory Continuum Indicator Summary. The goal of PRECIS is really to help us
design trials that match our intended use. And this is kind of a useful article that
I cited here at the bottom that you might want to look at, by Thorpe et al., but you
can also look at the article by Loudon et al. in 2015, where they provide some tools
for thinking about characteristics of a trial. And so we rate each characteristic on a range from one to five, from very explanatory to very pragmatic. So the eligibility: who's selected to participate
in the trial? Are they very narrow or do they really represent
a larger kind of public health population? Thinking about recruitment, how you recruit
participants into the trial, the setting or the context where the trial is being done. You know, if you’re in a-in a, a highly controlled
specialty clinic, that may be very different than a federally qualified health center. If you’re seeking to try an implementation
strategy in the latter setting, you know, it may be more representative of a larger
or broader public health population. And moving around the circle, thinking about
the organization: what expertise and resources are needed to deliver the intervention? In an organization you may have to bring in expertise as part of your implementation strategy, or develop expertise. So for example, we have
a project now funded by the National Institute on Drug Abuse where we have a strategy for
improving leadership and organizational development around creating a context or a climate for
implementation of evidence-based interventions. So we are working with expertise that exists
in the organization, but thinking about the implementation strategy to help develop also
appropriate expertise in supporting the implementation of evidence-based intervention in a service
delivery organization. You also think about flexibility of delivery, how
the intervention should be delivered, and your implementation strategy may focus on
the modality of delivery. So for example, are you delivering the clinical
intervention in person, face-to-face? Is it a home-based delivery? Is it clinic-based delivery? Do you have technological support-apps, help
from mobile devices or other things that are helping with delivery? The idea of flexibility and adherence-so what
measures are in place to make sure participants adhere to the intervention? And when we say "intervention" here, it's not just the clinical or public health intervention where we think about adherence, but also adherence to implementation strategies. So in terms of study design, it's not just fidelity broadly, but fidelity to our clinical or public health intervention
and then fidelity or flexibility with our implementation strategies as well. And there’s been some confusion in the field
about this terminology. So it’s really helpful when you’re designing
a study and talking about your study if you’re very clear when you talk about fidelity, adherence
to the model, to be very clear about how you’re going to assess adherence to your clinical
or public health intervention. What are the measures in place for that? And how you’re going to assess adherence and
fidelity to your implementation strategy. So just please be very clear about that.

And then in terms of follow-up, continuing to move around our circle, think about in your design how closely participants are
followed up? How long are you following them? Do you expect to see, for example, fidelity
to your implementation strategy? So you know, if your implementation strategy,
for example, uses leadership support to support staff in delivering an intervention, how long
are you following up your leaders to see if you have behavior change in how leaders interact
with their clinicians that they supervise? And do we expect that to take place in four
months, six months, eight months? And do we expect that to continue on, or might
it degrade over time, that leadership skill or behavior? Those are all things that we should think about in terms of how we will follow up and the length of the study. And then the primary outcomes, and again here
we need to think about: what are our clinical or public health intervention outcomes? Are we going to be looking at those? And what are our implementation outcomes? There's a very good article by Proctor and colleagues in 2011 that lays out this difference between clinical outcomes and implementation outcomes. So I encourage you all to take a look at that article; it's Proctor, and the second author is Silmere (S-I-L-M-E-R-E), and others, 2011, to really get that idea of what implementation outcomes are and how they might vary. And how do we look at that in our
design? And then for our primary analyses, what are
our real targeted outcomes that we want to look at in terms of our implementation strategy? So what do we expect to change? If it's individual clinicians' behavior change, how do we measure individual behavior change: is it through self-report? Is it through observation? What are the metrics that we're going to look
at in our primary analysis? Or if it’s at the team level or organization
level, we need to think about complex analyses that take into account the nested data structure,
which becomes very critical in implementation research. Think about how providers may be nested or working within a clinic, and how you may have common variance related to that clustering of people working in particular organizations or specific clinics.
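As a hedged illustration of what an analysis that respects that nesting might look like, here is a small Python sketch using a random intercept for clinic; the simulated data and the column names (outcome, condition, clinic) are assumptions, not from any particular study.

# Minimal sketch of a multilevel (mixed-effects) model for providers nested in clinics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clinics, per_clinic = 20, 15
clinic = np.repeat(np.arange(n_clinics), per_clinic)
condition = np.repeat(rng.integers(0, 2, n_clinics), per_clinic)      # cluster-level assignment
clinic_effect = np.repeat(rng.normal(0, 0.5, n_clinics), per_clinic)  # shared clinic-level variance
outcome = 0.4 * condition + clinic_effect + rng.normal(0, 1, n_clinics * per_clinic)
df = pd.DataFrame({"outcome": outcome, "condition": condition, "clinic": clinic})

# A random intercept for clinic accounts for the clustering of providers within clinics.
model = smf.mixedlm("outcome ~ condition", data=df, groups=df["clinic"])
print(model.fit().summary())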
So I think the more we can get at these different aspects through the PRECIS approach, get at these more contextual issues, and think about both the implementation strategy and what is being implemented, the more pragmatic our trials will be. And I think another way of saying this is that we would have good external validity as well
as internal validity. So really for implementation science, this
idea of external validity becomes really critical in thinking about the relevance and potential
impact of what we’re testing in implementation science. I mentioned earlier the stepped-wedge cluster
randomized controlled trial. And so what we see here is that the participants, or clusters, are listed on the left, and time goes along the bottom axis, the x axis. And the idea is this: say you had five clinics that were going to be involved in the study. They all wanted to get the clinical innovation, and you were testing an implementation strategy in each. What you might do is have a period, time period one at the left of the x axis, where you're just doing measurement
to get a baseline for the clinic operation. So again, I'll use the example of the
leadership intervention. So you measure leadership, you measure climate
for implementation of a new innovation in all of those clusters, and then at time period
two you do your implementation strategy to implement your new innovation with clinic
one, or cluster one. And you’re measuring all of the clusters,
all of the clinics again. At time three, clinic two gets the implementation strategy to implement the innovation, and the remaining clinics, three, four, and five, serve as controls. At time four, you now have three clinics that have received it; at time five, you have four clinics; and finally at time six, you have all of your clinics having received the implementation strategy and the clinical or public health innovation, the intervention, as well.
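Here is a small, illustrative Python sketch of that kind of rollout schedule: five clusters and six time periods, with one cluster crossing over at each step. In a real trial the crossover order would be part of the randomization; the specific numbers here are just for illustration.

# Sketch of a stepped-wedge rollout schedule: 5 clusters, 6 time periods,
# one cluster crossing over to the implementation strategy at each step.
import numpy as np
import pandas as pd

n_clusters, n_periods = 5, 6
rng = np.random.default_rng(42)
crossover_order = rng.permutation(n_clusters)  # randomized order of crossover

# schedule[i, t] == 1 means cluster i is receiving the strategy in period t.
schedule = np.zeros((n_clusters, n_periods), dtype=int)
for step, cluster in enumerate(crossover_order):
    schedule[cluster, step + 1:] = 1  # period 0 is a baseline measurement period for everyone

print(pd.DataFrame(schedule,
                   index=[f"clinic_{i + 1}" for i in range(n_clusters)],
                   columns=[f"t{t + 1}" for t in range(n_periods)]))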
And so one of the characteristics of this design is that you're collecting data from all of your sites over the course of the whole study. The real appeal of the stepped-wedge design is that all of the units eventually receive the intervention and the implementation strategy. The downside is that you are then doing measurement
of all of these clinics over time, so there’s a little more burden, sometimes a lot more
burden in terms of the data that you’re collecting. And so that can be kind of a critical decision
point in busy clinics or busy organizations where you're asking them to participate in a research study, and hopefully they'll be getting some benefits in terms of the implementation strategy and in terms of the clinical or public health innovation. But then there's that balance of burden: do
our staff have the time to participate in measurement while we’re really still providing
clinical or health services? So it does become an issue in balancing that,
and there may be an increased cost in doing a stepped-wedge design. So in that laundry list on selecting a study design, I believe it was slide number four, cost and practicality have an impact on how you select a particular design.

So let's talk a little bit about some quasi-experimental
design, and I already alluded to that a little bit in terms of the propensity score matching
design, which was observational but adds some control; I think we can consider that both observational and quasi-experimental. Some of the common quasi-experimental
designs that you’ll hear about and may be interested in using are regression discontinuity
design, where individuals or groups are assigned to an intervention or control based on an
a priori score or metric. So we evaluate, then assign, and then look
at change over time. There's the non-equivalent control group design, where one group receives an intervention and one group is a control, but they're not necessarily equivalent, because they're not randomized. And again, I just want to say one thing about
randomization is that it is typically based on the law of
large numbers. So randomization works well when we have a
large number of units. And whether that’s an individual person or
an organization or a service system or a state or a country, randomization works well when
we have large numbers. Often in implementation science we’re interested
in unit performance or unit change and a unit could be an organization, it could be a team. So when we go to that kind of design we typically
have fewer units than we would have individuals. So we need to think carefully about, you know,
how well randomization will work versus a quasi-experimental design. So for example, you may want to match groups and then randomize to conditions, which will help overcome some of the issues with a pure randomized controlled trial. The non-equivalent control group design is a bit of a tough design, because you can compare your groups at baseline, but you
can’t control for unmeasured differences. So in this design it’s important to try to
identify key factors that you believe will be important across groups so you can control
for those in your statistical analysis. In terms of an interrupted time series, this
is a nice design because you have multiple assessments prior to and following the introduction
of an intervention or even an implementation strategy, so you can get more accurate assessment
of outcomes or behavior than with single pre-post assessments. An example is where you're measuring the outcome over time, with the outcome on the left axis and time on the x axis. So you're looking at the slope or change over
time prior to the intervention. You have a set time when your implementation
strategy or your clinical intervention is introduced, and you can determine whether
there’s a change in slope over time on your outcome of interest. So this could be an implementation outcome,
so if we expect leadership, for example, to lead to change in the team climate or use
of an evidence-based practice, we would expect to see change after the leadership development training occurs and leaders change their behavior. We would expect to see a related change in climate for implementation if they're leading their teams well.
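As an illustration of how that interrupted time series logic is often analyzed, here is a minimal segmented-regression sketch in Python; the monthly data are simulated and the variable names are assumptions.

# Sketch of a segmented-regression analysis for an interrupted time series.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
months = np.arange(24)
post = (months >= 12).astype(int)             # strategy introduced at month 12
time_since = np.clip(months - 12, 0, None)    # time elapsed since introduction

# Simulated outcome: modest pre-intervention trend, then a steeper slope afterward.
outcome = 50 + 0.1 * months + 1.5 * time_since + rng.normal(0, 2, months.size)
df = pd.DataFrame({"outcome": outcome, "time": months, "post": post, "time_since": time_since})

# 'post' estimates the immediate level change; 'time_since' estimates the change in slope.
fit = smf.ols("outcome ~ time + post + time_since", data=df).fit()
print(fit.params)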
So I'm going to jump over to this idea of adaptive designs. This is a really novel and interesting development that Hendricks Brown, again, wrote about back in 2009: the planned modification of characteristics of the trial based on information from data already accumulated. So as data are collected, they're used to
make anticipated choices about prespecified alternatives, and Hendricks talked about three
types of adaptation or adaptive [inaudible] relating to specific elements of study design
of an ongoing trial, relating to the design of an upcoming or next planned trial, or related
to the intervention or delivery. And we see this in this idea of what’s called
a "SMART trial," a Sequential Multiple Assignment Randomized Trial. It's a big fancy name, but it can be a really elegant way of building adaptive designs and adaptive interventions for implementation research. And there are a number of premises for these designs: that there may be heterogeneity in practices and in the characteristics of providers; that not all barriers or facilitators to implementation or use of practices are observable; that we want to deliver implementation strategies where needed; that we need to react to nonresponse or limited uptake; and that we can use sequential randomization as we learn whether individuals or organizational units are responding or not responding. We can reduce implementation burden by using more specifically only what is necessary. And then it helps us understand or
sift through available implementation strategies. We could be more site-specific and hopefully
improve sustainment or sustainability. So I think a really nice and elegant example
for the SMART trial is the Adaptive Implementation of Effective Programs Trial, ADEPT, led by Amy Kilbourne. The primary aim is to look at sites not initially responding to replication efforts to implement collaborative care, comparing sites receiving external and internal facilitation versus external facilitation alone. So there are a number of implementation frameworks
that talk about the role of facilitation and whether that external or internal facilitation
alone or combined may improve uptake of evidence-based interventions, in this case collaborative
care visits. And whether these might be related to patient
outcomes. The secondary aim looks at these two different approaches to facilitation and at continuing facilitation for a longer period of time. So in the ADEPT trial, there's a study-start or run-in phase where all sites are offered the facilitation to implement the evidence-based practice, and then patients start at month three, so that's [inaudible]. And then some of those sites may be, quote,
“non-responders” so you know, where they’re not responding to the facilitation. So what you can see is we move over to the
right: the R in the blue circle randomizes non-responders to add external facilitation or to add internal and external facilitation. And then as you go forward, within those different facilitation conditions, there may be responders or non-responders. So again, the study is adaptive, randomizing again either to continuing with the current type of facilitation and assessment or to adding additional facilitation. So as we go forward, this is a really nice
example of how you can learn something about how sites are responding to different types
of facilitation. And then learn more about how adding different
components to the implementation strategy-and here the implementation strategy is the different
types of facilitation-how you add to that and learn characteristics of these clinics
that may be related to whether they respond or not, and then testing whether adding components has an effect in terms of improving the implementation of care. So it's just a really nice, elegant design, I think.
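To illustrate that sequential randomization logic, here is a small Python sketch loosely modeled on the ADEPT structure; the site names, the response rule, and the probabilities are made up, so treat it as a sketch of the idea rather than the actual trial algorithm.

# Sketch of the two-stage randomization logic in a SMART design
# (hypothetical sites, response rule, and probabilities).
import numpy as np

rng = np.random.default_rng(3)
sites = [f"site_{i}" for i in range(12)]

# Stage 1: every site gets the first-line implementation support (run-in phase).
stage1_response = {s: rng.random() < 0.4 for s in sites}  # hypothetical response indicator

# Stage 2: non-responders are re-randomized between two augmentation options.
assignments = {}
for site in sites:
    if stage1_response[site]:
        assignments[site] = "continue first-line support"
    else:
        assignments[site] = rng.choice(
            ["add external facilitation", "add internal + external facilitation"]
        )

for site, arm in assignments.items():
    print(site, "->", arm)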
So next I'm going to talk about effectiveness-implementation hybrid designs. And this came about, I think, because, and I have talked to the authors of this paper about this a little bit, implementation researchers were starting to
do some of this, but there wasn’t a name for it-this idea of looking at clinical or public
health intervention effectiveness at the same time that we’re looking at implementation
issues or implementation strategies. So there's a dual focus, a priori, on assessing both intervention effectiveness and an implementation strategy. There are three types, but the overall goal is to accelerate the transition from effectiveness to implementation trials. So we want to shorten the time that
we go from testing clinical and public health interventions to getting these in the field
through those sort of implementation strategies. And these really are unique to implementation
research. So in effectiveness trials we’re really testing
the clinical intervention. It can be a behavioral or psychosocial intervention, it could be a drug or pharmacological intervention, it could be a device, it could be the use of a different approach to surgical care. The unit of randomization is generally the individual or the patient, the unit of analysis is usually the patient, and the outcomes are usually health outcomes. In an implementation trial, we're focusing on
the implementation strategy. It could be facilitation, it could be training
models or training approaches, it could be the use of clinical reminders, it could be
different types of supervision. Unit of randomization is often a provider
[inaudible] or the clinic or even a service system. The unit of analysis then is also a provider,
clinic, or system. And the focus is on adoption and appropriate use of the intervention; the implementation outcomes could include the fidelity with which the intervention is being used. But we may also want to look at the fidelity of the implementation strategy itself, how well that strategy is delivered. So the three types go from a focus on clinical effectiveness on the left to a focus on implementation effectiveness, the implementation strategy, on the right. First is
the hybrid type one, where the real focus is on the clinical intervention and we’re
just observing or gathering information on implementation issues. So we may do some qualitative work and look at factors that are believed to affect implementation, or look at organizational context. For a hybrid type two, we're going to be testing
both the clinical intervention and an implementation intervention. So that we’re not just observing the implementation
strategy, now we’re testing an implementation strategy. So if we have a leadership approach, we’re
maybe testing one group that gets the leadership development and another group that doesn't. In a hybrid type three, we're really focusing on the implementation strategy, and we may be observing just the client-level or patient-level outcomes, maybe through an administrative database. So we're really moving from observation of
implementation to equal testing of clinical intervention and implementation strategy and
moving more to a real focus in hybrid type three on testing the implementation strategy
with less focus on the clinical outcomes. So those are really the three hybrid types. I have some more slides that describe this in more detail for you, but I'm going to move on now. Hybrid type one, type two, type three, so you'll have those slides available, and basically they just give examples of what
I just talked about. And so I want to talk briefly also about mixed
method designs. Many of my studies incorporate mixed methods, so I really like this particular slide, with a couple driving over a bridge. She says, "Who would have thought that a structure of steel could be so beautiful, strong and delicate at the same time." That's the qualitative perspective. And the quantitative perspective: "It shortens my commute to work by 12.7 miles." So these are really different approaches to the same phenomenon, but we can learn a lot by bringing those designs together. So mixed method designs have to do with the collection and integration of qualitative and quantitative data. Mixed methods are a really good approach
for understanding processes, context, and complexity. And we can embed these two together using
qualitative methods including interviews, focus groups, observation, review of documents
such as policy documents for example. With quantitative data including, you know,
surveys, including outcome data, including data about the nature of the context.

So in terms of mixed method designs, there are a number of different typologies out there. Dr. Lawrence Palinkas, who's at the University of Southern California, has done some nice work talking about mixed methods designs in implementation
research. And so there are three larger approaches. The first is concurrent mixed methods, at the top here, where you're collecting your qualitative and quantitative data concurrently, at the same time as you go, and then bringing your results together. One of the benefits of this kind of parallel, concurrent approach is that you're collecting the data and the phenomena are happening at the same point in time, so people don't have to remember back or [inaudible] or do a retrospective kind of reconstruction. You're really collecting at the same
time. But sometimes you know you may want to do
a sequential design where you’re collecting qualitative data and then quantitative data. And I’ll just say here when you see like the
uppercase letters versus lowercase, that just indicates which data are primary in that design. So in the concurrent design, both QUAL and QUANT are primary. In the sequential design where QUAL comes first and then quant, you may be more interested in your qualitative data than in your quantitative data in terms of what's going to lead to your results. Where you often see a sequential design with qualitative and then quant is in measure development. So you have kind of an inductive approach. Sometimes you may use your quantitative data
primary and then bring in your qualitative data to learn more about the phenomenon that
you're measuring. Or, in the quantitative-first sequential design, you can use your quantitative data, for example, for purposive sampling, so that you're really using those data in a way that informs how you're going to ask the qualitative questions. And then you can have an embedded sequential design where these really weave together as you go forward.

Some other ways to think about these are as design types. We've talked about the embedded design, where you're collecting qualitative and quantitative data to get a broader or more comprehensive understanding
of a complex context and process. So you’re conducting one within the other
type of study, and that’s often related to a sequential design. So an embedded example, let’s say you’re conducting
a randomized trial to test different strategies for training community health workers to deliver
smoking cessation. So you might use that embedded design to really
get at the issues that community health workers are facing in delivering those implementation
strategies as you go forward. So you need to be thinking about what type of quantitative data you would collect over time: how often? From whom? And then the nature of the qualitative data. Would you want to have observational data from the training sessions, of how workers are delivering the smoking cessation intervention? Would those types of observational approaches be best, or would you want to ask the community health workers about their experiences with the training and how well it led to their being able to deliver the
smoking cessation? There are a number of different approaches
that you can use in thinking about the quantitative and the qualitative. In the explanatory model, you may move from
the quantitative to the qualitative, to help explain or build on initial quantitative results. And so you can use quantitative participant characteristics, as I mentioned, for purposive sampling, so you're identifying those who can tell you the most, maybe those who've done really well with the implementation strategy, or those who've done poorly, and get those diametrically opposed perspectives. And that's a sequential design. So you may, for example, administer a survey
to health practitioners to understand their attitudes about the HPV vaccine and responses
could range across the spectrum from positive to negative. And so the explanatory design can help you think about how you would use the quantitative data. What specific questions in your quantitative data would inform your sampling approach? What types of qualitative data or approaches to data collection would you want, and from whom? And would you want to do that just once, or at different places in the process? So for example, as health practitioners gained experience with delivery of the HPV vaccine and engagement of patients, would their perspective change? Would you expect their perspective to change? And if you did, that might lead you to a more prospective design where you would maybe collect data early on and collect data later on to
see if there has been attitudinal change and what was related to the attitudinal change. And for exploratory, your quantitative data
can help explain or build on initial qualitative results, and this has to do with something I'm going to talk about in a minute around expansion of results. Exploration may be needed because we're just learning about the phenomenon, or we're trying to understand the context in which we're implementing the new practice. So it may mean beginning with the qualitative and moving to the quantitative in a more sequential fashion. Let's take the example of HPV self-sampling, where self-sampling is available in the community but very few women use it. You want to understand barriers to correct use of self-sampling and develop an instrument to measure that, so you might use the exploratory
design. You need to think about what type of qualitative
data you need and from whom you want to collect the data, and then how you would use those data to identify domains or constructs from which you would develop quantitative measures. So you would use the qualitative data to help identify the important domains, and from those develop quantitative assessments and specific items for your measures.

So another way to think about this is by the function of the mixed methods, and in a way we've already been talking about these: convergence, complementarity, exploration and development, and expansion. So in convergence, as a function of mixed methods, we're thinking about how our quantitative and qualitative
data come together regarding the same phenomenon. And we may want to collect our data concurrently
and use that for interpretation. And so qualitative and quantitative may be
equal in that approach. For complementarity, we may want to understand a phenomenon more completely, so we may have a sequential data collection approach and do the mixing of the methods in the analysis and interpretation, moving from quantitative to qualitative. We just talked about the more exploratory and developmental functions, so I'll skip to expansion. That's more of an explanatory model where we want to assess different phenomena using different methods. We may use an embedded sequential approach, and our analysis and interpretation will help us go a little deeper into our findings with
one approach. So our quantitative may inform our qualitative
or our qualitative may inform our quantitative. It really can vary as we’re thinking about
that. And so I just want to give one example of
a study that used those design types and I’m going to focus on convergence and expansion
here. So this is a mixed-methods study of statewide
implementation of one evidence-based practice, in this case it’s called SafeCare, which is
a child neglect intervention. And in the state there were six regions, and
the state agreed to allow regions to be assigned to either receiving the evidence-based intervention
or services as usual, and then teams within the regions were randomized to fidelity monitoring delivered either with coaching or without coaching. So this study combines exploratory and confirmatory
approaches, and it’s longitudinal at the team level. So the data were collected from team members
both quantitatively, through web surveys about organizational context, and through focus groups and surveys with service providers and system leaders as well. And so we're examining the reciprocal effects of implementation of an evidence-based practice on the service system and organization, as well as how characteristics of the organization impact implementation. For this particular project, we focused on the impact of implementation on staff retention, because turnover of personnel during implementation
of evidence-based interventions can be a critical concern. So what we found quantitatively was that where
the evidence-based practice was implemented with ongoing monitoring and coaching we had
significantly better staff retention. So you can see that in the survival analysis
estimate on the left. And then on the right you can look at the
annualized turnover by condition. So for the SafeCare condition with ongoing consultation and coaching, the annualized turnover rate was 14.9%, compared to more than double that in all the other conditions in this experimentally controlled study.
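As a rough illustration of the kind of summary behind that slide, here is a short Python sketch that computes annualized turnover by condition from provider-level follow-up data. The data frame and the simple events-per-person-year definition are assumptions for illustration, not the study's actual data or its survival analysis.

# Sketch of summarizing annualized turnover by condition (illustrative data only).
import pandas as pd

providers = pd.DataFrame({
    "condition": ["SafeCare+coaching", "SafeCare+coaching", "Services as usual",
                  "Services as usual", "SafeCare only", "SafeCare only"],
    "months_observed": [24, 18, 12, 6, 20, 9],
    "left_agency": [0, 1, 1, 1, 0, 1],  # 1 = turnover event during follow-up
})

summary = providers.groupby("condition").agg(
    person_years=("months_observed", lambda m: m.sum() / 12.0),
    events=("left_agency", "sum"),
)
# Annualized turnover here is events divided by person-years of observation.
summary["annualized_turnover"] = summary["events"] / summary["person_years"]
print(summary)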
But when we look at the quantitative and qualitative together, it tells us a little bit more about the study. So we had specific questions, and when you're
thinking about mixed methods you need to be thinking about how you’re going to present
and mix your mixed methods. And this is one area where grant proposals can really fail: if you say you're using mixed methods but you don't talk about how you're mixing them, how your quantitative informs your qualitative or your qualitative informs your quantitative, your reviewers will really jump on that. So in this slide you can see that
we have a question, and the question is the same for quantitative and qualitative data. “Does SafeCare implementation increase the
risk of turnover?” Quantitatively, the answer is “no.” Providers in the SafeCare monitored conditions
had a greater likelihood of staying with their agencies, and qualitatively we found in the
data that many providers reported satisfaction with the structure provided by the evidence-based
practice. And none of the providers reported leaving primarily because of their involvement in the effectiveness trial of the evidence-based practice. The other question was "Does fidelity monitoring or coaching increase the risk of turnover?" Quantitatively, the answer also is no; the home-based providers delivering the evidence-based practice, SafeCare, in the monitored conditions had a greater likelihood of staying with their agencies. And qualitatively, many of the providers reported satisfaction with the support they received from their monitors, or coaches. So that answers those questions. And then we had additional hypotheses about perceived job autonomy, and the relationship of job autonomy to turnover supported the conclusion that turnover is not a function of the evidence-based practice. So in the interest of time, I'm just going
to move onto this idea of expansion. So as we mentioned previously, expansion tells
you more or takes you more in depth about a phenomenon. So here the question quantitatively was “does
SafeCare implementation and/or monitoring lead to increased turnover?” The answer is “no.” The home-based providers in the SafeCare condition
had a greater likelihood of staying with their agencies. Qualitatively, though, we can ask "why are they more likely to stay?" And what our qualitative data told us is that our providers liked the structure, especially early-career providers; they liked the structure that SafeCare provides for the services. They felt that the support they received from their coaches or monitors was like "free" supervision. It was seen as supportive, and the providers also supported one another in applying the evidence-based practice, and they developed a professional identity around delivery of the model. Whereas the services-as-usual providers reported a decline in morale due to factors unrelated to the practice, for example conflicts with supervisors, changes in leadership, and fewer opportunities for learning or for having a team identity. So in the case of expansion, being
able to place your quantitative and qualitative findings side by side really helps illustrate how you're using and how you're mixing your mixed methods.

So we've covered a lot in this time, and there's
a lot to learn about designs. I’ll refer you back to the Hendricks Brown
article in the Annual Review of Public Health for a really nice overview of many designs, with a lot more detail than I could present here. But the best study design is, of course, the one that can answer your research questions. Your selection of the study design involves issues of feasibility, cost, resources, timing, the team you're putting together, and being able to answer your implementation research questions, and this is where those hybrid designs come in as well, thinking about that balance of focus on implementation versus
effectiveness. But on the plus side, there really are a variety
of designs to choose from and consider. So you know with your research team, really
sit down and say, "Here are our questions. We're focusing this much on implementation, this much on effectiveness. What are the possible designs? Do we have the ability to randomize? Do we randomize at the cluster level, the
individual level? Do we want to bring in qualitative methods? If so, what’s the best way to do that? How are we going to use the two methods together
most effectively so that we can really learn how to bring evidence-based intervention to
scale to promote better clinical outcomes and better population and public health impact?” So I’ll leave you with that. Thank you for staying with me for this talk,
and I have my contact information. If there are questions, you can direct them to me or to your [inaudible] mentors and other folks that are working with you at [inaudible]. Thanks very much.
