Advanced Topics for Implementation Science Research: Hybrid Designs

Please note that today’s session is being
recorded. If you have any objections, you may disconnect at this time. Thank you very much. On behalf of the National Cancer Institute, I wish to welcome everyone to the July Advanced
Topics in Implementation Science Webinar. Today we are delighted to welcome our presenter,
Dr. Geoffrey Curran, who will be joined by our own Dr. David Chambers, who will moderate
the session. A very brief word about logistics and then we’ll be off. We ask that if you
are not already on mute, to please keep your phone on mute for the duration of today’s
presentation. As mentioned, it is being recorded and muting all lines will help us to avoid
any background noise. We encourage questions. They can be submitted by using the Q&A feature
on the right-hand side of the WebEx interface. Type your question in the provided field and
hit submit. Feel free to submit your questions at any time, but we’ll be opening the session for questions when the speakers have finished. Without any further ado, it’s my pleasure to turn the meeting over to Dr. David Chambers.>>Okay. Thank you, Sarah, and welcome to everyone
who is joining us today. As many of you who have been part of these webinars in the past have noticed, this has been our attempt to try to go beyond some of the overviews around dissemination and implementation research into more advanced topics. A number of the training programs that we’ve been able to do, whether it’s Technical Assistance Workshops or weeklong trainings, have been able to give, I would say, exposure to a smattering of different topics. But sometimes we don’t have as much depth to be able to go into specific discussions, to have Q&As around specific models, around specific designs, etc. Last month, we completed a whole series of webinars that were focusing on different dissemination and implementation research frameworks. And with Geoff Curran’s
presentation, we’re kicking off a newer series that delves into different research designs.
In recent years, at least from my — what I picked up, there’s been increased interest
in this idea of how might you be able to learn both about the effectiveness of different
interventions while also looking at the implementation of those interventions in practice. And really
Geoff Curran, who you’ll see a nice, smiling photo of right here, who’s the Director of
the Center for Implementation Research and a Professor at UAMS, also involved in the
VA, has been leading the field in helping us to think through different ways of combining
studies that are both looking at the effectiveness of a given intervention with a range of information
about how those interventions are implemented and with what strategies in community and
clinical practice. So the way that this will go, Geoff will give his presentation, as Sarah
had mentioned. We would love to have questions at all times. And we’ll get to those at the
conclusion of his presentation. And so he’ll start presenting and then we’ll use as much
time to answer whatever questions you have. As Sarah had mentioned, this is intended to
be archived, so if you want to tell anyone about it down the road, it will be there for
your pleasure, and we’ll also try and link to a discussion on our Research to Reality
website, where you can continue to ask questions or share your thoughts about this, as well
as other topics. So without further ado, let me turn it over to Geoff, and thanks in advance
for the wisdom that you’re about to impart.>>Sure, great. Thank you very much. Are you
hearing me okay? Okay. I see that it’s working. Okay. Well, thank you, David. I think it’s
kind of fun for you folks to have the first episode of the design-focused webinars from me, on the nice, messy topic of hybrid designs. So I think I will start by providing some of the arguments that we originally proposed in the Medical Care paper on this topic, which
was published in 2012. And so I think that sort of the key argument that we proposed
was that — we thought that the speed of moving research findings into routine adoption could
be improved by considering these hybrid designs that combine elements of effectiveness and
implementation research. If indeed the goal of translational science is to speed that
process, we were looking, you know, at potential ways to merge traditional effectiveness trial
designs with implementation research. In other words, we might not need to wait for perfect
effectiveness data before moving into implementation research, and that it might be very possible
and worthwhile to backfill, if you will, effectiveness data while we are looking at and testing implementation
strategies. And we will talk about today sort of the numerous situations where there is
momentum from varying levels to move to implementation and to support rollouts of interventions that
really have not yet, you know, received a full look, a full attention, you know, from
the effectiveness standpoint. And we also argue that we can and should study clinical
effectiveness as it relates to the fidelity of implementation. We can certainly expect
in many cases that as fidelity during implementation and uptake of an intervention varies, we might well expect the outcomes of that intervention to vary, and if so, shouldn’t we be trying to
look at that? Let me offer some more introductory thoughts on this topic. Researchers were already
doing these kinds of blends of these trial designs and approaches, you know, really well
before some of my colleagues and I started to speak and write about hybrid designs per
se. In the paper in 2012, we were trying to bring some attention to the issues surrounding
these complicated combinations, along with trying to provide some direction, examples
in the published literature at the time, and also some recommendations for moving forward
into this area. Since then, we’ve seen more and more people trying hybrid designs. They’re
writing grants, turning those in. They’re being funded. They’re publishing protocols,
results and lessons learned. And as a result of that, my colleagues and I from the first
paper have recently begun a process to work on a couple of papers we think to sort of
revise and extend, you know, our remarks, if you will, from the earlier paper, and I
will present some of these sort of new ideas on the hybrid designs towards the end of the
talk. I think I need to try to cover a few of the terms and how I will use them today
to make sure that we are being clear. In the sort of hybrid design context, when we’re
talking about interventions really at the two different levels, the language can actually
get, you know, quite complicated. In the title today and some of the slides, I will, you
know, talk about clinical interventions, but we’re really referring to a range of prevention,
intervention, public health, and other kinds of interventions. Sometimes it gets, I guess, hard to say clinical/prevention/public health at every mention of an intervention. So I will
sometimes use only clinical, but I’m referring, you know, really to sort of a wider group
there. I will use the term intervention to refer to the clinical/prevention practice
that we have an interest in exploring. And today, I will largely use the term strategy
to refer to the implementation-support activities/tools that we have an interest in exploring. Both
are really interventions whose effectiveness we are interested in, in trying to learn about
or demonstrate, but in the hybrid design context, it can get confusing. In this talk, I will
usually use the term effectiveness when referring to the outcomes of clinical/prevention interventions.
Okay. So the notion of these hybrid designs really came as a reaction to what we were
seeing in the sort of traditional research pipeline, which is depicted here, where there’s
a period of time to be focused on efficacy studies on interventions. When there’s enough
positive evidence there, we then move to effectiveness studies on those interventions, widening their
scope, testing them under more real-world conditions, in less controlled conditions.
And once we see strong effectiveness data there, we then move on to implementation research.
And in the late 1990s, early 2000s, you know, folks like Ken Wells and Russ Glasgow and
others were talking about some of the ways that efficacy and effectiveness research designs
could be blended or combined to try to speed that process. And about 10 years later, some
of us were starting to ask similar questions on this relationship or gap, if you will,
between effectiveness studies on interventions and implementation research. And so our hybrids
then that we’ve presented here are sort of spatially fitting in this sort of gap area
here. Okay? So in the 2012 paper, we proposed three hybrid types that occupied that gap
in the previous pipeline between effectiveness research studies and implementation research.
We coined a Hybrid Type 1, which is more closely situated to effectiveness research, you know,
where here the main goal of the study is to test the effectiveness of a clinical/prevention
intervention, while at the same time spending some time and effort observing and gathering
information on implementation during the trial and trying to do some preparatory work to
understand the needs, the barriers that might be associated with implementation after the
study and sort of preparatory work towards developing potential future implementation
interventions. If I might skip to Hybrid Type 3, which is sort of the inverse, where here,
the primary aim of a Hybrid Type 3 study is to test an implementation strategy in a randomized
or otherwise controlled manner, while at the same time, collecting, gathering information
on the clinical/prevention outcomes that are resulting from these implementation efforts.
In the middle, we have sort of the most complicated Hybrid Type 2 design, which we thought about
as a more equal look at, or test of, both clinical/preventive interventions and implementation strategies.
Okay? So I’ll try to go into some more detail on each. I just moved the slide, and there
doesn’t seem to be — oh, I guess it’s happening sort of one at a time. I’ve already sort of
done the definition, you know, here. For a Hybrid Type 1 design, it’s largely testing the clinical intervention, with a secondary aim on implementation information.
And as we have seen in operationalizations, these projects are pretty much 80, 85% effectiveness
trials with an aim at the end which proposes some kind of process evaluation or other evaluation
of implementation during the trial in a preparatory way to finding out what might be necessary
in a future implementation strategy test. In the paper, we tried to give some guidance as to when we thought these hybrid types might be best indicated. And for the Hybrid
Type 1, we were thinking that in the ideal case, that some effectiveness data were already
available, but that intervention would be more likely to, you know, move towards implementation
rapidly if key implementation factors were explored, you know, in a Hybrid 1 trial. So
a Hybrid 1 trial might not necessarily be the first effectiveness trial in a certain area, but it might occur after some effectiveness data have already accumulated.
And we also thought that sort of the safety issues should, you know, already have been
dealt with and that we aren’t exploring implementation of interventions which might have some safety
concerns still. I will mention two examples briefly. The Zoellner et al. paper was, I believe, included with the packet of papers for today’s talk. So this is a pragmatic
trial of two interventions aimed at reducing consumption of sugar-sweetened beverages.
What’s really nice here is that the intervention planning for this trial involved discussions
and choices being made around future implementation issues. So they were trying to create intervention
elements that were more likely to be implementable, around low startup costs, for example. And
they chose a mixed method process evaluation of the reach that they achieved in the trial,
meaning how many eligible patients they were actually able to reach and provide the intervention to, but also data on implementation: how often and how well the expected intervention behaviors were provided. A second example comes from me and other colleagues, who
did an effectiveness trial of the CALM intervention, a multifaceted intervention for anxiety disorders
in primary care clinics. And in this case also, the intervention planning involved trying
to forecast implementability and to forecast some of the barriers that might be faced in
the development of the intervention itself. But there was a process evaluation that happened
towards the end of the trial and just after. Ours was largely qualitative, focused on multiple
stakeholder interviews — providers, nurses, clinic staff, patients — trying to get a
sense of how implementation went during the trial. Were the implementation supports provided
enough? What more might be necessary for these clinics in the study to keep doing the intervention
after the study was over? And also, to get their recommendations on what new clinics
that weren’t part of the study might need, relative to implementation strategies and
support. I will move on to Hybrid Type 2 designs. And so here the expected focus is a dual focus. It could come in an equally focused factorial-type design, where there could be a clinical/prevention randomized effectiveness trial nested in a randomized implementation trial at the provider or clinic level, for
example. And there are some of those kinds of studies out there, but not too, too many,
for reasons that I might hit on briefly later. The other kind of Hybrid Type 2s that we talked
about in the initial paper do seem to be the kind that are most talked about and proposed
now, which involve a clinical trial (a randomized, patient-level trial) nested in a pilot study of an implementation strategy. So the implementation strategy is being explored at a pilot level: feasibility, acceptability. It’s a non-randomized element of the larger
study. The indications we offered for moving into the Type 2 area were that there should be clinical/prevention effectiveness data available, though perhaps not in the context or population of interest for your trial; and if there are some, great. There also should already be data on barriers and facilitators to implementation, and some of that pre-implementation work to help develop the implementation strategy, whether as a pilot or in a more fully formed version, really needs to already have been done. And we also came up
with the term of implementation momentum. And with that, we were really trying to talk
about the context at the system or policy level and, you know, is there a situation
within a healthcare system, the VA system, a national healthcare system, or another sort of policy-driven context, where there seems to be an appetite, and plans coming up, for mandates and rollouts that will possibly be coming from more of a top-down approach. And in that context,
might it be possible to use a hybrid design, either a 2 or a 3, to try to contribute, you
know, to that rollout while also collecting the still needed effectiveness data? For this
one, I will give one example. It’s from Cully et al in 2012. They proposed a clinical trial
of a brief psychotherapy for treating depression and anxiety in the VA. Already good evidence
of these kinds of interventions, but not in the brief context. And they proposed a patient
randomized trial in the context of a pilot implementation strategy, which had three components
to it — online training, audit and feedback, and facilitation. And so the analysis for
the patient randomized trial was sort of the classic intent to treat analysis of those
clinical outcomes, but at the same time, they had an analysis arm around feasibility, acceptability
and what they called preliminary effectiveness of the implementation strategy. So they were
collecting data on, you know, knowledge acquisition from their online training; they were measuring
and tracking fidelity of these providers to the manualized therapy, and making adjustments
and intervening among providers who were struggling with lower fidelity. And I’ll touch later
on sort of the complexities that this kind of design adds, you know, when fidelity is
not, quote unquote, “assured” as in effectiveness trials but is, in these hybrid models, more of an outcome that’s being observed, and how best to capture that while still trying to get good-quality effectiveness data. They also had a qualitative component on implementability.
They tried to measure the time spent, you know, delivering the implementation strategy,
and they also measured sustainability of the brief CBT after the trial. And all of this
was openly framed as preparatory to a future implementation trial of this strategy that
they were pilot testing. So in their frame, sort of, you know, the next step was a much
higher focus on the implementation strategy and trying to test that in a controlled way,
whether that was a Hybrid Type 3 or not. So moving on to the Hybrid Type 3 designs, again,
what we’re talking about here is, you know, a primary focus on testing implementation
strategies while at the same time trying to collect data on clinical outcomes of the intervention.
And some of the Hybrid Type 3s that we’ve seen thus far are quite large, from the sample
size standpoint, at the clinic or provider level, which often means that primary data collection on the health/prevention/clinical outcomes is very difficult. And so one of the questions here is how much quality outcomes data you can get from large databases or medical records to best inform the goals of a Hybrid Type
3 design? Relative to some of the indications for when these designs might be appropriate,
the first listed here is in the context of a high level of need for implementation, despite
a limited evidence base. So in other words, a strong momentum context for implementation;
implementation sooner rather than later. Some form of a mandate to implement something in
a, you know, large system, when the effectiveness evidence just really isn’t quite there. And
I can say from working in the VA that this is a context which actually happens relatively frequently, and my example coming up, or one of the two coming up, really exemplifies this situation. We also talked about Hybrid Type 3s being useful when the clinical/prevention intervention data are available and strong, but the effects are suspected to be vulnerable to implementation fidelity. This might more often be the case with complicated, complex clinical/prevention interventions and/or when the implementation strategies are also multi-faceted. And so our argument here is: if effectiveness is expected to vary, try to collect as much of those outcomes data as possible, to try to track those outcomes to the implementation variation (a rough analytic sketch of that idea follows below). We also said that for these Hybrid Type 3 designs, there really should already be data indicating that the implementation strategies to be tested are reasonably feasible and acceptable in the context of interest. So having that pilot data around feasibility, etc., is really necessary before moving into a Hybrid Type 3 design.
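To make the idea of tracking clinical outcomes against implementation variation a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not from the talk or from any of the studies mentioned, and every variable name, scale, and effect size in it is an assumption made purely for illustration: it simulates patients nested in sites with differing fidelity, then fits a mixed-effects model relating outcomes to site-level fidelity while respecting the clustering.

```python
# Hypothetical sketch only: names, scales, and effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for site in range(30):                          # e.g., 30 clinics or programs
    fidelity = rng.uniform(0.4, 1.0)            # site-level fidelity score (0-1)
    site_effect = rng.normal(0, 1.0)            # unobserved site-to-site variation
    for _ in range(20):                         # patients per site
        baseline = rng.normal(15, 4)            # baseline symptom score (assumed scale)
        # Assumed data-generating story: higher fidelity -> larger improvement
        outcome = baseline - 3.0 * fidelity + site_effect + rng.normal(0, 2)
        rows.append({"site": site, "fidelity": fidelity,
                     "baseline": baseline, "outcome": outcome})
df = pd.DataFrame(rows)

# Mixed-effects model: clinical outcome as a function of site fidelity,
# with a random intercept per site to account for the clustered design.
model = smf.mixedlm("outcome ~ fidelity + baseline", df, groups=df["site"])
print(model.fit().summary())
```

In a real Hybrid Type 3, the fidelity measure, outcome scale, and covariates would of course come from the study itself; the point is only that observed fidelity can be modeled as a predictor of clinical outcomes rather than treated as fixed.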
So I have two examples here. One is from the VA, from Kilbourne et al. This was exactly the context mentioned on the last
slide, that there was a system-level mandated rollout that was going to be supported by
the VA on Re-Engage, an outreach program for veterans with serious mental illness, who
had been lost to care. And so these folks were able to partner, you know, with this
rollout mandate to do a study comparing a standard implementation strategy with one
with enhancements. I won’t really go into what those are, but one of the really cool things about this trial is that it was an adaptive trial at the implementation-trial level. Sites started with the standard implementation strategy, and those that failed to reach a certain preset fidelity level under that initial attempt were then randomized to the enhanced strategy, to see to what extent that moved the dial among sites who did not do well with the more basic approach. Hopefully for this webinar series, a future topic will be devoted to these adaptive trials, because they’re really very interesting and very helpful.
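As a purely illustrative aside, the adaptive logic just described can be sketched roughly as follows; this is not the actual Re-Engage protocol, and the threshold value, arm labels, and the assumption that non-responding sites are randomized between continuing as-is and receiving enhancements are all made up for the example.

```python
# Hypothetical sketch only: threshold, arm labels, and the randomization scheme
# for non-responding sites are illustrative assumptions, not the study protocol.
import random

FIDELITY_THRESHOLD = 0.80  # assumed preset cut-off; the real value is study-specific

def assign_phase_two(site_fidelity: dict, seed: int = 0) -> dict:
    """Map each site ID to its phase-two condition based on phase-one fidelity."""
    rng = random.Random(seed)
    assignments = {}
    for site, fidelity in site_fidelity.items():
        if fidelity >= FIDELITY_THRESHOLD:
            # Responding sites simply continue with the standard strategy.
            assignments[site] = "standard (continue)"
        else:
            # Non-responding sites are randomized to a comparison of arms.
            assignments[site] = rng.choice(["standard (continue)", "enhanced"])
    return assignments

print(assign_phase_two({"site_A": 0.92, "site_B": 0.55, "site_C": 0.70}))
```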
So in this study, the main outcomes were certainly around implementation: the extent of the behaviors of locating and trying to contact veterans that were part of the intervention. They also had a very cool opportunity to collect context variables at all of the 150-plus VA programs that were part of this, to help predict who might do better in the standard condition and what kinds of programs needed the extra implementation support. And then they also collected qualitative data from the providers who were charged with doing the intervention, on barriers and facilitators to implementation.
The secondary outcomes were focused on the main goals of the intervention: looking at percentages of veterans who were re-engaged and measuring a range of service utilization, with those data really being used as proxies, if you will, for clinical outcomes. They didn’t have in these large databases the kinds of direct outcome measures for serious mental illness. But they did have a lot of service use data, and they could come up with service use profiles that can arguably be read as indicating more positive outcomes, and other utilization profiles that might indicate less positive outcomes. And they did have mortality measures.
I will also mention a study that colleagues of mine here at UAMS are working on right now, for the October deadline, actually.
We’re looking at a regional implementation trial here in Arkansas of a standard versus
standard-plus implementation strategies to promote the uptake of behavioral activation, an evidence-based practice for depression. But in this case, lay providers in churches are being trained to do this intervention. And as of right now, the pilot effectiveness data there are strong. So in this study, we are proposing main outcomes around adoption and fidelity. Are folks offering the expected number of sessions? Are enough people showing up to receive them? Are the lay providers delivering the manualized depression intervention with good fidelity? For the secondary outcomes in this case, because it’s relatively small (35 churches with only five to 10 folks in each group), we are actually able to do primary data collection around depressive symptoms, activities, and quality of life. Okay. So I will move into the last few slides here to
talk about some of the challenges that folks have been facing in trying to, you know, write
these hybrid protocols and to get them funded. And many of these points I will cover are
not actually in the current paper, but are new issues that we will be writing about more.
Perhaps one of the largest challenges is on this issue of evidence. And we are certainly
finding that the comfort level with these hybrid designs varies certainly around the
issue of evidence. And we are seeing, you know, debates, very understandable and very
good debates around whether doing implementation research and testing implementation strategies
before a solid record of effectiveness data is actually a good idea. And some folks, you
know, are predisposed in the negative on this. Even among people who entertain that hybrids
might be a good idea, we have seen debates on a case-by-case basis on what level of clinical
effectiveness data is enough to propose the hybrid design that you want to do right now.
So what level of indirect evidence is acceptable? And in some of my own proposals and those
of colleagues and other people, we have seen, you know, grant reviews varying at times widely
on this. There are also cases where the quality of effectiveness data may never be, quote unquote, “strong,” or where you can’t randomize to interventions; suicide prevention interventions, for example. We’re seeing challenges too around control versus flexibility, and what to do in cases, for example in that Cully Hybrid 2 trial, where fidelity to the intervention is being allowed to vary: how do we measure and try to control for that variation, and how does that affect the clinical outcomes data? Certainly with power: if you’re adding implementation aims and measures to your effectiveness trial, you will likely have less time and money for the effectiveness side, and vice versa with Hybrid Type 3s and certainly 2s. For outcome measures, you
know, to what extent can we collect useful effectiveness outcomes in Type 3s when those
are not — when, you know, those patients are not being randomized, and we might not
have the time or money to collect more solid primary evidence? And with most of these hybrid designs, we see a combination of quantitative and qualitative methods, and one has to have the expertise on the team to carry those out. I
have a few more slides, but I have a feeling I should probably wrap up. So as I mentioned
earlier, we’re thinking about some new papers now, trying to document the current use of hybrids, the challenges reported, the difficult study-design trade-offs, and measures. We would like to come up with some reporting guidelines or minimum standards, if you will, for each of these hybrids. We want to talk some about how models and theories of implementation might relate to these hybrids. We are seeing a lot of people using the RE-AIM framework in hybrids to guide the data collection. At the moment, we are pondering arguing for, or maybe pushing for, including implementation concerns in more effectiveness trials. You know, if most effectiveness trials by definition are precursors to implementation efforts, then should we possibly be seeing more Hybrid Type 1s? For Hybrid Type 2s, we’re not seeing a whole lot of dual-randomized designs, and in those we are seeing pretty simple interventions and simple uptake strategies being tested, sort of unique-element models there. And we’re wondering if most of these do or should lean more towards the
Hybrid 1 side or Hybrid 3 side. And I will close with this. At the 2014 D&I Meeting,
there was a session there on hybrid designs and hybrid studies, and at the end of the
formal talk, during the Q&A, somebody got up there and said, “Well, when wouldn’t we
want to collect clinical outcomes during an implementation trial?” In other words, like
flipping it on its head. Why wouldn’t we want to do this? And that sort of called into question
an assumption that we were making at the start of talking about these, that there likely
comes a time when we don’t need to worry about the clinical effectiveness data anymore, and
if so, good; that we should only focus on uptake. But then you read the papers like,
you know, David’s paper from 2013, where he and colleagues are talking about — and I
think quite rightly so — that interventions and implementation strategies are not frozen,
or they shouldn’t be, and we shouldn’t only expect that, as we move to do implementation, their effectiveness might drop; indeed, adaptations that happen during implementation, and that vary by implementation strategy, could lead to improved outcomes. So try to flip the question on its head and ask when we might not need to look at these outcomes; perhaps we should try to look at them in these Hybrid 3 trials and beyond, as much as possible. So
with that, I will stop, and hopefully there’s some time left for some questions.>>Geoff, thank you so much for your presentation.
We’re going to go ahead and open it up for questions. As a quick reminder, questions
can be submitted using the Q&A feature on the right-hand side of your screen. Type your
question in the provided field and hit submit. And I see we have a couple dropping in, so
I’m going to go ahead and turn it back over to David.>>Yes. In case someone doesn’t have access
to seeing the questions in front of you, I will just ask it and Geoff, it would be great
to get your take on it. So here’s one from Margaret Farrell [assumed spelling]. “I’m intrigued by your discussion around evidence and appreciate your bringing it up. At what point do you begin to determine that the original intervention was strong enough but that the implementation, for any number of reasons, is less effective, and how could a hybrid design help tease that
out?”>>Can you repeat that for me, one more time?>>Sure. So at what point do you begin to
determine that an original intervention was strong enough but it’s the implementation
that’s less effective? And can a hybrid design help, again, in distinguishing how much of the overall impact on, say, individual outcomes stems from the effectiveness of the intervention versus the strategy through which it was implemented?>>Right. Well, I mean, I think that you would
see that, you know, in a trial that is measuring implementation fidelity and outcomes, and
if you are seeing fidelity vary and you’re seeing outcomes vary, then I think that that’s what you’re seeing. You might see fidelity vary but not see clinical outcomes vary, and in that sense, maybe that intervention is more robust to variation in its fidelity.
But I think that’s a very good question, and I think that these kinds of designs, you know,
could be very useful to try to explore that and actually have data to support it.>>And probably I would think might also be
helpful where you have an ability to look at the mechanisms, as to what were the mechanisms
that might have led the intervention to have an effect but also, what were the mechanisms
that might have underlined the potential effectiveness of the implementation strategy?>>Right. Right. And, you know, how the implementation
strategies might have been more or less effective at bringing to fidelity the activities or
key mechanisms of the original intervention.>>Sure. Right. That makes sense. It looks
like we’ve got a question and an example that’s coming. So we’ll let the questioner finish
putting what they’ve got in. I was wondering — we did have a question — your take for
the group in terms of what you might see as key decision points for investigators who
are interested in hybrid designs, and are there specific ways in which they might decide
either to opt in to this kind of design or opt out? Any advice that you have for folks
who would be saying, “This sounds really interesting. How do I identify what might be the right
circumstances?”>>You know, I think that the key issue or
one key issue is sort of your read of the effectiveness data that are already out there
and how they may or may not relate to the context that you have interest in studying
them in. And I would say generally that if there are already a number of effectiveness studies in some varying contexts that are showing similar or good effectiveness results, and you might be contemplating a similar or the same intervention in a new context or in a new population, that might be a good time to look at a hybrid design, where one might argue that the current evidence in these other varied contexts is strong,
and we have strong enough reason to believe that we will see similar effectiveness in
this new context that we know or we expect will have some differing implementation challenges.
So we want to make progress and move forward quickly to look at these implementation issues
while still being sensitive to providing effectiveness data. But it might not necessarily be the case that we have to do yet another effectiveness-only trial of, for one example, screening and brief intervention for alcohol in primary care, when maybe there’s enough data there; maybe we don’t need another sole effectiveness trial in this minor variant of the population or the context without looking at what seems to be the key issue for that intervention at the moment, which is implementation.>>So Geoff, thank you so much for those answers.
And I know we have more questions coming in on the WebEx platform, but to be cognizant
of folks’ time and upcoming 3:00 p.m. meetings, I want to go ahead and say a big thank you to both David and Geoff, and to let folks know that your feedback is important to us. We encourage you to complete our online evaluation, a link to which will be sent to you via email shortly. As mentioned, we would like to continue the discussion from this webinar online at researchtoreality.cancer.gov, where you can engage the speakers and other participants through discussion forums and posts. And to those who have posted questions but haven’t gotten an answer, we will be following up with
you via email, and we will be going ahead and posting those on researchtoreality.cancer.gov.
But thank you again for joining us for this month’s webinar. We hope to see you soon,
and you may disconnect at this time.>>I also want to say too, you know, thank
you for having me here. For anybody who has questions that haven’t been answered, please
feel free to contact me via email, and I’d be happy to answer any questions.>>Great. And my thanks also, Geoff, to you,
and we’re very appreciative. We’ll keep watching, and I think everyone who’s still on the webinar
will keep watching this space to see the next steps that you’re taking with the hybrid design.
So thanks very much.>>Absolutely. Great. Thank you so very, very
much.>>Okay. Take care.
