Can we improve processes for engaging communities and services in research?

August 2018
Professor Anthony Shakeshaft, National Drug and Alcohol Research Centre, UNSW Sydney

The impact of research that actively engages with communities, non-government organisations and clinical services can be fundamentally influenced by the engagement processes that researchers devise and implement.  The corollary of this proposition is that establishing a pragmatic, or even evidence-based, process of change is likely to improve outcomes for communities and clients of services. So is it feasible to establish an evidence-based process for engaging communities and services in research?

One useful framework for guiding the process of engagement is the program logic. While program logics are not new [1,2], they can precisely identify the goals of a collaboration, ensure clarity about the proposed activities, articulate why those activities are likely to be effective in achieving those goals, and ensure the devised activities are strongly aligned with the goals, the intended outcomes, and measures of both process and outcome. In other words, they can be used to address one of the key weaknesses noted by Stockings et al. in their systematic review of community-based interventions summarised in this edition of Connections, namely, that “Efforts should be made to standardize outcomes and measures,…”. In terms of engaging communities and services in research, we have trialled utilising program logics in two different ways.

What is a program logic?

  • A mechanism to align the goal to be achieved with the program activities and the outcomes
  • Our version requires an articulation of the mechanism of change – that is, stipulating why an activity is likely to achieve the desired goal
  • Our version also splits the program activities into core components that are evidence-based and applicable in any comparable service or setting, and the specific activities that each service needs to devise to give effect to those core components. This achieves both standardisation of programs across services and tailoring of programs to the different circumstances of different services (see the illustrative sketch below)
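To make this structure concrete, the sketch below models a program logic as a simple data structure, with each core component carrying an explicit mechanism of change and a list of service-specific activities. It is a minimal illustration only: the class design and all example content are assumptions made for this piece, not the published program logics cited below.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CoreComponent:
    """An evidence-based component standardised across comparable services."""
    name: str
    mechanism_of_change: str  # why this component should help achieve the goal
    # Flexible, service-specific activities that operationalise the component
    local_activities: List[str] = field(default_factory=list)

@dataclass
class ProgramLogic:
    """Aligns a goal with activities, mechanisms and measurable outcomes."""
    goal: str
    core_components: List[CoreComponent]
    outcomes: List[str]   # intended outcomes
    measures: List[str]   # process and outcome measures

# Hypothetical content for illustration only; not drawn from the cited papers.
example = ProgramLogic(
    goal="Reduce alcohol-related harm among at-risk young people",
    core_components=[
        CoreComponent(
            name="Case management",
            mechanism_of_change="Continuity of support addresses drivers of risk",
            local_activities=["Weekly one-on-one sessions adapted to local staffing"],
        )
    ],
    outcomes=["Reduced alcohol consumption"],
    measures=["Standardised consumption survey at baseline and follow-up"],
)
print(example.core_components[0].mechanism_of_change)
```

Separating the standardised core_components from their flexible local_activities mirrors the split described in the box: the former provide comparability across services, the latter allow tailoring.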

First, we have used them to define prevention or treatment programs in research conducted in partnership with Aboriginal communities (a forthcoming paper led by Dr Mieke Snijder), with an NGO service for high-risk young people [3] (a paper led by Alice Knight) and with a clinical service, namely, Aboriginal drug and alcohol residential rehabilitation services in NSW [4] (a paper led by Dr Alice Munro). One innovation we have applied to standard program logics is to stipulate the need to articulate the proposed mechanism of, or rationale for, change: that is, why a program or its defined activities are likely to achieve the proposed outcomes. Another innovation is to define programs in terms of two separate concepts: core components that are standardised across similar services or communities; and flexible activities that operationalise, or give effect to, those core components. This combination of standardised and flexible program components is aimed at resolving the well-established, but as yet relatively intractable, tension frequently noted in the complex intervention literature between the need for standardisation (to provide adequate comparability across programs delivered by different services in different circumstances) and the need for sufficient flexibility to allow tailoring to the resources and circumstances of different settings [5-7].

Second, we have used program logics to try to delineate and specify the process of change by which clinical or program innovations might be implemented into routine practice. We have developed our process-of-change models while partnering with drug and alcohol and mental health clinical services in the Mid-North Coast Local Health District. Specifically, we have designed a process for moving from the independent delivery of services for clients with co-occurring drug and alcohol and mental health disorders to a model of integrated care. Essentially, this process has required both drug and alcohol and mental health clinicians to re-design their systems of care so that the required services are brought to clients, rather than expecting clients to attend separate services. Understanding this change process, and translating it into a pragmatic and replicable framework to guide its use elsewhere, is the subject of two forthcoming papers (led by Cath Foley, who will present this work at NDARC’s Annual Symposium on October 8th, 2018).

For all the promise of pragmatic program logics for more effectively facilitating research partnerships between academics, communities and services, they should only be promoted and further developed if they enact three critical principles.

The first critical principle is that they have to manifest evidence-based practice (EBP). According to the original definition of EBP, this means that program logics must be able to integrate the best-available external evidence with the expertise of individual service providers or community-based key stakeholders [8]. Typically, the best-available external evidence is distilled from the findings of systematic reviews or meta-analyses, such as the review of community-based interventions published by Stockings et al. in this edition of Connections. Program logics can be used to elicit the expert knowledge of practitioners and community members, both in identifying additional standardised core components of a program (such as culture-related components that may be identified in Indigenous contexts) and in operationalising those core components for their own specific circumstances.

The second critical principle is that program logics must be co-created: they are unlikely to be effective if they do not emerge from genuine and respectful partnerships that seek to utilise the knowledge and skills of all of those participating. Clients of services, clinicians and service providers, community members, researchers and sometimes even funders of research all have a legitimate contribution to make towards achieving co-designed outcomes. Indeed, partnership research is so inherently collaborative that we examined the concept of co-creation in the literature and found it to be markedly under-developed. We currently have a paper under editorial review (led by PhD candidate Tania Pearce) that aims to establish a definition of what constitutes the co-creation of new knowledge.

The third critical principle is that the programs or clinical innovations that are co-designed and implemented using program logics must be evaluated using rigorous evaluation designs and high-quality measures. This requires more than pre/post evaluations in single settings (a requirement that underscores the importance of program logics promoting definitions of services and community-based programs that can be standardised across services or communities, as well as being best-evidence practice and tailored to different circumstances), but it does not always require full-scale cluster randomised controlled trials (RCTs). We have previously detailed the limitations of RCTs [9], as well as rigorous but pragmatic alternatives, such as multiple baseline designs [10].
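As a rough illustration of the staggered logic that makes a multiple baseline design rigorous without randomisation, the sketch below prints a hypothetical implementation schedule. The service names, dates and 13-week interval are assumptions for illustration only, not drawn from the cited studies; the key idea is that each service's pre-implementation period acts as its own control, so outcome changes can more plausibly be attributed to the program if they occur only after each staggered start.

```python
# Minimal sketch of a multiple baseline rollout schedule. All names,
# dates and the stagger interval below are hypothetical.
from datetime import date, timedelta

services = ["Service A", "Service B", "Service C", "Service D"]
study_start = date(2018, 1, 1)
stagger = timedelta(weeks=13)  # assumed interval between staggered starts

for i, service in enumerate(services):
    # Each service begins the program at a different time; the period
    # before its start date serves as that service's own baseline.
    start = study_start + stagger * (i + 1)
    print(f"{service}: baseline {study_start} to {start}; program from {start}")
```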

Professor Donald Berwick is one of the key figures in the process of change literature, at least as it has been applied to the field of health and medicine.  He is fond of quipping something to the effect that every system is perfectly designed to produce the outcomes that it produces.  The challenge to which this alludes is that improved client or patient or community outcomes will not simply happen because of new knowledge generated in isolation from the systems that allow services and communities to function: those who work in those systems or communities must be integral to the design, creation and application of new knowledge. One way to facilitate this process could be greater utilisation of pragmatic and evidence-based program logics.

Acknowledgements: while the views expressed in this opinion piece are my own, I wish to acknowledge the intellectual and pragmatic contributions of the large number of my collaborators in the research projects relevant to this opinion piece, including service providers, community members, higher degree research students and academic colleagues. Their insights and willingness to engage in research have strongly shaped this opinion piece.

References

  1. Bauman, A., & Nutbeam, D. P. (2014). Planning and evaluating population interventions to reduce noncommunicable disease risk – reconciling complexity and scientific rigour? Public Health Res Pract, 25(1).
  2. Dalkin, S. M., Greenhalgh, J., Jones, D., Cunningham, B., & Lhussier, M. (2015). What’s in a mechanism? Development of a key concept in realist evaluation. Implementation Science : IS, 10, 49. doi: 10.1186/s13012-015-0237-x
  3. Knight A, Maple M, Shakeshaft A, Shakeshaft B, Pearce T (2018).  Improving the evidence base for services working with youth at-risk of involvement in the criminal justice system: developing a standardised program approach. Health and Justice, 6:8, https://doi.org/10.1186/s40352-018-0066-5.
  4. Munro A, Shakeshaft A, Clifford A (2017). The development of a Healing Model of Care for an Indigenous drug and alcohol residential rehabilitation service: a Community-Based Participatory Research approach. Health and Justice, 5:12, https://healthandjusticejournal.biomedcentral.com/articles/10.1186/s40352-017-0056-z
  5. Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ, 337, a1655. doi: 10.1136/bmj.a1655
  6. Campbell, N. C., Murray, E., Darbyshire, J., Emery, J., Farmer, A., Griffiths, F., . . . Kinmonth, A. L. (2007). Designing and evaluating complex interventions to improve health care. BMJ, 334(7591), 455-459. doi: 10.1136/bmj.39108.379965
  7. Hawe, P., Shiell, A., & Riley, T. (2004). Complex interventions: how "out of control" can a randomised controlled trial be? BMJ, 328(7455), 1561-1563. doi: 10.1136/bmj.328.7455.1561
  8. Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn't: it's about integrating individual clinical expertise and the best external evidence. BMJ, 312(7023), 71-72.
  9. Sanson-Fisher RW, Bonevski B, Green L, D’Este C (2007). Limitations of the randomized controlled trial in evaluating population-based health interventions. American Journal of Preventive Medicine, 33, 155-161.
  10. Hawkins NG, Sanson-Fisher RW, Shakeshaft A, D’Este C, Green L (2007). The multiple baseline design for evaluating population-based research. American Journal of Preventive Medicine, 33, 162-168.
[Figure: Example of a program logic for Aboriginal drug and alcohol residential rehabilitation services]