Lifting the quality of 'evidence' for the youth foyer model

Evidence-based policy only works if the evidence base itself is robust enough to inform decisions. Joseph Borlagdan (@borlagdanj), Iris Levin and Shelley Mallett of the Brotherhood of St Laurence started their review for ANZSOG's Evidence Base journal aiming to evaluate the evidence for the effectiveness of the youth foyer model. But after their literature search revealed an overall lack of rigour in evaluation studies, they realised they needed to take a different tack.

As the Assistant Editor of Evidence Base journal since it launched in 2012, I've read a lot of critiques of public policy evidence. We publish systematic reviews of the evidence on complex and controversial public policy issues, in a bid to help ANZSOG's key stakeholders (i.e. public managers) get a handle on whether there is any consensus on how best to address them.

But it's not always as simple as finding the evidence and systematically examining what it has to say. More often than not, our authors spend a section of their finished review lamenting the quality of the evidence base and suggesting how to improve it. This is often not what they thought they were signing up for, but the lack of rigorous evaluation studies demands comment - and sometimes means our authors need to change direction completely, evaluating the quality of the evidence itself rather than what it says about an issue.

This happened to Jon Altman and Susie Russell when they set out to examine the evidence for the effectiveness of the Northern Territory National Emergency Response (NTER, widely known as 'the Intervention'). Their task seemed relatively straightforward at first, but became much more complex when the authors ran into issues with fluid program definition (what is, or was, the Intervention?) and a lack of policy logic and baseline data against which to measure its effectiveness.

While able to make some useful observations, Charles Livingstone and Angela Rintoul found it hard to determine whether several gambling harm minimisation measures were effective; Janine Chapman and Anjum Naweed found considerable methodological inconsistencies and limitations when they looked at obesity interventions for surface transport workers; and Kristin Carson and colleagues found that the evidence base for tobacco use prevention in Indigenous communities was very patchy.

Unfortunately, our latest journal issue ran up against similar problems. Read on to hear what Brotherhood of St Laurence researchers Joseph Borlagdan (BSL and UniMelb), Iris Levin (BSL), and Shelley Mallett (BSL and UniMelb) had to say about their experience of evaluating the evidence on the foyer model for addressing youth homelessness.

Sophie Yates (@MsSophieRae)

High quality evidence is crucial for all human services evaluations. Rigorous research with considered implications for the mainstream service system can inform policy formation and help develop innovative services. In the youth homelessness field, a promising evidence base is forming on the 'youth foyer' model, an integrated approach to tackling youth homelessness that connects affordable accommodation to training and employment. While there is growing support from government for the development and funding of foyer programs, investment in high quality research that evaluates the model’s effectiveness continues to lag behind. This has significant implications for establishing a strong evidence base.

Constructing an evidence base
We assessed the quality of 15 primary Australian and international studies that examined the effectiveness of youth foyer or foyer-like programs on the lives of young homeless people. We initially set out to determine whether such models were effective. However, the uneven quality of the evidence base (that is, the robustness of the evaluation studies we found) led us to re-think our assessment of the research. Our review instead explores two main issues with the evidence base: first, the difficulty studies had in validating claims of foyer effectiveness; and second, the limitations of their research design and methodology.

Evidence of effectiveness is, of course, a slippery concept. In a recent post, Paul Cairney cautions that policymakers use a range of information sources to inform their decision making, including sources that sit outside the hierarchy of scientific methods. Latour and Woolgar’s famous anthropological study of the scientific laboratory also demonstrates that facts aren’t simply uncovered, but are constructed through subjective decision-making processes. In the murkier world of social policy, decision makers must make sense of conflicting evidence of varying quality.

While we acknowledge that there are different sorts of evidence, we distinguish between two particular uses of the term. The first concerns the strength of the evidence - whether findings show that a program or policy is effective or not. The second concerns the quality of the evidence - how evidence is gathered and reported, and whether an external audience can trust it. If the research is robust, then the quality of the evidence is high. Here we focus on the latter: the quality of evidence.

The quality of evidence
We found that the quality of the existing evidence base needs to be lifted before the effectiveness of foyer programs can properly be assessed. In selecting and reviewing studies that evaluated this type of program, the lack of high quality evidence prevented us from assessing the strength of the evidence. So our attention turned to understanding why and how the evidence lacks rigour, and the implications of this for further research.

While most of the evaluation reports indicate that Foyer produces positive outcomes for young homeless people, these promising claims must be validated against more rigorous criteria. Claims made by the studies reviewed were often difficult to validate because they did not differentiate between outputs (what a program delivers, such as the number of young people housed or enrolled in training) and outcomes (changes in participants’ lives, such as sustained housing or employment). This made it difficult to identify long-term program effects. Compounding this problem, many studies did not properly document the programs they evaluated, and some failed to outline their own evaluation methods, making it difficult for us to verify claims about programs’ effectiveness. While the programs evaluated in the studies we reviewed may well have been effective, this could not be verified from the way the research was reported.

The second issue preventing the validation of research in this area related to research design limitations arising from external constraints (e.g. funding). None of the studies reviewed included a comparison group, and the majority did not have a post-intervention follow-up. Findings were typically presented from a single time point of data collection, alongside other methodological limitations that undermined research validity. While such constraints are not unusual in human services evaluation, they prevented the researchers of these studies from drawing causal inferences about the intervention itself.

How can we improve the quality of evidence? Implications for future research
The lack of a rigorous evidence base is not unique to research on youth homelessness interventions; it is prevalent in most fields of human services evaluative research. We see three key implications for future human services evaluation research in Australia. First, it is imperative to improve the rigour of studies and to lift the standard of evaluations. We know that the lack of high quality evaluations reflects inadequate funding and resources; with adequate resourcing, more rigorous research may have been possible.

Second, in order to lift the quality of evidence, agencies and research bodies need to embrace and implement ground rules. These should include a system for ensuring high quality research design, appropriate documentation of programs and methods, use of a theoretical framework for interpreting findings, and a peer-review process.

Third, the lack of quality evidence also points to service development gaps that are then mirrored in the corresponding research. Some of the issues we identified may stem from the absence of service delivery tools such as a program logic and theory of change, or from researchers not having access to these documents. As a result, the links between program outcomes and mechanisms could not be identified in the existing evidence. This has important implications not only for the quality of evidence in human services research, but also for potential service development improvements based on evidence-informed research.

As our review shows, there is a clear need for greater investment in research and evaluation on the foyer model, not only to enhance the rigour of research, but also as an integral component of program development.

Based on Levin, I., Borlagdan, J., Mallett, S. and Ben, J. (2015) A critical examination of the youth foyer model for alleviating homelessness: Strengthening a promising evidence base. Evidence Base, issue 4, 1-23. Available at https://journal.anzsog.edu.au/publications/33/EvidenceBase2015Issue4Version1.pdf.