The term “evidence-based practice” usually refers to a broad range of practices with varying levels of demonstrated effectiveness. However, there is no standard definition for this term, and it is frequently used to mean different things. The same is true of other terms and labels used in this field, such as model programs, promising practices, and best practices. A variety of rich resources, particularly websites, are now available that provide analyses and summaries of practices and their levels of research support. Each resource uses a somewhat different set of standards for evaluating and ranking practices, and usually organizes them into categories of varying levels of demonstrated effectiveness based on the volume and rigor of the research that has been completed. Common variables that make up rating schemes include, but are not limited to:
- Strength of the research design of supporting studies
- Implementation resources
- Relationship to particular outcomes
Given this, a consumer of evidence-based practice information must understand issues pertinent to research and the evaluation of scientific evidence. As a first step, CIMH recommends understanding and considering a rudimentary hierarchy of the quality of research supporting practices. At the foundation of most categorizations of practices lies something resembling the following levels. The specific names, numbers, and criteria of the levels will vary; however, these levels highlight a primary consideration: that practices are supported by bodies of research of varying rigor:
- High Research Support – The practice has demonstrated positive outcomes in controlled research (random assignment, matched between-groups comparisons, replications, etc.). We can have high levels of confidence in these practices. However, practices that have been studied in tightly controlled settings (for example, at a university clinic under the close supervision of the developer) may not demonstrate comparable outcomes in usual care settings, where there is much more heterogeneity in both clientele and practitioners. Practices supported by strong research in usual care settings serving usual care populations have the advantage of demonstrated transportability and effectiveness.
- Moderate Research Support – The practice has some positive research evidence of success and/or expert consensus. The research may include case studies or pre-post evaluations but does not include studies of the strongest research design; therefore, one is less certain that the practice, as opposed to other intervening events, is responsible for the positive finding.
- Emerging Practice – The practice is well articulated and recognizable as a distinct practice that has “face” validity, passing a common-sense test. The outcomes associated with these practices have not been evaluated and are unknown.
- Untested Practice – The practice is not clearly articulated and as a consequence cannot be evaluated as a distinct practice separate from the more general category of “mental health treatment or therapy.”
- Harmful or Ineffective Practice – There are some practices that have significant evidence of a null, negative, or harmful effect.
CIMH maintains that it is important to consider the strength of the scientific evidence for practice effectiveness when providing services. However, there are other important considerations, including consumer/family choice and community context. As a consequence, there is a place for practices with lower levels of demonstrated evidence, as well as for trials of new and innovative practices.
Some resources evaluating and rating evidence-based practices include: