Findings from clinical and naturalistic evaluations of the effectiveness of treatment interventions—especially cognitive and behavioral strategies—have led to renewed calls for transferring these “evidence-based” techniques into practice. This is a complicated task, however, which is itself in need of systematic study. Organizational climate and readiness for change are especially important, and the TCU Program Change Model provides a conceptual framework to summarize these and other sources of influence on this stage-based process. New analytic strategies and assessment instruments for studying organizational functioning have been developed at the IBR for this work.
Overview of Evidence
by D. D. Simpson, W. E. K. Lehman, and P. M. Flynn
The TCU Program Change Model integrates related observations from the literature on “technology transfer” (see Fig. 1). Refinements to this model since its earlier formulation (Simpson, 2002) include more explicit attention to Strategic Planning and preparation by programs, as well as distinctions between counselor-level and organizational-level influences (Flynn & Simpson, 2009; Simpson & Flynn, 2007; Simpson, 2009). In particular, the innovation and implementation process properly begins with consideration of a program’s needs and resources, its structural and functional characteristics, and its general readiness to embrace innovations. Guidelines for conducting agency self-evaluations and defining action plans are described by Simpson and Dansereau (2007).
At the core of this heuristic framework are the action steps typically involved in the innovation implementation process (see review by Fixsen et al., 2005; Lehman, Simpson, Knight, & Flynn, 2011, Psychology of Addictive Behaviors). The process generally begins with Training. Decisions by staff on whether or not to attend training opportunities are influenced by (1) the relevance of the training to their needs, (2) its accessibility, including location, scheduling, and cost, and (3) whether it is accredited and thereby offers sanctioned educational or credentialing benefits. In addition to these and related personal-level concerns of individual staff members, institutional needs and pressures for applying particular interventions can likewise influence training attendance.
Following training, the next crucial step involves Adoption, defined as a two-step activity involving decision-making and action-taking. Decision-making usually requires support from leadership, at both the formal and informal levels. The innovation being considered should possess the overall quality and utility necessary for application in “real world” clinical settings. It likewise should be viewed by frontline staff as adaptable to the nuances of the local treatment applications and setting, compatible with existing materials, and a good fit with the values and culture of the treatment program.
Intentions to adopt an innovation are followed by the development of a plan of “action,” including a trial period that allows test runs for potential adopters to form opinions about applications. Prominent considerations include the capacity and proficiency of the innovation in meeting expectations, along with satisfactory preliminary results and feedback from those involved. In addition, sources of resistance, including both active and passive barriers to change, must be manageable. At the organizational level, staff capacity and a positive, supportive organizational climate are necessary for the adoption decision and action processes.
The next major stage of the process is Implementation, building on the brief trial phase discussed above. An intervention must be viewed by program staff and leadership as being effective, feasible within a program’s practical context, and sustainable. At the organizational level, factors such as motivation, resources from program management, staff attributes, program climate, and financial resources all play a role in determining long-range implementation.
Finally, innovations that pass successfully through these stages may become part of standard Practice and presumably bring improvements in client care. Additionally, program costs related to innovations (e.g., cost of materials, training, supervision, loss of billable hours associated with training and supervision, etc.) must be considered by programs intending to adopt and implement new interventions or procedures.
An Enhanced Model
Each of these stages of change typically involves a series of smaller interrelated steps. Simple innovations often can be adopted and successfully implemented in programs with only minor tremors in organizational functioning. As innovations and new procedures become more complex and comprehensive, however, the process of change becomes progressively more challenging—especially in settings where staff communication, cohesion, trust, and tolerance for change are lacking. Systematic evaluation of these factors in an integrated framework is expected to help advance the scientific progress and practical contributions in this field, including development of assessment strategies to help resolve client, staff, and organizational issues.
Emphasis on context factors that influence the innovation implementation process and its sustainability is growing. In particular, the role of systems preparation has become more prominent (Simpson, 2009), along with the impact of maintenance-related resource and climate supports essential for innovation sustainability (Simpson, 2011). When applied within the behavioral health services field, a more explicit and integrated linkage between the services process and the innovation implementation process has been recommended (see Figure 2). This enhanced model provides an action-based framework for conducting and evaluating the dynamic system of change.
Figure 2. Stages of Implementation
An Assessment Strategy
Organizational-level assessments are challenging because they require data to be taken from individuals within an organization (e.g., leaders, staff, clients) and then be aggregated in ways that represent “the organization.” Selection of appropriate scales, data collection format, reliability and validity of measures, sampling of individuals to properly represent the organization, and methodological alternatives for aggregating data are issues that require attention.
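The aggregation step can be illustrated with a short sketch. This is a minimal hypothetical example (invented program IDs and scale scores, not TCU data or software), showing the common approach of averaging individual staff ratings within each program to form an organization-level score:

```python
# Minimal sketch of individual-to-organization aggregation.
# All program IDs and scores below are invented for illustration.
from statistics import mean

# Each record: (program_id, staff member's score on one climate scale)
responses = [
    ("A", 3.0), ("A", 4.0), ("A", 3.5),
    ("B", 2.0), ("B", 3.0),
]

def program_means(records):
    """Group individual scores by program and return each program's mean."""
    by_program = {}
    for program_id, score in records:
        by_program.setdefault(program_id, []).append(score)
    return {pid: mean(scores) for pid, scores in by_program.items()}

print(program_means(responses))  # {'A': 3.5, 'B': 2.5}
```

In practice the aggregation question is more subtle than a simple mean (e.g., how many respondents are enough to represent a program, and how much within-program agreement is required), which is exactly why these methodological alternatives require attention.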
TCU assessments of organizational needs and functioning have been created with these applications in mind. The TCU Program Training Needs (PTN) survey is used for identifying and prioritizing treatment issues that programs believe need attention (Rowan-Szal et al., 2007). Its items are organized into domains focused on Facilities and Climate, Satisfaction with Training, Preferences for Training Content, Preferences for Training Strategy, Barriers to Training, and Computer Resources. User-friendly feedback reports to respondents (see PTN sample report) based on this information can help guide overall training efforts as well as predict which innovations programs are most likely to seek out and adopt.
The TCU Organizational Readiness for Change (ORC) assessment focuses on organizational traits that predict program change (Lehman et al., 2002). It includes scales from four major domains—motivation, resources, staff attributes, and climate. User-friendly feedback reports to respondents (see ORC sample report) are recommended. A companion to the ORC is the TCU Survey of Organizational Functioning (SOF), which includes the ORC as well as nine additional scales measuring job attitudes (e.g., burnout, satisfaction, and director leadership) and workplace practices.
The TCU Client Evaluation of Self and Treatment (CEST) is used to measure client-level and program-level needs and performance indicators in treatment. Comparisons of scale scores from the CEST and ORC assessments with those of other programs can be carried out by using Assessment Fact Sheets containing norms (e.g., 25th–75th or 33rd–67th percentiles) based on large-scale TCU databases (see Figs. 3 and 4).
Figure 3. Means and Norms for CEST Scale Profiles
Figure 4. Means and Norms for ORC Scale Profiles
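The norm-based comparison can be sketched in a few lines. This is a hypothetical illustration (invented scores, not actual TCU norms or fact-sheet values), showing how a program's scale mean might be classified against the 25th–75th percentile band of a normative sample:

```python
# Sketch of a percentile-band comparison against a normative sample.
# The normative scores below are invented for illustration only.
from statistics import quantiles

# Hypothetical program-level means for one scale across a normative sample
norm_scores = [28, 30, 31, 33, 34, 35, 36, 38, 40, 42]

def classify(score, norms):
    """Place a score below, within, or above the 25th-75th percentile band."""
    q1, _, q3 = quantiles(norms, n=4)  # quartiles of the normative sample
    if score < q1:
        return "below norms"
    if score > q3:
        return "above norms"
    return "within norms"

print(classify(29, norm_scores))  # below norms
print(classify(36, norm_scores))  # within norms
```

A profile of such classifications across all scales is essentially what the graphic feedback reports convey: which domains fall inside the normative band and which stand out as strengths or deficits.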
Studies show that organizational climate is predictive of treatment satisfaction and counselor rapport (see Broome et al., 2007; Greener et al., 2007; Lehman et al., 2002). It is therefore important to address organizational climate issues, particularly in low-climate programs, as well as identify client needs for making changes in treatment regimens that can improve client functioning.
Special volumes of the Journal of Substance Abuse Treatment (Simpson & Brown, 2002; Simpson & Flynn, 2007; abstracts for 2007 special issue) each contain a series of studies on these topics. For instance, Rowan-Szal et al. (2007) found the PTN can be used as an efficient planning tool for programs beginning to explore organizational openness to innovations and how to initiate the process. It also helps staff feel they have been consulted about program needs and planning for treatment innovations, including training priorities. When programs have evidence of their own organizational deficits, based on feedback from ORC survey results, they can respond strategically with plans for taking corrective actions. For instance, high-need treatment programs—with relatively poor scores on institutional resources, staff attributes, and climate—can become engaged in a deliberate change process (Courtney et al., 2007).
Based on a large sample of treatment programs participating in the NIDA-funded Clinical Trials Network (CTN), Fuller et al. (2007) showed that greater needs for program improvement, more Internet access, stronger peer influence, better opportunities for professional growth, a clearer sense of organizational mission, and higher organizational stress are related to stronger support for adopting evidence-based practices (i.e., manualized treatments, medication, integrated mental health services, and motivational incentives). Conversely, lack of professional growth opportunities for staff, weaker peer influence, limited Internet access, and lower organizational stress are associated with heavier reliance on therapeutic confrontation and discharge due to noncompliance.
Attitudes of staff about adoption of evidence-based practice and treatment manuals in a statewide network of mental health and substance abuse agencies serving adolescents were surveyed by Saldana et al. (2007). They found that the motivational readiness and training needs scales from the ORC (measured at both the therapist and agency levels) are associated with greater appeal of and openness to innovations. There also were organizational climate differences between substance abuse and mental health settings, with the latter reporting more stress from higher caseloads and potentially greater barriers to innovation.
Because the quality of training also is important in preparing counselors for change, Bartholomew et al. (2007) used the TCU Workshop Evaluation (WEVAL) and TCU Workshop Assessment Follow-Up (WAFU) forms to examine counselor assessments of the relevance and quality of training for specific innovations in relation to their subsequent “trial use.” Higher ratings for relevance to client needs, as well as adequacy of program resource allocations, predicted endorsement and application of materials following training. Not surprisingly, major barriers that counselors face in making changes to their clinical practice include lack of time and redundancy with current practices.
Finally, Simpson, Joe et al. (2007) studied relationships between stages of training, adoption, and implementation across time using a long-range, cross-linked subset of program records. The findings fit within the overall TCU Program Change Model in that the original program training needs (obtained from the PTN survey a year before training) predicted subsequent staff responsiveness to workshop training. Next, it was shown that favorable organizational functioning scores from the ORC (collected 4 months before training) predicted more positive staff responses to training activities. Finally, and most importantly, more positive staff-level responses to workshop training as well as their progress in implementation predicted better client-level reports (from the CEST) of their counseling participation, rapport, and satisfaction completed 9 months after the counselor training.
All of the client-level and organizational functioning assessments used in the findings reported above have been revised (see TCU Short Forms). While most of the original scales are retained, assessment formats have been modified for use as a series of single-page forms suitable for optical scanning and scoring. Some new client-level assessment domains also have been added, and some forms (e.g., the PTN) have been consolidated into elements of the ORC scales. This reformulation offers greater flexibility in assessment selections and planning, and the TCU Short Forms Selection Matrix provides guidance in this process. Automated scanning hardware and software, preprinted assessment forms, and scoring programs with graphic feedback are also now available.
1. IBR Technical Report
2. TCU Research Summary: Focus on Organizational Change (September
3. Feature and PowerPoint© presentations (with PDF files of printed handouts)
4. Summary of applications and psychometrics (PowerPoint© presentations with PDF files of printed handouts)