
Abstract

Numerous trials demonstrate that monitoring client progress and using feedback for clinical decision-making enhances treatment outcomes, but available data suggest these practices are rare in clinical settings and no psychometrically validated measures exist for assessing attitudinal barriers to these practices. This national survey of 504 clinicians collected data on attitudes toward and use of monitoring and feedback. Two new measures were developed and subjected to factor analysis: the Monitoring and Feedback Attitudes scale (MFA), measuring general attitudes toward monitoring and feedback, and the Attitudes Toward Standardized Assessment Scales–Monitoring and Feedback (ASA-MF), measuring attitudes toward standardized progress tools. Both measures showed good fit to their final factor solutions, with excellent internal consistency for all subscales. Scores on the MFA subscales (Benefit, Harm) indicated that clinicians hold generally positive attitudes toward monitoring and feedback, but scores on the ASA-MF subscales (Clinical Utility, Treatment Planning, Practicality) were relatively neutral. Providers with cognitive-behavioral theoretical orientations held more positive attitudes. Only 13.9 % of clinicians reported using standardized progress measures at least monthly and 61.5 % never used them. Providers with more positive attitudes reported higher use, providing initial support for the predictive validity of the ASA-MF and MFA. Thus, while clinicians report generally positive attitudes toward monitoring and feedback, routine collection of standardized progress measures remains uncommon. Implications for the dissemination and implementation of monitoring and feedback systems are discussed.

Keywords: Psychological assessment, Attitude measures, Evidence based practice, Therapists

Introduction

Routinely monitoring client progress during therapy has been identified as an integral component of evidence-based practice in mental health (APA Presidential Task Force on Evidence-Based Practice 2006; Dozois et al. 2014). Collecting session-by-session progress data using standardized rating scales and using feedback for clinical decision-making has been consistently found to reduce deterioration and improve outcomes, particularly among clients at risk for treatment failure (e.g., Bickman et al. 2011; Lambert et al. 2003; Reese et al. 2009). In addition to improving client outcomes, collecting ongoing progress data can also facilitate quality improvement at multiple levels within organizations (Bickman 2008; Chorpita et al. 2008) and provide useful data to researchers interested in characterizing “as usual” mental health services (Garland et al. 2010).

Unfortunately, despite compelling evidence that monitoring and feedback can improve client outcomes, available data suggest that this practice is rare in clinical settings (e.g., Gilbody et al. 2002; Hatfield and Ogles 2004; Ionita and Fitzpatrick 2014). These data highlight an important research-practice gap that has become the focus of numerous implementation efforts (e.g., Bickman et al. 2016; Borntrager and Lyon 2015; Higa-McMillan et al. 2011).

Understanding barriers to monitoring and feedback is important for the design of these efforts. Data collected from clinicians indicate that barriers include resource constraints and added time and paperwork (Gleacher et al. 2016; Hatfield and Ogles 2007; Johnston and Gowers 2005; Kotte et al. 2016; Meehan et al. 2006), lack of training (Batty et al. 2013), client willingness to complete measures (Kotte et al. 2016; Overington et al. 2015), and concern about the economic and political motives for use (Meehan et al. 2006).

While these studies provide important preliminary data, they are limited in a number of ways. First, many of these studies defined the use of outcome monitoring as administering assessments before and after treatment only (e.g., Batty et al. 2013; Johnston and Gowers 2005). While this is a useful strategy for monitoring overall treatment effectiveness (Hall et al. 2014), this differs from the type of ongoing routine progress monitoring found to improve treatment outcomes (Boswell et al. 2015). Second, they have primarily focused on barriers to data collection; however, barriers to using clinical data in treatment planning may differ from those impeding its collection (Borntrager and Lyon 2015). Third, much of this work has either been qualitative (e.g., Meehan et al. 2006; Unsworth et al. 2012), or focused on quantifying assessment use (Gilbody et al. 2002; Ionita and Fitzpatrick 2014); few quantitative studies have examined the relationship between these barriers and actual use of monitoring and feedback. Finally, studies vary in the types of barriers assessed and tend to measure them with scales that were not vetted through gold standard measure development procedures. To date, there is no psychometrically sound measure designed for the purpose of assessing clinician attitudes toward monitoring and feedback. The development of psychometrically validated implementation measures has been identified as a critical issue facing the field of implementation science (Martinez et al. 2014).

To address these limitations, the purpose of this study was to gather data regarding attitudes toward monitoring and feedback, as well as updated data regarding rates of monitoring and feedback use, within a national sample of mental health clinicians working in the United States. The first goal was to develop psychometrically sound attitude measures. Negative attitudes towards evidence-based practice are linked with lower self-reported use of evidence-based practice (e.g., Jensen-Doss et al. 2009) and improving attitudes toward standardized assessment tools has been found to predict increases in their use (Lyon et al. 2015). Consistent with prior work showing value in assessing both attitudes toward the process of diagnostic assessment and toward using standardized diagnostic tools (Jensen-Doss and Hawley 2011), this study focused on two types of attitudes. The Monitoring and Feedback Attitudes (MFA) scale assessed general attitudes toward monitoring and feedback, with a particular focus on incorporating feedback data into treatment sessions (e.g., whether incorporating progress data into sessions might harm the therapeutic alliance), and was modeled after an existing measure of attitudes toward diagnostic assessment (Jensen-Doss and Hawley 2011). The MFA items do not refer to particular types of progress monitoring data. The Attitudes Toward Standardized Assessment Scales–Monitoring and Feedback (ASA-MF) was a revision of the Attitudes Toward Standardized Assessment scales (ASA; Jensen-Doss and Hawley 2010), a measure developed to assess attitudes toward standardized diagnostic instruments that has recently been applied in studies of monitoring and feedback (Lyon et al. 2015, 2016). The ASA-MF focuses specifically on standardized progress measures and their practicality and utility for clinical decision-making.

The second goal of this study was to gather additional data regarding use of routine monitoring and feedback in practice settings. To our knowledge, prior surveys assessing ongoing use of progress monitoring have only sampled clinical psychologists and doctoral students (Ionita and Fitzpatrick 2014; Overington et al. 2015), although some surveys that have not specified the frequency of outcome measurement have included master’s level clinicians (Ebesutani and Shin 2014; Ventimiglia et al. 2000). As psychologists engage in more assessment than providers from other disciplines (Frauenhoffer et al. 1998; Palmiter 2004) and have more positive views toward standardized assessment tools (Jensen-Doss and Hawley 2010), there is a need for work examining monitoring and feedback practices among the most prevalent providers of mental health services, who are often not psychologists (Garland et al. 2010). As such, this study utilized a national sample of social workers, mental health counselors, and marriage and family therapists.

Finally, to identify clinicians who might be particularly willing or unwilling to use monitoring and feedback, the third goal of this study was to examine professional and practice characteristics predictive of: (1) more positive attitudes toward monitoring and feedback, and (2) increased standardized progress measure use. Links between attitudes and use were also examined. We hypothesized that more positive attitudes and higher rates of use would be reported by doctoral-level providers (Ionita and Fitzpatrick 2014; Jensen-Doss and Hawley 2010), providers not working in private practice (Becker and Jensen-Doss 2013; Jensen-Doss and Hawley 2010), providers with fewer years of professional experience (Aarons 2004; Becker and Jensen-Doss 2013), and providers working primarily with adults (Ionita and Fitzpatrick 2014). Consistent with previous studies (e.g., Hatfield and Ogles 2004), we also expected providers with a cognitive-behavioral theoretical orientation to have more positive attitudes and higher rates of use and those with a psychodynamic orientation to have more negative attitudes and lower rates of use. Given the importance of organizational factors to evidence-based practice (e.g., Aarons and Sawitzky 2006), we also expected that providers would have more positive attitudes and report higher use if their work setting dictated their assessment practices. Finally, based on prior work on diagnostic assessment attitudes (Jensen-Doss and Hawley 2010, 2011), we hypothesized that attitudes would be positively associated with use, particularly attitudes about the practicality of monitoring and feedback.

Method

Participants

Participants were 504 mental health professionals recruited through mailing lists from three national professional organizations. The sample was largely female (73.9 %) and Caucasian (89.6 %). Participants were primarily masters-level clinicians (85.0 %). Table 1 details the demographic, professional, and practice characteristics of participants.

Table 1

Demographic and professional characteristics of sample

Age (years), M (SD; range) 56.4 (11.69; 28–82)
Female [n (%)] 369 (73.9 %)
Ethnicity [n (%)]
 Caucasian 413 (89.6 %)
 Black/African American 16 (3.5 %)
 Hispanic/Latino 16 (3.5 %)
 Asian/Pacific Islander 6 (1.3 %)
 Mixed/other 10 (2.2 %)
Professional organization [n (%)]
 AMHCA 179 (35.5 %)
 NASW 143 (28.4 %)
 AAMFT 182 (36.1 %)
Years clinical experience, M (SD; range) 22.2 (11.0; 2–55)
Highest degree obtained [n (%)]
 Master’s degree 424 (85.0 %)
 Doctoral degree 75 (15.0 %)
Theoretical orientation [n (%); not mutually exclusive]
 CBT 209 (44.2 %)
 Psychodynamic/psychoanalytic 91 (18.1 %)
 Family systems 115 (22.8 %)
 Humanistic/client centered 42 (8.9 %)
 Eclectic 166 (32.9 %)
 Other orientation 125 (26.4 %)
Work environment [n (%); not mutually exclusive]
 Private practice 310 (67.7 %)
 Mental health agency 85 (18.6 %)
 Elementary, middle, or high school 21 (4.6 %)
 Higher education setting 9 (2.0 %)
 Hospital/medical center 27 (5.9 %)
 Day treatment facility 3 (0.7 %)
 Residential facility/group home 3 (0.7 %)
 Other 20 (4.4 %)
Workplace dictates assessment [n (%)]
 Not at all 270 (56.4 %)
 Some 101 (21.1 %)
 A lot 108 (22.5 %)
Clients who are a “major part” of practice [n (%)]
 Youth clients 147 (29.9 %)
 Adult clients 456 (91.4 %)

Procedures

The Tailored Design Method (Dillman et al. 2009) was used to develop the survey; as detailed below, survey items either came from existing measures or were adapted from existing measures by experts in the implementation of monitoring and feedback. The survey was piloted with six mental health providers, who completed the survey and then a semi-structured interview about its clarity and suggestions for improvement. The survey was revised iteratively across these interviews; most revisions involved the format of the survey and minor wording changes.

The final survey was mailed to 1200 mental health providers, 400 from each of three professional organizations (American Mental Health Counselors Association, AMHCA; American Association for Marriage and Family Therapy, AAMFT; National Association of Social Workers, NASW), each of which provided a mailing list of a random, representative sample of its membership; only members of each organization who engaged in clinical practice were selected. Initial survey items asked about demographic, professional, and practice characteristics, and whether participants conducted or supervised intake assessments and/or therapy. If they did not engage in those activities, they were asked to stop the survey at that point and return the rest blank.

Following procedures based on the Tailored Design Method (Dillman et al. 2009) and successfully applied in other clinician surveys (Becker and Jensen-Doss 2013; Jensen-Doss and Hawley 2010), clinicians received up to four separate mailings. The first consisted of a personally addressed, hand-signed pre-notice letter informing clinicians of the upcoming survey. The second included a personalized, hand-signed cover letter, a $2 bill (a cost-effective non-contingent reinforcer for recruiting clinicians; Hawley et al. 2009), the survey, and a pre-addressed, hand-stamped return envelope. The third mailing was a signed postcard that thanked those who had returned the survey and reminded nonrespondents to do so. The fourth mailing was sent to nonrespondents only and included a second personalized cover letter, another copy of the survey, and a stamped return envelope. All study procedures were approved by the Institutional Review Board at the University of Miami.

Of the 1200 individuals selected for participation, 15 (1.3 %) had undeliverable addresses. Of the 1185 individuals contacted, 621 (52.4 %) responded to the survey [104 (8.8 %) declined participation, and 461 (38.9 %) did not respond]. Of the respondents, 94 were not eligible for the study because they did not conduct or supervise intakes or therapy, and 1 was excluded from the sample because their highest degree was a bachelor’s degree. Finally, as this study focuses on monitoring and feedback during therapy, 22 individuals who indicated they did not provide or supervise therapy were excluded from these analyses. This yielded a final sample size of 504.

Measures

Demographic, Professional, and Practice Characteristics

Participants completed open-ended items describing their age, ethnicity, work setting, and theoretical orientation, and indicated their gender. These variables were categorized as listed in Table 1. For analysis purposes, theoretical orientation was coded as CBT = 1, Other = 0 and Psychodynamic = 1, Other = 0; because some providers fell into both groups, these were entered as separate predictors. Work setting was coded as Private Practice = 1, Other = 0 (see Footnote 1). Participants were also provided a range of degree options and asked to check all that applied; highest degrees were grouped into Master’s (0) and Doctoral (1) for analysis. Because nearly all (91.4 %) participants said they worked with adults as a major part of their practice, the child client variable (hereafter referred to as “child work”) was used to test the hypothesis that working with adults would predict greater use; this variable was coded as A Major Part of My Practice = 1, Minor or Not at All = 0. Participants were also asked to indicate how much their assessment practices are dictated by workplace or funding policies (Not at All, Some, A Lot). For analysis purposes, this variable (hereafter referred to as “workplace dictates”) was recoded into 0 = Not at All and 1 = Some or A Lot.
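For illustration, this dummy-coding scheme amounts to the following sketch in Python (pandas); the column names and example values are hypothetical, not the study's actual variables:

import pandas as pd

# Hypothetical raw responses; providers could endorse multiple
# orientations and settings (see Footnote 1), so these fields are lists.
df = pd.DataFrame({
    "orientations": [["CBT", "Eclectic"], ["Psychodynamic"], ["CBT", "Psychodynamic"]],
    "settings": [["Private practice"], ["Mental health agency"], ["Private practice", "School"]],
    "dictates": ["Not at all", "Some", "A Lot"],
    "child_clients": ["A Major Part", "Not at all", "Minor"],
})

# Non-exclusive orientation dummies, entered as separate predictors.
df["cbt"] = df["orientations"].apply(lambda o: int("CBT" in o))
df["psychodynamic"] = df["orientations"].apply(lambda o: int("Psychodynamic" in o))

# Any time spent in private practice counts as Private Practice = 1.
df["private_practice"] = df["settings"].apply(lambda s: int("Private practice" in s))

# Workplace dictates: 0 = Not at All, 1 = Some or A Lot.
df["workplace_dictates"] = (df["dictates"] != "Not at all").astype(int)

# Child work: 1 = A Major Part of My Practice, 0 = Minor or Not at All.
df["child_work"] = (df["child_clients"] == "A Major Part").astype(int)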

Monitoring and Feedback Attitudes Scale (MFA)

To assess provider attitudes toward routine progress monitoring and providing feedback to clients about treatment progress, 20 items were generated. Item generation began by modifying two relevant items from an existing measure, the utility of diagnosis scale (Jensen-Doss and Hawley 2011), and additional items were generated by several experts in monitoring and feedback. Items covered possible benefits (e.g., utility for supervision and facilitating collaboration with clients) and possible risks (e.g., whether negative feedback might harm the therapeutic alliance or be misused by clinic administrators). In the MFA instructions, participants were provided definitions of routine progress monitoring and of providing feedback (see Footnote 2) and were asked to indicate how much they agreed or disagreed with each statement on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree).

Attitudes Toward Standardized Assessment Scales-Monitoring and Feedback (ASA-MF)

To assess attitudes toward administering standardized progress measures and using them for clinical decision making, 17 items were adapted from the Attitudes Toward Standardized Assessment Scales (ASA; Jensen-Doss and Hawley 2010). Wording about general or diagnostic assessment was replaced with wording about progress monitoring, and language specific to the assessment of children was removed to broaden the measure’s relevance. Seven additional items were generated to address issues unique to progress monitoring (e.g., “Standardized progress measures help identify when to change the overall treatment plan”). Participants were again provided with the definition of routine progress monitoring, as well as a definition of standardized measures, and asked to indicate how much they agreed or disagreed with the 24 statements on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree).

Self-Reported Progress Monitoring

Participants indicated how often they administer standardized progress measures on average and how often they would prefer to administer them, using a scale of Never, Every 1–2 sessions, Every Month, Every 90 Days, or Other (describe).

Analysis Plan

Data were screened for invalid responses (e.g., reporting “strongly agree” for all items, regardless of the item valence). As a result, eight participants’ data were recoded as missing for one or both of the attitude scales. Data were missing completely at random according to Little’s (1988) MCAR test (χ2 = 5321.44, df = 5340, p = .57); full information maximum likelihood estimation was used to account for missing data. No item was missing more than 7 % of its values, except for work setting (9.1 %). Continuous variables were examined for skewness and kurtosis and all were approximately normally distributed.
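A minimal Python sketch of these screening steps (the MCAR test and FIML estimation were handled in dedicated modeling software and are not reproduced here); function and variable names are illustrative:

import pandas as pd
from scipy.stats import skew, kurtosis

def flag_invariant_responders(items: pd.DataFrame) -> pd.Series:
    # Flag respondents who gave the identical rating to every item,
    # regardless of item valence (e.g., "Strongly Agree" throughout).
    return items.nunique(axis=1) == 1

def screen_distribution(scores: pd.Series) -> dict:
    # Skewness/excess-kurtosis screen for approximate normality;
    # values near 0 suggest no problematic departure.
    clean = scores.dropna()
    return {"skew": skew(clean), "kurtosis": kurtosis(clean)}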

Factor analysis and Cronbach’s alpha were used to examine the psychometric properties of the two measures. Because the MFA consisted of new items, its factor structure was examined by randomly splitting the sample in half, using one half to conduct an exploratory factor analysis (EFA) with Oblimin rotation, followed by a confirmatory factor analysis (CFA) to cross-validate the structure in the other half. The underlying factor structure was determined by examining the comparative fit index (CFI), the root-mean-square error of approximation (RMSEA), and the standardized root-mean-square residual (SRMR), in conjunction with parsimony and theory. Hu and Bentler (1999) recommend approximate cutoffs of >0.95 for the CFI, <0.06 for the RMSEA, and <0.08 for the SRMR. Because the ASA-MF was hypothesized to have the same factor structure as the original ASA, those items were subjected to CFA only. All factor analyses were conducted using Mplus Version 7 (Muthén & Muthén 1998–2011).
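The split-half EFA and the internal-consistency computation could be sketched as follows in Python; the EFA uses the factor_analyzer package, while the CFAs and fit indices were obtained in Mplus and are not reproduced. This is an illustrative sketch, not the authors' code:

import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

def split_half_efa(items: pd.DataFrame, n_factors: int, seed: int = 0):
    # Randomly split the sample in half; run an EFA with oblique
    # (Oblimin) rotation on one half and return the loadings plus the
    # held-out half reserved for the cross-validating CFA.
    efa_half = items.sample(frac=0.5, random_state=seed)
    cfa_half = items.drop(efa_half.index)
    efa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    efa.fit(efa_half.dropna())
    return efa.loadings_, cfa_half

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / var(total)).
    k = items.shape[1]
    total = items.sum(axis=1)
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / total.var(ddof=1))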

After establishing factor structures for each measure, descriptive statistics documented provider attitudes and rates of progress monitoring use. To facilitate interpretation of the attitude scores, Cohen’s d effect sizes were computed by subtracting the neutral rating of 3 from the sample mean and dividing this difference by the item or scale score standard deviation. The directionality of these effect sizes indicates attitude valence [i.e., whether clinicians agreed (positive) or disagreed (negative)], while the magnitude indicates attitude strength.
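In code, this effect size is a one-liner; the example below reuses the MFA Harm scale values reported in Table 2:

def d_vs_neutral(mean: float, sd: float, neutral: float = 3.0) -> float:
    # Cohen's d against the scale midpoint: positive values indicate
    # agreement, negative values disagreement.
    return (mean - neutral) / sd

# MFA Harm scale score: M = 2.45, SD = 0.69 -> d of about -0.80, as in Table 2.
print(round(d_vs_neutral(2.45, 0.69), 2))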

Next, simple and multiple regressions examined potential professional (i.e., years of professional experience, CBT orientation, psychodynamic orientation, doctoral vs. master’s degree) and practice (i.e., private practice vs. other settings, workplace dictates, child work) predictors of attitudes. Finally, logistic regression tested whether professional characteristics, practice characteristics, and/or attitudes predicted self-reported progress monitoring practices. Given our large sample size and the number of analyses conducted, a more conservative significance threshold of p < .01 was applied.
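A hedged sketch of the logistic-regression step using statsmodels; the data frame `df`, the predictor names, and the binary `any_use` outcome are assumed for illustration and do not come from the study data:

import statsmodels.api as sm

PREDICTORS = ["years_experience", "cbt", "psychodynamic", "doctoral",
              "private_practice", "workplace_dictates", "child_work"]

def fit_logit(df, predictors, outcome="any_use"):
    # Logistic regression of a binary use indicator on one or more predictors.
    X = sm.add_constant(df[predictors])
    return sm.Logit(df[outcome], X).fit(disp=0)

# Univariate models (one predictor at a time), judged at p < .01:
#   univariate = {p: fit_logit(df, [p]) for p in PREDICTORS}
# Followed by a single multivariate model with all predictors entered:
#   multivariate = fit_logit(df, PREDICTORS)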

Results

Factor Structure of the MFA and ASA-MF

Exploratory and Confirmatory Factor Analysis of the MFA

Exploratory factor analysis of the MFA in the first half of the sample (n = 249) indicated that a 4-factor solution fit the data better than a 3-factor solution [Δχ2(17) = 53.67, p < .001]; however, because the 4-factor solution included a single-item factor, it was rejected. The 3-factor model was also rejected because one of its factors did not make conceptual sense, so a 2-factor model was selected. Several items were removed due to poor loadings, and this model was subjected to a CFA in the remaining half of the sample (n = 250). The residuals for four pairs of items within subscales were allowed to correlate based on modification indices, and the final 2-factor, 14-item model had adequate fit to the data [χ2(112) = 139.45, p < .001, CFI = 0.95, RMSEA = 0.06, SRMR = .05]. These factors corresponded to perceived general benefit associated with monitoring and feedback (MFA Benefit, 10 items) and perceived harm associated with receiving negative feedback (MFA Harm, 4 items). Table 2 shows item loadings from the EFA and CFA analyses. Internal consistencies for both subscales were good (MFA Benefit α = 0.87, MFA Harm α = 0.87).

Table 2

MFA and ASA-MF scale and item scores and factor loadings

Scale or item | M (SD) | d | Factor loadings

Loadings for MFA items are listed as EFA F1, EFA F2, then CFA (items removed after the EFA have EFA loadings only); ASA-MF items have a single CFA loading, and removed ASA-MF items list none. The effect size d compares each mean to the neutral rating of 3. Items marked * are reverse-scored when computing scale scores.
MFA benefit scale score 4.07 (0.59) 2.20
Monitoring treatment progress is an important part of treatment 4.32 (0.90) 1.47 0.58 0.02 0.48
Monitoring treatment progress is valuable for supervision 3.99 (0.79) 1.25 0.52 −0.05 0.41
Providing feedback to clients about treatment progress helps to increase client motivation and engagement 4.05 (0.73) 1.44 0.67 0.04 0.60
Providing clients with feedback about their treatment progress empowers them to make informed decisions about their care 4.12 (0.68) 1.66 0.59 0.07 0.60
Providing clients with feedback about treatment progress facilitates collaboration between clients and clinicians 4.25 (0.59) 2.11 0.81 0.01 0.75
Clients want their therapists to provide them with information about treatment progress 3.89 (0.73) 1.23 0.58 −0.02 0.67
Providing clients with feedback about treatment progress can increase their insight 4.09 (0.66) 1.66 0.71 −0.06 0.65
Providing clients with feedback about treatment progress helps keep treatment focused on treatment goals 4.15 (0.69) 1.67 0.78 0.02 0.73
Providing clients with regular feedback about treatment progress creates an expectation for positive change 3.96 (0.74) 1.29 0.63 0.06 0.68
Providing feedback to clients about treatment progress (or lack thereof) can lead to better treatment outcomes 3.95 (0.72) 1.32 0.73 −0.07 0.73
MFA harm scale score 2.45 (0.69) −0.80
Providing feedback to clients about treatment progress (or lack thereof) would potentially harm the therapeutic alliance 2.27 (0.89) −0.82 0.28 0.42 0.62
Providing clients with negative feedback about their progress would lead to client deterioration or premature treatment termination 2.60 (0.88) −0.46 −0.01 0.88 0.76
Providing clients with negative feedback about their progress would decrease their motivation for and/or engagement in treatment 2.61 (0.90) −0.43 −0.02 0.83 0.69
Providing clients with negative feedback about their progress would make them think their therapist is incompetent 2.30 (0.85) −0.83 0.05 0.53 0.63
MFA items removed after EFA
Clients are able to provide accurate ratings of treatment progress 3.56 (0.81) 0.69 0.28 0.18
Clients do not provide honest ratings of treatment progress 2.46 (0.85) −0.64 0.06 0.16
Monitoring treatment progress is more important for satisfying administrative requirements than for clinical use 2.56 (1.23) −0.35 0.34 0.33
Information from treatment progress measures could be misused by clinic administrators 3.59 (0.96) 0.61 0.03 0.11
I have adequate training about how to discuss treatment progress (and lack thereof) with clients 4.01 (0.85) 1.19 0.32 0.05
Collecting progress monitoring data without providing feedback to clients is likely to harm rapport 3.29 (0.93) 0.31 0.13 −0.12
ASA-MF clinical utility scale score 2.98 (0.64) −0.03
Standardized progress measures don’t tell me anything I can’t learn from just talking to clients* 2.79 (1.03) −0.20 0.68
Using clinical judgment to monitor progress is superior to using standardized assessment measures* 3.22 (0.91) 0.24 0.72
Standardized progress measures provide more useful information than other assessments like informal interviews or observations 2.56 (0.81) −0.54 0.45
Standardized progress measures don’t capture what’s really going on with clients* 3.15 (0.92) 0.17 0.69
Clinical problems are too complex to be captured by a standardized progress measure* 3.25 (1.01) 0.25 0.67
Standardized progress measures gather information about the client that may not otherwise come up in session 3.68 (0.83) 0.81 0.58
Standardized progress measures are not able to detect meaningful changes as they occur* 2.91 (0.92) −0.10 0.63
Standardized progress measures don’t measure the outcome domains most important to clients* 3.07 (0.89) 0.08 0.70
ASA-MF treatment planning scale score 3.35 (0.70) 0.50
Standardized progress measures help gather objective information about whether treatment is working 3.45 (0.83) 0.54 0.69
Standardized progress measures help identify when treatment is not going well 3.31 (0.85) 0.37 0.70
Standardized progress measures can provide helpful information about whether it is time to terminate treatment 3.17 (0.72) 0.18 0.72
Information from standardized progress measures can help me plan for sessions 3.44 (0.91) 0.49 0.77
Standardized progress measures help identify when to change the overall treatment plan 3.38 (0.88) 0.43 0.78
ASA-MF practicality scale score 3.13 (0.73) 0.18
Standardized progress measures can efficiently gather information 3.52 (0.77) 0.68 0.62
The information I receive from standardized progress measures isn’t worth the time I spend administering, scoring, and interpreting the results* 2.90 (1.04) −0.10 0.79
Standardized progress measures interfere with establishing rapport during a session* 2.96 (1.05) −0.04 0.75
Completing a standardized progress measure is too much of a burden for my clients* 2.77 (0.91) −0.26 0.72
I do not have time to administer standardized progress measures on a frequent basis* 3.24 (1.08) 0.22 0.51
ASA-MF items removed due to inadequate factor loadings
Standardized progress measures are readily available in the language my clients speak 3.31 (1.00) 0.31
There are few standardized progress measures valid for ethnic minority clients 3.29 (0.72) 0.41
Copyrighted standardized progress measures are affordable for use in practice 2.70 (0.91) −0.32
Standardized progress measures are too difficult for many clients to read or understand 2.76 (0.89) −0.27
I have adequate training in the use of standardized progress measures 3.26 (1.06) 0.25

Confirmatory Factor Analysis of the ASA-MF

The ASA-MF model was initially specified based on the original ASA three-factor structure (Clinical Utility, Psychometric Quality, and Practicality). However, this model did not fit the data well [χ2(227) = 691.66, p < .001, CFI = 0.89, RMSEA = 0.06, SRMR = .06]. Inspection of factor loadings and modification indices indicated that 3 items originally specified as loading on the Psychometric Quality subscale loaded better on the Clinical Utility subscale. Review of these items suggested this deviation from the original ASA factor structure likely resulted from the reduced focus on diagnostic assessment tools in the ASA-MF. With this revision, the items remaining on the Psychometric Quality subscale corresponded more closely to attitudes toward using assessment for treatment planning purposes; this subscale was renamed accordingly (Treatment Planning). Additionally, 6 items originally specified as loading on the Practicality scale did not load on any of the three subscales and were removed. The final model consisted of 18 items across three factors, all with acceptable internal consistency: ASA-MF Clinical Utility (8 items, α = 0.85), ASA-MF Treatment Planning (5 items, α = 0.85), and ASA-MF Practicality (5 items, α = 0.81). Residuals of several items with similar wording were allowed to correlate. This model demonstrated adequate fit [χ2(130) = 383.97, p < .001, CFI = 0.94, RMSEA = 0.06, SRMR = .05]. Table 2 shows item loadings.

Provider Attitudes Toward Monitoring and Feedback and Standardized Progress Measures

Table 2 contains the scale and item scores from the MFA and ASA-MF. On the MFA, providers reported positive attitudes toward gathering progress data and providing feedback to clients. The MFA Benefit scale and item scores were all positive on average, with large effects when compared to the neutral rating of three (d’s = 1.23–2.20). Participants disagreed with the MFA Harm scale items (scale d = −0.80), particularly with the idea that feedback could harm the therapy alliance (d = −0.82) or make clients think their therapist is incompetent (d = −0.83).

Responses on the ASA-MF were more neutral. On the ASA-MF Clinical Utility scale, attitudes were neutral on average (scale d = −0.03), although respondents did agree that standardized progress measures can gather information that might not otherwise come up in session (item d = 0.81). Responses on the ASA-MF Treatment Planning scale were somewhat positive (scale d = 0.50), with item score effect sizes falling in the small to medium range relative to the neutral value of 3. Finally, scores on the ASA-MF Practicality scale were neutral on average (scale d = 0.18), with most item score effect sizes falling in the small range.

Although overall scores were neutral to positive, the percentages of participants holding negative attitudes varied across scales. On the MFA scales, only 0.4 % (Benefit) to 6.8 % (Harm) held negative attitudes (i.e., mean scale scores below 2.5 on the Benefit scale or above 3.5 on the Harm scale). In contrast, on the ASA-MF scales, 11.4–20.7 % of participants had scale scores below 2.5, indicating negative attitudes.

Standardized Progress Measure Use

When asked about their use of standardized progress measures, 61.5 % of participants reported never using them consistently (including “other” responses such as “as needed”), 24.6 % reported using them on a regular basis but less often than once a month (e.g., every 90 days, or at the beginning and end of treatment), 8.7 % reported using them monthly, and only 5.2 % reported using them every 1–2 sessions. Participants also indicated how often they would prefer to administer them: 45.0 % said never, 29.5 % said at some regular interval but less often than once a month, 17.5 % said monthly, 6.8 % said every 1–2 sessions, and 1.2 % said they did not know.

Predictors of Provider Attitudes

MFA Scales

Table 3 shows predictors of MFA and ASA-MF scores. As hypothesized, providers with CBT theoretical orientations held more positive attitudes than those with other orientations for the MFA Harm scale (p < .01), although they did not differ on the MFA Benefit scale. Also consistent with hypotheses, providers working in private practice held more negative attitudes on the MFA Benefit scale than those in other settings (p < .01), but setting did not predict the MFA Harm scale. Contrary to study hypotheses, psychodynamic orientation, degree, years of professional experience, and child work were not related to MFA scores.

Table 3

Professional and practice predictors of attitudes toward monitoring and feedback

Predictors are rows; for each of the five attitude scales, the three columns give the univariate coefficient (B), the univariate R2, and the coefficient (B) from the multivariate (MV) model with all predictors entered simultaneously. ** p < .01; *** p < .001.

Predictor | MFA Benefit: B, R2, MV B | MFA Harm: B, R2, MV B | ASA-MF Clinical Utility: B, R2, MV B | ASA-MF Treatment Planning: B, R2, MV B | ASA-MF Practicality: B, R2, MV B
Years professional experience | −0.021, 0.002, −0.009 | −0.001, 0.000, −0.021 | −0.058, 0.008, −0.022 | −0.079, 0.000, −0.048 | −0.033, 0.002, −0.008
CBT orientation | 0.098, 0.010, 0.087 | −0.19**, 0.018, −0.19** | 0.25***, 0.038, 0.19** | 0.22**, 0.018, 0.16 | 0.23**, 0.023, 0.19**
Psychodynamic orientation | −0.093, 0.005, −0.045 | 0.042, 0.001, −0.059 | −0.34**, 0.043, −0.21** | −0.28**, 0.024, −0.14 | −0.22, 0.013, −0.10
Doctoral versus master’s degree | −0.053, 0.001, −0.096 | −0.11, 0.003, −0.093 | 0.20, 0.013, 0.099 | 0.13, 0.003, 0.028 | 0.13, 0.004, 0.064
Private practice versus other settings | −0.14**, 0.017, −0.13 | 0.18, 0.015, 0.13 | −0.16, 0.015, −0.042 | −0.22**, 0.015, −0.068 | −0.20**, 0.016, −0.090
Workplace dictates | 0.076, 0.015, −0.001 | −0.11, 0.007, −0.039 | 2.91***, 0.024, 0.13 | 3.45***, 0.040, 0.22** | 3.082***, 0.015, 0.12
Child work | 0.11, 0.011, 0.10 | −0.12, 0.006, −0.12 | 0.055, 0.002, 0.012 | 0.084, 0.006, 0.036 | 0.051, 0.001, 0.019

When all predictors were examined simultaneously (Table 3), private practice setting no longer predicted the MFA Benefit scale; the collective set of predictors explained 4.0 % of the variance in the scale. CBT orientation remained a significant predictor of the MFA Harm scale and the group of predictors explained 3.8 % of the variance in the scale.

ASA-MF Scales

As hypothesized, providers with CBT orientations also held significantly more positive attitudes on all three ASA-MF scales than those with other orientations and psychodynamic providers reported more negative attitudes on the Clinical Utility and Treatment Planning scales (all p’s < 0.01; Table 3). Also consistent with hypotheses, providers working in private practice had more negative attitudes than those in other settings on the ASA-MF Treatment Planning and ASA-MF Practicality scales (p’s < 0.01). Workplace dictates also were associated with more positive attitudes on all three ASA-MF scales (p’s < 0.001). Again, contrary to hypotheses, degree, years of professional experience, and work with child clients were not significant predictors.

When the predictors were examined simultaneously (Table 3), both CBT and psychodynamic orientations remained significant predictors of the Clinical Utility Scale and the group of predictors explained 8.6 % of the variance in the scale. For Treatment Planning, only workplace dictates remained significant; the predictors explained 8.0 % of the variance in the scale. For Practicality, CBT orientation remained significant but private practice setting did not; the predictors explained 4.5 % of the variance in the scale.

Predictors of Standardized Progress Measure Use

Clinician characteristics and attitudes were next examined as predictors of self-reported use of standardized progress monitoring. Use of any progress monitoring (i.e., those who endorsed any use of progress measures versus the “never” group) and frequent use (i.e., those who administered progress measures at least monthly versus those who administered them less often or never) were both examined. Both variables were first predicted from clinician professional and practice characteristics and attitudes in univariate analyses (Table 4).

Table 4

Professional, practice, and attitudinal predictors of standardized progress measure administration

For each outcome—Use (any regular administration of standardized progress measures) and Frequent use (administers standardized progress measures at least once a month)—columns give the univariate B and odds ratio (OR), followed by the multivariate (MV) B and OR. ** p < .01; *** p < .001.

Predictor | Use: univariate B, OR | Use: MV B, OR | Frequent use: univariate B, OR | Frequent use: MV B, OR
Professional/practice characteristics
Years professional experience | −0.35***, 0.71 | −0.34**, 0.71 | −0.32, 0.73 | −0.36, 0.70
CBT orientation | 0.38, 1.46 | 0.045, 1.05 | −0.04, 0.96 | −0.71, 0.49
Psychodynamic orientation | −0.65, 0.52 | −0.063, 0.94 | −0.88, 0.41 | −0.41, 0.67
Doctoral versus master’s degree | −0.002, 1.00 | −0.55, 0.58 | −0.089, 0.92 | −0.52, 0.60
Private practice versus other settings | −0.98***, 0.38 | −0.63, 0.53 | −0.87**, 0.42 | −0.76, 0.47
Workplace dictates | 0.98***, 2.65 | 0.62**, 1.85 | 0.83**, 2.30 | 0.42, 1.51
Child work | 0.25, 1.28 | −0.033, 0.97 | 0.17, 1.18 | 0.026, 1.03
Attitudes
MFA Benefit | 0.82***, 2.27 | 0.30, 1.35 | 0.43, 1.54 | −0.56, 0.57
MFA Harm | −0.37**, 0.69 | 0.13, 1.14 | 0.014, 1.02 | 0.76**, 2.13
ASA-MF Clinical Utility | 1.14***, 3.12 | 0.44, 1.55 | 1.45***, 4.26 | 0.37, 1.45
ASA-MF Treatment Planning | 1.04***, 2.81 | 0.36, 1.43 | 1.27***, 3.57 | 0.77, 2.16
ASA-MF Practicality | 0.95***, 2.57 | 0.52, 1.68 | 1.4***, 4.08 | 1.51***, 4.54

As hypothesized, the likelihood of using any standardized progress measures at all was lower for clinicians with more years of professional experience (B = −0.35, p < .001, OR 0.71) and clinicians working in private practice (B = −0.98, p < .001, OR 0.38); use was higher for those whose workplace dictated assessment practices (B = 0.98, p < .001, OR 2.65) and those holding more positive attitudes on all of the ASA-MF and MFA subscales (all p’s < .01). All predictors were then examined together. In this model, years of professional experience and workplace dictates remained significant, but the other predictors did not. Although the attitude scales were no longer significant in this model, this was likely driven largely by the high correlations among the scales (absolute r’s = 0.25–0.75). To understand the incremental validity of the scales, the R2 value for a model including only the professional and practice characteristics (R2 = 0.13) was compared to the R2 for a model that also included the attitude scales (R2 = 0.29); attitudes accounted for an additional 16 % of the variability in use.
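This incremental-validity comparison can be sketched as two nested logistic models. The paper does not state which pseudo-R2 statistic was used, so McFadden's (as exposed by statsmodels) stands in here; all names are illustrative:

import statsmodels.api as sm

def incremental_pseudo_r2(df, base_cols, attitude_cols, outcome="any_use"):
    # Fit a base model (professional/practice characteristics only) and a
    # full model adding the five attitude scales; return the R2 increment.
    def fit(cols):
        X = sm.add_constant(df[cols])
        return sm.Logit(df[outcome], X).fit(disp=0)
    base, full = fit(base_cols), fit(base_cols + attitude_cols)
    # The analogous comparison reported above yielded 0.29 - 0.13 = 0.16.
    return full.prsquared - base.prsquared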

Next, predictors of frequent use were examined. The likelihood of frequent use was lower for clinicians working in private practice (B = −0.87, p = .002, OR .42) and higher for those with workplace dictates (B = 0.83, p = .002, OR 2.30) and those holding more positive attitudes on the three ASA-MF subscales (all p’s < 0.001). When all predictors were examined together, only the ASA-MF Practicality scale remained significant and the MFA Harm scale became significant via a suppressor effect, surprisingly in the opposite direction from what would have been predicted. The five attitude scales predicted 35 % of the variability in frequent use (R2 for the significant professional characteristics only model = 0.12; R2 for the professional characteristics + attitude scales model = 0.47).

Discussion

The first goal of this paper was to develop psychometrically adequate measures of attitudes toward monitoring and feedback in general and toward standardized progress measures specifically. The resulting measures, the MFA and the ASA-MF, demonstrated adequate factor structures and internal consistencies in this sample. The MFA consisted of two subscales: one measuring the perceived benefit of general monitoring and feedback practices and the other measuring the perceived risk of harm from negative feedback. Consistent with the original ASA, the ASA-MF had three subscales, but they differed somewhat from the original ones. As with the original ASA, the ASA-MF had a Practicality scale that measured practical concerns about standardized progress measures. However, the change in focus from diagnostic assessment to standardized progress monitoring resulted in two new factors: the Clinical Utility scale measured the perceived general clinical usefulness of standardized progress measures, whereas the Treatment Planning scale measured their perceived utility for planning treatment. Initial evidence of predictive validity for each measure was found through their relations to self-reported use of standardized progress measures, and they demonstrated incremental validity in predicting use beyond professional and practice predictors. As such, these measures appear to be promising tools for future studies of monitoring and feedback, filling an important implementation science gap (Martinez et al. 2014).

The MFA data suggested that clinicians have overall positive opinions about the general practice of monitoring and feedback. They strongly agreed that this practice is beneficial and strongly disagreed that it could have harmful effects. In contrast, ASA-MF scores indicated that clinician attitudes toward using standardized progress measures were more neutral, particularly regarding their general clinical utility and their practicality. Taken together, it seems that clinicians feel it would be helpful to have frequent feedback about their clients’ progress, but may not have faith in the ability of standardized progress measures to meet that need. On average, this sample disagreed with the notion that standardized measures do not add anything beyond talking to their clients, but they also disagreed that standardized measures are more useful than other assessments, such as informal interviews and observations. Although the available data regarding the benefits of monitoring and feedback are based on systems that use standardized measures, the literature includes other strategies for monitoring progress, such as idiographic ratings of individual target problems (e.g., Elliott et al. 2016; Weisz et al. 2011), and some data suggest these measures may be more acceptable to clinicians (Landes et al. 2015). Should future studies indicate that these forms of progress monitoring also lead to improved outcomes, they may offer a more acceptable route to routine monitoring and feedback.

A second goal of the study was to examine rates of use of standardized progress measures. Participants reported very low rates of use, with only 13.9 % of participants reporting the type of administration demonstrated to lead to improved client outcomes. On a somewhat positive note, when asked about how often they would like to administer these measures, nearly 25 % of participants said they would like to gather frequent progress data. However, only 6.8 % said they would prefer administering them every 1–2 sessions and 45 % said they would prefer to not gather any progress data. Thus, consistent with prior studies of psychologists (Ionita and Fitzpatrick 2014; Overington et al. 2015), these data indicate very low rates of progress monitoring among social workers, mental health counselors, and marriage and family therapists.

We also sought to identify clinicians who might be more open to engaging in monitoring and feedback. As mentioned above, there was a strong link between attitudes and use, with attitudes accounting for 16 % of the variability in any use and 35 % of the variability in frequent use. Specifically, both general attitudes toward monitoring and feedback and attitudes toward standardized progress measures predicted whether participants ever used standardized progress measures. However, only attitudes toward the measures themselves (i.e., ASA-MF scores) were significant predictors of frequent use and, consistent with prior work on the original ASA (Jensen-Doss and Hawley 2010), practicality concerns were the only independent predictor of frequent use. This pattern suggests that efforts to get clinicians to engage in any progress monitoring could target both types of attitudes, but that persuading clinicians to engage in frequent progress monitoring with standardized measures may require convincing them of the utility of the measures themselves.

Interestingly, despite these strong links between attitudes and use, the two were not always associated with the same provider characteristics. As hypothesized, providers with CBT theoretical orientations were less likely to see monitoring and feedback as harmful, and more likely to find standardized progress measures clinically useful and practical. This finding is not surprising, given the concordance between monitoring and feedback and fundamental principles of CBT (Persons 2006). However, CBT providers were not significantly more likely to engage in progress monitoring. Relatedly, psychodynamic orientation was associated with more negative attitudes on two of the ASA-MF scales, although it also did not relate to use. These findings suggest that positive attitudes alone may not lead to use, consistent with theories such as the theory of planned behavior (Ajzen 1991), which posits that, for positive attitudes to lead to a behavior, they must be accompanied by perceptions that there is a social norm expecting one to perform the behavior (i.e., subjective norm) and beliefs that the individual is able to engage in the behavior (i.e., perceived behavioral control). It is also possible that CBT clinicians rely on other, non-standardized progress monitoring strategies not assessed here, such as progress through fear hierarchies or achievement of behavioral activation tasks. Alternatively, clinicians’ self-reported CBT orientations may not reflect actual delivery of CBT treatment components (Creed et al. 2016).

Also consistent with hypotheses, providers with more years of professional experience were less likely to engage in progress monitoring, although experience was unrelated to frequent use or to attitudes. It is possible that this sample, which had an average age of 56 and over 20 years of professional experience, did not have enough variability to fully test this hypothesis. Future work should attempt to access clinicians with a broader range of experience.

Interestingly, the most consistent predictors of both attitudes and use related to work setting. Consistent with prior studies of other evidence-based practices (Becker and Jensen-Doss 2013; Jensen-Doss and Hawley 2010), private practitioners saw less benefit to monitoring and feedback in general, and felt standardized progress measures were less practical and less useful for treatment planning. Private practitioners were also less likely to collect standardized progress data. In addition, providers working in settings that dictated their assessment practices held more positive attitudes toward standardized progress measures and were more likely to administer assessments than those working in settings without some form of assessment policy. Consistent with the broader literature emphasizing the importance of organizational factors to evidence-based practice use (e.g., Aarons et al. 2012), these data suggest that factors such as access to resources to support assessment or funder requirements to monitor progress likely play a strong role in determining assessment practices. Future research is needed to further explicate the organizational factors that might support or hinder use of monitoring and feedback. Although two items were included in the original MFA to assess organizationally related attitudes about how administrators might use the data, these items did not load on the final factors; future research using a broader set of organizational items is needed. In addition, efforts targeted specifically at private practitioners may be needed, as this setting is under-represented in implementation work, although practice-oriented research strategies are being developed to address this gap (Koerner and Castonguay 2015).

Contrary to our hypotheses, degree and working with adults versus children were not significant predictors of attitudes or use. Prior studies finding an association between degree and assessment attitudes and practices focused on psychologists (Ionita and Fitzpatrick 2014; Jensen-Doss and Hawley 2010); degree may not be as relevant in the disciplines studied here. In this sample, nearly all providers said that working with adults was a major part of their practice, so our analyses of client age focused on those who said they also worked with children. It was not possible to determine whether the adults those providers also worked with were parents of their child clients or separate adult cases, limiting our ability to cleanly test this hypothesis.

This study had several strengths. Our response rate was comparable to, or higher than, nearly all prior monitoring and feedback surveys (Batty et al. 2013; Hatfield and Ogles 2004; Ionita and Fitzpatrick 2014; Johnston and Gowers 2005), resulting in a large, national sample of primarily masters-level providers from professional disciplines that were not well represented in prior studies but often figure prominently in dissemination efforts (e.g., Herschell et al. 2014). We used a rigorous measure development process and generated two measures of attitudes that can be used in a broad range of future studies. We also assessed frequent use of progress measures similar to the monitoring and feedback procedures found to facilitate treatment success, unlike prior studies, which primarily measured less frequent progress monitoring.

Despite these strengths, this study also had several limitations, many of which suggest directions for future research. We do not have data about the individuals who did not respond to the survey, so it is possible that providers who did not respond differed in some important ways from those who did. The limited data available regarding the general memberships of the professional organizations suggest that our sample is representative of these organizations in terms of gender and percentage of doctoral-level providers (see Footnote 3), but its representativeness on other variables is unknown. Additionally, professional organization members represent only a subset of providers; attitudes may differ in other populations of mental health service providers. Further, given the role of behavioral control in predicting behavior change (Ajzen 1991), it would have been useful to have information about how much training respondents had received in monitoring and feedback. One ASA-MF item that did not load on the final set of factors asked whether clinicians had adequate training in standardized progress measures; responses were generally neutral on that item (M = 3.26, SD = 1.06, d = 0.25). Future studies should examine whether prior experiences and training in monitoring and feedback translate into more positive attitudes toward, and greater use of, these practices.

Another limitation is that we were only able to assess self-reported use of standardized progress measures. Future studies could employ more objective measures of use, such as electronic medical records review. This paper also only focuses on use of standardized progress measures. While the available data supporting the use of monitoring and feedback rest on such measures, future research should examine attitudes toward, and use of, other types of progress measures, such as idiographic measures. Finally, the links between attitudes and self-reported use documented here are cross-sectional in nature; as such, the directionality of the relationship is unclear. Future longitudinal research (e.g., in studies of clinician training) could help clarify whether attitudes lead to use or vice-versa. The former would suggest that attitudes could be used as an indicator of openness to future use or that focusing on improving attitudes might lead to increased use. The latter would suggest that other strategies to get clinicians to engage in monitoring and feedback, such as agency or funder requirements to do so, might more readily lead to increased use and, ultimately, improved attitudes as well.

Despite these limitations, this study provides useful information about monitoring and feedback practices that can inform future implementation efforts. Our data suggest that concerns about standardized assessment measures, particularly regarding their practicality and utility above and beyond other assessment strategies, and organizational/setting factors are likely drivers of the low use of this strategy. Addressing concerns about the measures themselves will likely require a combination of creating more practical monitoring and feedback systems (e.g., with brief, low-cost or free measures with stable operating platforms; Bickman et al. 2016) that fit the realities of the practice setting (e.g., Borntrager and Lyon 2015), making a stronger case that these systems add utility above and beyond simply checking in informally with clients every session, and considering alternative measurement approaches, such as idiographic measures. Therapists working within organizations will also need organizational supports such as strong buy-in from agency leaders and supervisors, along with sufficient time and resources for data collection (Gleacher et al. 2016). Additional research is needed to identify other organizational factors that might contribute to the use of monitoring and feedback. For example, it may be that agencies that value data for decision-making have an organizational climate that expects clinicians to collect and utilize data, contributing to clinicians’ sense of a subjective norm (Ajzen 1991) regarding monitoring and feedback. Providers working within private practice settings may benefit from involvement in supportive, collaborative efforts with other private practitioners (Koerner and Castonguay 2015). Fortunately, our findings suggest that monitoring and feedback is an evidence-based practice that aligns well with provider values. However, the challenge remains to identify ways for providers to feasibly engage in its use.

Acknowledgments

Funding This research was supported by an award from the University of Miami’s Provost Research Award program to Dr. Jensen-Doss. Dr. Lewis’ work on this project was supported by the National Institute of Mental Health of the National Institutes of Health (NIH) under Award Number R01MH103310, and Dr. Lyon’s work by NIH award K08MH095939.

Footnotes

1. Providers could indicate multiple work settings and multiple orientations. Providers who indicated they spent any time working in private practice were included in the Private Practice group, those who listed cognitive or behavioral approaches as part of their orientations were counted in the Cognitive-Behavioral group, and those who listed psychodynamic approaches as part of their orientations were counted in the Psychodynamic group.

2. These definitions are included in the instructions for the final versions of the measures, which are included as supplemental material to this article.

3. Degree information was not available for AMHCA members.

Portions of this paper were presented at the 2015 annual meeting for the Association for Behavioral and Cognitive Therapies, Chicago, IL.

Electronic supplementary material The online version of this article (doi:10.1007/s10488-016-0763-0) contains supplementary material, which is available to authorized users.

Compliance with Ethical Standards

Conflict of Interest None of the authors have conflicts of interest to declare.

Ethical Approval All procedures performed in this study were in accordance with the ethical standards of the University of Miami Institutional Review Board and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent This study was approved for a waiver of signed consent; all participants were provided with a consent statement.

References

  • Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The evidence-based practice attitude scale (EBPAS). Mental Health Services Research. 2004;6(2):61–74.
  • Aarons GA, Horowitz J, Dlugosz L, Ehrhart M. The role of organizational processes in dissemination and implementation research. Dissemination and Implementation Research in Health: Translating Science to Practice. 2012:128–153.
  • Aarons GA, Sawitzky AC. Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services. 2006;3(1):61.
  • Ajzen I. The theory of planned behavior. Organizational Behavior & Human Decision Processes. 1991;50:179–211.
  • APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61(4):271–285.
  • Batty MJ, Moldavsky M, Foroushani PS, Pass S, Marriott M, Sayal K, Hollis C. Implementing routine outcome measures in child and adolescent mental health services: From present to future practice. Child and Adolescent Mental Health. 2013;18(2):82–87.
  • Becker EM, Jensen-Doss A. Computer-assisted therapies: Examination of therapist-level barriers to their use. Behavior Therapy. 2013;44(4):614–624.
  • Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(10):1114–1119.
  • Bickman L, Douglas SR, De Andrade ARV, Tomlinson M, Gleacher A, Olin S, Hoagwood K. Implementing a measurement feedback system: A tale of two sites. Administration and Policy in Mental Health and Mental Health Services Research. 2016;43(3):410–425.
  • Bickman L, Kelley SD, Breda C, De Andrade ARV, Riemer M. Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services. 2011;62(12):1423–1429.
  • Borntrager CF, Lyon AR. Client progress monitoring and feedback in school-based mental health. Cognitive and Behavioral Practice. 2015;22(1):74–86.
  • Boswell JF, Kraus DR, Miller SD, Lambert MJ. Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions. Psychotherapy Research. 2015;25(1):6–19.
  • Chorpita BF, Bernstein A, Daleiden EL, Research Network on Youth Mental Health. Driving with roadmaps and dashboards: Using information resources to structure the decision models in service organizations. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(1–2):114–123.
  • Creed TA, Wolk CB, Feinberg B, Evans AC, Beck AT. Beyond the label: Relationship between community therapists' self-report of a cognitive behavioral therapy orientation and observed skills. Administration and Policy in Mental Health and Mental Health Services Research. 2016;43(1):36–43.
  • Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: The tailored design method. 3rd ed. Hoboken: Wiley; 2009.
  • Dozois DJ, Mikail SF, Alden LE, Bieling PJ, Bourgon G, Clark DA, Drapeau M, Gallson D, Greenberg L, Hunsley J. The CPA presidential task force on evidence-based practice of psychological treatments. Canadian Psychology/Psychologie canadienne. 2014;55(3):153.
  • Ebesutani C, Shin SH. Knowledge, attitudes, and usage of evidence-based assessment and treatment practices in the Korean mental health system: Current status and future directions. The Korean Journal of Clinical Psychology. 2014;33(4):891–917.
  • Elliott R, Wagner J, Sales CMD, Rodgers B, Alves P, Café MJ. Psychometrics of the personal questionnaire: A client-generated outcome measure. Psychological Assessment. 2016;28(3):263–278.
  • Frauenhoffer D, Ross MJ, Gfeller J, Searight HR, Piotrowski C. Psychological test usage among licensed mental health practitioners: A multidisciplinary survey. Journal of Psychological Practice. 1998;4(1):28–33.
  • Garland AF, Bickman L, Chorpita BF. Change what? Identifying quality improvement targets by investigating usual mental health care. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37(1–2):15–26.
  • Garland AF, Brookman-Frazee L, Hurlburt MS, Accurso EC, Zoffness RJ, Haine-Schlagel R, Ganger W. Mental health care for children with disruptive behavior problems: A view inside therapists' offices. Psychiatric Services. 2010;61(8):788–795.
  • Gilbody SM, House AO, Sheldon TA. Psychiatrists in the UK do not use outcomes measures: National survey. British Journal of Psychiatry. 2002;180(2):101–103.
  • Gleacher AA, Olin SS, Nadeem E, Pollock M, Ringle V, Bickman L, Douglas S, Hoagwood K, et al. Administration and Policy in Mental Health and Mental Health Services Research. 2016;43:426–440.
  • Hall C, Moldavsky M, Taylor J, Sayal K, Marriott M, Batty M, Pass S, Hollis C, et al. Implementation of routine outcome measurement in child and adolescent mental health services in the United Kingdom: A critical perspective. European Child & Adolescent Psychiatry. 2014;23(4):239–242.
  • Hatfield DR, Ogles BM. The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice. 2004;35(5):485–491.
  • Hatfield DR, Ogles BM. Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34(3):283–291.
  • Hawley KM, Cook JR, Jensen-Doss A. Do noncontingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health. 2009;36(5):343–348.
  • Herschell AD, Lindhiem OJ, Kogan JN, Celedonia KL, Stein BD. Evaluation of an implementation initiative for embedding dialectical behavior therapy in community settings. Evaluation and Program Planning. 2014;43:55–63.
  • Higa-McMillan CK, Powell CK, Daleiden EL, Mueller CW. Pursuing an evidence-based culture through contextualized feedback: Aligning youth outcomes and practices. Professional Psychology: Research and Practice. 2011;42(2):137–144.
  • Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999;6(1):1–55.
  • Ionita G, Fitzpatrick M. Bringing science to clinical practice: A Canadian survey of psychological practice and usage of progress monitoring measures. Canadian Psychology/Psychologie canadienne. 2014;55(3):187–196.
  • Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child & Adolescent Psychology. 2010;39(6):885–896.
  • Jensen-Doss A, Hawley KM. Understanding clinicians' diagnostic practices: Attitudes toward the utility of diagnosis and standardized diagnostic tools. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(6):476–485.
  • Jensen-Doss A, Hawley KM, Lopez M, Osterberg LD. Using evidence-based treatments: The experiences of youth providers working under a mandate. Professional Psychology: Research and Practice. 2009;40(4):417.
  • Johnston C, Gowers S. Routine outcome measurement: A survey of UK child and adolescent mental health services. Child and Adolescent Mental Health. 2005;10(3):133–139.
  • Koerner K, Castonguay LG. Practice-oriented research: What it takes to do collaborative research in private practice. Psychotherapy Research. 2015;25(1):67–83.
  • Kotte A, Hill KA, Mah AC, Korathu-Larson PA, Au JR, Izmirian S, Higa-McMillan CK, et al. Facilitators and barriers of implementing a measurement feedback system in public youth mental health. Administration and Policy in Mental Health and Mental Health Services Research. 2016:1–18.
  • Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice. 2003;10(3):288–301.
  • Landes SJ, Carlson EB, Ruzek JI, Wang D, Hugo E, DeGaetano N, Lindley SE, et al. Provider-driven development of a measurement feedback system to enhance measurement-based care in VA mental health. Cognitive and Behavioral Practice. 2015;22(1):87–100.
  • Little RJ. A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association. 1988;83(404):1198–1202.
  • Lyon AR, Dorsey S, Pullmann M, Silbaugh-Cowdin J, Berliner L. Clinician use of standardized assessments following a common elements psychotherapy training and consultation program. Administration and Policy in Mental Health and Mental Health Services Research. 2015;42(1):47–60.
  • Lyon AR, Ludwig K, Wasse JK, Bergstrom A, Hendrix E, McCauley E. Determinants and functions of standardized assessment use among school mental health clinicians: A mixed methods evaluation. Administration and Policy in Mental Health and Mental Health Services Research. 2016;43(1):122–134.
  • Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implementation Science. 2014;9:118.
  • Meehan T, McCombes S, Hatzipetrou L, Catchpoole R. Introduction of routine outcome measures: Staff reactions and issues for consideration. Journal of Psychiatric and Mental Health Nursing. 2006;13(5):581–587.
  • Muthén L, Muthén B. Mplus user's guide. 6th ed. Los Angeles, CA: Muthén & Muthén; 1998–2011.
  • Overington L, Fitzpatrick M, Hunsley J, Drapeau M. Trainees' experiences using progress monitoring measures. Training and Education in Professional Psychology. 2015;9(3):202–209.
  • Palmiter DJ Jr. A survey of the assessment practices of child and adolescent clinicians. American Journal of Orthopsychiatry. 2004;74(2):122–128.
  • Persons JB. Case formulation-driven psychotherapy. Clinical Psychology: Science and Practice. 2006;13(2):167–170.
  • Reese RJ, Norsworthy LA, Rowlands SR. Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training. 2009;46(4):418–431.
  • Unsworth G, Cowie H, Green A. Therapists' and clients' perceptions of routine outcome measurement in the NHS: A qualitative study. Counselling and Psychotherapy Research. 2012;12(1):71–80.
  • Ventimiglia JA, Marschke J, Carmichael P, Loew R. How do clinicians evaluate their practice effectiveness? A survey of clinical social workers. Smith College Studies in Social Work. 2000;70(2):287–306.
  • Weisz JR, Chorpita BF, Frye A, Ng MY, Lau N, Bearman SK, Ugueto AM, Langer DA, Hoagwood KE, et al. Youth top problems: Using idiographic, consumer-guided assessment to identify treatment needs and to track change during psychotherapy. Journal of Consulting and Clinical Psychology. 2011;79(3):369.
