The Revised School-Wide PBS Benchmarks of Quality (BoQ)

Karen Elfner Childs, Don Kincaid, Heather Peshak George

Background

The School-Wide Benchmarks of Quality (BoQ) was initially developed and validated in 2005 to address the need for an efficient method of measuring implementation of school-wide PBS that would also provide feedback to guide teams toward higher levels of implementation. Over the last five years, exposure to and use of the instrument have increased. The BoQ is included in the PBIS Evaluation Blueprint as one of the tools used to address the key questions related to fidelity when evaluating SWPBS programs. It is included on PBS Surveys (and the soon-to-be-released PBIS Assessment) and is used by many states as an integral part of their evaluation systems.

This evaluation brief addresses some of the questions that have arisen during the ongoing and widespread use of the Benchmarks of Quality. Specifically, the questions are: (1) Does a factor analysis indicate that the critical elements of the BoQ “hang together,” that all items are of adequate strength, and that the factors hold true across years of administration? (2) Do the 7 new classroom items form a consistent factor? (3) Does the BoQ/SET concurrent validity hold true with a greater number of schools than were used in the initial validation study?

Question (1) Factor Analysis

Rationale

The 10 critical elements that comprise the Benchmarks of Quality were identified on practical and theoretical grounds. The original validation procedures for the BoQ did not include an analysis of the factor structure. For more information on the development and validation of the BoQ, see Cohen, Kincaid, and Childs (2007). To explore the internal strength and interrelationships of items on the BoQ, a factor analysis was performed.

Method

An exploratory factor analysis was conducted using BoQ data from 228 schools, with the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy used to determine whether the data were likely to factor well and to identify weak items. Additionally, internal consistency was examined using the change in alpha when each item was removed. A principal factor analysis was run in SPSS 15.0 using data from 281 schools, and a confirmatory factor analysis was conducted using LISREL 8.72 with data from two alternate school years (n=99 and n=188) to determine whether the factor structure is invariant across years. An analysis of the relationship between the individual BoQ items and the overall score was also conducted.
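
To make this step concrete, below is a minimal sketch of how the KMO check and factor extraction might be reproduced in Python with the open-source factor_analyzer package. The original analyses were run in SPSS 15.0 and LISREL 8.72, so the package choice, the input file name, and the five-factor extraction shown are illustrative assumptions, not the study's actual procedure.

    # Sketch: KMO check and exploratory factor extraction on BoQ item scores.
    # Assumes a CSV with one row per school and one column per BoQ item;
    # the file name is hypothetical.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_kmo

    items = pd.read_csv("boq_item_scores.csv")

    # KMO measure of sampling adequacy; values above the conventional .60
    # minimum suggest the data are likely to factor well.
    kmo_per_item, kmo_total = calculate_kmo(items)
    print(f"Overall KMO: {kmo_total:.2f}")

    # Extract factors (the number retained would follow the study's
    # extraction criteria) and inspect item loadings for weak items.
    fa = FactorAnalyzer(n_factors=5, rotation="varimax")
    fa.fit(items)
    print(pd.DataFrame(fa.loadings_, index=items.columns))
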
Results

The exploratory factor analysis found the data likely to have a single factor (KMO = .92, well above the .60 minimum necessary to proceed with exploratory factor analysis). Given the extraction criteria, the analysis resulted in a 5-factor structure with 28 items. Because the theoretical and practical basis for the 10-factor structure is foundational in Tier 1/Universal PBIS training and technical assistance to support schools in implementation, an analysis of the 10-factor structure was also conducted.

Seven items were identified as weak by both the factor analysis and the item-total correlation (a sketch of this screening follows the list):

  • Team has broad representation (alpha=.24/rtot=.251)
  • Suggested array of responses to major problem behaviors (alpha=.40/rtot=.388)
  • Data entered weekly (minimum) (alpha=.32/rtot=.325)
  • System includes opportunities for naturally occurring reinforcement (alpha=.40/rtot=.392)
  • Faculty/staff are taught how to respond to crisis situations (alpha=.35/rtot=.347)
  • Responding to crisis situations is rehearsed (alpha=.32/rtot=.329)
  • Procedures for crisis situations are readily available (alpha=.34/rtot=.339)
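
As a rough illustration of how such items could be flagged, the sketch below computes corrected item-total correlations (each item against the total of the remaining items) and alpha-if-item-deleted with pandas. This is an assumed re-implementation, not the study's SPSS procedure, and the brief's reported values may reflect uncorrected item-to-total correlations.

    # Sketch: screening items by item-total correlation and change in alpha.
    import pandas as pd

    def cronbach_alpha(df: pd.DataFrame) -> float:
        # Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance)
        k = df.shape[1]
        item_vars = df.var(axis=0, ddof=1)
        total_var = df.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def screen_items(items: pd.DataFrame) -> pd.DataFrame:
        rows = []
        for col in items.columns:
            rest = items.drop(columns=col)
            rows.append({
                "item": col,
                # Corrected item-total correlation: item vs. sum of the rest.
                "r_item_total": round(items[col].corr(rest.sum(axis=1)), 3),
                # Alpha with the item removed; an increase flags a weak item.
                "alpha_if_deleted": round(cronbach_alpha(rest), 3),
            })
        return pd.DataFrame(rows)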

Question (2) Classroom Items

Rationale

One of the most widespread suggestions for improving the utility of the BoQ during its first few years was to measure the fidelity of “classroom” implementation. It is clear that the actions of individual teachers within their classrooms (teaching expectations and rules, using prompts and reminders to keep the focus on desired behaviors, appropriate use of the reinforcement system, and accurate use of the discipline system processes and forms) weigh heavily on the school’s ability to reach a high level of implementation and the desired outcomes. Unfortunately, these skills were not well generalized from training to practice across classrooms.

Method

The following items were derived largely from some of the more widely utilized classroom management assessment tools and are included in the 2010 BoQ.

  • Classroom rules are defined for each of the school-wide expectations and are posted in classrooms
  • Classroom routines and procedures are explicitly identified for activities where problems often occur (e.g. entering class, asking questions, sharpening pencil, using restroom, dismissal)
  • Expected behavior routines in classroom are taught
  • Classroom teachers use immediate and specific praise
  • Acknowledgement of students demonstrating adherence to classroom rules and routines occurs more frequently than acknowledgement of inappropriate behaviors
  • Procedures exist for tracking classroom behavior problems
  • Classrooms have a range of consequences/ interventions for problem behavior that are documented and consistently delivered

Scoring for each of the 7 classroom items is as follows: 2 points = evident in most classrooms (more than 75% of classrooms); 1 point = evident in many classrooms (50-75% of classrooms); 0 points = evident in only a few classrooms (less than 50% of classrooms). These 7 items were piloted in nearly 500 schools at the end of the 2008-2009 school year before being incorporated into the revised (2010) version of the BoQ. An initial factor analysis indicated that the items held together well.
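
The rubric maps directly to a simple thresholding rule. The function below is a hypothetical illustration of that mapping, not part of the instrument itself.

    # Sketch: the 0-1-2 scoring rule for each of the seven classroom items,
    # given the percentage of classrooms in which the item is evident.
    def classroom_item_points(pct_classrooms: float) -> int:
        if pct_classrooms > 75:      # evident in most classrooms (>75%)
            return 2
        if pct_classrooms >= 50:     # evident in many classrooms (50-75%)
            return 1
        return 0                     # evident in only a few classrooms (<50%)

    # Example: items evident in 80%, 60%, and 30% of classrooms score 2, 1, 0.
    print([classroom_item_points(p) for p in (80, 60, 30)])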

Results

An additional factor analysis of the 7 items was conducted after the first year of administration in Florida, with data from 398 schools that completed the revised Benchmarks of Quality. The principal component factor analysis showed only one factor, explaining 62.10% of the variance, and all items had strong primary loadings (.68 and above). A confirmatory internal-consistency analysis using Cronbach’s coefficient alpha found very strong raw and standardized alphas (alpha > .90), demonstrating that the new classroom items form one consistent factor.
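
A single-factor check of this kind can be approximated with open-source tools. The sketch below runs scikit-learn’s PCA on standardized item scores and derives a standardized alpha from the mean inter-item correlation; the tooling and file name are assumptions, as the brief does not name the software used for this analysis.

    # Sketch: do the seven classroom items behave as one factor?
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    classroom = pd.read_csv("classroom_items.csv")   # hypothetical: 7 item columns
    scores = StandardScaler().fit_transform(classroom)

    # A dominant first component (the brief reports 62.10% of variance)
    # supports a single-factor interpretation.
    pca = PCA().fit(scores)
    print(pca.explained_variance_ratio_)

    # Standardized Cronbach's alpha from the mean inter-item correlation r_bar:
    # alpha_std = k * r_bar / (1 + (k - 1) * r_bar)
    corr = classroom.corr().to_numpy()
    k = corr.shape[0]
    r_bar = (corr.sum() - k) / (k * (k - 1))         # mean off-diagonal correlation
    print(f"Standardized alpha: {k * r_bar / (1 + (k - 1) * r_bar):.2f}")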

Question (3) Concurrent Validity

Rationale

The initial validation of the Benchmarks of Quality assessed the concurrent validity of the BoQ against the School-wide Evaluation Tool (SET) with 47 schools and found a moderate correlation. The moderate strength of the relationship was hypothesized to reflect the BoQ’s greater ability to discriminate among schools that are implementing with fidelity. Though the instruments share common elements (response to discipline incidents, school expectations/rules, etc.), the BoQ measures some of those areas with greater specificity. Additionally, the BoQ covers critical features not covered by the SET, including team functioning and buy-in. Given the relatively small number of schools involved in the initial concurrent validity study, a supplementary assessment was conducted with a greater number of schools.

Method

In the follow-up concurrent validity assessment, Pearson product-moment correlations were computed with data from 720 schools in Maryland and Illinois that completed both the SET and the BoQ within the same general time frame of a given school year.
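
A minimal sketch of this computation with SciPy is shown below; the input file and column names are hypothetical, assuming one row per school with matched BoQ and SET totals.

    # Sketch: concurrent validity as a Pearson product-moment correlation
    # between matched BoQ and SET total scores.
    import pandas as pd
    from scipy.stats import pearsonr

    scores = pd.read_csv("boq_set_scores.csv")    # hypothetical input
    r, p = pearsonr(scores["boq_total"], scores["set_total"])
    print(f"r = {r:.2f}, p = {p:.4g}")            # brief reports r = 0.53, p < .0001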

Results

The results show a significant correlation between BoQ and SET scores overall (r = 0.53, p < .0001).

Illinois and Maryland Descriptive Statistics: Concurrent Validity

  Data Source    n      r
  Maryland       668    0.51
  Illinois       27     0.62
  Both           695    0.53

The findings show that both Illinois’ and Maryland’s BoQ and SET scores are significantly correlated with each other. Also, Illinois’ BoQ and SET scores have a higher correlation coefficient than Maryland’s. This may be explained by differences in the states’ procedures for use of the instruments (e.g., in Illinois, only schools scoring 80/80 on the SET use the BoQ).

Discussion and Further Directions

The results of the factor analysis directed the recent changes reflected in the revised BoQ (2010), released in the 2009-2010 school year. The 7 items that did not load on a single factor and had weak item-to-total correlations were removed. The 7 classroom items that were found to hold together as a factor were added as a “classroom” critical element, maintaining the 53-item, 10-element structure of the instrument. Because the newly added items carry more possible points than the items they replaced, the total possible points on the BoQ increased from 100 to 107.

There are multiple tools available for measuring and monitoring implementation; during a time of scale-up across the nation, the BoQ proves to be an efficient, statistically sound instrument.

Citation for this Research Brief

OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports. Web site: http://pbis.org/evaluation/evaluation_briefs/default.aspx

References

Cohen, R., Kincaid, D., & Childs, K. (2007). Measuring School-Wide Positive Behavior Support Implementation: Development and Validation of the Benchmarks of Quality (BoQ). Journal of Positive Behavior Interventions, 9(4), 203-213.

Horner, R.H., Todd, A.W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The school-wide evaluation tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3–12.

Kincaid, D., Childs, K., & George, H. P. (2005). School-wide Benchmarks of Quality (BoQ). Unpublished instrument, University of South Florida.

The Revised School-wide Benchmarks of Quality is available at: http://www.pbssurveys.org