How to Measure School-wide Positive Behavioral Interventions and Supports Implementation Fidelity with the Team Implementation Checklist: Percent of Points or Percent of Items
Claudia G. Vincent & Tary J. Tobin
Research Statement and Rationale
The extent to which schools implement school-wide positive behavioral interventions and supports (SWPBS) with fidelity is of consequence to researchers, practitioners, and policy makers. Researchers gathering evidence for the effectiveness of SWPBS use implementation fidelity measures to differentiate between schools that implement and those that do not within the contexts of descriptive or experimental group designs. School personnel focused on creating welcoming and orderly school environments use SWPBS implementation fidelity measures to channel resources toward intervention components with low fidelity scores. Policy makers base decisions to adopt SWPBS on rigorous research studies conducted with defensible implementation criteria. Given these far-reaching consequences of SWPBS implementation fidelity measures, our focus is on comparing different ways to calculate the criterion score for the Team Implementation Checklist (TIC), a fidelity of implementation measure widely used by schools engaged in SWPBS implementation.
The commonly accepted TIC criterion is derived from a count of raw points, including points that represent items rated "in progress." Because an item rated "in progress" is not yet fully implemented, an alternative calculation of the TIC criterion could be based on the number of items rated "achieved" (i.e., fully implemented). The following analysis explores whether calculating the TIC criterion based on the number of items rated "achieved" is more representative of SWPBS implementation fidelity than calculating it based on the number of raw points.
To assess the extent to which either TIC criterion calculation represents SWPBS implementation fidelity, it is necessary to calibrate both TIC criterion calculations against a known measure of SWPBS implementation fidelity. In addition to the TIC, the Benchmarks of Quality (BoQ; Kincaid, Childs, & George, 2005) is commonly used to assess SWPBS implementation fidelity and has been found to be valid even when "administered in diverse methods" (Childs, George, & Kincaid, 2011, p. 1). Childs, Kincaid, and George (2011) recently reported an explanation of changes in the latest version of the BoQ (Kincaid, Childs, & George, 2010). Kincaid, Childs, Blasé, and Wallace (2007) used the BoQ in a study of perceptions of barriers and facilitators of SWPBS implementation, as viewed by educators in schools classified as high or low implementers. Schools scoring 70% on the BoQ typically would score 80% on the TIC. Given this validated criterion score of the BoQ, we examined whether schools that obtained 80% or more of all TIC points differ on their BoQ score from schools that rated 80% or more of all TIC items as "achieved."
Scoring the TIC
The TIC (version 3.1; Sugai, Horner, Lewis-Palmer, & Rossetto Dickey, 2011) consists of a total of 22 items grouped into 6 subscales: (1) establish commitment, (2) establish and maintain team, (3) self-assessment, (4) establish school-wide expectations: prevention systems, (5) classroom behavior support systems, and (6) build capacity for function-based support. Each item is scored on a 3-point scale, with 2 = "achieved," 1 = "in progress," and 0 = "not yet started," for a total maximum score of 44. Teams are advised to complete it quarterly until at least an 80% criterion is reached, although actual use varies widely (Tobin, 2006).
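The two candidate criterion calculations can be sketched as follows. This is an illustrative sketch, not code from the study; the function names and the example ratings are ours.

```python
def tic_percent_of_points(ratings):
    """Traditional criterion: percent of total possible points (22 items x 2 points)."""
    return 100 * sum(ratings) / (2 * len(ratings))

def tic_percent_achieved(ratings):
    """Alternative criterion: percent of items rated 'achieved' (a score of 2)."""
    return 100 * sum(1 for r in ratings if r == 2) / len(ratings)

# Hypothetical team: 14 items achieved, 8 in progress
ratings = [2] * 14 + [1] * 8
tic_percent_of_points(ratings)  # 36/44, about 81.8% -> meets the 80% points criterion
tic_percent_achieved(ratings)   # 14/22, about 63.6% -> misses the 80% items criterion
```

As the example shows, a school can meet the points-based criterion while falling well short of the items-based one, which is why the two calculations can classify different numbers of schools as implementers.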
Scoring the BoQ
The BoQ consists of a total of 53 items arranged into 10 subscales: (1) PBIS team, (2) faculty commitment, (3) effective discipline procedures, (4) data entry, (5) expectations and rules, (6) reward system, (7) lesson plans for teaching behavioral expectations, (8) implementation plan, (9) classroom systems, and (10) evaluation. Items are scored on different point scales: 12 items can receive a maximum score of 3, 30 a maximum score of 2, and 11 a maximum score of 1, for a total maximum score of 107. Cohen, Kincaid, and Childs (2007) found that schools that score at or above 70% of total points on the BoQ see reductions in office discipline referral rates. Thus, 70% of the BoQ total score is considered representative of SWPBS implementation.
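The BoQ point structure and the 70% criterion can be expressed in a few lines. This is an illustrative sketch; the constant and function names are ours.

```python
# 12 items worth up to 3 points, 30 worth up to 2, 11 worth up to 1
BOQ_MAX_TOTAL = 12 * 3 + 30 * 2 + 11 * 1  # = 107

def boq_percent(total_points, max_total=BOQ_MAX_TOTAL):
    """Percent of total possible BoQ points."""
    return 100 * total_points / max_total

def meets_boq_criterion(total_points, max_total=BOQ_MAX_TOTAL):
    """70% of total points is the validated implementation criterion."""
    return boq_percent(total_points, max_total) >= 70

meets_boq_criterion(75)  # 75/107 is about 70.1% -> True
meets_boq_criterion(74)  # 74/107 is about 69.2% -> False
```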
Our sample was drawn from the SWPBS research database located at the University of Oregon and contained data from schools that completed both the TIC and the BoQ in the 2010-2011 school year via PBIS Assessment (www.pbisassessment.org), a web-based application allowing schools to complete SWPBS fidelity measures, review outcomes, and engage in action planning toward full and sustained implementation. (For a detailed description of the data and associated technical notes, please see the Appendix.) The final dataset consisted of a total of 448 schools: 1 pre-K, 282 elementary, 89 middle, 38 high, and 38 schools with other grade-level configurations (e.g., K-8, K-12). A total of 22 of these schools were labeled as alternative schools. Table 1 provides an overview of the sample's mean enrollment, mean TIC total points, and mean BoQ total points:
Table 1. Sample Overview
- Mean enrollment (SD)
- Mean TIC total points (SD)
- Mean BoQ total points (SD)
- States represented: 13 US states (CO, CT, GA, IL, KY, MI, MO, NY, OK, OR, PA, SC, WI)
To examine how the two methods of calculating the TIC criterion compare with the established BoQ implementation criterion of 70% of BoQ points, we compared mean percent of BoQ points across schools that met and did not meet the TIC criterion for both calculation methods. If calculating the TIC criterion based on number of points led to substantively different conclusions regarding implementation fidelity, one would expect the BoQ scores of schools that did or did not obtain 80% or more of TIC total points to differ from the BoQ scores of schools that did or did not rate 80% or more of TIC items as "achieved."
Table 2 provides an overview of the results of our descriptive analysis. It is important to note that 272 schools met the traditional TIC criterion (80% or more of TIC points), while 180 schools met the alternative TIC criterion (80% or more of TIC items rated "achieved"). Thus, the alternative TIC criterion calculation appears more rigorous. However, the groups did not differ substantively on the BoQ measure.
Table 2. Descriptive Outcomes
TIC criterion met/not met | Mean percent of BoQ points (SD)
- ≥ 80% of TIC points (n = 272)
- < 80% of TIC points (n = 176)
- ≥ 80% of TIC items "achieved" (n = 180)
- < 80% of TIC items "achieved" (n = 268)
Schools that met the TIC criterion calculated with either method had similar BoQ scores far exceeding the BoQ implementation criterion. Schools that did not meet the TIC criterion calculated with either method also had similar BoQ scores close to the minimum BoQ implementation criterion of 70%.
Of the 180 schools with 80% or more of the TIC items "achieved," 93% (n = 167) also met the BoQ implementation criterion of 70% or more BoQ points. Of the 272 schools that met the traditional TIC criterion (80% or more TIC points), 88% (n = 238) also met the BoQ implementation criterion of 70% or more BoQ points.
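The agreement percentages above follow directly from the reported group counts; the arithmetic can be checked in a couple of lines:

```python
# Schools meeting each TIC criterion that also met the 70% BoQ criterion
items_met, items_also_boq = 180, 167
points_met, points_also_boq = 272, 238

round(100 * items_also_boq / items_met)    # 93 (percent agreement, items criterion)
round(100 * points_also_boq / points_met)  # 88 (percent agreement, points criterion)
```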
Our analysis has shown that basing the TIC criterion on the number of points or the number of items does not appear to have substantive consequences for implementation fidelity. Calculating the TIC criterion from the number of items rated "achieved" does, however, result in fewer schools being considered implementers. Whether a more rigorous criterion is desirable depends on the meaning and consequences of "implementation." The purpose of SWPBS implementation is to improve discipline outcomes for students; increased implementation is commonly associated with reductions in office discipline referrals (Bradshaw, Koth, Thornton, & Leaf, 2009; Bradshaw, Mitchell, & Leaf, 2009; Bradshaw, Reinke, Brown, Bevans, & Leaf, 2008). Therefore, "implementation" might have to be calibrated on student outcomes, with office discipline referrals being the most readily available measure of discipline. The current dataset, consisting of 1 year of data, did not lend itself to examining reductions in referrals over time in conjunction with changes in TIC scores. Future research efforts might focus on multi-year datasets that would allow identification of an optimal criterion score associated with meaningful reductions in referrals via a receiver operating characteristic (ROC) analysis. ROC analysis would identify a TIC criterion score at which the highest benefit (true and meaningful reductions in referrals) is associated with the lowest cost (misclassification of schools).
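As a sketch of what such a ROC analysis might look like, the following uses simulated data (the relationship between TIC scores and referral reductions is invented purely for illustration) and selects the cutoff maximizing Youden's J, i.e., the true-positive rate minus the false-positive rate:

```python
import random

random.seed(1)
n = 400
tic = [random.uniform(40, 100) for _ in range(n)]  # simulated TIC percent scores
# Simulated outcome: higher TIC scores make a meaningful ODR reduction more likely
outcome = [1 if random.uniform(0, 100) < t else 0 for t in tic]

def youden_best_cutoff(scores, labels):
    """Scan candidate cutoffs; return the one maximizing TPR - FPR (Youden's J)."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_j, best_c = -1.0, None
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        j = tp / pos - fp / neg
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

cutoff, j = youden_best_cutoff(tic, outcome)
```

With multi-year data, `tic` would be each school's TIC percent score and `outcome` an indicator of a meaningful multi-year reduction in office discipline referrals.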
References
Bradshaw, C. P., Koth, C. W., Thornton, L. A., & Leaf, P. J. (2009). Altering school climate through school-wide positive behavior interventions and supports: Findings from a group randomized effectiveness trial. Prevention Science, 10, 100-115.
Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2009). Examining the effects of school-wide positive behavioral interventions and supports on student outcomes. Journal of Positive Behavior Interventions. Advance online publication. doi:10.1177/1098300709334798
Bradshaw, C., Reinke, W., Brown, L., Bevans, K., & Leaf, P. (2008). Implementation of school-wide positive behavioral interventions and supports (PBIS) in elementary schools: Observations from a randomized trial. Education and Treatment of Children, 31, 1-26.
Childs, K. E., George, H. P., & Kincaid, D. (2011). Stability in variant administration methods of the School-Wide PBS Benchmarks of Quality (BoQ). Evaluation Brief. OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports. Retrieved from http://www.pbis.org/evaluation/evaluation_briefs/mar_11_(2).aspx
Childs, K. E., Kincaid, D., & George, H. P. (2011). The revised School-Wide PBS Benchmarks of Quality (BoQ). Evaluation Brief. OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports. Retrieved from http://www.pbis.org/evaluation/evaluation_briefs/mar_11_(1).aspx
Cohen, R., Kincaid, D., & Childs, K. E. (2007). Measuring school-wide positive behavior support implementation: Development and validation of the Benchmarks of Quality. Journal of Positive Behavior Interventions, 9, 203-213.
Kincaid, D., Childs, K., Blasé, K. A., & Wallace, F. (2007). Identifying barriers and facilitators in implementing school-wide positive behavior support. Journal of Positive Behavior Interventions, 9, 174-184. doi:10.1177/10983007070090030501
Kincaid, D., Childs, K. E., & George, H. (2005). School-wide benchmarks of quality. Unpublished instrument. Tampa: University of South Florida.
Kincaid, D., Childs, K., & George, H. (2010). School-wide Benchmarks of Quality (Revised). Unpublished instrument. Tampa: University of South Florida.
Sugai, G., Horner, R. H., Lewis-Palmer, T., & Rossetto Dickey, C. (2011, May). Team Implementation Checklist, Version 3.1. Eugene: Educational & Community Supports, University of Oregon.
Tobin, T. J. (2006). Use of the Team Implementation Checklist in regular and alternative high schools. Eugene: University of Oregon, Educational and Community Supports. Retrieved from http://pages.uoregon.edu/ttobin/alt_tic.pdf
Appendix
The original dataset consisted of 3 files:
- School demographics (e.g., grade level, location, enrollment, office discipline referral rates) (n = 2340)
- TIC scores (n = 3423)
- BoQ scores (n = 2897)
Because we wanted to focus our analysis on public schools with reliable ODR data located in the United States, we carefully reviewed each data file for the presence of (a) private schools, (b) numbers of students with ODRs exceeding overall enrollment, (c) schools within schools (coded as "ParentOrChild"), and (d) school days listed as less than 150 or more than 366. This review resulted in the following deletions:
- Identified and deleted 3 private schools
- Identified and deleted 39 schools with StudentsWithODR greater than SWIS enrollment
- Identified and deleted 20 schools that were "ParentOrChild"
- Identified and deleted 95 schools with SchoolDays not between 150 and 366 days
- Identified and deleted 5 private schools
- Identified and deleted 48 schools that were "ParentOrChild"
- Identified and deleted 6 private schools
- Identified and deleted 42 schools that were "ParentOrChild"
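The exclusion rules above can be summarized as a single record filter. This is an illustrative sketch; the field names (`is_private`, `students_with_odr`, `parent_or_child`, `school_days`, `enrollment`) are our stand-ins for the actual database columns.

```python
def keep_school(row):
    """Return True if a school record passes all four exclusion checks."""
    if row.get("is_private"):
        return False
    if row["students_with_odr"] > row["enrollment"]:  # ODR count cannot exceed enrollment
        return False
    if row.get("parent_or_child"):                    # school-within-school records
        return False
    if not (150 <= row["school_days"] <= 366):        # implausible school-year length
        return False
    return True

# Hypothetical records: the second fails the ODR-vs-enrollment check
schools = [
    {"students_with_odr": 50, "enrollment": 400, "school_days": 180},
    {"students_with_odr": 500, "enrollment": 400, "school_days": 180},
]
cleaned = [s for s in schools if keep_school(s)]  # keeps only the first record
```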
The TIC and BoQ variables contained in the dataset were compared to the TIC and BoQ items on the copies of both instruments posted on www.pbisassessment.org. Results were as follows:
- The TIC data contained in the dataset consisted of a total of 22 variables, matching the 22 items on the instrument.
- The BoQ data contained in the dataset consisted of a total of 60 variables, while the instrument contained a total of 53 items. The following discrepancies were identified:
  - 7 items were contained in the dataset but did not appear on the BoQ posted on www.pbisassessment.org:
    - BoQ_TeamBroadRepresentation (maximum score in dataset = 1)
    - BoQ_DisciplineResponsesToMinors (maximum score in dataset = 1)
    - BoQ_DataEnteredWeekly (maximum score in dataset = 1)
    - BoQ_RewardsNaturalReinforcement (maximum score in dataset = 1)
    - BoQ_CrisisFacultyTaught (maximum score in dataset = 1)
    - BoQ_CrisisResponseRehearsed (maximum score in dataset = 1)
    - BoQ_CrisisProceduresAccessible (maximum score in dataset = 1)
    These additional items were completed by 202 schools. Total possible additional points: 7.
  - The 7 items of the BoQ Classroom subscale were missing for all schools in the dataset:
    - BOQ_ClassroomRulesDefined (maximum score on scoring guide = 2)
    - BOQ_ClassroomRoutinesIdentified (maximum score on scoring guide = 2)
    - BOQ_ClassroomRoutinesTaught (maximum score on scoring guide = 2)
    - BOQ_ClassroomTeachersUsePraise (maximum score on scoring guide = 2)
    - BOQ_ClassroomBehaviorAcknowledgement (maximum score on scoring guide = 2)
    - BOQ_ClassroomProcedures (maximum score on scoring guide = 2)
    - BOQ_ClassroomBehaviorConsequences (maximum score on scoring guide = 2)
    Total missing points: 14 (7 items x 2 points)
- The identified discrepancies had implications for the total possible BoQ score used to calculate the percent of points scored. We examined descriptives for the variable BoQ_TotalScore contained in the dataset. For schools that completed the additional items (n = 202), values of the variable ranged from 13 to 100. This range appears accurate. For schools that did not complete the additional 7 items (n = 1137), values ranged from 25 to 107. Given the missing Classroom items, a total score of 107 is impossible; the true total score should not exceed 93.
Therefore, the BoQ_TotalScore for the majority of the sample (n = 1137) was not interpretable. We created a new variable, "BoQTotal.Recoded," by summing across all BoQ subscales contained in the dataset. This new variable represented the true BoQ total score and was used to calculate the percent of points scored. The denominator of the percent calculation for schools that completed the 7 additional items was 100 (107 - 14 + 7). The denominator for schools that did not complete the 7 additional items was 93 (107 - 14).
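The corrected percent calculation, with its two denominators, amounts to the following (a sketch using the point totals described above; the function name is ours):

```python
def boq_percent_recoded(total_recoded, completed_extra_items):
    """Percent of points using the corrected total possible score.

    All schools are missing the 14 Classroom-subscale points (7 items x 2);
    schools that completed the 7 extra items can earn 7 additional points.
    """
    denominator = (107 - 14) + (7 if completed_extra_items else 0)  # 100 or 93
    return 100 * total_recoded / denominator

boq_percent_recoded(70, completed_extra_items=True)   # 70/100 = 70.0
boq_percent_recoded(70, completed_extra_items=False)  # 70/93, about 75.3
```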
Files were then merged, matched on school ID numbers. A total of n = 448 schools were represented in all 3 files.