Databases

I was very optimistic when our school adopted a positive behavior support system, especially since that meant committing to a data-based decision-making process, but unfortunately it hasn't made reaching consensus any easier. For instance, last month, after a very involved functional behavior assessment with input from many people, we developed a behavior intervention plan for a student with verbal aggression.

After three weeks of implementing the plan and collecting data, not only have we seen no reduction in the student's behavior, but we also can't agree on what to do about it. Our psychologist wants to give the program more time to work, the classroom teacher wants to scrap it and start over, and everyone else basically wants to avoid team meetings. Any ideas?

Three weeks of programming with no movement in the data definitely calls for a response from the team, but the plan may not need changing. As you suggest, one of the strengths of the positive behavior support approach is that we use data to guide our process. Unfortunately, unlike people, not all data are created equal.

Before deciding to change the plan, we need to be confident that the data reflect reality and are not just an artifact of how they are being recorded. Are all staff using the same behavioral definitions and recording parameters? It is not unusual for a kind of category dilation, sometimes called observer drift, to occur as the plan progresses: as we make observations, our recording criteria can expand to include more instances of behavior over time. When this happens, we may record more occurrences of behavior than we did during baseline or in earlier phases of treatment.

How are staff recording the data? Are they making notations as the behavior happens, or are they waiting until late in the day (or week) and "remembering" how many times the behavior occurred? The plan may be more effective than we believe, but if the numbers are wrong, we could end up discontinuing a successful strategy.
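If it helps to make this concrete, the habit to encourage is stamping each occurrence the moment it happens rather than reconstructing counts from memory. Here is a purely illustrative sketch; the file name, columns, and function are invented for this example and are not drawn from your plan:

```python
# A minimal sketch of moment-of-occurrence event recording.
# LOG_FILE and the column layout are hypothetical; the point is only
# that each occurrence is stamped when it happens, not recalled later.
import csv
from datetime import datetime

LOG_FILE = "behavior_log.csv"  # hypothetical file name

def record_occurrence(student_id: str, behavior: str, observer: str) -> None:
    """Append one timestamped occurrence as soon as it is observed."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(timespec="seconds"),
             student_id, behavior, observer]
        )

# Called at the moment the behavior occurs, not hours (or days) later.
record_occurrence("student-01", "verbal aggression", "classroom teacher")
```

Even a paper tally sheet accomplishes the same thing; what matters is that the record is made in real time, by every observer, against the same definitions.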

If we are confident that the data are real, then we have to investigate whether the plan is actually being carried out as written. Did the team understand the plan? Was there consensus about the support strategies before they were implemented? Were elements of the plan inconvenient or logistically unfeasible, and so "unofficially" discontinued? Did supplemental but unhelpful procedures emerge that the rest of the team never knew about? Program drift can develop quickly and happens more often than we might think.

Both of these questions (are the numbers real, and is the plan being implemented as written?) can be answered by spending time in the setting, obtaining reliability measures, and observing the plan being carried out.
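One simple reliability check is to have two people independently count the behavior during the same session and compare totals: total-count agreement is the smaller count divided by the larger, times 100. The sketch below shows the arithmetic; the function name is invented for this example, and the 80% figure is a commonly cited rule of thumb rather than anything from your plan:

```python
# A minimal sketch of a total-count interobserver agreement (IOA) check,
# assuming two observers independently counted the same behavior during
# the same session. Total-count IOA = (smaller count / larger count) * 100.
def total_count_ioa(count_a: int, count_b: int) -> float:
    """Percent agreement between two observers' totals for one session."""
    if count_a == count_b == 0:
        return 100.0  # both observers agree that nothing occurred
    return min(count_a, count_b) / max(count_a, count_b) * 100

# Example: the teacher recorded 12 occurrences, an aide recorded 9.
print(f"{total_count_ioa(12, 9):.0f}% agreement")  # -> 75% agreement
```

An agreement score well below the conventional 80% benchmark would point back to the earlier questions about shared definitions and recording parameters before anyone concludes the plan itself has failed.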

Once we have confidence in the data and in program fidelity, we can then consider whether the plan needs to be revised. In doing so, we can ask these questions:

- Did the person himself or herself participate as a team member in the development of the plan?
- Does the plan build on the person's existing strengths, competencies, talents, and skills?
- Are quality of life enhancements adequately addressed?
- Did we correctly identify the function of the behavior of concern?
- Do the preventive strategies adequately address behavioral function?
- Are the replacement behaviors and skills functionally equivalent to the behavior of concern?
- Are the replacement behaviors and skills culturally and socially appropriate?
- Are there unidentified competing contingencies at work in the environment?
- Is the team being properly supported to record data and carry out the plan?

We applaud your team's commitment to positive approaches and to data-based decision-making. Remember, things seldom run as smoothly as we would like. Navigating these waters requires a systematic approach, a team that supports its members, and a tremendous amount of flexibility and tenacity. Keep at it.

Good luck. Thanks for shaking the Magic 8 Ball.