Evaluating CCEIS Initiatives


We often think of evaluation as a one-time activity that we do at the end of a project to generate required information for a report. However, evaluation is a powerful tool that we should use consistently to help us, as practitioners, do our jobs better and achieve better outcomes for our students. Evaluating what we’re doing also gives us a way to highlight our good work.

Evaluating Comprehensive Coordinated Early Intervening Services (CCEIS) is not very different from evaluating other initiatives. In fact, we can use many of the same tools and surveys across initiatives, simply customizing them to the content of the specific initiative (e.g., the learning objectives of a training).

Most CCEIS plans include an element of professional learning, either training or coaching. Our expectations of professional learning, regardless of the specific topic, are the same: participants increase their knowledge and then change their behavior based on that new knowledge. The following image illustrates the changes we hope to accomplish.

[Image: CCEIS change illustration]

Let’s look at an example of one LEA’s CCEIS activities. The LEA conducts a root cause analysis of significant disproportionality.

  1. The LEA identifies five schools that suspend children with and without disabilities at a higher rate than other schools in the LEA.
  2. The analysis also reveals that the same five schools have higher-than-average teacher turnover and high student transiency rates.

To address the findings, the LEA proposes the following activities:

  • LEA supports the five schools with CCEIS funds (15% of the LEA's IDEA grant)
    • Provides a part-time Positive Behavioral Interventions and Supports (PBIS) coach for the schools
    • Provides PBIS training for all staff in the schools
    • Provides parent training to ensure an understanding of the new discipline approach

How could the LEA evaluate these activities?

  1. Outputs. Outputs are the immediate results of activities and things we can count or document. In this example, to evaluate the outputs of supporting the five schools with the CCEIS funds, the LEA could collect and report the number of training materials the schools developed with the funds, the number of trainings they held for teachers and parents, and attendance at those trainings (e.g., a roster noting the number and roles of attendees).
  2. Short-term outcomes. Short-term outcomes are the direct results of the activities and outputs. For this example, the LEA could evaluate increases in adult knowledge as a result of the PBIS trainings. In order to do this, the LEA could create a short survey with questions that measure changes in knowledge, awareness, and attitudes about PBIS. It could look at ratings of the quality, relevance, and usefulness (QRU) of the trainings from participants’ perspectives. The LEA should align knowledge gain questions with the learning objectives of the activity. Here are some sample questions to measure QRU:
    1. Did the participants find the training to be of high quality (i.e., was the content sound and grounded in accepted professional practice)?
    2. Was the training relevant to participants’ work (i.e., did the content address an important problem or critical issue in the field)?
    3. Was the training useful for participants’ work?
  3. Intermediate outcomes. Intermediate outcomes are changes in adult behavior based on knowledge acquired through the activities and outputs. To measure these changes, the LEA could use an appropriate fidelity tool (in this case, the Tiered Fidelity Instrument [TFI]) to see whether the adults the LEA trained are implementing PBIS with fidelity. (The TFI is an existing tool that is part of the PBIS toolkit.)
  4. Long-term outcomes. Long-term outcomes are the results that fulfill the project’s goals. The LEA could track long-term student outcomes by examining in-school suspension (ISS) and out-of-school suspension (OSS) data for students with and without disabilities; one way an LEA might calculate these rates from student-level data is sketched after this list. The LEA would also look at rates of teacher turnover, since it identified teacher turnover as a factor related to student suspensions.
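For LEAs that keep student-level discipline data in a simple file, the sketch below shows one way to compare OSS rates for students with and without disabilities at the five schools across two years. It is only an illustration; the file names, school names, and column headings (school, has_iep, oss_incidents) are assumptions, not a prescribed data format.

```python
# Minimal sketch with hypothetical file and column names: compare out-of-school
# suspension (OSS) rates for students with and without disabilities at the five
# CCEIS schools before and after the PBIS work.
import csv
from collections import defaultdict

CCEIS_SCHOOLS = {"School A", "School B", "School C", "School D", "School E"}  # placeholder names

def oss_rates(path):
    """Return the share of enrolled students with at least one OSS, by disability status."""
    enrolled = defaultdict(int)
    suspended = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["school"] not in CCEIS_SCHOOLS:
                continue
            group = "with disability" if row["has_iep"] == "Y" else "without disability"
            enrolled[group] += 1
            if int(row["oss_incidents"]) > 0:
                suspended[group] += 1
    return {group: suspended[group] / enrolled[group] for group in enrolled}

# Hypothetical student-level files: one row per student with school, has_iep, and oss_incidents.
baseline = oss_rates("students_baseline_year.csv")
current = oss_rates("students_current_year.csv")
for group in sorted(baseline):
    print(f"{group}: {baseline[group]:.1%} -> {current.get(group, 0.0):.1%}")
```

The same kind of calculation could be pointed at ISS data or run separately for each school to see where rates are and are not moving.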

This example gives a quick outline of how we could evaluate our CCEIS plans after we conduct a root cause analysis, identify a root cause, and plan and implement a response. We would collect evidence of outputs and measure short-term, intermediate, and long-term outcomes. In addition, we could create efficiencies in the process. For example, once we develop a survey for one training, we could use it for other trainings and simply change the questions designed to measure knowledge, aligning those questions with the new learning objectives. For measuring fidelity, many evidence-based practices, like PBIS, have established tools we can use, so we don’t have to “reinvent the wheel.” Finally, we could use student outcome data (like the numbers of ISSs and OSSs) that we are already collecting for other purposes, such as reporting, for evaluation as well.
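To make the survey piece of that process concrete, here is a small sketch of how a reusable survey export might be summarized, with the QRU items held constant across trainings and the knowledge items swapped to match each training’s learning objectives. The file name and column names are hypothetical.

```python
# Minimal sketch with hypothetical column names: summarize a reusable training
# survey export. QRU items stay the same across trainings; knowledge items are
# replaced to match each training's learning objectives.
import csv
from statistics import mean

QRU_ITEMS = ["quality", "relevance", "usefulness"]              # assumed 1-5 rating scale
KNOWLEDGE_ITEMS = ["pbis_tiers", "data_based_decision_making"]  # swap per training

def summarize_survey(path):
    """Return the average rating for each survey item plus the respondent count."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    summary = {item: mean(int(row[item]) for row in rows) for item in QRU_ITEMS + KNOWLEDGE_ITEMS}
    summary["respondents"] = len(rows)
    return summary

print(summarize_survey("pbis_training_survey.csv"))  # hypothetical export of survey responses
```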

Reviewing, on a monthly or quarterly basis, the data we collect related to trainings, fidelity of implementation of what participants learn, and the effect on student outcomes will help us and our teams improve our programming, identify challenges, and make mid-course corrections when necessary.

If you want to learn more about evaluating CCEIS and other issues related to Significant Disproportionality, please visit the IDC Significant Disproportionality Resources page and the materials and recordings from our 2021 Significant Disproportionality Summit.

If you have questions, please reach out to us. You can contact your IDC State Liaison by selecting your state in the Find Your IDC State Liaison section on the TA page of the IDC website. Your IDC State Liaison is available to help with any of IDC’s resources or to discuss any of your TA needs.

Kim Schroeder