Site Search

Results 1 - 3 of 3

    Format: Presentations

    What Data Can Tell Us – and What It Can’t: How to Be Sure We Know What Our Data Means

    Validity is about more than results; it is determined by the question asked, the data collection design, and the analysis. In this session, presenters led a discussion on what data can, and cannot, say about outcomes, using examples from a selection of Part B and Part C indicators. Questions posed included: What is the difference between student outcomes and program performance? Can your data tell you which states, districts, or programs are performing better? What policy questions would you like answered that your data currently cannot answer?

    Format: Presentations

    Part B Transition Indicators: Supporting States in the SSIP

    The National Post-School Outcomes Center, in collaboration with IDC, provided a picture of post-school outcomes for youth with disabilities over the last four years based on Indicator 14 data. Presenters discussed methods states use to collect these data. To further states' work in RDA and improve results, presenters provided information about resources and TA that support states in examining the transition indicators as stakeholders work through the three phases of the SSIP.

    An IDC Resource

    Format: Online Applications

    The Uses and Limits of Data: Supporting Data Quality With a Strong Data Chain

    This online learning module provides a general overview of how the methods and design of data collection and analysis affect interpretation of the data. The module presents the different links in the data chain (e.g., defining the question, measurement strategy) and describes how each link contributes to the quality of the data and data analyses. The module also includes examples from a selection of Part B and Part C SPP/APR indicators to illustrate how each step in the data chain contributes to the integrity of the data and its interpretation.