While NAEP releases some blocks of items to the public each time an assessment is completed, the majority of blocks are not released and are reused in the next assessment of the same subject. These common blocks provide a way to link the two assessments. For the link between assessments to be as strong as possible, responses to the common constructed-response items must be scored in the same way they would have been scored in the previous assessment.
A number of student responses from previous assessments are rescored during the scoring of responses from the current assessment. Scoring sessions for previous-year and current-year responses are alternated so that the previous-assessment responses can be used to monitor interrater reliability throughout the scoring process, which lasts two to three months. Reliability measures are calculated to compare the scores assigned during the previous assessment's scoring sessions with those assigned during the current assessment's scoring sessions. These statistics are monitored to maintain uniformity of scoring across assessments.
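The specific reliability statistics NAEP reports are documented on the pages linked below. As a rough illustration only, the sketch below computes two commonly used interrater measures, percent exact agreement and Cohen's kappa, for a set of rescored responses; the score values and function names are hypothetical and not drawn from NAEP data or software.

```python
from collections import Counter

def percent_exact_agreement(prev_scores, curr_scores):
    """Share of responses assigned the identical score in both scoring sessions."""
    matches = sum(p == c for p, c in zip(prev_scores, curr_scores))
    return matches / len(prev_scores)

def cohens_kappa(prev_scores, curr_scores):
    """Agreement corrected for the level of agreement expected by chance."""
    n = len(prev_scores)
    observed = sum(p == c for p, c in zip(prev_scores, curr_scores)) / n
    prev_counts = Counter(prev_scores)
    curr_counts = Counter(curr_scores)
    # Chance agreement: probability that both sessions assign the same score category.
    expected = sum(
        (prev_counts[cat] / n) * (curr_counts[cat] / n)
        for cat in set(prev_counts) | set(curr_counts)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical scores for the same constructed-response items:
# scores from the previous assessment's sessions vs. the current rescoring.
previous = [3, 2, 4, 1, 3, 2, 2, 4, 1, 3]
current  = [3, 2, 4, 2, 3, 2, 1, 4, 1, 3]

print(f"Exact agreement: {percent_exact_agreement(previous, current):.2%}")
print(f"Cohen's kappa:   {cohens_kappa(previous, current):.3f}")
```

In this illustration, large drops in either statistic for the rescored previous-year responses would flag a scoring session for review, since the goal is to keep scoring consistent across assessment years.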
Read more about cross-year (trend) scoring.
Access the interrater reliability data on the Constructed-Response Interrater Reliability page.