
NAEP Technical Documentation: Database Quality Control: 2013 to 2018


2018 Summary Comparison Tables

2017 Summary Comparison Tables

2016 Summary Comparison Tables

2015 Summary Comparison Tables

2014 Summary Comparison Tables

2013 Summary Comparison Tables

Beginning in 2013, a new automated quality control process replaced the earlier manual process. This new process involves comparing item-by-item percentage summary statistics generated by the NAEP Materials Processing and Scoring contractor (Contractor 1) from their raw data, once processed, to the same statistics generated from the database of the NAEP Design, Analysis, and Reporting contractor (Contractor 2). For every item, response percentages for each category are compared between the two file systems to ensure the accurate transmission of Contractor 1’s processed data to Contractor 2’s final database.

The process involves three steps.

  1. Contractor 1 computes frequency distributions for every student, school, and teacher question and then delivers statistics to Contractor 2 along with Contractor 1's final data file.
  2. Contractor 2 independently produces frequency distributions from their database after processing the data.
  3. Software programs exhaustively compare the statistical properties (including differences in frequencies, percentages, averages, and medians) of the two sets of frequency distributions to ensure reasonable accuracy.
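The three steps above can be sketched in miniature as follows. This is an illustrative Python sketch, not the contractors' actual software; the data layout (`{item_id: [response_category, ...]}`) and all function names are assumptions made for the example.

```python
# Hypothetical sketch of the three-step comparison process.
# Item responses are represented as {item_id: [response_category, ...]}.
from collections import Counter

def frequency_distribution(responses):
    """Steps 1 and 2: per-item response-category percentages."""
    dist = {}
    for item, categories in responses.items():
        counts = Counter(categories)
        total = sum(counts.values())
        dist[item] = {cat: 100.0 * n / total for cat, n in counts.items()}
    return dist

def compare_distributions(dist_a, dist_b):
    """Step 3: absolute percentage difference for every item/category pair."""
    diffs = {}
    for item in dist_a.keys() | dist_b.keys():
        cats_a = dist_a.get(item, {})
        cats_b = dist_b.get(item, {})
        for cat in cats_a.keys() | cats_b.keys():
            diffs[(item, cat)] = abs(cats_a.get(cat, 0.0) - cats_b.get(cat, 0.0))
    return diffs

# If the two files carry the same processed data, every difference is zero.
file1 = {"M1": ["A", "B", "A", "C"], "M2": ["Yes", "No", "Yes", "Yes"]}
file2 = {"M1": ["A", "B", "A", "C"], "M2": ["Yes", "No", "Yes", "Yes"]}
diffs = compare_distributions(frequency_distribution(file1),
                              frequency_distribution(file2))
```

Comparing percentages rather than raw counts makes the check robust to benign differences in record ordering or file layout between the two systems.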

It is important to note that Contractor 1’s database contains data for every processed booklet, digital test form, and questionnaire, while the final database is a subset of that data. For example, students may be removed from analysis because they are ineligible or excluded (ineligible students are removed from the final database when the data are merged with sampling weights; excluded students, however, remain in the database so that some information about them can still be examined). Similarly, collected teacher or school questionnaire information may be removed from analysis because the teacher and/or school had no students who were assessed. In the case of student data, the excluded student rate is approximately two percent. The goal of the exercise is to ensure that there are no unreasonable or unexplainable differences in the summaries of responses; to date, no case has raised such a concern.
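The subsetting described above can be illustrated with a minimal sketch. This is a hypothetical example, not the contractors' code: the field names, weight values, and merge mechanics are assumptions chosen to show why ineligible students drop out of the final database while excluded students remain.

```python
# Hypothetical illustration: ineligible students have no sampling-weight
# record, so they fall out of the final database during the merge;
# excluded students keep a weight record and are retained.
students = [
    {"id": 1, "status": "assessed"},
    {"id": 2, "status": "excluded"},    # remains in the final database
    {"id": 3, "status": "ineligible"},  # no weight record -> dropped
]
weights = {1: 1.42, 2: 1.37}  # illustrative weights; none for student 3

final_database = [dict(s, weight=weights[s["id"]])
                  for s in students if s["id"] in weights]
```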

The summary comparison tables summarize the differences in percentages for the NAEP assessments. For each subject/grade/instrument, the statistics summarize the differences in percentages across all of the questions in that group. The statistics shown include the minimum, maximum, average, and median difference, all computed from the absolute values of the response-category-level percentage differences. Summary comparison tables from the 2013 through 2018 NAEP assessments are available via the links in the upper right section of this page.
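One row of a summary comparison table can be sketched as below. This is an illustrative example, not the reporting software; the input list of absolute response-category-level percentage differences and the function name are assumptions.

```python
# Hypothetical sketch: summary statistics for one subject/grade/instrument
# group, computed over the absolute percentage differences for every
# response category of every question in the group.
from statistics import mean, median

def summarize(abs_diffs):
    """abs_diffs: absolute percentage differences, one per response category."""
    return {
        "minimum": min(abs_diffs),
        "maximum": max(abs_diffs),
        "average": mean(abs_diffs),
        "median": median(abs_diffs),
    }

# Six illustrative category-level absolute differences for one group.
row = summarize([0.0, 0.0, 0.01, 0.02, 0.0, 0.01])
```

Because the statistics are taken over absolute values, a minimum of zero and a small maximum together indicate that the two databases agree closely on every category.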


Last updated 02 November 2022 (SK)