Equating Was Real Problem With Kentucky Performance Events

K-12 Testing

A Fall 1997 Examiner article on Kentucky reported that the scoring of performance events was being revised "due to reliability problems" and that the performance events would be administered but not included in accountability scores because they "lacked sufficient reliability. . . for use in a high-stakes accountability program."


Advanced Systems, the company that until recently held the contract for Kentucky's program, has informed FairTest that while the rest of the Examiner article was "highly accurate," these statements were not. In fact, "the reliability of the performance events was just as high, question for question, as for open-ended questions. . . . The reason for taking performance events out of the accountability system was not reliability, but equating."


Because the few performance events administered each year are so easy to remember, they cannot be reused without some students learning them ahead of time. But if they are new each year, how can one be sure that the tasks are equal in difficulty from year to year?
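To illustrate the problem, one common approach is mean-sigma linear equating: a comparable group takes both the old and new tasks, and a new-form score is rescaled so the two score distributions have matching means and standard deviations. The sketch below uses hypothetical scores and is only one simple equating method, not necessarily the procedure Kentucky's contractor used.

```python
from statistics import mean, pstdev

def linear_equate(new_form_scores, old_form_scores, x):
    """Map a score x on the new form onto the old form's scale
    using mean-sigma linear equating: align the means and
    standard deviations of the two score distributions."""
    mu_new, sd_new = mean(new_form_scores), pstdev(new_form_scores)
    mu_old, sd_old = mean(old_form_scores), pstdev(old_form_scores)
    return mu_old + (sd_old / sd_new) * (x - mu_new)

# Hypothetical tryout scores from one comparable group taking each task.
old_form = [8, 10, 12, 14, 16]   # last year's task (mean 12)
new_form = [6, 8, 10, 12, 14]    # this year's task (mean 10, i.e. harder)

# A raw 10 on the harder new task corresponds to a 12 on last year's scale.
print(linear_equate(new_form, old_form, 10))
```

Without an out-of-state tryout group to supply both sets of scores, there is no data from which to estimate this mapping, which is the bind the article describes.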


Because trying out new tasks on Kentucky students would expose them before operational use, Kentucky students could not be used to equate the events; instead, comparable students in other states tried out the proposed tasks. Recently, those districts declined to continue because of the heavy time demands of the tryouts. The only remaining option, paying another set of schools to conduct the tryouts, would have cost more than the budget allowed for this purpose. This left no proven, practical means to equate the fifth year's tasks with those of the previous four years, making their use in accountability impossible.


The letter concludes that Advanced Systems still believes "that performance events are an important tool for assessing students," and hopes that equating issues will be overcome so states desiring to include them in their tests will be able to do so. When contacted for confirmation, Brian Gong, Kentucky Associate Commissioner, said "It's unfortunate that the initial reports in the Kentucky press incorrectly attributed the decision to remove performance events to their reliability. That was never the issue."


For a discussion of performance assessments in accountability programs, see "New Thinking on Accountability," Examiner, Summer 1996.