Value-added measures (VAM) are widely used by economists to evaluate policy prescriptions and are increasingly used to evaluate individual teachers. Objections to the use of VAM scores come in two kinds. The first is that test scores don’t measure the educational outcomes we value. The second is that VAM scores may fare poorly at predicting future performance, even future performance on tests: essentially, that too many important factors go unmeasured when VAM scores are computed. A new paper by David Deming goes a long way toward laying the second objection to rest.
In the Charlotte-Mecklenburg schools, a large number of students are admitted to particularly desirable schools by lottery. Comparing outcomes for lottery winners to outcomes for lottery losers comes pretty close to looking at the results of a randomized experiment. Deming first computed lottery-based differences in outcomes between schools. He then built VAM models of the kind used both in research and in practical application and, using data from previous years, predicted the differences between schools. The lottery-based outcomes and the VAM-based predictions were, on average, the same. In other words, there’s no evidence of any bias in the VAM methods.
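The logic of this validation exercise can be illustrated with a toy simulation (this is an assumption-laden sketch, not Deming’s actual data or specification): give each school a true effect, form a noisy VAM prediction from "prior years" and a noisy lottery-based estimate of the same effect, then regress the lottery estimates on the VAM predictions. If VAM is unbiased as a forecast, the slope should be close to 1 and the intercept close to 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 schools, effects measured in test-score SD units.
n_schools = 200
true_effect = rng.normal(0.0, 0.20, n_schools)

# VAM prediction: an estimate of the true effect built from earlier cohorts,
# here simulated as the truth plus a small amount of estimation noise.
vam_prediction = true_effect + rng.normal(0.0, 0.02, n_schools)

# Lottery-based estimate: the experimental contrast between lottery winners
# and losers, an unbiased but noisier measure of the same school effect.
lottery_estimate = true_effect + rng.normal(0.0, 0.10, n_schools)

# Forecast-bias test: regress the experimental estimates on the VAM
# predictions. Unbiasedness implies slope ~ 1 and intercept ~ 0.
slope, intercept = np.polyfit(vam_prediction, lottery_estimate, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

With unbiased predictions the fitted slope hovers near 1; systematic bias in the VAM model (for example, unmeasured sorting of students into schools) would show up as a slope well below 1 or a nonzero intercept.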