Sadly, it’s not particularly surprising that it took a proclamation by researchers from prominent institutions (Harvard and MIT) to draw the media’s attention to what should have been obvious all along. That they don’t have alternative metrics handy highlights the difficulty of assessment in the absence of high-quality data, both inside and outside the system. Inside the system, designers of online courses are still figuring out how to assess knowledge and learning quickly and effectively. Outside the system, would-be analysts lack information on how students (graduates and drop-outs alike) make use of what they learned, or don’t. Measuring long-term retention and far transfer will continue to pose a problem for evaluating educational experiences as they become more modularized and unbundled, unless systems emerge for integrating outcome data across experiences and over time. In economic terms, this exemplifies the need to internalize the system’s externalities.
Gil Press worries that “big data enthusiasts may encourage (probably unintentionally) a new misguided belief, that ‘putting data in front of the teacher’ is in and of itself a solution [to what ails education today].”
As an advocate for the better use of educational data and learning analytics to serve teachers, I worry about careless endorsements and applications of “big data” that overlook these concerns:
1. Available data are not always the most important data.
2. Data should motivate providing support, not merely accountability.
3. Teachers are neither scientists nor laypeople in their use of data. They rely on data constantly, but need representations that they can interpret and turn into action readily.
Assessment specialists have long noted the many uses of assessment data; all educational data should be weighed as carefully, and even more so when implemented at large scale, which magnifies the influence of errors.