MOOC measurement problems reveal systemic evaluation challenges

Sadly, it’s not particularly surprising that it took a proclamation by researchers from prominent institutions (Harvard and MIT) to draw the media’s attention to what should have been obvious all along. That even they have no alternative metrics handy highlights how difficult assessment is in the absence of high-quality data, both inside and outside the system. Inside the system, designers of online courses are still figuring out how to assess knowledge and learning quickly and effectively. Outside the system, would-be analysts lack information on how students (graduates and drop-outs alike) make use of what they learned, or fail to. Measuring long-term retention and far transfer will continue to pose a problem for evaluating educational experiences as they become more modularized and unbundled, unless systems emerge for integrating outcome data across experiences and over time. In economic terms, the challenge is to internalize the system’s externalities.

Beating cheating

Between cheating to learn and learning to cheat, current discourse on academic dishonesty upends the “if you can’t beat ’em, join ’em” approach.

From Peter Nonacs, a UCLA professor teaching Behavioral Ecology:

Tests are really just measures of how the Education Game is proceeding. Professors test to measure their success at teaching, and students take tests in order to get a good grade.  Might these goals be maximized simultaneously? What if I let the students write their own rules for the test-taking game?  Allow them to do everything we would normally call cheating?

And in a new MOOC titled “Understanding Cheating in Online Courses,” taught by Bernard Bull at Concordia University Wisconsin:

The course will start with basic vocabulary and the different types of cheating, then move into the differences between online and face-to-face learning and the philosophy and psychology behind academic integrity. One unit will examine best practices for minimizing cheating.

Cheating crops up whenever there is a mismatch between effort and reward, something that happens often in our current educational system. Assigning unequal rewards to equal efforts biases attention toward the inflated reward, motivating cheating. Assigning equal rewards to unequal efforts favors the lesser effort, enabling cheating. The greater the disparity, the greater the likelihood of cheating.
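To make the disparity argument concrete, here is a toy sketch with invented numbers (none of them drawn from the courses above): treat the attractiveness of a strategy as the reward it earns per unit of effort, and both kinds of mismatch show up as lopsided payoffs.

```python
# Toy model with made-up numbers: read the attraction of a shortcut (or of an
# over-weighted assignment) as reward earned per unit of effort.

def payoff_per_unit_effort(reward: float, effort: float) -> float:
    """Reward earned per unit of effort invested."""
    return reward / max(effort, 1e-9)

# Equal rewards for unequal efforts: the lesser effort dominates.
honest = payoff_per_unit_effort(reward=100, effort=40)    # 2.5
shortcut = payoff_per_unit_effort(reward=100, effort=5)   # 20.0

# Unequal rewards for equal efforts: attention shifts to the inflated reward.
weighted_exam = payoff_per_unit_effort(reward=100, effort=20)  # 5.0
ungraded_lab = payoff_per_unit_effort(reward=10, effort=20)    # 0.5

print(f"honest {honest:.1f} vs. shortcut {shortcut:.1f}")
print(f"exam {weighted_exam:.1f} vs. lab {ungraded_lab:.1f}")
```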

Thus, one potential avenue for reducing cheating would be to align reward more closely with effort: to link the evaluation of outputs more closely to the actual inputs. High-stakes tests separate them by exaggerating the influence of a single, limited snapshot. In contrast, continuous, passive assessment brings them closer by examining a much broader range of work over time, collected in authentic learning contexts rather than artificial testing situations. Education then becomes a series of honest learning experiences rather than an arbitrary system to game.
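As a minimal sketch of that contrast, with invented weights and scores rather than any real grading scheme, compare how much gaming a single assessment moves the final grade when one exam carries most of the weight versus when the same evaluation is spread over many small assessments.

```python
# Toy sketch (hypothetical grading schemes): how much does gaming a single
# assessment move the final grade under high-stakes vs. continuous assessment?

def final_grade(scores: list[float], weights: list[float]) -> float:
    """Weighted average of assessment scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

true_ability = 70.0   # the score honest work would produce on every assessment
gamed_score = 100.0   # the score obtained by gaming one assessment

# High-stakes: one exam carries nearly all the weight.
high_stakes = final_grade([gamed_score, true_ability], weights=[0.9, 0.1])

# Continuous: forty small, frequent assessments; only one can be gamed.
scores = [gamed_score] + [true_ability] * 39
continuous = final_grade(scores, weights=[1.0] * 40)

print(f"high-stakes grade: {high_stakes:.1f}")  # 97.0 -- gaming one event pays off
print(f"continuous grade:  {continuous:.1f}")   # 70.8 -- the gain is diluted
```

Under these made-up numbers, the single-event gain that dominates a high-stakes grade barely registers when the same judgment is spread across many small, authentic samples of work.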

In an era where students learn what gets assessed, the answer may be to assess everything.