Balancing human-human and human-computer interaction

A fundamental challenge in implementing personalized learning is determining just how much of it should be personal, or more specifically, interpersonal. Carlo Rotella highlights the tension between the customization afforded by technology and the machine interface needed to collect the data supporting that customization. He homes in on the crux of the problem thus:

For data to work its magic, a student has to generate the necessary information by doing everything on the tablet.

That invites worries about overuse of technology interfering with attention management, sleep cycles, creativity, and social relationships.

One simple solution is to treat the technology as a tool secondary to the humans interacting around it, with expert human facilitators who know when and how to turn the screens off and refocus attention on the people in the room. As with any tool, recognizing when it is hindering rather than helping remains a critical skill in using it effectively.

Yet navigating the human-to-data translation remains tricky. In some cases, student data or expert observations can be coded and entered into the database manually, when the effort is worthwhile. Wearable technologies (e.g., Google Glass, Mio, e-textiles) seek to shorten the translation distance by integrating sensory input and feedback more seamlessly into the environment. Electronic paper, whiteboards, and digital pens provide alternative data capture through familiar writing tools. While these tools bring the technology closer to the human experience, they require more analysis to convert raw data into manipulable form, and they raise the question of whether the answer to too much technology is still more technology. Instructional designers will always need to weigh the costs and benefits: when is intuitive human observation and reflection superior, and when is technology-enhanced aggregation and analysis?

On the realistic use of teaching machines

From the perspective that all publicity is good publicity, the continued hype-and-backlash cycle in media representations of educational technology helps fuel interest in its potential use. However, misleading representations, even artistic or satirical ones, can skew the discourse away from realistic discussion of the technology's true capacities, constraints, and appropriate use. We need honest appraisals of strengths and weaknesses to inform our judgment of what to do, and what not to do, when incorporating teaching machines into learning environments.

Adam Bessie and Arthur King’s cartoon depiction of the Automated Teaching Machine conveys dire warnings about the evils of technology based on several common misconceptions about its use. The first is a false dichotomy between machine and teacher, which portrays the goal of technology as replacing teachers through automation. While certain low-level tasks like marking multiple-choice questions can be automated, other aspects of teaching cannot. Even while advocating for greater use of automated assessment, I note that it is best used in conjunction with human judgment and interaction. Technology should augment what teachers can do, not replace it.

A second misconception is that educational programs are merely Skinnerian machines that reinforce stimulus-response links. The very premise of cognitive science, and thus the foundation of modern cognitive tutors, is the need to go beyond observable behaviors to draw inferences about internal mental representations and processes. Adaptations to student performance are based on judgments about internal states, including not just knowledge but also motivation and affect.

A third misconception is that human presence is what determines the quality of teaching and learning taking place. What matters is the quality of the interaction: between student and teacher, between student and peer, and between student and content. Human presence is a necessary precondition for human interaction, but it is neither a guarantee nor a perfect correlate of productive human interaction for learning.

Educational technology definitely needs critique, especially in the face of its possible widespread adoption. But those critiques should be based on the realities of its actual use and potential. How should the boundaries between human-human and human-computer interaction be navigated so that the activities mutually support each other? What kinds of representations and recommendations help teachers make effective use of assessment data? These are the kinds of questions we need to tackle in service of improving education.