Balancing human-human and human-computer interaction

A fundamental challenge in implementing personalized learning is determining just how much of it should be personal—or, more specifically, interpersonal. Carlo Rotella highlights the tension between the customization afforded by technology and the machine interface needed to collect the data supporting that customization. He homes in on the crux of the problem:

For data to work its magic, a student has to generate the necessary information by doing everything on the tablet.

That invites worries about overuse of technology interfering with attention management, sleep cycles, creativity, and social relationships.

One simple solution is to treat the technology as a tool that is secondary to the humans interacting around it, with expert human facilitators knowing when and how to turn the screens off and refocus attention on the people in the room. As with any tool, recognizing when it is hindering rather than helping will always remain a critical skill in using it effectively.

Yet navigating the human-to-data translation remains tricky. In some cases, student data or expert observations can be coded and entered into the database manually, where the effort is worthwhile. Wearable technologies (e.g., Google Glass, Mio, e-textiles) seek to shorten the translation distance by integrating sensory input and feedback more seamlessly into the environment. Electronic paper, whiteboards, and digital pens provide alternate data capture methods through familiar writing tools. While these tools bring the technology closer to the human experience, they require more analysis to convert the raw data into manipulable form, and they raise the question of whether the answer to too much technology is still more technology. Instructional designers will always need to weigh the cost-benefit equation: when is intuitive human observation and reflection superior, and when is technology-enhanced aggregation and analysis?

On the realistic use of teaching machines

From the perspective that all publicity is good publicity, the continued hype-and-backlash cycle in media representations of educational technology is helping to fuel interest in its potential use. However, misleading representations, even artistic or satirical ones, can skew the discourse away from realistic discussion of the technology’s true capacities and constraints and its appropriate use. We need honest appraisals of strengths and weaknesses to inform our judgment of what to do, and what not to do, when incorporating teaching machines into learning environments.

Adam Bessie and Arthur King’s cartoon depiction of the Automated Teaching Machine conveys dire warnings about the evils of technology based on several common misconceptions about its use. The first is a false dichotomy between machine and teacher, which portrays the goal of technology as replacing teachers through automation. While certain low-level tasks like marking multiple-choice questions can be automated, other aspects of teaching cannot. Even while advocating for greater use of automated assessment, I note that it is best used in conjunction with human judgment and interaction. Technology should augment what teachers can do, not replace it.

A second misconception is that educational programs are just Skinner machines that reinforce stimulus-response links. The very premise of cognitive science, and thus the foundation of modern cognitive tutors, is the need to go beyond observable behaviors to draw inferences about internal mental representations and processes. Adaptations to student performance are based on judgments about internal states, including not just knowledge but also motivation and affect.

A third misconception is that human presence corresponds to the quality of teaching and learning taking place. What matters is the quality of the interaction, between student and teacher, between student and peer, and between student and content. Human presence is a necessary precondition for human interaction, but it is neither a guarantee nor a perfect correlate of productive human interaction for learning.

Educational technology definitely needs critique, especially in the face of its possible widespread adoption. But those critiques should be based on the realities of its actual use and potential. How should the boundaries between human-human and human-computer interaction be navigated so that the activities mutually support each other? What kinds of representations and recommendations help teachers make effective use of assessment data? These are the kinds of questions we need to tackle in service of improving education.

Messy personalized learning

Phil Nichols describes his youthful adventures reappropriating the humble graphing calculator to program games:

For me, it began with “Mario” — a TI-BASIC game based loosely on its Nintendo-trademarked namesake. In the program, users guided an “M” around obstacles to collect asterisks (coins, presumably) across three levels. Though engaging, the game could be completed in a matter of minutes. I decided to remedy this by programming an extended version. I studied the game’s code, copying every line into a notebook then writing an explanation beside each command. I sought counsel from online tutorials, message boards, and chat rooms. I sketched new levels on graph paper, strategically placing asterisks in a way that would present a challenge to experienced players. Finally, after a grueling process of trial and error, I transformed my designs into code for three additional stages.

As he summarizes, his non-school-sanctioned explorations of an otherwise school-based tool led to sophisticated discoveries and creations:

[W]ith the aid of my calculator, I’d crafted narratives, drawn storyboards, visualized foreign and familiar environments and coded them into existence. I’d learned two programming languages and developed an online network of support from experienced programmers. I’d honed heuristics for research and discovered workarounds when I ran into obstacles. I’d found outlets to share my creations and used feedback from others to revise and refine my work. The TI-83 Plus had helped me cultivate many of the overt and discrete habits of mind necessary for autonomous, self-directed learning. And even more, it did this without resorting to grades, rewards, or other extrinsic motivators that schools often use to coerce student engagement.
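The game he describes is easy to imagine in miniature. As a purely illustrative sketch (in Python rather than TI-BASIC, with an invented level layout and function names), guiding an “M” through a grid of walls to collect asterisks amounts to little more than a loop over moves:

```python
# A toy, non-interactive rendition of the calculator game described above:
# an "M" moves through a small grid, blocked by walls ("#"), collecting
# asterisks. The level layout and the play() function are illustrative
# inventions, not the original TI-BASIC code.

LEVEL = [
    "#######",
    "#M * *#",
    "# ## *#",
    "#     #",
    "#######",
]

def play(level, moves):
    """Apply a sequence of moves ('U','D','L','R'); return asterisks collected."""
    grid = [list(row) for row in level]
    # Find the player's starting position.
    r, c = next((i, j) for i, row in enumerate(grid)
                for j, ch in enumerate(row) if ch == "M")
    deltas = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
    score = 0
    for m in moves:
        dr, dc = deltas[m]
        nr, nc = r + dr, c + dc
        if grid[nr][nc] == "#":      # walls block movement
            continue
        if grid[nr][nc] == "*":      # collect a coin
            score += 1
        grid[r][c], grid[nr][nc] = " ", "M"
        r, c = nr, nc
    return score

print(play(LEVEL, "RRRRR"))  # walk right along the top corridor → 2
```

The real calculator version would differ in syntax and display handling, but the underlying logic of reading input, checking collisions, and updating a score is exactly the kind that Nichols taught himself by copying and annotating code line by line.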

While he positions calculator programming as a balance between the complementary educational goals of “convention” and “subversion,” the pairing also echoes familiar tradeoffs: between routine and adaptive expertise, between efficiency and creativity, between convergent and divergent thinking. Crowding out the subversive side remains an ongoing risk in overly restrictive learning environments. Standards that dictate the time and sequence of each stage of students’ progression fail to allow for the different paths that personalization accommodates. Yet even adaptive learning systems that seek to anticipate every next step a student might take must be careful not to add so many constraints that they crowd out productive paths the student might otherwise have pursued. Personalized learning needs to leave room for error and open-ended discovery, because some things just aren’t known yet.

MOOCsperiments: How should we assign credit for their success and failure?

That San Jose State University’s Udacity project is on “pause” due to comparatively low completion rates is understandably big news for a big venture.

We ourselves should take pause to ponder what this means, not just regarding MOOCs in particular, but regarding how to enable effective learning more broadly. The key questions we need to consider are whether the low completion rates come from the massive scale, the online-only modality, the open enrollment, some combination thereof, or extraneous factors in how the courses were implemented. That is, are MOOCs fundamentally problematic? How can we apply these lessons to future educational innovation?

Both SJSU and Udacity have pointed to the difficulties of hasty deployment and of starting with at-risk students. In an interview with MIT Technology Review, Thrun credits certificates and student services with helping to boost completion rates in recent pilots, while acknowledging that inflexible course length can impede some students’ completion. None of these factors is inherent to the MOOC model, however; face-to-face and hybrid settings face the same challenges.

As Thrun also points out, online courses offer some access advantages for students who face geographic hurdles in attending traditional institutions. Yet in their present form, they only partly take advantage of the temporal freedom they can potentially provide. While deadlines and time limits may help to forestall indefinite procrastination and to maintain a sense of shared experience, they also interfere with realizing the “anytime, anywhere” vision of education that is so often promoted.

But online access cuts both ways: what comes easily also goes easily, and persistence becomes harder. Especially in combination with massive-scale participation that exacerbates student anonymity, no one notices if you’re absent or falling behind. While improved student services may help, there remain undeveloped opportunities for changing the model of student interaction to ramp up the role of the person, requiring more meaningful contributions and individual feedback. In drawing from a larger pool of students who can interact across space and time, massive online education has great untapped potential for pioneering novel models of cohorting and socially situated learning.

Online learning can also harness the benefits of AI in rapidly aggregating and analyzing student data, where such data are digitally available, and adapting instruction accordingly. This comes at the cost of either providing learning experiences in digital format or converting the data to digital format. It is a fundamental tension that all computer-delivered education must continually revisit as technologies and analytical methods change, as access to equipment and network infrastructure changes, and as interaction patterns change.

The challenges of open enrollment, particularly at massive scale, replay the recurring debates about homogeneous tracking and ability-grouping. This is another area ripe for development: students’ different prior knowledge, backgrounds, preferences, abilities, and goals all influence their learning, yet students also benefit from some heterogeneity. Here, the great variability in possible outcomes magnifies the stakes: compare the consequences of throwing together random collections of people without much support versus constraining group formation within certain limits on homogeneity and heterogeneity and instituting productive interaction norms.

As we all continue to explore better methods for facilitating learning, we should be alert to the distinction between integral and incidental factors that hinder progress.

Why personalized learning and assessment?

Much of the recent buzz in educational technology and higher education has focused on issues of access, whether through online classes, open educational resources, or both (e.g., massive open online courses, or MOOCs). Yet access is only the beginning; other questions remain about outcomes (what to assess and how) and process (how to provide instruction that enables effective learning). Some anticipate that innovations in personalized learning and assessment will revolutionize both, while others question their effectiveness given broader constraints. The goal of this blog is to explore both the potential promises and pitfalls of personalized and adaptive learning and assessment, to better understand not just what they can do, but what they should do.