When CopyrightX launched the first version of its course, it notified enrolling students of an important feature: students would be assigned to one of two curricula being tested. One curriculum focused more on traditional U.S. case law; the other included more global examples and more secondary-source explanation of concepts. Students were randomly assigned to one of the two curricula, and in the end, the course staff decided that the U.S. case law version was superior for a number of reasons.
Educational experiments, like the one in CopyrightX, raise a number of ethical questions. Was it ethical to try out a new curriculum on students? Did it change the ethics of the situation that the course was being taught for free? Typically, experiments happen between iterations of a course (a teacher tries one thing in Fall '13 and another in Fall '14). Was it ethical to experiment on a single cohort of students within one offering of a course?
One feature of the CopyrightX example is that students were well informed of the nature of the experiment in advance. How do the ethics change when students are not informed in advance, especially in circumstances where they cannot be? In the past few weeks, an unusual experiment took place in a Coursera course, where a professor taught a MOOC for a week and then deleted all of the content. The reasons for the experiment are not totally clear: it seems to be an effort to provoke students into interaction and course ownership, and to question the relationship in MOOCs between faculty, for-profit learning management system providers, and universities (more on this from George Siemens and Jonathan Rees). Suffice it to say, though, that if you tell students in advance that you are going to delete all content after a week, it doesn't have quite the same effect.
Discussions of the ethics of experiments in education have been going on for a long time (here's one short salvo from 2005), but the rise of online learning raises a whole host of new questions. The recent publication of an experiment with Facebook highlights some of the new issues of research ethics in the digital age. Mitchell Stevens of Stanford and I wrote an op-ed for Inside Higher Ed last week that raises some of these issues:
In 2012, for one week, Facebook changed an algorithm in its News Feed function so that certain users saw more messages with words associated with positive sentiment and others saw more words associated with negative sentiment. Researchers from Facebook and Cornell then analyzed the results and found that the experiment had a small but statistically significant effect on the emotional valence of the kinds of messages that News Feed readers subsequently went on to write. People who saw more positive messages wrote more positive ones, and people who saw more negative messages wrote more negative ones. The researchers published a study in the Proceedings of the National Academy of Sciences, and they claimed the study provides evidence of the possibility of large-scale emotional contagion.
The debate immediately following the release of the study in the Proceedings of the National Academy of Sciences has been fierce. There has been widespread public outcry that Facebook has been manipulating people's emotions without following widely accepted research guidelines that require participant consent. Social scientists who have come to the defense of the study note that Facebook conducts experiments on the News Feed algorithm constantly, as do virtually all other online platforms, so users should expect to be subject to these experiments. Regardless of how merit and harm are ultimately determined in the Facebook case, however, the implications of its precedent for learning research are potentially very large.