Emma H. Geller (University of California, San Diego)
On the first day of my required undergraduate research methods course, I tell students that I think of my role in the classroom as being a “coach.” I ask them: when you go to a practice or a rehearsal, what do you expect to do? What do you expect your coach to do? How is that different from what you might typically expect in a lecture hall? One big difference, I tell them, is that a coach does not simply tell you how to play the game; instead, they provide you with lots of opportunities to practice skills in action. Sometimes you will do drills in isolation that you never use during an actual game (like singing scales or dribbling with two hands at once), and sometimes you will have a dress rehearsal or a scrimmage that’s meant to be as similar as possible to the full show or “big game” you are working towards. But you would never expect to sit silently at practice and become the world’s best basketball player or musician or actress. Similarly, you should not expect to be able to sit silently in my classroom and become an expert in research design and critical thinking about research methods: you’ve got to practice those skills to get good at them!
The primary tool I use to help students practice thinking skills in my class is a technique called Peer Instruction. Peer Instruction is an instructional routine for engaging students with challenging conceptual material by having them explain their reasoning to their peers. This technique was created in the 1990s by Eric Mazur, a physics instructor at Harvard, who noticed that students struggled to understand and apply the concepts he taught in lecture, despite feedback that his teaching was clear and easy to follow. I happened to experience Peer Instruction as an undergraduate in a physics course, and that experience has shaped both my research interests and teaching habits in the nearly two decades since then. In this essay, I’ll share some of the evidence base behind this practice, as well as the specifics of how I have implemented it in my research methods course.
What is Peer Instruction, and what’s the evidence that it works?
A typical peer instruction routine follows a structured sequence of lecture and discussion (Crouch & Mazur, 2001). First, the instructor lectures for a short period (10-15 minutes) on a specific concept or topic. This is immediately followed by a challenging multiple-choice question called a ConcepTest, which requires students to apply the concept that has just been taught. ConcepTests should not assess simple memory for presented information; rather, they should require application and understanding of a concept. Good questions are ones where incorrect answer choices are plausible and/or based on common misunderstandings. Students first respond to the ConcepTest individually. Next, they are prompted to discuss their reasoning with peers sitting near them. This discussion should focus on why each student chose the answer they did and on resolving disagreement if different students provided different answers. After discussion with peers, students then answer the ConcepTest again. Often, this process includes class-wide discussion facilitated by the instructor before revealing the correct answer and addressing any remaining questions or confusions before moving on to the next topic.
In the last two decades, much research has suggested that students benefit greatly from this technique. There is strong evidence, for example, that Peer Instruction improves understanding of the specific ConcepTest posed in class (Crouch & Mazur, 2001), as well as performance on isomorphic questions that test the same concept in a new question (Smith et al., 2009). In fact, students learn just as much from peer discussion as they do from instructor explanations, and the combination of peer discussion followed by instructor explanation is particularly beneficial (Smith et al., 2011). Perhaps most convincingly, this effect holds for both strong and weak students in the same class, and the strongest students appear to benefit more from the peer discussion phase than from the instructor explanation phase (Smith et al., 2011). While much of the early research on Peer Instruction comes from physics courses, more recent work has expanded the use of peer instruction to many domains, especially sciences such as biology, chemistry, and psychology.
Schell and Butler (2018) recently reviewed common modifications to the peer instruction routine and how findings from the science of learning (such as the effectiveness of retrieval, repetition, spacing, and feedback) inform the effectiveness of these modifications. Their recommendations highlight the importance of the peer discussion phase of the cycle as critical to effective learning. In line with this, one recent study found that students did not merely rely on their discussion partners’ confidence, but that peer discussion helped students develop and test more coherent explanations for their answers (Tullis & Goldstone, 2020). This recent evidence suggests that Peer Instruction is both flexible and powerful as a way of engaging students in explanatory processes that promote deeper and longer-lasting learning.
How I use Peer Instruction in my Research Methods course
During class, I generally follow the typical peer instruction routine of lecturing on a topic for roughly 10-15 minutes, followed by a related ConcepTest. Each peer instruction question takes roughly 5-8 minutes of class time; students have ~1 minute to respond individually, ~2 minutes to discuss their thinking with a neighbor, and we spend 2-5 minutes discussing all the answer options (and students’ reasoning) as a class. This means that a 50-minute lecture period typically contains 2-4 peer instruction cycles, and an 80-minute lecture period generally contains 3-5 cycles.
The questions I pose are intended to help students grapple with the most challenging and most frequently misunderstood concepts in class. For example, one of my most consistently effective questions occurs in the lecture where we cover types and scales of measurement. The question describes a researcher who measures memory by asking participants to study a list of words and then write down all of the words they can remember. Students are then asked to decide whether the number of words recalled is a self-report, behavioral, or physiological measure. Without fail, a majority of the class incorrectly believes this is a self-report measure, and we have a lively discussion about the distinction between self-report and behavioral measures, including how we might change the measure to make it a different type and why the differences between types of measures matter for psychological research. Asking students to grapple with this distinction in a concrete way helps them develop a much better grasp of the concept and then apply their understanding to novel questions about measurement types later in the course. Had I simply listed some examples of common behavioral measures, they might have memorized that list but never really understood the concept or why it matters.
How Peer Instruction fits into my grading scheme
Students complete peer instruction questions for participation credit, which means they are required to answer the questions, but they are not penalized for choosing wrong answers. In fact, I repeat frequently that the point of Peer Instruction is to discuss wrong answers, and that I am most interested in hearing from students who are unsure of their answer or torn between multiple options. Participation in Peer Instruction accounts for 10% of students’ overall grade in the course, and it is meant to balance an equivalent percentage of their grade that comes from weekly quizzes where the style of question is the same but accuracy counts.
I have used different systems for tracking Peer Instruction participation over the last 6 years, particularly as remote instruction during Covid reshaped the way students participate in class. In the pre-Covid years, I used an in-class response system (similar to iClickers), which allowed me to see student responses in real time. Students earned peer instruction credit for each lecture by answering at least half of the questions posed that day. This allowed some flexibility if students arrived late or needed to leave early, and generally resulted in attendance rates around 90% throughout the term.
During remote instruction, our courses were held over Zoom but we could not require synchronous attendance, so this kind of in-class response system was not feasible. Students who attended class synchronously (either in person or on Zoom) were still able to discuss the peer instruction questions with classmates during class, but any student watching a recording of class would miss out on this part of the cycle. To mitigate the lack of discussion and explanation with peers, I created separate assignments for each lecture in our LMS that prompted all students to write brief explanations of their own reasoning for each question. In these Canvas assignments, each ConcepTest from class was followed by the open-ended prompt: “Explain your reasoning for the previous question. How did you pick your answer? Is anything still confusing or unclear about this question or topic?” Using the “graded survey” option in Canvas (a standard option under the Quiz menu), students were awarded points for submitting the assignment without requiring correct answers. These assignments were graded automatically, and I allowed half credit for late submissions. In the last two years, these deadlines have sometimes been at the end of the week (e.g. all assignments due by Sunday night), by the next class period, or by the end of the class period, depending on the expectations for flexibility and synchronous attendance in any given term.
How the rest of the course builds on Peer Instruction
My biggest pitch to students about the value of peer instruction is that it prepares them to succeed at higher-stakes assignments in my course. A specific goal in my course is to help students feel comfortable reading and evaluating published research in psychology. To that end, students are required to read an assigned article each week and take a quiz on the methods described in the paper. The weekly quiz targets the same topics that were addressed in Peer Instruction questions that week. For example, in the week when we discuss the types of measurement question I described above, I warn students: “I will ask you exactly this kind of question about the article you are reading this week. So while you are reading, I want you to pay attention to how the variables are measured and ask yourself what type of measure it is!” These quizzes are cumulative in the sense that concepts from the earliest weeks in the course are repeated on quizzes in later weeks, so that students revisit the same concepts in new contexts each week.
My exams follow the same format as my weekly quizzes and (by extension) the peer instruction assignments. Students read a brief paragraph summarizing a study and then answer a series of questions about the methods of the study. The beauty of this system is that I can ask essentially the same questions over and over – “What is the independent variable? How many levels does it have, and was it manipulated within or between subjects? What is the dependent variable? What type and scale of measurement best describe it?” etc. – and students have already practiced this type of thinking in class. Generating new questions for quizzes and exams is only a matter of choosing new articles to read or writing new scenarios to evaluate! Every new article adds to my bank of scenarios and questions that can be re-used in future terms.
This ecosystem of questions and assessments helps reinforce the idea that we are developing the skills associated with understanding and evaluating research methods by practicing them over and over again in new contexts. When students answer incorrectly in class or on a quiz, they have the chance to discuss and understand their mistakes before they get to the exam!
How students feel about Peer Instruction
I have been using Peer Instruction in my required, lower-division research methods course for the last 6 years, in class sizes ranging from 20 to 220 students, and I can attest that it is one of the most-appreciated components of my course. In 18 iterations of this course (and over 1,400 students), more than 80% of students reported that they “liked” or “loved” the peer instruction assignments (a rating of 4 or 5 on a 1-5 scale), and over 90% reported that they learned “some” or “a lot” from them (a rating of 3 or 4 on a 1-4 scale). This pattern has held for both in-person and remote versions of the course, in spite of changes (and challenges) to implementing peer instruction online. More than a third of students have spontaneously identified peer instruction as their favorite part of the research methods course.
It is rare to find an instructional technique that is both well-supported by research evidence and also well-liked by students! Peer Instruction has played a huge role in making my course engaging and effective for the psychology majors at UC San Diego. I would be happy to share thoughts and materials with any instructors looking to do the same for their courses!
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.
Schell, J. A., & Butler, A. C. (2018). Insights from the science of learning can inform evidence-based implementation of peer instruction. Frontiers in Education, 3(33), 1-13.
Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323(5910), 122-124.
Smith, M. K., Wood, W. B., Krauter, K., & Knight, J. K. (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE—Life Sciences Education, 10(1), 55-63.
Tullis, J. G., & Goldstone, R. L. (2020). Why does peer instruction benefit student learning? Cognitive Research: Principles and Implications, 5(1), 1-12.