Society for the Teaching of Psychology: Division 2 of the American Psychological Association

E-xcellence in Teaching
Editor: Annie S. Ditta

  • 02 May 2019

    Kameko Halfmann (University of Wisconsin – Platteville)

    I remember the first semester I taught general psychology: I was fresh, energetic, and a little bit naive. Relatively new to teaching, I would read students’ essays and exams, often in frustration, as students clung to misconceptions about psychology that I thought I had adequately dispelled. “How do they not remember me explaining this?!” I would wonder in despair. Since then, it has become one of my missions to figure out how to teach more effectively and dispel these common misconceptions.

    Indeed, students walk into general psychology with a common-sense understanding of human behavior, often heavily influenced by popular science and armed with misconceptions (Lilienfeld, Lohr, & Morier, 2001). Teaching general psychology, I learned, compels active myth busting to help students understand human cognition and behavior through a scientific lens. Best practices in teaching and learning include providing meaningful examples (e.g., Ausubel, 1968), encouraging student cooperation and teamwork (e.g., Johnson, Johnson, & Smith, 1998), and promoting active learning (e.g., Kellum, Carr, & Dozier, 2001), to name a few.

    Over my handful of semesters as an assistant professor, I’ve leaned on these best practices, attempting to incorporate more examples and active learning into all of my courses. I occasionally collected data, dipping my toes into the Scholarship of Teaching and Learning (SoTL); the data always told me that students felt the activities helped them learn.

    I specifically developed an interest in teaching with technology. This interest grew from another revelation I had: students are not, so to speak, the digital natives we think they are (Prensky, 2001). Students use technology frequently, but the EDUCAUSE Center for Analysis and Research (ECAR) suggests student tech use is broad, not deep. Moreover, ECAR’s report (2018) indicates students still need support to use technology in meaningful ways. Similarly, Beetham and Sharpe (2007) indicate that students do not necessarily have the “habits of practice” for navigating new technology. So, I began to incorporate assignments and activities that required students to use technology in educational ways. For example, I incorporated social media assignments into several of my courses.

    Then, last year, I had the chance to apply for an in-house grant titled “Innovations in Teaching with Technology.” I applied with the goal of purchasing Neulog plug-and-play psychophysiology modules. These modules are relatively inexpensive, easy to use, and portable, and they would allow me to incorporate psychophysiology into my courses. Previous research suggested that using technology, such as portable EEG, correlates with enhanced attention, interest, and exam scores (Stewart, 2015). Labs such as these would allow students to “do” psychology and bring course content to life (Dunn, McCarthy, Baker, Halonen, & Hill, 2007) rather than having a lecturer “tell” students about experiments.

    In particular, I thought, psychology students tend to struggle to understand concepts associated with the biological basis of behavior; therefore, employing active learning methods to bring these concepts to life in lab sessions could be especially impactful (Thibodeau, 2011). I ended up receiving the grant. I also decided it was time for me to more seriously assess my teaching using SoTL.

    Initially, I developed one activity, designed to dispel the lie detector myth (i.e., the myth that “the polygraph is an accurate means of detecting dishonesty,” Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010). Students observed me give a demonstration with a student volunteer, showing how to use the equipment. They also saw, through the demonstration, how several stimuli could elicit an electrodermal response. For example, I would have the volunteer take a deep breath, smell a scented candle, and, if they let me, I’d touch their ear with the eraser of a pencil. Each of these stimuli caused an electrodermal response. In other words, the demonstration showed students how the supposed lie detector test was really just measuring autonomic nervous system activity, and many stimuli, not just lying, could lead to changes in sympathetic nervous system arousal. Students then gathered in groups of 5 or 6 and engaged with the technology themselves for about 25 minutes.

    The first semester I used this activity, students reported that it improved the quality of the course, helped them understand concepts, helped them connect to others, promoted professional growth, enhanced their experience of participation, and should be used more often. Each rating was significantly higher than a neutral baseline, with relatively large effect sizes. The following semester, I decided to take this research a step further: did the students actually understand the content better?
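
    (For readers who want to run this kind of check on their own course ratings, here is a minimal sketch in Python of comparing Likert ratings against a scale’s neutral midpoint with a one-sample t-test and Cohen’s d. The ratings, the 5-point scale, and the variable names are illustrative assumptions, not the author’s actual data or analysis.)

        # Compare activity ratings to the neutral midpoint of the scale.
        import numpy as np
        from scipy import stats

        ratings = np.array([5, 4, 4, 5, 3, 4, 5, 4, 4, 5])  # invented 1-5 ratings
        neutral = 3  # midpoint of a 5-point scale

        t, p = stats.ttest_1samp(ratings, popmean=neutral)
        d = (ratings.mean() - neutral) / ratings.std(ddof=1)  # one-sample Cohen's d
        print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")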

    In order to pursue this question, I needed another activity that was similar in format to the first but covered different content. I decided to develop a biofeedback activity using the electrocardiogram module. Students, again, watched a demonstration on how to use the technology and then engaged with the technology themselves, testing how various stimuli affect heart rate and answering questions related to biofeedback.

    I was teaching three sections of general psychology last semester when I assessed student understanding before and after these activities. Early-ish in the semester, when we were covering stress and emotion, I implemented the two activities (i.e., the lie detector activity and the biofeedback activity) over the course of two class periods, using a nonequivalent-group pre-test/post-test design. On the first day, all of my students across the three sections took a pre-quiz on the autonomic nervous system and why the polygraph is not considered an accurate index of lying. Two sections participated in the activity using the Neulog technology (the lie detector active group). The third section participated in a lecture/discussion on the same topic (the biofeedback active group, named for the activity it would complete the following class period). All sections took a post-quiz.

    The following class period, I flipped the groups. The section that had previously participated in the lecture/discussion did the biofeedback activity (i.e., the biofeedback active group), and the other two sections engaged in lecture/discussion on the same topic (i.e., the lie detector active group). Everyone again took a pre-quiz and a post-quiz. I also included four questions (two per content type) on the following exam and two questions (one per content type) on the final exam to assess learning.

    What did I learn? Did the activities work? To be honest, the main thing I learned was how many challenges are associated with conducting SoTL research. I did not find an effect of activity on understanding: neither activity seemed to help or hurt student understanding of the content. But I did see an effect of activity group: the biofeedback active group performed better, on average, across all assessments. I also found an effect of question content: the biofeedback-associated questions were easier, hitting a ceiling for the biofeedback active group on the exam. And I found an effect of time, where students improved from the pre-quiz to the post-quiz (for the lie detector active group) and from the post-quiz to the exam (for the biofeedback active group). But none of these effects interacted with the activity group that students were in. Based on these assessments, participating in an active learning lesson did not boost performance relative to a more lecture-based lesson.
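
    (As a sketch of how one might analyze this kind of design, the mixed ANOVA below treats assessment time as a within-subject factor and activity group as a between-subject factor. The pingouin library, the file name, and the column names are illustrative assumptions, not the author’s actual analysis.)

        # Mixed ANOVA: within-subject factor "time", between-subject factor "group".
        import pandas as pd
        import pingouin as pg

        # Hypothetical long-format data: one row per student per time point,
        # with columns: student (id), group, time ("pre"/"post"/"exam"), score.
        df = pd.read_csv("quiz_scores.csv")

        aov = pg.mixed_anova(data=df, dv="score", within="time",
                             subject="student", between="group")
        print(aov)  # inspect the time x group interaction term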

    But back to some of the lessons I learned about SoTL: Determining an appropriate method of assessment was challenging. I clearly used questions that were not well-matched in difficulty across content. I also tried to use variations of similar questions over the course of the semester for the different assessment time points; however, some of the questions were clearly more challenging than others. So, my first major lesson was

    1. Pretest assessment questions so they are matched on difficulty across content type and time of assessment.

    Another challenge related to my assessment was selecting an appropriate number of questions. I didn’t want this one topic related to my activities to take over my exams, and I ended up using fewer questions than I should have to gauge student understanding. I also relied solely on multiple-choice questions. My second main lesson was

    2. Use several questions and question types to assess understanding over the course of the semester.

    Neither of these lessons is particularly surprising (e.g., see http://regangurung.com/scholarship-of-teaching-and-learning-sotl/ for resources on SoTL), but they do take time and forethought to put into practice. Having assessed students several times now, I can better construct my assessments to reflect student understanding and not simply question difficulty or other artifacts.

    I also attempted to assess students’ understanding at the end of the semester and included two key questions on the cumulative final exam. However, I had decided to drop each student’s lowest exam score of five that semester, so for many students the cumulative final exam was optional, and only 37 students out of 100 took it. This was the first time I used five exams, including a cumulative final, and the first semester I dropped the lowest exam. I did not anticipate that such a low proportion of students would take the final exam. Although not directly related to my SoTL project, I would not use this setup again. Not only did many students miss out on an important learning opportunity (i.e., taking the final exam), it also reduced my analytic power for this research.

    Another challenge I ran into was nonequivalent groups. Two solutions to this problem come to mind. First, I could collect more data with a new sample. Second, I could use random assignment to split my classes into two groups and invite only half of my students to participate in each activity (giving the other half a day off or a recorded lecture). Hopefully, this semester, I’ll collect more data in different courses and reach out to students from last semester to see if I can capture one more assessment from them to measure longer-term retention of the material. Ideally, I will collect the new data using the random assignment technique to split my classes.

    I clearly ran into several limitations that prevented me from drawing confident conclusions at the end of the semester. I don’t know if I will ever be fully satisfied with my teaching, or if it is possible to design a perfect SoTL project. Each semester, it seems my students challenge me in new ways, reigniting my mission to find a better way to teach a concept or dispel a misconception. In the following semesters, I respond by tweaking my courses, and sometimes by completely overhauling a course. I’ll continue to lean on others’ research as I slowly accumulate my own SoTL. I hope this research encourages others to put their own teaching to the test. You may discover something works better or worse than you thought. Or, like me, you might just be at the starting point for figuring out how to best assess student learning to determine what works.


    References

    American Psychological Association. (2014). Strengthening the common core of the Introductory Psychology Course. Washington, D.C.: American Psychological Association, Board of Educational Affairs. Retrieved from https://www.apa.org/ed/governance/bea/intro-psych-report.pdf

    Ausubel, D. P. (1968). Educational Psychology: A Cognitive View. New York: Holt, Rinehart, & Winston.

    Beetham, H., & Sharpe, R. (2007). An introduction to rethinking pedagogy for a digital age. In Beetham, H., & Sharpe, R. (eds), Rethinking Pedagogy for a Digital Age: Designing and Delivering e-Learning. New York, NY: Routledge.

    Dunn, D. S., McCarthy, M. A., Baker, S., Halonen, J. S., & Hill, G. W. (2007). Quality benchmarks in undergraduate psychology programs. American Psychologist, 62, 650-670. https://doi.org/10.1037/0003-066X.62.7.650

    EDUCAUSE Center for Analysis and Research. (2018). The ECAR study of undergraduate students and information technology. Louisville, CO: ECAR. Retrieved from https://library.educause.edu/~/media/files/library/2018/10/studentitstudy2018.pdf?la=en

    Johnson, D. W., Johnson, R. T., & Smith, K. A. (1998). Cooperative learning returns to college: What evidence is there that it works? Change: The Magazine of Higher Learning, 30, 27-38. https://doi.org/10.1080/00091389809602629

    Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101-104. https://doi.org/10.1207/S15328023TOP2802_06

    Lilienfeld, S. O., Lohr, J. M., & Morier, D. (2001). The teaching of courses in the science and pseudoscience of psychology: Useful resources. Teaching of Psychology, 28(3), 182-191. https://doi.org/10.1207/S15328023TOP2803_03

    Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2010). 50 great myths of popular psychology: Shattering widespread misconceptions about human behavior. New York: Wiley-Blackwell.

    Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9, 1-6.

    Stewart, P. C. (2015). This is your brain on psychology: Wireless electroencephalography technology in a university classroom. Teaching of Psychology, 42, 234-241. https://doi.org/10.1177/0098628315587621

    Thibodeau, R. (2011). Design and implementation of an undergraduate laboratory course in psychophysiology. Teaching of Psychology, 38, 259-261. https://doi.org/10.1177/0098628311421325



  • 02 Apr 2019

    Sharon Claffey (Massachusetts College of Liberal Arts)

    Active learning is a popular classroom technique that shows up in several forms. Classrooms are “flipped,” and class time is devoted to activities rather than requiring students to simply listen to lecture. Team-Based Learning (TBL) is a form of small-group learning that is centered on student experience and interaction (Michaelsen & Sweet, 2008) and is very structured in its design. In my past TBL courses, students would read the material independently and come to class ready to take a Readiness Assessment Measure (RAM) on their own, then immediately retake the RAM as a team. The rest of the course time would be divided into team activities and, less frequently, independent assignments.

    When I taught courses using the TBL approach, I was not entirely happy with the rigidity of the design. As with any course design, I found students who loved the TBL approach and students who hated it. In particular, I found that students quickly learned which teammate was the strongest academically and often defaulted to that student (e.g., having that particular student complete the team RAM without input from the rest of the team). This reduction in activity for some students contradicted the point of TBL and frustrated the students who felt they were carrying their team.

    Consequently, I opted to take the part of the TBL classroom that I loved (i.e., the active learning) and create a modified approach that was less reliant on strict formatting. The core goal of the TBL design is to shift the focus of class time from lecture to more active participation and application exercises that facilitate learning and promote critical thinking about the material. Group activities have been shown to increase retention of material along with student satisfaction (Drouin, 2010), and to increase performance on objective knowledge assessments as well as students’ ratings of their ability to apply knowledge (Kreiner, 2009).

    While most research on TBL focuses on learning and classroom performance (e.g., Jakobsen, McIlreavy, & Marrs, 2014; Liu & Beaujean, 2017), I decided to examine the impact TBL has on empathy and social support among the students. This took the essence of TBL, but not the assessment (i.e., the active learning remained intact, but not the traditional TBL grading structure). Would active learning and increased interaction among small groups affect the students’ classroom experience? Specifically, I wanted to examine whether students in an active learning environment had more empathy and felt like they received more social support (both emotional and informational) from their classmates.

    I compared students from my two sections of Social Psychology (taught in consecutive class periods in the same room). Students in both sections of the course received traditional lecture during the first half of the semester. This was intended to ensure that students had time to get used to the material and exam formats, and because I wanted to avoid potential regression to the mean. After the second exam, the students in both sections were randomly assigned to teams, which they sat with during that portion of the course. In my experience, students tend to start a semester sitting near people they know and tend to stay in those seats throughout the semester. I didn’t want friendships, rather than course design, to drive any differences in empathy and social support. After the change in seating arrangement, one randomly assigned section of the course used a modified TBL approach. The modified TBL section and the lecture-based section received the same lectures (available online) and exams, but the lecture-based students did not complete the classroom activities and received the lecture during class time in addition to having the lecture available online.

    Students were given two surveys: 1) after the second exam and immediately after being placed in teams, and 2) after the third exam. At the end of the semester, I found that the classes weren’t different in measurements of empathy or emotional social support. I also found that the two sections did not differ in the reports of information or advice received from classmates on the first measurement of informational social support (which was after placement in new teams but before the active learning component began). At the second measurement, the modified TBL students reported receiving more information or advice from classmates than the lecture group. While this is encouraging, it could simply be an artifact of students needing to work together to complete the team assignments. Thus, it is possible that the informational support was specific to course material and not information relevant to other topics.

    I then examined some other factors to explore potential differences. While the modified TBL students missed more classes on the first measurement than the lecture students, there was no difference between the sections on classes missed on the second measurement. Perhaps the active learning environment increased students’ attendance in addition to impacting students’ perceptions of importance of lecture (since it was rated less important by modified TBL students on the second measurement).

    Peterson (2016) found that students in an active learning environment outperformed students in a lecture section of the same class. I found that students in the modified TBL section improved their exam scores, but the lecture students did not. I also found, on the second measurement, a difference in reports of how important lecture was to understanding the course material, with the modified TBL students rating it as less important than the lecture students did. While this could be explained by hindsight bias, it is interesting to note that the modified TBL students had better exam grades on the second measurement.

    In addition, I found that the TBL class also had a lower expected course grade on the second measurement than on the first measurement. This is similar to the Dunning-Kruger effect (Kruger & Dunning, 1999), although tied not to cognitive ability but to exposure to the material in additional ways. I will also mention that there was no difference between sections on actual course grade. Similar to other researchers (e.g., Travis, Hudson, Henricks-Lepp, Street, & Weidenbenner, 2016), I found that course satisfaction was not impacted.

    Comparing the two classrooms was informative, but I also wanted to examine any differences that happened between the measurements within each class structure. There were no differences in course rating, importance of lecture, classes missed, or student effort put into the class. This is interesting because it indicates that the shift from lecture to active learning did not impact the students in those areas. I have had students tell me that they either love or hate a flipped classroom, so I suppose it is possible that students who liked the course began to dislike it (and vice versa), which washed out any differences.

    In conclusion, I found only some benefits to the modified TBL. While some may find this disappointing, I actually take comfort that there was no difference in course grade or course satisfaction. This gives me freedom in the future to shift the style of the course (between passive and active) without fear that I will be negatively impacting students. Such a shift prevents stagnation in my teaching style and keeps me active and engaged (which I would argue is also important for successful student experiences).


    References

    Drouin, M. A. (2010). Group-based formative summative assessment relates to improved student performance and satisfaction. Teaching of Psychology, 37, 114-118.

    Jakobsen, K. V., McIlreavy, M., & Marrs, S. (2014). Team-based learning: The importance of attendance. Psychology Learning and Teaching, 13, 25-31.

    Kreiner, D. S. (2009). Problem-based group activities for teaching sensation and perception. Teaching of Psychology, 36, 253-256.

    Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

    Liu, S. N. C., & Beaujean, A. A. (2017). The effectiveness of Team-Based Learning on academic outcomes: A meta-analysis. Scholarship of Teaching and Learning in Psychology, 3, 1-14.

    Michaelsen, L. K., & Sweet, M. (2008). The essential elements of Team-Based Learning. New Directions for Teaching and Learning, 116, 7-27.

    Peterson, D. J. (2016). The flipped classroom improves student achievement and course satisfaction in a statistics course: A quasi-experimental study. Teaching of Psychology, 43(1), 10-15.

    Travis, L. L., Hudson, N. W., Henricks-Lepp, G. M., Street, W. S., & Weidenbenner, J. (2016). Team-Based Learning improves course outcomes in Introductory Psychology. Teaching of Psychology, 43(2), 99-107.


  • 02 Mar 2019

    Carolyn Brown-Kramer (University of Nebraska-Lincoln)  

    On the first day of the semester, students in my Advanced Social Psychology course learn that they will be required to facilitate two hour-long group discussions, and to contribute substantially to discussions throughout the semester.  Several panic, most are anxious, and a few drop the course. 

    On the last day of the semester, an hour flies by as students discuss social psychological theories and concepts, empirical articles, real-world applications, and what they have gained from the very process of discussion itself. They pose questions to each other, disagree, draw connections across units, give each other shout-outs for making insightful points, and laugh together as friends.

    What happens in the intervening 15 weeks between the first day and the last day?

    Why student-led discussion?

    Discussion helps students use higher-level cognitive processes to analyze problems (Bloom et al., 1956) and to identify connections within and across courses and to life outside the classroom (Svinicki & McKeachie, 2011). In addition, students learn to engage actively in class, identify the limits of their understanding, and use peers’ insights to fill in these gaps (Cashin, 2011). Student-led discussions further increase these benefits relative to instructor-led discussions (Casteel & Bridges, 2007). In other words, once students have established a basic knowledge base through lecture, readings, or other content delivery methods, discussions help them learn deeply and extend their learning to new circumstances.

    In my 40-student senior-level Advanced Social Psychology course, students explore a series of controversial issues (e.g., “Is ‘hookup culture’ harmful to young adults?”) from multiple perspectives, each beginning with a day of instructor-led lecture, two assigned articles representing opposing sides of the issue, and an out-of-class written assignment to help them process the articles. During the next class period, students spend 45 minutes in small-group discussion, in which groups of about 13 students are led by two student co-facilitators assigned to represent the two sides of the issue. The student co-facilitators are tasked with presenting additional empirical research and working together to engage all group members throughout the discussion. They help students draw connections to the real world, pose thought-provoking questions to stimulate deep conversation, answer questions about the readings and the research articles they presented, and contrast evidence from multiple sources to help their peers understand the nuances of the issue. On each discussion day we reserve the last 15 minutes of class to form one large circle to discuss the major arguments, evidence, and themes brought up in the small groups to cross-pollinate ideas throughout the class.

    These discussions—both small-group and whole-class—are student-led. I do not talk, except to call for the discussions to begin, transition, and end. I don’t call on reticent students, and I don’t jump in to save facilitators who are struggling. And believe it or not, it works, nearly all the time and for nearly all of my students.

    Keys to successful student-led discussion

    Over the eight semesters I have taught this class, I have developed a repertoire of helpful techniques for establishing good groups, helping students develop their skill at facilitating discussion, setting clear expectations for discussion participation, and giving students opportunities for improvement.

    1. Create effective discussion groups. Based on a self-report inventory students complete at the beginning of the semester, I form discussion groups that are heterogeneous in demographics and personality to increase students’ exposure to ideas that differ from their own, and to balance introverted and extraverted students.
    2. Ensure students arrive prepared for discussion. I use written assignments that require students to read the assigned articles carefully, to respond in ways that tie together lecture and readings, and to generate their own discussion questions that they are encouraged to bring up in their groups (Connor-Greene, 2005; West, 2018). Students comment that having to discuss the articles with peers motivates them to read more closely: “This class forced me to sit down and decipher each article in order to understand the material. However, it didn’t end at simply understanding the article but also being able to efficiently communicate what the articles were about.”
    3. Foster community within discussion groups. Students stay in the same discussion groups throughout the semester to increase camaraderie and accountability, establish group norms, and help students make connections with ideas raised previously. On most discussion days, I hear at least one remark along the lines of, “Didn’t you say a couple of weeks ago something about…” or “This reminds me of our discussion last week in which….”
    4. Mentor effective discussion facilitation. I explicitly teach students how to lead effective discussions via assigned readings (i.e., Cashin & McKnight, 1986; Nunn, 1996) and through whole-class discussion at the beginning of the course (Brank & Wylie, 2013). This helps set clear expectations and build discussion facilitators’ comfort and confidence. Throughout the course, but especially toward the beginning of the semester when groups are still establishing their norms, I remind facilitators that their role includes managing both quieter and more talkative members to engage all participants (Svinicki & McKeachie, 2011), and I remain “on call” for any facilitators who struggle with this.
    5. Provide common ground for all students to start from. Evidence suggests that providing a shared basis for discussion increases participants’ comfort and engagement (Svinicki & McKeachie, 2011). Each week, all students in the class attend the same lecture, read the same assigned articles, respond to the same set of prompts on their written assignment, and generate their own discussion questions. In addition, I begin each discussion day with a brief multimedia piece introducing a new aspect of the week’s controversial issue (e.g., for the hookup culture week, we listen to the NPR Hidden Brain episode “Just Sex”). Introverted students typically feel more comfortable drawing upon some aspect of these shared experiences, such as presenting a new interpretation of the material or analyzing how the assigned articles fit together. Extraverted students, in contrast, seem to enjoy driving the conversation in new directions, often drawing connections to the real world or to other coursework within or outside of psychology.
    6. Establish shared expectations for engagement. I am transparent in my expectations and grading, providing feed-forward in the form of detailed syllabus statements and grading rubrics for both discussion participation and facilitation. In addition, students are given feedback based on instructor ratings, TA ratings, peer ratings, and their own self-ratings of discussion participation and facilitation. I provide both formative and summative feedback to participants and facilitators as soon as possible after discussions to help students identify and correct inappropriate expectations (Burchfield & Sappington, 1999; Krohn et al., 2011).
    7. Build student skills throughout the semester. Being an effective discussion participant takes practice, as does being an effective discussion facilitator. Students have the opportunity to participate in 11 discussions throughout the semester and to facilitate discussions twice, with different co-facilitators each time. Following each experience facilitating discussion, students write a metacognitive self-reflection essay indicating what went well and what they want to improve next time. After making changes during their second experience facilitating discussion, students express tremendous pride at their improvement. It is incredibly rewarding for me, as well, to see students’ active listening skills, preparation, professionalism, and confidence increase throughout the semester, and to hear their plans for using these skills in the future. As one student commented, “Sometimes I think it can be more frightening sitting in a group discussion than standing on stage, because in a group discussion people are engaging on the facts that the speaker is presenting. That can put a lot of pressure on that personal speaker. That is something I learned not to be afraid of anymore.”
    8. Facilitate peer-to-peer advice. Here’s my favorite activity in this course. On the first day of class, I give my students each a sealed envelope containing a letter written by a student at the end of the previous semester of the course. As students read and compare letters in small groups, they begin to identify themes in their peers’ advice. Most commonly students are advised to keep up with the readings, to participate actively in discussion even if they’re nervous, and to seek help when they need it. By hearing the same advice from multiple sources, and by hearing it from their peers rather than from their professor, they take the advice seriously. This exercise also increases students’ sense of community right away and models healthy risk-taking—an important step in building trust and increasing their own willingness to “put themselves out there.” Sixteen weeks later, at the end of the semester, the students write their own letters to future students (Gooblar, 2015; Lang, 2016; Norcross, Slotterback, & Krebs, 2001).

    Effectiveness of student-led discussion

    Daily participation rates. I examined the proportion of students earning full credit, partial credit, or no credit for participation in discussion on a week-by-week basis across four semesters of Advanced Social Psychology. For example, for a class of 34 students with 13 weeks of discussion, there were a total of 442 opportunities in which students could earn full, partial, or no credit for their discussion participation. Students earned full credit on 87.5% of discussion days (range: 85.8 to 89.5%), partial credit on 7.8% of discussion days (range: 6.3 to 9.9%), and no credit on 4.7% of discussion days (range: 0.6 to 7.6%). Put another way, combining across students and across weeks, fewer than five percent of students failed to participate on any given day of discussion, and well over three-quarters of students earned full participation credit on any given day—a far cry from traditional discussions in which only a quarter of students participate (Karp & Yoels, 1976, as cited in Nunn, 1996).
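
    (As a rough sketch of this tallying in Python, the snippet below codes one record per student per discussion day and reports the percentage at each credit level. The counts are invented to approximate the rates reported above, not the actual gradebook data.)

        # Tally participation credit across all student-days of discussion.
        from collections import Counter

        # Hypothetical records: 34 students x 13 weeks = 442 opportunities.
        records = ["full"] * 387 + ["partial"] * 34 + ["none"] * 21
        counts = Counter(records)

        for credit in ("full", "partial", "none"):
            print(f"{credit}: {100 * counts[credit] / len(records):.1f}%")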

    Student evaluations of teaching (SETs). On end-of-semester SETs across two semesters (N = 64), 48 students (75%) indicated that they had improved “quite a bit” or “a great deal” at leading small group discussions. In their open-ended responses, a number of students indicated that this was among their favorite classes (n = 28), that they found discussions to be a helpful way to learn (n = 27), that they welcomed controversy or opposing arguments to enrich group discussions (n = 13), that discussions helped them apply class content to real life (n = 11), and that leading—not just participating in—discussions helped them learn (n = 10).

    Student-led discussion is by no means perfect. Even with all of the above techniques and with face-to-face meetings with struggling students, each semester there are still a couple of students who rarely contribute to discussion and a couple of students who tend to dominate it. Students sometimes make rude, offensive, or off-color comments, and although the groups tend to self-regulate and stop such behavior, occasionally I have to step in. I haven’t yet found a panacea for these problems, but the above techniques do help. For the vast majority of students, student-led discussion is an enjoyable, rewarding, intellectually stimulating classroom technique. Just see for yourself in these students’ comments on end-of-semester evaluations:

    • “I feel that especially through group discussions, I have been able to deeply understand the complexity of social psychology concepts, and even apply them to real-world situations, or to other concepts.”
    • “[Discussion] made me learn so much more than just lecture alone. It provided new insight, ideas, and thoughtful consideration.”
    • “I now feel more confident to disagree (even if to just play devil’s advocate) in an academic conversation, as that is one of the easiest ways to encourage people to think beyond their own perception or perspective.”
    • “I feel confident enough now explaining and applying social psychology topics into everyday life, which clearly stem from being able to discuss the topics. Discussing these topics forced me to become comfortable talking about them, and learning how to do so.”

    References

    Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). The taxonomy of educational objectives: Handbook 1, the cognitive domain. New York: David McKay.

    Brank, E., & Wylie, L. (2013). Let’s discuss: Teaching students about discussions. Journal of the Scholarship of Teaching and Learning, 13, 23-32.

    Burchfield, C. M., & Sappington, J. (1999). Participation in classroom discussion.  Teaching of Psychology, 26, 290-291.

    Cashin, W. E. (2011). Effective classroom discussions.  IDEA Paper No. 49. Manhattan, KS: The IDEA Center.

    Cashin, W. E., & McKnight, P. C. (1986). Improving discussions.  IDEA Paper No. 15. Manhattan, KS: Kansas State University.

    Casteel, M. A., & Bridges, K. R. (2007). Goodbye lecture: A student-led seminar approach for teaching upper division courses.  Teaching of Psychology, 34, 107-110.

    Connor-Greene, P. A. (2005). Fostering meaningful classroom discussion: Student-generated questions, quotations, and talking points. Teaching of Psychology, 32, 173-189.

    Gooblar, D. (2015, April 29). Ending at the start.  [Blog post]. Retrieved from https://chroniclevitae.com/news/986-ending-at-the-start

    Krohn, K. R., Foster, L. N., McCleary, D. F., Aspiranti, K. B., Nalls, M. L., Quillivan, C. C., … & Williams, R. L. (2011). Reliability of students’ self-recorded participation in class discussion. Teaching of Psychology, 38, 43-45.

    Lang, J. M. (2016). Small teaching: Everyday lessons from the science of learning. San Francisco, CA: Jossey-Bass.

    Norcross, J. C., Slotterback, C. S., & Krebs, P. M. (2001). Senior advice: Graduating seniors write to psychology freshmen.  Teaching of Psychology, 28, 27-29.

    Nunn, C. E. (1996). Discussion in the college classroom: Triangulating observational and survey results.  Journal of Higher Education, 67, 243-266.

    Svinicki, M., & McKeachie, W. J. (2011). McKeachie’s teaching tips: Strategies, research, and theory for college and university teachers (13th ed.). Belmont, CA: Wadsworth.

    West, J. (2018). Raising the quality of discussion by scaffolding students’ reading.  International Journal of Teaching and Learning in Higher Education, 30, 146-160.



  • 03 Feb 2019

    Judith Danovitch (University of Louisville)


    As an educator and a researcher, one of my primary goals is to enable my students to apply psychological findings to their daily lives. To this end, I encourage my students to share what they have learned in my child development courses with others, but I also worry about them being able to do so accurately and comprehensibly. The last thing I want is for my students to contribute to the pervasive misconceptions people have about psychology (see Lilienfeld, Lynn, Ruscio, & Beyerstein, 2011). Inspired by the growing interest among psychologists in translating research for the public, and the success of innovative outreach events (e.g., the Ultimate Block Party; Grob, Schlesinger, Pace, Golinkoff, & Hirsh‐Pasek, 2017), I designed a course to teach undergraduates how to communicate with the public about psychology through direct experience.

    “Giving psychology away” is a seminar that fulfills a university capstone course requirement for Psychology majors. The course’s goals are for students to identify how psychological theories and concepts can be applied to solving real-world problems, and to understand and critique how the media represent psychological concepts and findings. In the process of meeting these goals, students develop their ability to translate scholarly language into lay terms, and they ultimately demonstrate that capacity by teaching local children about psychology.

    Course content and class sessions

    The course begins with readings and discussions about the value of psychological research for promoting human welfare (Zimbardo, 2004). It then proceeds to sessions addressing the representation of psychological concepts in the media, with examples of both accurate and inaccurate representations, and how the public perceives psychological research (Lilienfeld, 2012).  This includes a discussion of common misconceptions in psychology and how they originated (e.g., the Mozart effect; Bangerter & Heath, 2004). The course also covers ongoing challenges for psychological scientists, such as the “replication crisis” and reliance on WEIRD samples (e.g., Henrich, Heine, & Norenzayan, 2010). Although students have typically completed four years of coursework in psychology, they often remark that the course content is new to them.

    Class sessions revolve heavily around open discussion, and each session includes an activity that incorporates communication skills. One skill that students practice repeatedly is summarizing research concisely using language that a lay audience can understand. For example, after reading a research article, students must state the problem the research addresses, the solution based on the research findings, and the relevance of the study to the public using only three sentences. Students also complete activities intended to support the public’s understanding of science. For example, in a session on media representations of research, students write three tips for evaluating a newspaper article about a research study, and I compile the tips into a class-wide document that students can share with others. In addition, one of the primary writing assignments for the course is to compose a 250-word blog post about a research study. These posts then undergo several rounds of peer editing and are eventually published for the public on the class blog (see http://getpsychedlouisville.wordpress.com/blog/).

    The Get Psyched! Outreach Event

    To put their communication skills into practice, students work in pairs to develop a demonstration of a psychological concept or finding for third graders in the local community. The purpose of teaching third graders about psychology is two-fold: 1) educating children about a scientific discipline that is rarely included in elementary school curricula, yet has direct applications to children’s everyday lives, and 2) challenging students to be as clear and concise as possible. Third graders are an ideal audience because they are old enough to complete basic tasks, yet they have short attention spans and low tolerance for jargon and excessively detailed explanations. As I often tell my students: if you can explain psychology to a third grader, then you can explain it to anyone!

    The first challenge for students is to identify and develop a 3-5-minute task that is engaging and comprehensible to children. Students begin by brainstorming a long list of potential topics, and then narrowing them down to a set that includes a variety of concepts while avoiding overlap (e.g., having two false memory demonstrations). Students are then paired into teams and assigned a topic based on their interests, and they spend the majority of the semester developing the demonstration, including written and verbal explanations of the concept. Some students have presented classic introductory psychology demonstrations such as the Stroop task, and others have developed novel and creative demos of concepts ranging from spatial memory to social conformity. The demonstrations make use of common, inexpensive household materials (paper cups, index cards, blindfolds, etc.), and the only restrictions are that these should not involve consuming food, be very messy, or be excessively reliant on technology. After preparing their materials and practicing their presentations in class, the course culminates with the “Get Psyched!” event in which students share their demos.

    As of Fall 2018, we have held two “Get Psyched!” events at the University of Louisville. The first event was held on a Saturday in a large space on campus. With funding from an internal grant, we printed and posted advertisements for the event around town, and parents were invited to register their children in advance. The event was successful in that approximately 50 parents and children attended, and they unanimously provided positive feedback. However, there was a relatively high no-show rate and, despite our efforts to advertise in lower SES and predominantly minority communities, we found that attendees were predominantly white and from higher SES areas. Requiring college students to be available on a Saturday was also a barrier for students who had family or work commitments.

    The second time the course was offered, the Get Psyched! event was held on two separate school days at an elementary school close to campus that served children from predominantly low SES backgrounds. Students set up their demonstrations in the school gym and third grade classes were invited to attend with their teachers. Children were divided into groups of 3 or 4 and circulated through the demonstrations. Every 8 minutes they rotated from one station to the next, and completed all 7 demonstrations by the end of the hour.

    At the beginning of each event, each child received a “lab notebook” (made of 4 sheets of standard paper, printed on both sides and stapled in the center). Each page in the notebook corresponded to one of the demonstrations and included three sections: 1) “what is the task?,” followed by a preprinted description of the activity, 2) “what happened?,” with space to enter data or mark responses, and 3) “what does this show?,” followed by a blank space. During the demonstration, students explained to the children what they would be doing, and supported them in recording their data (e.g., how many seconds it took to name the colors of each list of words). The students then discussed the results with the children (e.g., “you were slower at naming the colors when they didn’t match the words”), explained the concept underlying the demonstration (e.g., “this happened because you read the words automatically and your brain had to work harder when the color and the word did not match”), and, most importantly, gave an example of how the concept was applicable to the children’s daily lives (e.g., “when you have practiced something many times, it becomes automatic”). Children were also given an opportunity to ask questions about the demonstration. After completing each demonstration, children received a child-friendly written description of the concept and its relevance to daily life, printed on a large mailing label that they were to stick in the “what does this show?” section of their lab notebook. Thus, by the end of the hour, children not only heard and discussed the explanations of each demo with the students, but they also had a complete notebook to take home and share with their families. Additional resources for parents about psychological concepts, including the class blog website, were printed on the back page of the lab notebook as well.

    Feedback

    Anonymous evaluations from parents and children who participated in the Get Psyched! events were universally positive. In their evaluations, children were asked to list their favorite and least favorite activity and one new thing they learned. Following each event, students reviewed the feedback from attendees and wrote a reflection paper about their experience. In these papers, students frequently remarked on how challenging they found the presentations and how communicating psychology to the public was more difficult than they expected. Despite the challenges, students indicated that this course was the first time they had to apply their psychological training outside of the classroom, and that the experience was educational and useful. As the instructor, I have found that teaching this course has helped me develop my own communication skills as well, and doing so has been a uniquely enjoyable and rewarding experience.

    Author’s note

    In the spirit of giving psychology away, the materials for Giving Psychology Away and the Get Psyched! events can be accessed here: https://drive.google.com/open?id=1SCcGIvNclmVixEGFrD-54ao5KjPLpl21



    References

    Bangerter, A., & Heath, C. (2004). The Mozart effect: Tracking the evolution of a scientific legend. British Journal of Social Psychology, 43, 605-623.

    Grob, R., Schlesinger, M., Pace, A., Golinkoff, R. M., & Hirsh‐Pasek, K. (2017). Playing with ideas: Evaluating the impact of the Ultimate Block Party, a collective experiential intervention to enrich perceptions of play. Child Development, 88, 1419-1434.

    Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466, 29.

    Lilienfeld, S. O. (2012). Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific. American Psychologist, 67(2), 111-129.

    Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2011). 50 great myths of popular psychology: Shattering widespread misconceptions about human behavior. John Wiley & Sons.

    Zimbardo, P. G. (2004). Does psychology make a significant difference in our lives? American Psychologist, 59, 339-351. doi:10.1037/0003-066X.59.5.339


  • 02 Jan 2019

    Meredith E. Kneavel (LaSalle University), Joshua D. Fetterman (Chestnut Hill College), Ian R. Sharp (Chestnut Hill College)

    Psychology is unique among the sciences because much psychological subject matter cannot be directly observed. Psychologists often define “invisible” constructs, like emotion or cognition, in terms of observable, measurable, and agreed-upon criteria. These operational definitions allow psychologists to “see the invisible” and keep psychological theories testable and falsifiable. Because of this, operational definitions are foundational methodological concepts for the field of psychology and are featured prominently in various psychology courses. Unfortunately, students often struggle to grasp the nature and importance of operational definitions and sometimes find discussion of this topic dry and boring. To combat this, we suggest a classroom activity that demonstrates the importance of rigorous operational definitions and can also be tied to several different psychological concepts that capture students’ attention. This activity illustrates the necessity of operational definitions, while also engaging students in broader psychological content that is, perhaps, more reflective of their motivation for enrolling in the course. It also offers a rare opportunity to watch cartoons during class, which students (and their teachers) may appreciate.

    The Demonstration

    The purpose of this demonstration is to illustrate the importance of operational definitions for behaviors and constructs in psychological research. It has been recognized that crafting exact operational definitions of psychological concepts can be difficult (see Marx, 2010), which is the point of the exercise discussed here. The overall demonstration takes approximately twenty minutes and uses a Looney Tunes clip. Any clip depicting physically aggressive behavior will suffice, though we have used the “Rabbit Season, Duck Season” trilogy in the past. The instructions to students consist only of “count the number of aggressive acts that you observe”; no definition of “aggressive act” is provided. The video is a little less than five minutes long, and, at the end, the instructor asks students to share how many aggressive acts they recorded.

    After the exercise, the instructor should gather the aggression scores and lead a discussion of how students defined aggression. It is important to record the aggression scores (particularly the lowest and highest in the range) for later in the demonstration. We recommend recording the number of aggressive acts from each student in an Excel spreadsheet, where the mean and standard deviation can be quickly calculated. If anonymity is preferred, Poll Everywhere or similar tools allow students to submit their ratings via cell phone and have the results projected to the class. Poll Everywhere can be programmed to create automatic bar graphs to illustrate the range of responses.

    Following the sharing of the number of aggressive acts observed, the instructor can facilitate a discussion addressing why students recorded different scores, which often leads to a discussion of how and why aggression was viewed differently. For instance, there may be a gender difference in the conceptualization of aggression. This can lead to a discussion of how researchers may operationalize aggression as physical, nonphysical, or relational (Crick & Grotpeter, 1995). Following this discussion, the class can then come to a consensus about what aggression is and how it can be operationalized. At this point, it is helpful to go back to the video and ask whether certain acts are considered aggression or not. This helps to refine the class’s operational definition and can start a conversation about inter-rater reliability.

    Once the class has agreed upon a definition of aggression, the instructor can re-show the video and again instruct the class to “count the number of aggressive acts observed.” The instructor can then compare the ranges or standard deviations for the two sets of numbers to illustrate the spread in scores between the first trial and the second trial. Typically, after the class has agreed upon an operational definition, the range of scores is much smaller, with students generally agreeing on about fifteen to twenty acts of aggression. Any outliers can lead to a very interesting discussion, as they usually mean a student had a sudden insight about aggression that wasn’t captured in the original formulation of the definition. Illustrating the change in the range of scores highlights the importance of having an agreed-upon operational definition.
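
    (For instructors who prefer a quick script over a spreadsheet, here is a minimal sketch in Python of the comparison just described; the counts are invented for illustration.)

        # Compare the spread of aggression counts before and after the class
        # agrees on an operational definition.
        import statistics

        trial1 = [8, 12, 22, 15, 30, 9, 18, 25, 11, 14]   # no definition provided
        trial2 = [16, 17, 18, 16, 19, 17, 18, 16, 17, 18]  # shared definition

        for label, counts in [("Trial 1", trial1), ("Trial 2", trial2)]:
            print(f"{label}: M = {statistics.mean(counts):.1f}, "
                  f"SD = {statistics.stdev(counts):.1f}, "
                  f"range = {max(counts) - min(counts)}")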

    This technique is primarily valuable in demonstrating the concept of operational definitions but has secondary uses in reinforcing or illustrating concepts such as gender differences in perceptions of aggression, measures of dispersion (range and standard deviation), inter-rater reliability, and difficulties in assessment and observational research. Because class time is valuable, this short activity is particularly useful as it allows the flexibility to incorporate multiple concepts into one demonstration.

    Gender Differences Adaptation

    Most research indicates that males are more physically and verbally aggressive than females (Archer, 2004; Card, Stucky, Sawalani, & Little, 2008; Hyde, 1984). Females tend to exhibit more relational aggression (Card et al., 2008; Ostrov & Keating, 2004), especially during the teenage years (Archer, 2004). However, the overall gender difference in relational aggression is small and seems to depend on data collection methods (Archer, 2004; Card et al., 2008; Eagly & Steffen, 1986). Nonetheless, if the clip primarily depicts physical aggression (as most cartoons do), gender differences in the number of aggressive acts that students record should appear. Gender differences can be illustrated by having students count aggressive acts (as described above), by having students make Likert-scale ratings of the aggressiveness of characters, or both. It is possible that gender differences may be found using one measurement technique but not the other. Furthermore, this demonstration could be modified to focus specifically on gender differences in aggression by showing two clips, one that depicts physical aggression and one that depicts relational aggression, and then discussing the gender disparity.

    Misinformation Effect Adaptation

    Research indicates that human memory is quite fallible (Chan, Jones, Jamieson, & Albarracin, 2017; Loftus, 2005; Loftus & Pickrell, 1995), particularly where eyewitness testimony is concerned (Wells & Olson, 2003). Indeed, faulty eyewitness testimony is partly responsible for the distressingly high number of wrongfully convicted individuals who are later exonerated through the use of DNA evidence (Wells & Olson, 2003). Typically, in research on false memories, individuals are shown a video and later given incorrect information or asked leading questions about what they saw. Often people will erroneously recall the incorrect information as having come from the video (Loftus, Miller, & Burns, 1978), or reconstruct their memories of the video to be more consistent with the leading questions (Loftus & Palmer, 1974). These flaws in memory can be discussed in the context of the operational definitions activity described above. If individuals cannot agree on what they saw in the first place, it is not possible for their assessments to be accurate (in the same way that reliability is a prerequisite for validity). After counting the number of aggressive acts they saw in the cartoon, half of the class could be asked to rate how aggressively the individuals “fought” during the video, while the other half could be asked to rate how aggressively the individuals “interacted” during the video (students should be unaware that the class has been asked two different questions until after the demonstration has concluded). Those who read the word “fought” should make higher ratings of aggressiveness due to the leading nature of the question, even though all members of the class will have seen the same video.
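
    (A minimal sketch of analyzing this wording manipulation in Python: compare the two halves of the class with an independent-samples t-test. The ratings are invented for illustration, and Welch’s test is one reasonable choice since the two halves need not have equal variances.)

        # Compare aggressiveness ratings under the two question wordings.
        from scipy import stats

        fought = [6, 7, 5, 6, 7, 6, 5, 7]      # invented 1-7 ratings, "fought" wording
        interacted = [4, 5, 4, 3, 5, 4, 4, 5]  # invented ratings, "interacted" wording

        t, p = stats.ttest_ind(fought, interacted, equal_var=False)  # Welch's t-test
        print(f"t = {t:.2f}, p = {p:.4f}")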

    Clinical Applications Adaptation

    Inter-rater reliability is of significant importance in a variety of clinical applications. For example, evidence of poor inter-rater reliability in the administration of symptom severity outcome scales has led to negative or failed clinical trials in which the treatment otherwise would have outperformed a placebo (Kobak, Feiger, & Lipsitz, 2005). In our course on Psychological Assessment, we use videos streamed from the Internet demonstrating clinician-administered, semi-structured diagnostic and severity scales (e.g., the Montgomery-Asberg Depression Rating Scale [MADRS]). In one demonstration, students are provided with the Structured Interview Guide for the MADRS (SIGMA) and asked to rate the ten items. Once the students have rated the ten items, scores are collected and an intra-class correlation coefficient (ICC) is generated. Each item is then reviewed, with a discussion of discrepancies in scoring and the use of the instructor's scores as a gold standard. This scale is particularly useful for discussing inter-rater reliability because the ten items require the rater to consider the intensity, frequency, and duration of multiple constructs of depressive symptoms. Each of the ten items is rated from 0 to 6, and untrained undergraduate students tend to produce a large range of scores within each item. The ten items are then discussed, and students are asked to explain how they arrived at their scores, often providing fruitful examples of why ratings differed. This is also an opportunity for students to discuss the administration of the scale and to illustrate various important interviewing techniques (e.g., avoiding leading questions, clarifying ambiguous information). The cartoon video can be used in advance of the introduction of the clinical scale as a means of illustrating the importance of operational definitions. The video can also be used to reinforce concepts of inter-rater reliability by systematically reviewing acts of 'aggression': the class can go back through the video together and discuss the specific acts until there is agreement between raters.
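
    For instructors who want to generate the ICC themselves, the sketch below computes a two-way, consistency-based ICC (Shrout and Fleiss's ICC(3,1) for a single rater and ICC(3,k) for the average of k raters) from an items-by-raters matrix using only NumPy. The student scores are invented for illustration, not data from our course.

```python
import numpy as np

def icc3(ratings):
    """ICC for a two-way mixed, consistency model, given an
    items-by-raters matrix. Returns ICC(3,1) and ICC(3,k)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # items
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # raters
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    single = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)  # ICC(3,1)
    average = (ms_rows - ms_err) / ms_rows                      # ICC(3,k)
    return single, average

# Hypothetical 0-6 scores on the ten MADRS items from four student
# raters (rows = items, columns = raters; values invented).
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 6, 5, 5],
    [1, 1, 2, 1],
    [3, 4, 3, 3],
    [0, 1, 0, 1],
    [6, 5, 6, 6],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
], dtype=float)
print("ICC(3,1) = %.2f, ICC(3,k) = %.2f" % icc3(scores))
```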

    The adaptability and utility of the demonstration span multiple courses, and the activity can be molded to fit the number, type, and level of students. The demonstration can be utilized in a research methods course or in a content-specific course, such as social psychology.


    References

    Archer, J. (2004). Sex differences in aggression in real-world settings: A meta-analytic review. Review of General Psychology, 8, 291-322. https://doi.org/10.1037/1089-2680.8.4.291

    Card, N. A., Stucky, B. D., Sawalani, G. M., & Little, T. D. (2008). Direct and indirect aggression during childhood and adolescence: A meta-analytic review of gender differences, intercorrelations, and relations to maladjustment. Child Development, 79, 1185-1229.

    Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracin, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28, 1531-1546. https://doi.org/10.1177/0956797617714579

    Crick, N. R., & Grotpeter, J. K. (1995). Relational aggression, gender, and social-psychological adjustment. Child Development, 66(3), 710-722. https://doi.org/10.2307/1131945

    Eagly, A. H., & Steffen, V. J. (1986). Gender and aggressive behavior: A meta-analytic review of the social psychological literature. Psychological Bulletin, 100, 309-330.

    Hyde, J. S. (1984). How large are gender differences in aggression? A developmental meta-analysis. Developmental Psychology, 20, 722-736.

    Kobak, K. A., Brown, B., Sharp, I., Levy-Mack, H., Wells, K., Okum, F., & Williams, J. B. W. (2009). Sources of unreliability in depression ratings. Journal of Clinical Psychopharmacology, 29, 82-85. https://doi.org/10.1097/JCP.0b013e318192e4d7

    Kobak, K. A., Feiger, A. D., & Lipsitz, J. D. (2005). Interview quality and signal detection in clinical trials. American Journal of Psychiatry, 162(3), 628. https://doi.org/10.1176/appi.ajp.162.3.628

    Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learning & Memory, 12, 361-366.

    Loftus, E. F., Miller, D. G., & Burns, H. J. (1978). Semantic integration of verbal information into a visual memory. Journal of Experimental Psychology: Human Learning and Memory, 4, 19-31.

    Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585-589. https://doi.org/10.1016/S0022-5371(74)80011-3

    Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric Annals, 25, 720-725.

    Marx, M. H. (2010). Operational definition. In I. B. Weiner & W. E. Craighead (Eds.), The Corsini encyclopedia of psychology (p. 1129). Hoboken, NJ: Wiley.

    Ostrov, J. M., & Keating, C. F. (2004). Gender differences in preschool aggression during free play and structured interactions: An observational study. Social Development, 13, 255-277.

    Wells, G. L., & Olson, E. A. (2003). Eyewitness testimony. Annual Review of Psychology, 54, 277-295. https://doi.org/10.1146/annurev.psych.54.101601.145028


  • 03 Dec 2018 1:33 PM | Anonymous

    Chris Hakala (Springfield College)

    For the past several years, I have had the good fortune of serving both as a classroom instructor of psychology and as the director of a teaching center. Prior to this, I was a faculty member and have taught many, many courses over the length of my career. During all this time, I have attended many conferences, talks, workshops, etc., all dedicated to teaching. From many of those events, I came away feeling like much of what was discussed or described about effective teaching was really couched in the world of psychology. Faculty were using principles of psychology to help students learn effectively, or they were talking about "novel" approaches to teaching that, according to much of the data, would, in fact, hinder effective learning. What I didn't hear much about, at least up to that point, was translational research that tried to systematically use what we know about how students learn to impact our basic classroom practice.

                Much of that has changed over the last 20 years, as more and more researchers have begun to systematically examine how we can implement psychological principles in the classroom. Much of the work has come out of labs at Washington University under the direction of Roediger, but there are many, many people who have contributed to this conversation (e.g., Benassi, Overson, & Hakala, 2014; Brown, McDaniel, & Roediger, 2014; Lang, 2016; McDaniel, Agarwal, Huelser, McDermott, & Roediger III, 2011; Sana, Fenesi, & Kim, 2011; Whiffen & Karpicke, 2017). In many of these conversations, the discussion has centered on what the faculty member can do to improve the memory and learning of students. For example, much of the work on retrieval-enhanced learning suggests that by creating conditions under which students are required to repeatedly retrieve information, learning should improve. There is clear evidence that this, in fact, works. When students are required to retrieve information, there are overwhelming positive benefits for those students (see Karpicke, 2012). If, for example, a faculty member gives quizzes each class to test knowledge acquired in a previous class, students are more likely to remember that information when they are required to retrieve it for a cumulative exam. This alone is great information for students, and the kind of finding that should lead faculty to quiz extensively to increase learning.

                There are other findings that are also supported by the data. One is that if you interleave content (Blasiman, 2017), students are more likely to retain information over a longer period of time. Interleaving helps students by teaching one set of concepts, switching to another set of concepts, and then returning to the original concept. When done across classes, there is strong evidence that students benefit from the second exposure to the content under different conditions.

                Given the increased evidence that utilizing such concepts in the classroom improves student learning, it stands to reason that teaching is getting better and that students are learning and are able to transfer that knowledge to other contexts and domains. In short, the student experience should be one that is seamless, integrated, and more complete than it has ever been.

                Sadly, this is not the case. One of the biggest problems with teaching is that we often know a lot about one thing but do things differently when required to act. For example, a classic finding in psychology is that organisms do better when reinforcement is used rather than punishment. It's taught in psychology courses around the country, and it's a concept that is clearly understood by anyone who has even a passing knowledge of basic psychological principles. However, when faced with a behavior that is not a desired one, people often resort to punishing that behavior rather than using the strategy that has been shown to be more effective under many conditions.

                Why do we, as faculty, do the same thing? That is, given all that we know works in the classroom, why do faculty still resort to teaching in a manner that has been shown not to increase student learning and not to help students transfer knowledge from one context to another?

                This straw-man argument is one that is bandied about at many a teaching conference, and often by psychologists, who are stunned that their less-informed colleagues are not using all that we know about human behavior to better educate our college students. The typical comments are similar to, "Well, we know that lecturing doesn't work. Why do we still do it?" or "Why don't these students read what we tell them to read?" or "It's not my responsibility to hold my students' hands."

                The argument suggests that learning is straightforward and that if we do these things, students will learn, our colleges and universities will improve, and life will be better. All we need to do is read Small Teaching or Make it Stick and do what they tell us, and we will now be the best model teachers that we can be, and our students will be amazing. Oh, were it that simple.

                As psychologists, we know that behavior is complicated. I'd like to add, from a pedagogical standpoint, that learning is messy and teaching is not only idiosyncratic, but also deeply personal. To say that all we need to do is X to improve teaching is to underestimate all that we know about human behavior. It's the equivalent of saying, "If you eat green beans, you will live to 100." Life, like learning, is messy, with all sorts of variables that can impact any given situation. To maximize any given situation, one needs to be flexible, adaptable, aware of what is effective, and understanding of the role of uncontrollable factors. In the green beans scenario, consider that one might take it to mean that to live a long life, one needs to eat well. That's true, to some extent, but there are countless counterexamples. One needs many different factors to coalesce for a long life. Eating well is one, but it is neither necessary nor sufficient. We have SOME control over the factors that impact these very important landmarks in our world, but to think that we have complete control over our life span is to be, I would argue, a bit delusional.

                I would say the same about teaching and learning. The idea that there is one ideal teaching strategy or one ideal teaching approach that would work for all students is folly. Rather, I would argue, similar to most of the things we face in life, an understanding of teaching and learning requires us to know several things:

    • 1.      How humans learn
    • 2.      How we can impact how our students learn
    • 3.      How we can do that in our classroom given
    • a.     Who we are
    • b.     Who our students are
    • 4.      How that translates to classroom activities that are consistent with our discipline
    • 5.      How that is received by students in our particular institutions’ culture

    To ignore this is to pretend that all students come to us with the same preparation, all faculty enter the class with the same sets of skills to teach, and all of us teach at institutions that have nearly identical campus cultures. In short, this doesn’t really seem to make a lot of sense.

    I would like to argue that we should recognize that teaching and learning are personal tasks and that to really be effective as instructors, we need to recognize:

    • 1.     Our strengths and weaknesses in presenting course material
    • 2.     Our students’ strengths and weaknesses
    • 3.     Our content and what it lends itself to
    • 4.     Our institution’s culture
    • 5.     How what we know about learning can be crafted to fit the above-mentioned issues.

    Thus, an effective classroom is one that makes use of what we know about how students learn, but it is one that is crafted to best meet the needs of our students in the context of what works within the course, the institution, and the instructor's skill set.

                As psychologists, we have a good understanding of human behavior. And, given that knowledge, we should apply it to any interactions we have with other humans. In our research, we carefully weigh variables, look for confounds and other factors that will impact our results.  We need to consider these exact attributes when we design our courses and plan our class sessions.

    Be aware of how students learn, read the work by others, and ADAPT it to your context, to your classes, to your teaching, and to your students. Only you, as the instructor, know what strategies would fit in your courses. Consult with your teaching center, or with others on campus who know the literature. However, when translating that into your classes, draw on your experience, your expertise, and your own knowledge to craft a classroom experience that maximizes learning for your students and does so in a way that is authentic, effective, genuine, and productive.

    References

    Benassi, V. A., Overson, C. E., & Hakala, C. M. (2014). Applying science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php

    Blasiman, R. N. (2017). Distributed concept reviews improve exam performance. Teaching of Psychology, 44(1), 46-50.

    Brown, P. C., McDaniel, M., & Roediger, H. (2014). Make it stick: The science of successful learning. Cambridge, MA: The Belknap Press of Harvard University Press.

    Karpicke, J. D. (2012). Retrieval-based learning: Active retrieval promotes meaningful learning. Current Directions in Psychological Science, 21, 157-163.

    Lang, J. (2016). Small teaching: Everyday lessons from the science of learning. San Francisco, CA: Jossey-Bass.

    McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L., III (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103(2), 399-414.

    Sana, F., Fenesi, B., & Kim, J. A. (2011). A case study of the introductory psychology blended learning model at McMaster University. The Canadian Journal for the Scholarship of Teaching and Learning, 2(1), 6.

    Whiffen, J. W., & Karpicke, J. D. (2017). The role of episodic context in retrieval practice effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 1036-1046.


  • 01 Nov 2018 5:43 PM | Anonymous

    Bonnie Laster (Wingate University)

    What immediately comes to mind when one considers psychology? Dull? Dry and boring theory? Students commonly regard general psychology as a tedious Gen Ed obligation; another box to tick on the graduation checklist. Though instructors may occasionally glean satisfaction from the indiscriminate spark lit in a previously unmotivated student, I would venture to guess most of us are challenged to effectively disseminate largely fundamental and theoretical content to an assembly composed of underclassmen, the majority of whom are non-psychology majors. But here's the thing: this content, particularly its practical application, is important. Psychological inquiry is essential in examining the why and how of human behavior and cognition, regardless of a student's intended field. Students may remember foundations of other fields examined during their undergraduate years, but they can actually use psychology in any occupation, across any discipline, at any time. General psychology can teach students how to understand human behavior, including their own, which is an invaluable skill. Moreover, psychology is actually pretty interesting, as well as multifaceted. So why, then, do students often view it as a wearisome necessity? Perhaps it's because of us.

    It is extraordinarily easy for faculty to fall into the trap of "textbook teaching," that is, instructing in a traditional lecture-dense format with a standard "start with chapter 1" approach. Although a traditional approach has its merits, it may essentially undermine the wealth of knowledge residing undeveloped and untapped within the audience itself. Students come to college with an enormous amount of personal experience in human behavior and cognition. Perhaps you have witnessed the "Aha!" or "So, that's why I do that!" moment as students connect their experience of human behavior with psychological theory. Psychology holds the unique benefit of relating to everyone and everything. Though not necessarily intuitively, psychology examines behavior and cognition we have all experienced and will continue to experience in our distinct journeys. As instructors, we may benefit greatly from exploring this implicit knowledge and expanding upon it.

    Research has long suggested students learn best not only by acquiring knowledge, but by organizing it meaningfully (Chi, Glaser, & Farr, 1988). As such, from day one in my classes, I try to integrate students' own intimate experience with psychological concepts. Doing so supports personally meaningful interactions for students and offers them a familiar anchor as we expand the concept beyond their tangible experience. Overwhelmingly, the best resource I have found to teach foundational content is the actual student. Students come pre-equipped with a certain level of fundamental understanding of psychology. Tapping into this understanding via what I have termed an inverted constructivist curriculum (IvC) can be an effective way to facilitate students' awareness of their existing knowledge by allowing them to explore what they know, but don't explicitly know that they know.

    Think about personal examples students may have offered in your classes. Most students can relate to much of what we're teaching. For example, what student hasn't experienced operant conditioning or social loafing? Who hasn't experienced fight or flight, or had the occasional struggle with memory retrieval during an exam? We should capitalize on this experience. In the IvC approach, two main things are inverted: topics and execution. That is, while historical concepts and classic theory may seem a logical starting place for many (as evidenced by the majority of general psychology textbooks), I begin with students' understanding of themselves through examination of personality and social psychology. I also invert execution, allowing students to discuss their experiential familiarity with concepts before connecting them to definitions.

    Topics

    Although Chapter 1 may be an intuitive place to start in an introductory course, to capture the essence of one's understanding of self (and to simultaneously capture student interest), I have found personality can serve as an effective starting point. Personality is typically viewed with interest by most students, and starting here holds the additional benefit of student self-analysis. By participating in common personality measurements such as the Big 5 Factor Inventory or Gray's Reinforcement Sensitivity Theory, students may understand their own perspectives and nuances more clearly, while also gaining insight into classroom behavior (e.g., from Gray's (1970) theory we can predict that students high in BAS (behavioral activation system) will be more likely to participate in class discussions than students higher in BIS (behavioral inhibition system)). Social psychology also tends to be popular with students. Online media provide real-world scenarios illustrating such concepts as groupthink or group polarization. Confirmation bias can help explain why students' parents (or students themselves) are drawn to a particular media outlet, to the exclusion of all others. Learning is another area students relate well to, particularly when discussing welcome ideas (yes, you should sleep more in college to help consolidation into LTM!). I have found that rather than starting with historical underpinnings, classic theory, or early pioneers in the field, capturing the students' interest from day one with more relatable concepts can help sustain attention when the "drier" ones are considered. I do cover history and systems, methodology, etc., but introduce them after we explore the more relatable areas of psychology.

    Execution

    Beyond topic, I also invert execution, asking students to first consider their existing experience within relevant parameters. I offer definitions and explanations after the concept has been explored within students' experiential understanding. For example, a typical introduction of a topic may start off with something as nonchalant as, "How did you learn to ride a bike?" to segue into scaffolding or, "Have you ever studied diligently, then blanked while taking a test?" to acquaint students with memory. Although some topics within psychology don't necessarily lend themselves to the IvC approach (let's hope not many students can relate to phrenology), the majority of concepts can. After posing the guiding question(s), my role is to observe while students talk amongst themselves, sharing their various experiences. With smaller classes, I encourage small groups of students, while students in larger lectures can pair and share with immediate neighbors. After an appropriate amount of time (less than 5 minutes, typically around 2-3), I reassemble the students to share with the larger group and examine the concept more didactically through a traditional PowerPoint or outline lecture. This technique allows students to first explore their own knowledge and experience while simultaneously constructing meaning with peers (a nod to both Piagetian and Vygotskian theories). I have found that with personal and shared experience in mind, students can then assimilate empirical definitions and explanations more readily and with greater meaning. I have also found that, surprisingly, this technique really doesn't take any extra time; we still cover all of the necessary topics over the semester. In fact, we sometimes run ahead of schedule, since students are able to internalize the concepts more quickly. The strongest advantage of the IvC is its covert nature. By the time the more refined aspects of the topic at hand are explored, students have already created a deeper meaning with it, through consideration of their existing experience, as well as the experience of their peers. And, truthfully, students also enjoy the opportunity to talk about themselves.

    Considerations

    Though I view the IvC as a logical and pragmatic approach, to be clear, I am by no means implying that psychology is solely "common sense," or that by considering their own experience alone students can gain a thorough and sophisticated understanding of psychological theory. Just because students can relate personal experience to concepts does not negate the scientific nature of the discipline. It is also not a "blow off" approach that over-simplifies concepts or lacks proper assessment. I include rigorous student evaluation via examinations, research papers, and group projects. Although the IvC can be a fun and personal way to explore psychology, traditional accountability is still maintained. What about the reluctant student, the one who doesn't wish to share their experience or participate in group activity? I always allow students to work independently if desired, by jotting down their own experiences without pairing up, considering theories and concepts on their own, or brainstorming real-world examples from media or fiction. What do students think about the IvC? When incorporating it, I tend to see greater class attendance and engagement, as well as higher academic achievement. Student feedback, via end-of-semester surveys and assessments, reflects a positive experience for most students. Students generally like the curriculum, citing personal and peer real-world examples as its particular strength. Previous students occasionally even get back in touch with me to share how this approach has helped them retain psychological concepts in their various pursuits.

    Worth considering, however, are the limitations of the IvC. It is not necessarily one size fits all; not every class may benefit from its unique structure. Large lecture classes with a propensity for unruliness may not be suitable for the approach, as students may take advantage of too much freedom and talk time. Some departments require standardized instruction with specific topics examined at explicit points during the semester, leaving little wiggle room to incorporate student participation. Ultimately, successful IvC incorporation depends upon the students themselves. Students must be willing to share with one another to make the approach work. Although I try to incorporate the approach in most of my classes, I've found some groups simply aren't as cohesive as others, or may be unenthusiastic about sharing. I typically try to start off with the curriculum, tweaking for more or less reflection, and more or less lecture, as necessitated by the group.

    To summarize, the IvC incorporates the following points: By tapping into their inherent and experiential familiarity with concepts, students themselves serve as the creators of fundamental knowledge. Students learn to associate their experience with psychological foundations. As a result, students are able to organize concepts in a personally meaningful way, which in turn promotes interest and retention. Although the IvC incorporates didactic instruction, active learning is paramount to the curriculum, as students personally and socially construct meaning. I've found great success with this approach. I hope you will too.

    References

    Chi, M.T.H., Glaser, R., & Farr, M. (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.

    Gray, J. (1970). The psychophysiological basis of introversion-extraversion. Behaviour Research and Therapy, 8, 249-266.


  • 01 Oct 2018 2:34 PM | Anonymous

    Robert R. Bubb, Jamie Sailors, Sharon Wilbanks, Margaret Vollenweider, Emily Cumbie, & Hannah Ferry  (Auburn University)

    Indicators of student success at colleges and universities take many forms (Suskie, 2009). One indicator is the ability to produce employable graduates (Hoachlander, Sikora, & Horn, 2003). Students' successful marketing of their skills, experiences, and knowledge can influence employment opportunities following graduation (Floyd & Gordon, 1998). A national survey found that 80% of employers valued an electronic portfolio that summarizes and demonstrates applicants' key skills and knowledge (e.g., teamwork, oral and written communication, critical thinking) when determining whom to hire (AACU, 2015). The ePortfolio Project at Auburn University is a campus-wide initiative that encourages students to develop a personal website highlighting their skills, experiences, and knowledge through reflective contextualization of artifacts. ePortfolios provide a modern, 21st-century context for students to communicate with a professional audience; ePortfolios are more professional than Facebook and more personal than LinkedIn. The ePortfolio initiative at Auburn University is implemented at the department level. The following essay highlights the Human Development and Family Studies (HDFS) Department's efforts to implement an ePortfolio within its major, which may be useful for implementing ePortfolios at other universities, departments, and courses.

    Learning Objectives

    HDFS majors complete a basic ePortfolio as an assignment in a required professional development and ethics course. This initial ePortfolio draft includes an About Me page, a resume, at least one professional image of the student, a contact page, and one artifact. The artifact reflects a skill, experience, or knowledge gained as part of the course. As students traverse the major curriculum, faculty in other courses create assignments, such as written papers, course reflections, presentations, and field experiences, that can be used as artifacts. Prior to graduation, the ePortfolio is reviewed in the capstone internship course during the students' last semester.

    From implementation to finalization of the ePortfolio, students meet the following learning objectives:

    • ·       Articulate a professional philosophy that identifies and supports professional goals;
    • ·       Engage in self-reflection to identify personal strengths and areas for improvement;
    • ·       Think critically about how accomplishments relate to career goals;
    • ·       Write effectively to convey a clear message to a professional audience;
    • ·       Apply classroom knowledge to professional practice; and
    • ·       Demonstrate technical competency in basic web design, visual literacy, and presentation.

    Websites

    Auburn University supports four free website platforms that students use to create ePortfolios (i.e., Wix, Weebly, WordPress, and Google Sites). Each platform has different strengths and weaknesses; however, Wix, Weebly, and WordPress rate similarly on ease of use, customization, storage, user support, and administrative settings. Anecdotally, students prefer Wix or Weebly; these two platforms provide clear descriptions and intuitive customization tools. A comparison among the platforms is located at the following link: http://wp.auburn.edu/writing/wp-content/uploads/Choose-A-Platform-8-17.pdf

    About Me Page

    A basic ePortfolio commences with the About Me page. The About Me page introduces the student, explains her or his professional goals, and articulates the purpose of the ePortfolio. The page clearly identifies the student's post-graduation goals and centers on an overall theme that connects the student's skills, experiences, and knowledge to her or his professional goals, such as major, interests, and future plans. Essentially, the page is similar to the job interview prompt, "Tell me about yourself," and emphasizes the professional, rather than the personal, aspects of the student. Any personal information or stories should directly support professional goals. The About Me page also includes navigational links to content pages that contain evidence of the student's skills, experience, and knowledge.

    Content Pages

    An ePortfolio includes several content pages that can focus creatively on a variety of themes; however, the most common pages center on professional, volunteer, study abroad, and service experiences. Content pages about professional experiences highlight academic coursework, internships, employment, study abroad, and research and teaching experiences related to career goals. Pages on volunteer experiences and service focus on work with charities and memberships in professional organizations that relate to the student's career goals. Pages on interests and honors discuss awards and hobbies that demonstrate professionally relevant skills, abilities, and knowledge.

    Each content page contains artifacts. Artifacts serve as evidence and communicate to a professional audience the skills and knowledge students learn from their college experience. Artifacts may include text, images, videos, PowerPoint presentations, course assignments, class presentations and papers, and conference presentations. Each artifact is contextualized through reflective writing. Reflection provides a brief explanation of how each artifact relates to the student's goals, to other experiences, and to the skills needed for successful employment or graduate studies. The written text explains what the artifact is (what?), why it matters (so what?), and how the experience informs the future (now what?).

    Quality Assurance

    A successful ePortfolio requires a high standard of quality. Once published online, the ePortfolio is available to anyone with Internet access. A poorly created product may reduce rather than improve a student’s chances to gain meaningful employment or acceptance to a graduate program. Students are encouraged to release their ePortfolio to a professional audience only once it meets a professional standard. The university and the HDFS department provide several resources to assist students in producing a quality product.

    The Miller Writing Center at Auburn University provides online and in-person resources. Online resources include tips on identifying artifacts, choosing a theme, learning how to write reflectively, understanding ethical literacy, and ensuring that essential criteria have been met before publishing. Examples of ePortfolios are also available. In-person resources include appointments with writing tutors and ePortfolio workshops. In addition to student resources, the Writing Center also provides resources for faculty who wish to incorporate an ePortfolio as part of their course. Faculty resources include introductory materials, peer support from faculty across campus, internal grants to promote ePortfolios, and rubrics for assessment. The following link contains resources for implementing an ePortfolio: http://wp.auburn.edu/writing/eportfolio-project/

    The HDFS department also developed a rubric and support materials to encourage ePortfolio quality at the professional level. The Roadmap helps students identify and develop potential artifacts that are presented as required assignments in HDFS courses. The Roadmap is introduced to all incoming freshmen interested in the HDFS major and encourages critical thinking through reflection as a means to provide context for how a particular artifact applies to a student's career goals. The following link contains the HDFS roadmap: http://humsci.auburn.edu/hdfs/files/HDFS_ePortfolio_road_map.pdf

                In addition to the roadmap, both the introductory professional development and ethics course and the final internship capstone course implement a rubric that informs students about the expected outcomes. To promote clear communication about these outcomes to both students and faculty, a rubric support document defines the evaluated facets included in the rubric. The following link contains the rubric support document: http://humsci.auburn.edu/hdfs/files/HDFS_ePortfolio_rubric_definitions.pdf

                  The HDFS rubric went through multiple revisions and the department tested it prior to implementation. The final rubric resulted in a good inter-rater reliability coefficient when tested on student ePortfolios available online, ICC (3, 6) = 0.88. The rubric consists of 18 items across four domains: effective communication, critical thinking through reflection, technical competency, and visual literacy. Each item is rated on a six-point scale where two points represent each of three levels of quality: novice, developing, and professional. Student ePortfolios are expected to be rated at the developing level by the end of the professional development and ethics course. By the internship course and prior to public release, student ePortfolios are expected to be rated at the professional level. The following link contains the rubric: https://drive.google.com/file/d/0BwX2mM8EONjaR185Uy1Bb3NnOWs/view?usp=sharing

                Finally, the HDFS department offers two workshops per semester and an ePortfolio departmental award to encourage and promote ePortfolio excellence. The two workshops are held in the department computer lab and are open to all HDFS students. One workshop is targeted toward a novice audience who are in the beginning stages of an ePortfolio. The other workshop is focused on more advanced work for students who have already started an ePortfolio.

                Each spring semester, the HDFS department recognizes two excellent undergraduate ePortfolios. The department awards an HDFS Undergraduate Award for ePortfolio Excellence to a sophomore-junior level student and another to a senior level student. The winning students receive recognition by the department, a ceremonial plaque, and a small monetary award for producing a quality ePortfolio.

    Conclusion

                A high-quality, professional ePortfolio has the potential to highlight the experiences, skills, and knowledge necessary for students to be successful applicants to today's job market or graduate programs. In addition to highlighting student qualifications, the process of creating an ePortfolio develops skills and abilities that are desirable in today's emerging fields. Critical thinking and self-reflection are valued, adaptive qualities necessary in an ever-changing employment landscape. While effective communication, writing, and technical skills are highly sought in most professions, ePortfolios demonstrate student qualifications through the artifacts presented, reflective contextualization, and the creative process. Through this process, students are better prepared to answer prompts such as "Tell me about yourself" in face-to-face interviews. Additionally, the personal nature of the ePortfolio provides employers and graduate program selection committees with a window into the less tangible characteristics required for a good person-organization fit. Finally, university and departmental encouragement and support can provide the resources necessary for students to publicly release professional ePortfolios of high quality. In turn, college graduates become employable professionals, an indicator of a successful collegiate education.

    References

    American Association of Colleges and Universities (AACU). (2015). Falling short? College learning and career success. Washington, DC: Hart Research Associates.

    Floyd, C. J., & Gordon, M. E. (1998). What skills are most important? A comparison of employer, student, and staff perceptions. Journal of Marketing Education, 20, 103-109.

    Hoachlander, G., Sikora, A. C., & Horn, L. (2003). Community college students: Goals, academic preparation, and outcomes. Washington, DC: National Center for Education Statistics, U.S. Department of Education.

    Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.


  • 03 Sep 2018 8:06 PM | Anonymous

    Alice Szczepaniak (Boston University)

    Robyn Johnson (Boston University)

    Naamah Azoulay Jarnot (University of Southern Maine)

    Changiz Mohiyeddini (Northeastern University)

    Sohila Mohiyeddini (California University of Management & Sciences)

    Haley Carson  (Northeastern University)


    Despite over 75 years of research on student persistence (Jones & Braxton, 2010), there have been few substantial gains in student persistence in recent years (Tinto, 2007). Persistence refers to students' continued enrollment at their university (McGrath & Burd, 2012). Low persistence rates can have a widespread impact:

    • ·       On a national level, college degree attainment has been linked to economic growth. Graduates from four-year colleges pay an average of 91% more in taxes each year than those with just high school degrees (Ma, Pender, & Welch, 2016).
    • ·       At an institutional level, student retention is used as a key performance indicator for the institution (Crosling, Heagney, & Thomas, 2009). Freshman persistence and graduation rates are among the metrics that define the quality of an academic institution (Culver, 2008).
    • ·       On an individual level, persistence is necessary for a college student to realize the social and economic benefits associated with higher education (Wolniak, Mayhew, & Engberg, 2012).

    According to higher education theorist Vincent Tinto's model of college student departure, dropout from college is the result of the student's experiences in the academic and social systems of the college. The higher the degree of integration of the student into the college's social and academic systems, the greater the student's commitment to the specific institution and to the goal of college completion (Tinto, 1975). Terenzini and Wright (1987) found that students' levels of academic and social integration in one year had a positive influence on their levels of academic and social integration in the next year. More recently, Strauss and Volkwein (2004) established that social activities, classroom experiences, and friendships are key predictors of institutional commitment.

    Based on this background, we reasoned that student experiences that allow for both academic and social integration would increase student persistence. Thus, the objective of our study was to investigate whether positive group work experiences (Mohiyeddini, Johnson, Azoulay Jarnot, & Mohiyeddini, in preparation; Mohiyeddini, Azoulay, & Bauer, 2015) would increase students' intention to persist.

    The Study

    Students were recruited at three different college campuses in London. To be included in the study, students had to have been members of a small, mixed-gender work group of three to four students for at least one semester. While the classes were on different subjects, in each class the aim of group work was to produce a collaborative report and/or a presentation as a graded course requirement. Students participating in the study completed an initial questionnaire that included demographic and socioeconomic information, as well as a baseline measure of their intention to persist. Approximately five months after the first measurement, these students were asked to complete a follow-up questionnaire on their current intention to persist and their experiences with their group work. In total, 232 students completed the study.

    To measure group work experiences, we used the Positive Group Work Inventory (PGWI; Mohiyeddini et al., in preparation). The PGWI is made up of 24 items that measure six central factors of group work experiences (a scoring sketch follows the list below):

    • 1.     Perceived respect
          “We comment on each other’s performance with an appropriate tone”
    • 2.     Perceived fairness
          “The workload and responsibilities were fairly distributed among us”
    • 3.     Effective commitment
          “My group members were committed to our group work”
    • 4.     Perceived transparency
          “The rules for our collaboration were clear”
    • 5.     Perceived support
          “Other group members gave me the support that I needed to complete my part”
    • 6.     Perceived inclusion
          “I had the feeling that I belonged to my group”
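
    To make the scoring concrete, here is a minimal Python sketch of how responses to a 24-item inventory like the PGWI might be reduced to six subscale scores. The item-to-factor mapping and column names are invented for illustration; the actual assignment of items belongs to the unpublished inventory.

```python
import pandas as pd

# Hypothetical layout: 24 Likert-scale items in columns q1..q24,
# four items per factor. This mapping is invented for illustration.
FACTORS = {
    "respect":      ["q1", "q2", "q3", "q4"],
    "fairness":     ["q5", "q6", "q7", "q8"],
    "commitment":   ["q9", "q10", "q11", "q12"],
    "transparency": ["q13", "q14", "q15", "q16"],
    "support":      ["q17", "q18", "q19", "q20"],
    "inclusion":    ["q21", "q22", "q23", "q24"],
}

def score_pgwi(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one subscale score per factor: the mean of its items."""
    return pd.DataFrame({factor: responses[items].mean(axis=1)
                         for factor, items in FACTORS.items()})

# Example: two respondents answering every item (values illustrative).
df = pd.DataFrame([[4] * 24, [5] * 24],
                  columns=[f"q{i}" for i in range(1, 25)])
print(score_pgwi(df))
```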

    We measured the students' intention to persist twice, once at the beginning of the study and again at the end of the study (approximately 5 months later), with two items following Ajzen's (1991) recommendations:

    • 1.     “I intend to complete my degree at my current university”
    • 2.     “I intend to continue with my education at my current university”

    Our Findings

    After controlling for variables such as age, gender, and the student's baseline intention to persist, we found that perceived respect (β = .125, p = .010) and perceived inclusion (β = .147, p = .002) were predictive of students' intention to persist. The more students perceived respect and inclusion in their group work experience, the higher their intention to persist and complete their degree at their current academic institution. The predictive value of perceived inclusion suggests that if groups fostered a stronger sense of inclusion among members, the effect on individuals' intention to persist could be even larger, though the groups in this particular study did not do a particularly good job of fostering that kind of inclusive environment.
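
    For readers who want to see the shape of such an analysis, here is a minimal sketch in Python using statsmodels. The data frame, column names, and simulated values are illustrative assumptions, not the study's dataset or exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data with the variables described above
# (illustrative only; not the study's data).
rng = np.random.default_rng(42)
n = 232
df = pd.DataFrame({
    "age": rng.integers(18, 30, n),
    "gender": rng.integers(0, 2, n),        # 0/1 coding, hypothetical
    "persist_t1": rng.normal(4.0, 0.8, n),  # baseline intention
    "respect": rng.normal(3.5, 0.9, n),     # PGWI perceived respect
    "inclusion": rng.normal(3.5, 0.9, n),   # PGWI perceived inclusion
})
df["persist_t2"] = (0.5 * df["persist_t1"] + 0.15 * df["respect"]
                    + 0.15 * df["inclusion"] + rng.normal(0, 0.6, n))

# Follow-up intention regressed on the group work factors while
# controlling for demographics and baseline intention.
model = smf.ols(
    "persist_t2 ~ age + gender + persist_t1 + respect + inclusion",
    data=df,
).fit()
print(model.summary().tables[1])  # coefficient table with p-values
```

    Note that smf.ols reports unstandardized coefficients; the β values in the text are standardized, so standardizing the predictors and outcome (e.g., with scipy.stats.zscore) before fitting would put the results on that scale.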

    Our findings are in line with recent theories and research on the impact of perceived respect on teams. Perceived respect reflects that the individual feels valued by the team (Branscombe, Spears, Ellemers, & Doosje, 2002; Huo & Binning, 2008; Smith, Tyler, & Huo, 2003; Tyler & Blader, 2003). Individuals who feel respected by other team members experience higher levels of identification with the team (Sleebos, Ellemers, & de Gilder, 2007) and put more effort into achieving team goals (Tyler & Blader, 2003).

    In a related vein, social identity theory (Tajfel, 1978; Tajfel & Turner, 1979) highlights that social identification processes, during which individuals tend to think of themselves in terms of their belonging to and inclusion in a social group or collective, have a crucial impact on individuals' collaborative behaviors. Our results extend these findings and may suggest that perceived inclusion in a team supports the sense of being part of an academic institution as a larger community and therefore strengthens a student's intention to complete their education at that institution.

    Limitations

    Although the current investigation advanced research on student persistence and positive group work experiences in several ways, our study also had a number of limitations. First, the study was based on self-reported data, which are affected by reappraisal of past events in light of present (critical) circumstances, by impairment of memory over time, and by non-disclosure and reporting biases. Second, the questionnaire used in this study was presented in a consistent order and was not counterbalanced, which might have influenced the results and prompted order effects. Furthermore, considering the sample size, the non-random sampling method, the lack of a control group, and recruitment at only a few colleges, the generalizability of the findings is limited.

    What to Do with this Information

    Despite these limitations, our study expands our understanding of student persistence and highlights the potential impact of positive group work experiences on students. Fostering positive group work experiences could be an effective tool to improve the persistence intention of students. This can be done through:

    • ·       Workshops for faculty and staff that explain key conditions of a positive group work experience and provide tools and a framework for facilitating respect and inclusion in their classes.

    • ·       Courses for students, such as first-year seminars, that focus on teaching positive group work skills, particularly respect and inclusion.

    References

    Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.

    Branscombe, N. R., Spears, R., Ellemers, N., & Doosje, B. (2002). Intragroup and intergroup evaluation effects on group behavior. Personality and Social Psychology Bulletin, 28(6), 744–753. https://doi.org/10.1177/0146167202289004

    Crosling, G., Heagney, M., & Thomas, L. (2009). Improving student retention in higher education. Australian Universities’ Review, 51(2), 9-18.

    Culver, T. (2008). A new way to measure student success: Introducing the student success "Funnel"--A valuable tool for retention planning and goal-setting. Retrieved from http://search.proquest.com/docview/1238186212

    Huo, Y. J., & Binning, K. R. (2008). Why the psychological experience of respect matters in group life: An integrative account. Social and Personality Psychology Compass, 2(4), 1570-1585. https://doi.org/10.1111/j.1751-9004.2008.00129.x

    Jones, W. A., & Braxton, J. M. (2010). Cataloging and comparing institutional efforts to increase student retention rates. Journal of College Student Retention, 11(1), 123-139.

    Ma, J., Pender, M., & Welch, M. (2016). Education pays 2016: The benefits of higher education for individuals and society. The College Board, Trends in Higher Education Series. Retrieved from https://trends.collegeboard.org/sites/default/files/education-pays-2016-full-report.pdf

    McGrath, S. M., & Burd, G. D. (2012). A success course for freshmen on academic probation: Persistence and graduation outcomes. NACADA Journal, 32(1), 43-52.

    Mohiyeddini, C., Azoulay, N., & Bauer, S. (2015, May). Maximizing collaborative small group work experiences: An assessment approach. Paper presented at the Conference for Advancing Evidence-Based Teaching, Boston, MA.

    Mohiyeddini, C., Johnson, R., Azoulay Jarnot, N., & Mohiyeddini, S. (in preparation). Individual differences in positive group work experiences in collaborative student learning.

    Sleebos, E., Ellemers, N., & De Gilder, D. (2007). Explaining the motivational forces of (dis)respect: How self-focused and group-focused concerns can result in the display of group-serving efforts. Gruppendynamik und Organisationsberatung, 38(3), 327-342.

    Smith, H. J., Tyler, T. R., & Huo, Y. J. (2003). Interpersonal treatment, social identity and organizational behavior. In S. A. Haslam, D. van Knippenberg, M. J. Platow, & N. Ellemers (Eds.), Social identity at work: Developing theory for organizational practice (pp. 155-171). Philadelphia, PA: Psychology Press.

    Strauss, L. C. & Volkwein, J. F. (2004). Predictors of student commitment at two-year and four-year institutions. The Journal of Higher Education, 75(2), 203-227.

    Tajfel, H. (Ed.) (1978). Differentiation between social groups: Studies in the social psychology of intergroup relations. European Monographs in Social Psychology No. 14, London: Academic Press.

    Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. Austin & S. Worchel (Eds.), The social psychology of intergroup relations. Monterey, CA: Brooks/Cole.

    Terenzini, P. T., & Wright, T. M. (1987). Influences on students’ academic growth during four years of college. Research in Higher Education, 26(2), 161-179.

    Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89-125. Retrieved from http://www.jstor.org/stable/1170024

    Tinto, V. (2007). Research and practice of student retention: What next? Journal of College Student Retention, 8(1), 1-19.

    Tyler, T. R., & Blader, S. L. (2003). The group engagement model: Procedural justice, social identity, and cooperative behavior. Personality and Social Psychology Review, 7(4), 349–361.

    Wolniak, G. C., Mayhew, M. J., & Engberg, M. E. (2012). Learning’s weak link to persistence. The Journal of Higher Education, 83(6), 795-819.


  • 02 Aug 2018 7:43 PM | Anonymous

    Amber M. Chenoweth and Brittany L. Jackson (Hiram College)

    Autism Spectrum Disorders (ASD) have a relatively recent history in terms of research attention. With the newly updated diagnostic criteria in the DSM-5 (American Psychiatric Association, 2013), even more attention has been paid to this spectrum of developmental disorders, as individual diagnoses may have changed (e.g., individuals with former diagnoses of Asperger's syndrome are now diagnosed with ASD). Further, typically developing students are finding themselves in a variety of situations in which students with ASD are included, often without a full understanding of the experience of their peers with ASD. This lack of understanding can lead to a range of responses toward peers with ASD, from simple confusion and frustration during interactions to the extreme of bullying (Swaim & Morgan, 2001). As Harnum, Duffy, and Ferguson (2007) found, this is due to the perception that individuals with ASD are not the same as typically developing individuals, leading to less openness to interaction.

          Our institution has a unique opportunity for our students to interact more fully with individuals with ASD, being situated near a Living and Learning Community. This organization is a fully functioning organic farm that provides the opportunity for adults with ASD to work and receive occupational therapy. Several students from our institution have participated in internship opportunities at the Living and Learning Community and found these experiences rewarding, both as service and as future career exploration. Moreover, this interaction with individuals with ASD serves to increase student understanding of the complexity of this spectrum of disorders. Because of this stimulated interest in ASD among students on our campus, several faculty members across disciplines offer courses on ASD. Given our institution's emphasis on interdisciplinary learning, we found this to be a great opportunity to engage students in a course exploring the many facets of ASD.

          Why an interdisciplinary course? Much value can be gained from engaging students in exploring a complex topic through multiple and integrated lenses. By allowing students the opportunity to explore these topics within the course setting, we can push them to challenge their previously held beliefs and ideas while they explore that shared space between disciplines. Further, interdisciplinary courses, particularly those that are team-taught, can foster creative, critical, and divergent thinking, all skills that are sought by our students’ future employers (Putrienė, 2015).

    Our Course

    Our course integrates the disciplines of psychology and theatre. From the psychology perspective, students are exposed to material from the scientific literature on ASD, examining in depth the topics of diagnosis, hypothesized causes, and treatments, as well as the concept of neurodiversity. The theatre perspective exposes students to two key areas: playwriting and acting. Students learn techniques for telling a story drawing from multiple sources (readings, interviews, discussions) and learn how to portray what they learn in both abstract and concrete ways, while being made aware of issues of accuracy and sensitivity toward a population different from themselves. The course also models for students how the two disciplines inform one another. For example, the psychology content serves as the context in which to explore these topics theatrically, while awareness of body, space, and wording informs students in the best approaches to interviewing individuals with ASD and those who support them.

    The learning objectives of this course are for students to demonstrate understanding of the science of ASD and of theatre methodology (playwriting, performing), to connect with and listen to others, testing their empathy skills, and to gain a truer sense of their own humanity. To meet these objectives, we designed the course to engage students in the following activities:

    • ·       Class discussions focused on topics about the science of ASD (neurobiological etiology, symptoms, and therapeutic interventions), as well as theatrical portrayals and storytelling. These discussions draw on assigned readings (both fiction and nonfiction sources, scientific articles, and case studies), guest speakers, and current event topics.
    • ·       Short writing assignments that scaffold students through the writing process by requiring students to submit specific creative writing pieces drawn from scientific literature. This begins with having students write a letter based on a scientific article, then a short story, and eventually multiple scenes of a play.
    • ·       Interviews with either individuals with ASD or those that work with individuals with ASD, including caregivers and family members, teachers, doctors, intervention specialists, case workers, etc.
    • ·       Field trips to various locations to explore aspects of ASD. Past field trips have included visiting the local Living and Learning Community that provides occupational therapy for adults with ASD and New York City to see the play The Curious Incident of the Dog in the Nighttime on Broadway.
    • ·       Media portrayals that depict various aspects of ASD. Past feature films have included Rain Main, Temple Grandin, and Ben-X, as well as the documentary Autism is a World. We also have students view clips from TV shows that highlight characters either overtly diagnosed with ASD (e.g., Parenthood) and those that are exhibiting common characteristics associated with ASD (e.g., The Big Bang Theory). These portrayals are the basis for class discussions on accuracy of portrayals, the ethics of presenting characters with ASD in often stereotypical ways, how these portrayals either promote or hinder the idea of neurodiversity, as well as to inform students on how to connect with the characters they are developing in their final performance piece and present in both an accurate and sensitive way.
    • ·       The final performance piece requires students to draw upon all the class activities to develop a brief (approximately 10 minute) play focused on a specific ASD topic. Students work in small groups (5 students per group, on average) to write and perform their piece.

    Assessing Our Course

    During the fall 2015 offering of Exploring Ability and Disability: ASD, we administered a voluntary pre- and post-test survey to students enrolled in the course to assess changes in knowledge of ASD and to gauge students’ perceived effectiveness of the course activities described above. A total of 25 of our 31 enrolled students completed both the pre- and post-test questionnaires.

    At pre-test, we administered a prior-experience survey, which revealed that participants had, on average, approximately five years of experience interacting with an individual with ASD, typically a classmate, friend, co-worker, or relative.

    At both pre- and post-test we administered the Autism Knowledge Survey-Revised (AKS-R), developed by Stuart, Swiezy, and Ashby (2008). This questionnaire presents 20 statements about ASD, with participants indicating on a 6-point Likert-type scale how much they agree or disagree with each; it provided our measure of baseline knowledge and of change in knowledge. Overall, students’ knowledge of ASD increased relative to the pre-course baseline. A few items, however, did not show the same increase, highlighting topics we need to address more clearly in future offerings of the course. One example was the item “Children with autism do not show attachments, even to parents/caregivers,” for which the correct response is “Fully Disagree.” Upon reflection, we identified places in the course where we could better emphasize the full range of emotion and attachment that children with ASD do express.
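    For readers curious about the mechanics of this kind of pre/post comparison, below is a minimal sketch in Python of how one might score such a survey and test for change with a paired-samples t-test. The file names, item columns, answer key, and “agree” cutoff are all hypothetical stand-ins for illustration; they are not the actual AKS-R materials or our analysis code.

    import pandas as pd
    from scipy import stats

    # Hypothetical files: one row per student (same student order in both
    # files), columns item_1 ... item_20 holding 1-6 Likert responses.
    pre = pd.read_csv("aksr_pre.csv")
    post = pd.read_csv("aksr_post.csv")

    # Illustrative answer key: True where "agree" is the correct direction.
    # (The real AKS-R key is not reproduced here.)
    key = {f"item_{i}": agree
           for i, agree in enumerate([True, False] * 10, start=1)}

    def score(df):
        # Count items answered in the correct direction: 4-6 when "agree"
        # is correct, 1-3 when "disagree" is correct.
        correct = pd.DataFrame({
            item: (df[item] >= 4) if agree else (df[item] <= 3)
            for item, agree in key.items()
        })
        return correct.sum(axis=1)

    pre_scores, post_scores = score(pre), score(post)
    t, p = stats.ttest_rel(post_scores, pre_scores)  # paired-samples t-test
    print(f"mean change = {(post_scores - pre_scores).mean():.2f}, "
          f"t = {t:.2f}, p = {p:.4f}")

    An item-level version of the same comparison (testing each of the 20 items separately) is what flags statements, like the attachment item above, that do not improve pre-to-post.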

    We also administered the Openness Scale, adapted from Harnum et al. (2007), at both pre- and post-test. This scale first presents a vignette depicting characteristic behaviors of an individual who might be diagnosed with ASD, then a series of statements about reactions to and willingness to interact with that individual, each rated on a 5-point Likert-type scale of agreement. On this measure, participants’ openness to interact was high at pre-test and remained so at post-test, suggesting possible bias. The bias may stem from demand characteristics – who would want to admit that they would not interact with an individual who clearly displays behaviors of the disorder on which the course is based? – or from the self-selecting nature of enrolling in a course on ASD, or both.

    Lastly, we surveyed participants about their class experience with a series of open-ended questions, each of the form “Reflect on how the [assignment/activity] affected your understanding of individuals with Autism Spectrum Disorder.” Representative responses are below.

    Final performance pieces.

    • “It helped me understanding people with autism because it allowed me to imagine what it would be like to actually be involved in a family with children with autism.”
    • “There were many different views of autism portrayed. It reminded me that everyone experiences the disorder differently.”
    • “I think it helped show how people took their own version of what they saw autism as and turned it into a play. Each play had different aspects to it which showed all the things we've learned.”

    Short writing assignments.

    • “They allowed me to express what I have learned in different ways from monologues to poems. Sometimes things are hard to express so this gave me the chance to try different ways.”
    • “The SWA's were the most influential piece for my learning in this course. I learned a ton through the articles and reflecting in a creative way.”
    • “I never thought I could be creative when talking about autism.”

    Class discussions.

    • “It allowed me to see and compare my thoughts with my peers and fellow classmates. I got to see and hear that I wasn't the only person with confusions and thoughts about people with autism.”
    • “They allowed me to see many different opinions from everyone in class. Not everyone has the same type and amount of experiences so this class gave me the chance to see what others see and think.”

    Interviews.

    • “It made me face something about my friend. And myself.”
    • “I learned SO much about my interviewee and ASD in general. I had known the person for years, yet never thought to ask these questions or care to listen for the answers. This was a crucial part of the course.”

    Field trips and media portrayals.

    • “It definitely drew my attention to the fact that many types of the disorder are not reflected in the media at all.”
    • “I loved them all, all of them provided me with a learning experience that helped me gain an understanding of ASD. It also reminded me that this is something I want to do for the rest of my life.”
    • “This was a great idea as it allowed us to gain realistic perspectives of people with ASD. Experiencing something in real life is much different than in a classroom or through a book.”

    As with any course assessment, however, not all responses were so positive. A few respondents found the short writing assignments and class discussions repetitive, and felt the final performance pieces leaned on stereotypical representations of ASD. Overall, though, students generally rated each activity as valuable at some level and credited the activities and assignments with enhancing their knowledge of ASD.

    Lessons Learned

    From the open-ended responses, both those indicating success and those flagging aspects to reconsider in future offerings, we have identified key lessons for teaching a course on a complex and sensitive topic, particularly when you expect students to demonstrate their understanding through creative methods.

    First, it is crucial to provide students with plenty of examples. Fortunately, having taught this course several times now, we have built a repository of strong examples (with those students’ permission, of course) to share, particularly for the short writing assignments. One area we identified as needing more examples is plays and performances in different formats. Having students practice what we expect them to complete by the end of the course – a full performance piece – greatly improves their chances of achieving that learning goal.

    Second, whenever possible, incorporate experiential learning opportunities. As the responses about the field trips and interviews show, students learn far more by doing than by merely reading or being lectured to. We are fortunate to have the Living and Learning Community for adults with ASD within walking distance of campus, but there are other ways to build these experiences into any course. For example, reach out to the local community for guest speakers, such as special education instructors, the director of the local disability services office, and parents of children with ASD. In our experience, many individuals in these positions are eager to share their experiences to promote greater understanding. We also found value in engaging students in simulation exercises, such as simulating sensory overload. Such exercises, however, need to be placed in the correct context and introduced and debriefed in a way that promotes understanding rather than pity (Nario-Redmond, Gospodinov, & Cobb, 2017).

    Third, and we would argue most important, be encouraging! For some students this is their first creative experience, and they are anxious; many of our students arrived with no acting experience, or even creative writing experience. It was important for us to emphasize that the class was a safe space in which to explore both the theme and content of the course and to express themselves creatively. This led to our favorite response, quoted above: “I never thought I could be creative when talking about autism.”

    References

    American Psychiatric Association. (2013). Autism spectrum disorder. In Diagnostic and statistical manual of mental disorders (5th ed., pp. 50-59).

    Harnum, M., Duffy, J., & Ferguson, D. A. (2007). Adults’ versus children’s perceptions of a child with autism or attention deficit hyperactivity disorder. Journal of Autism and Developmental Disorders, 37, 1337-1343.

    Nario-Redmond, M. R., Gospodinov, D., & Cobb, A. (2017, March 13). Crip for a day: The unintended negative consequences of disability simulations. Rehabilitation Psychology. Advance online publication. http://dx.doi.org/10.1037/rep0000127

    Putrienė, N. (2015). The links between competences acquired through interdisciplinary studies and the needs of the labour market. Social Sciences (1392-0758), 88(2), 54-64. doi:10.5755/j01.ss.88.2.12741

    Stuart, M., Swiezy, N., & Ashby, I. (2008, February). Autism Knowledge Survey: Understanding trends in autism spectrum disorders. Poster presented at the 2nd annual ABA Autism Conference, Atlanta, GA.

    Swaim, K. F., & Morgan, S. B. (2001). Children’s attitudes and behavioral intentions toward a peer with autistic behaviors: Does a brief educational intervention have an effect? Journal of Autism and Developmental Disorders, 31, 195-205.
