Society for the Teaching of Psychology
Division 2 of the American Psychological Association

GSTA Blog

Welcome to the GSTA blog! 

In an effort to keep the Graduate Student Teaching Association (GSTA) blog current, we regularly welcome submissions from graduate students as well as full-time faculty. We have recently decided to expand and diversify the blog content to include new research in the Scholarship of Teaching and Learning (SoTL), public interest topics related to teaching and psychology, and occasional book reviews, while continuing our traditional aim of sharing teaching tips. Blog posts are typically short, about 500-1000 words, not including references. Because the blog is an online medium, in-text hyperlinks, graphics, and even links to videos are strongly encouraged!

If you are interested in submitting a post, please email us at gsta.cuny@gmail.com. We are especially seeking submissions in one of the following four topic areas:

  • Highlights of your current SoTL research
  • Issues related to teaching and psychology in the public interest
  • Reviews of recent books related to teaching and psychology
  • Teaching tips and best practices for today's classroom

We would especially like activities that align with APA 2.0 Guidelines!

This blog is intended to be a forum for graduate students and educators to share ideas and express their opinions about tried-and-true modern teaching practices and other currently relevant topics regarding graduate students’ teaching.

If you would like for any questions to be addressed, you can send them to gsta.cuny@gmail.com and we will post them as a comment on your behalf.

Thanks for checking us out,

The GSTA Blog Editorial Team:

Teresa Ober and Charles Raffaele


Follow us on Twitter @gradsteachpsych or join our Facebook Group.


  • 18 Oct 2017 10:30 AM | Anonymous member (Administrator)

    By Peri Yuksel, Ph.D., New Jersey City University (Email: pYuksel@njcu.edu, Twitter: @drperi_)

     “Bodily exercise, when compulsory, does no harm to the body; but knowledge which is acquired under compulsion obtains no hold on the mind.” 

    -Plato (student of Socrates)



    By now it is no secret that very few students complete their required textbook readings before coming to class and a large number of students only start to read their textbook when preparing for an exam (Clump, Bauer, & Bradley, 2004; Phillips & Phillips, 2007). So why do we still believe that assigning required textbook readings is an effective means of making learning stick? Reading a textbook is viewed as essential to build a factual knowledge base, especially for introductory classes. Without such content knowledge, we assume it is not possible for students to develop critical thinking and writing skills. Students have the tendency to think that instructors will explain the whole textbook and tell them what will be on the test. Yet great teachers inspire and teach their discipline beyond the textbook, allowing students to reflect and connect their academic context to real-life settings.

    Given the difficulties of motivating students to read the textbook, I suggest that you assign a textbook that is affordable to your students and complement assigned readings with homework, group presentations, and in-class activities that require students to utilize their textbook. For example, when teaching Developmental Psychology, I use an older version of Berk’s Development through the Lifespan textbook and organize activities and assessments around the text to improve students’ memory by spacing learning over time (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). When students spend more time with the course material, they attend more closely to their own learning and develop metacognition. Here are four simple techniques I have used to augment the textbook reading experience.


    a)    Complement the Textbook with TED Talks

    I ask my students to watch ten TED Talks that complement the assigned textbook readings and prompt them to draw connections between the TED Talks and textbook content/course discussions (Yuksel, 2017). TED speakers come from an array of diverse backgrounds and offer multifaceted perspectives and cutting-edge research findings on topics that are also covered by the course textbook. Each of these elements helps students understand global issues and fosters their understanding of the subject they are studying. By going beyond the textbook, TED Talks can inspire students to think about their own passions and conceive of ways to develop their own paths.


    b)    Demonstrate and Use the Textbook for In-Class Activities

    Reading about research methodology and theory can be dry and daunting, especially for students who have not taken a psychology course before and who have never seen a research lab. Periodically, I ask my students to bring in their textbooks and use them to complete an activity package (Experiments with Infants and Toddlers) that I have designed. Students see the exact same textbook images demonstrating the research paradigm (e.g., violation-of-expectancy, deferred imitation) and are asked to fill in information about the research question, study design, age of children, overall findings, and developmental explanations. In class, students also watch short video clips that illustrate the relevant experiments. These clips go beyond the textbook, create memorable visual images from real lab settings, and foster deeper learning and understanding of hypothesis testing (Berk, 2009).


    c)     Encourage Group Presentations Targeting the Textbook

    From a list of topics selected from the textbook, students pick one and give a short group presentation. In addition to creating a set of PowerPoint slides, students also submit a one-page summary paper discussing the relevance of the chosen topic to their current or future professional goals. By doing so they are signaling that this topic has self-reference and is worth remembering (Wade, Tavris, & Garry, 2014). The group presentations go beyond the textbook and allow students to collaborate on a focused project and apply ideas from the textbook to important societal problems. Students also gain insights into socio-political issues and learn techniques that help them make healthy and ethical choices.


    d)    Let Students Create Their Own Mind Maps to Organize Textbook Content

    Especially at the beginning of the semester, when the first exam is approaching, students often remind me that we have not covered the entirety of each assigned textbook chapter. I give them a simple answer: it is not important that we cover everything but that you discover something. I provide them with simple learning strategies and tools to organize information from the textbook, such as outlining the chapters with relevant vocabulary and creating mind maps, i.e., visual diagrams that manage, summarize, and highlight their notes.

    There are many reasons why students do not read the textbook. If you explicitly integrate the textbook into your course activities and assessments and make reading relevant to psychological discoveries that go beyond the classroom setting, then students will be inspired to read and expand their views on the everyday science of psychology. They will come to understand that knowledge is power and contributes to creativity and imagination.


    References

    Berk, L. E. (2014). Development through the Lifespan. New York: Pearson.

    Berk, R. A. (2009). Multimedia teaching with video clips: TV, movies, YouTube, and mtvU in the college classroom. International Journal of Technology in Teaching and Learning, 5(1), 1-21.

    Clump, M. A., Bauer, H., & Bradley, C. (2004). The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31(3), 227-232.

    Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning and comprehension by using effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4–58.

    Phillips, B. J., & Phillips, F. (2007). Sink or skim: Textbook reading behaviors of introductory accounting students. Issues in Accounting Education, 22(1), 21-44.

    Wade, C., Tavris, C., & Garry, M. (2014). The nine secrets of learning. In Psychology (11th ed.). Upper Saddle River, NJ: Pearson Education.

    Yuksel, P. (2017). Ten TED Talk Thinking Tasks: Engaging College Students in Structured Self-Reflection to Foster Critical Thinking. In R. Obeid, A. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How We Teach Now: The GSTA Guide to Student-Centered Teaching. Retrieved from Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/

  • 16 Oct 2017 10:00 AM | Anonymous member (Administrator)

    By Tanzina Ahmed, Ph.D., Department of Social Sciences, Bronx Community College CUNY (Email: tanzina.ahmed@bcc.cuny.edu)


    Do you know who Betsy DeVos is?

    Chances are, if you’re reading a blog on education, you know that name and exactly why it’s so revered (or reviled) in American schools today. Furthermore, you probably know how she, as the current Secretary of the United States Department of Education, is shaping the policies and practices of schools across the entire country. You may or may not be enthused about her political priorities, but you know of the pivotal role she plays in the education and lives of millions of Americans enrolled all the way from pre-K to graduate school.

    Yet, if you asked a typical undergraduate class on child psychology or human development to identify her... what would you hear? From my experience, a chorus of crickets. A few students might recognize the name, but most don’t know who she is or how her decisions profoundly influence students and their families across a wide span of ages and institutions. Even once they know who she is, they’re often at a loss to understand how her educational policies affect the lives of millions of American students. The same would be true for previous Secretaries of Education, such as Arne Duncan and John King.

    It's easy to overlook the influence of people like Betsy DeVos precisely because that influence is so widespread and pervasive. Yet given recent political events, it's more important than ever to help psychology students in undergraduate institutions understand how the Secretary of Education and the administration she represents wield their power over the lives of others. Professors teaching developmental psychology classes may have a special responsibility to help their students understand how matters discussed in political science classes might shape people’s academic and career trajectories over a lifetime.

    However, it can be difficult to bring politics into the study of psychology in a way that helps students understand the issues at stake. Lecturing students on this issue usually doesn't help, especially since the intersection between politics and psychology can be abstract and murky. Thus, to help students put some “skin in the game” and understand how families are affected by the decisions of the Department of Education, I’ve created an activity called “Being Betsy DeVos.”

    In this assignment, I break students out into small groups to work together on the American education system. I tell them that they are now all Betsy DeVos and need to answer critical questions on funding and promoting education within our country. Their challenges include the following:

    1. President Trump’s 2017 budget plan would take away $2.4 billion for teacher training grants and $1.2 billion for funding summer-school and after-school programs, weaken or eliminate funding for 20 educational programs, and cut $200 million in federal programs that help low-income, first-generation and disabled students (Bendix, 2017). What recommendations would you make to President Trump about funding these programs? What arguments would you make on what programs to keep and what to cut?
    2. Under the 2017 budget, the Trump administration would like to spend $1.4 billion to expand use of vouchers in public and private schools, eventually spending $20 billion a year in funding vouchers (Bendix, 2017). Many of these funds will go toward private and/or charter schools that hold different educational standards from public schools (for instance, some of these private and charter schools may choose to teach creationism rather than evolutionary biology). Do you support the President’s proposal to pull money away from public schools and redirect it toward private and/or charter schools? What are the pros and cons of his proposal?
    3. Should charter schools funded with public money have the same academic accountability standards as other public schools when they are all competing for the same students and resources? Should charter schools have the right to teach material they want to teach (such as leaving out evolutionary biology to teach creationism) and to not publicly report their students' grade and test scores on statewide exams?
    4. The Trump administration has argued that the federal government should get rid of Obama-era restrictions on giving federal funding to for-profit colleges (like University of Phoenix and DeVry) that allegedly use predatory sales techniques and are more likely than non-profit institutions (like CUNY) to leave students with large debts but no degree (Mitchell, 2017). Do you believe that the federal government should allow students to use federal funding for whatever college institution they wish to go to, even if these institutions have poor track records?

    Once students are sorted out into small groups to answer these questions, they work together for 30 to 45 minutes. They must write a set of notes on the major ideas and examples that come up during their discussion. They must also collaborate to present a 5-minute oral report on their answers to the rest of their classmates at the end of the class hour.

    In the past I have asked students to produce both the written and oral reports in order to challenge them to come up with strong supporting arguments to their answers. These two activities help them sharpen their collaborating, note-taking, and presenting skills.

    From my personal experience, students often end up vigorously debating the policies and priorities of the United States Department of Education when asked to answer these questions. In doing so, they examine their own prospective policies and priorities and end up questioning those of others. For instance, I have known several religious students who have argued for more funding for charter schools and for the right to decide what their children should learn. On one memorable occasion, a student proclaimed: “Parents should have the choice to send their children to whatever schools they want. If I want my child to have a Christian education, they should have one!”

    Needless to say, she got pushback from several other students in her group, many of whom were concerned about students who would have to remain in traditional public schools while money was hoovered away into private and charter schools. As another student said, “[the] Government has to make rules to help every child, not just the lucky children who win the lottery [for vouchers or placement in private/charter schools]!”

    It can sometimes be a struggle to keep the class both excited and civil, and to ask them to be respectful toward the diverse views of their fellow classmates. Yet students who partake in this activity often become more interested in understanding how the Department of Education works after engaging with the debates on education that are raging in the country today. In having to argue for one side or another in these thorny debates, students are confronted with the understanding that the policies they – or Betsy DeVos – promote will affect the lives of millions of American students.

    Betsy DeVos will not always be the Secretary of Education, any more than Donald Trump will always be the president. Still, as the years go by and Secretaries of Education come and go, psychology professors can continue to modify this activity to engage students in discussions of how politics shapes the lives of students from pre-K to the college level. By offering students the chance to work on the thorny questions of education policy today from a vantage point of power, professors can help them better understand how political science intersects with everyone’s development.

    Furthermore, by contrasting students’ decisions about funding or promoting educational policies and programs against the actual decisions of the Secretary of Education and the President of the United States, students can better understand how their priorities are not necessarily the priorities of those in power. This realization can shock students, but it can also help them understand the ramifications of political progress (or lack thereof) in their lives and the lives of others. Ultimately, this activity can help students understand how they are similar to or different from the people whose policies rule their lives, and it helps them better understand how contemporary politics affects their development in many ways.

    References

    Bendix, A. (2017, March). Trump’s education budget revealed. The Atlantic. Retrieved from https://www.theatlantic.com/education/archive/2017/03/trumps-education-budget-revealed/519837/

    Mitchell, J. (2017, June). Trump administration scraps Obama-era rules on for-profit colleges. Wall Street Journal. Retrieved from https://www.wsj.com/articles/trump-administration-to-scrap-obama-era-rules-on-for-profit-colleges-1497458305

  • 05 Oct 2017 10:00 AM | Anonymous member (Administrator)

    By Karyna Pryiomka, Doctoral Candidate, The Graduate Center, CUNY


    In four years of teaching statistical methods in psychology, I have noticed that students often experience difficulty recognizing the relationship between a hypothetical construct, its operational definition, and the interpretation of results. This often leads to over-generalization, incorrect inferences, and other interpretive mistakes. Operational definitions of hypothetical constructs represent an important component of research in psychology. Operationally defining constructs and understanding the implications of these definitions for data interpretation then constitute key competencies for a psychology student. While operationalism is widely taught in research methods courses, its discussion in statistics courses is often reduced to a few paragraphs in an introductory chapter. To help my students better understand the collaborative, iterative, and context-bound process of creating appropriate operational definitions, I employ a low-stakes group activity during which students work in groups of 3 or 4 to create operational definitions of hypothetical constructs, such as confidence, in two distinct contexts: individual-level decision making and research design. The learning objective of the activity is to demonstrate the role of context in deciding how to operationalize a given construct and to illustrate the process of developing consensus about the meaning of constructs and their operational definitions.

    Here are the steps that I take to implement this activity:

    1.  I begin by assigning students to groups of 3 or 4, depending on class size. Ideally, at least 2 groups should work on the same problem. Each group receives only one variation of the problem. Below are examples of the two prompts. I give students about 15 minutes to work on the task in their groups.

    Individual-Level Decision-Making Context: A growing cat food company, Happy Kibble, is expanding its sales department and asks you, a group of industrial-organizational and personality psychologists, to use your expertise and help them hire the best sales people so that they can convince cat owners around the country to switch to Happy Kibble. You know from research that people who are confident often make good sales people. How would you define and operationalize confidence in this context in order to select a good employee? What questions would you ask candidates? What behavior would you pay attention to during an interview? Assume that the human resources office has pre-selected the candidates so they all qualify for the job based on the minimum education and professional experiences requirements.

    Research Context: A growing cat food company, Happy Kibble, has partnered with your research team to investigate whether there is any relationship between the confidence level of a sales person and their professional success. Happy Kibble wants to conduct a real scientific study to answer this question. The company needs your expertise in defining and measuring confidence; however, you are on a tight budget, so conducting individual interviews might not be an option if you want to collect a large enough sample to draw meaningful conclusions. How would you define and operationalize confidence in this context in order to be able to measure this trait in as many people as you can?


    2.  Once groups have created their definitions, a representative from each group is invited to write their definitions and measurement/assessment plan on the board.

    3.  I like to begin the discussion by emphasizing the differences between the two contexts. We then focus on establishing consensus among groups that worked on the same problem. We discuss the similarities and differences between the operational definitions produced by these groups, discuss the strengths and limitations of the proposed measurement/assessment plan, and reconcile any differences. We then compare the consensus definition produced for the interview context with the consensus definition produced for the research context. We outline key differences in contexts, discuss what type of evidence can be collected in each, and how the context influences the interpretation of data.
    For example, students in both contexts often mention eye contact as one of the behaviors representing confidence. We then discuss how they would measure/observe eye contact in the context of a job interview compared to a research study. Students in a job interview context point out that they would make direct qualitative judgments as they engage with the interviewees. Students in a research context often say that they would use video equipment to observe how sales representatives establish eye contact with their customers. In this context, then, unlike their colleagues conducting job interviews, students are less likely to make direct qualitative judgments about individual people, but would rather observe, record, and quantify behavior remotely.

    In my experience, students eagerly engage in the discussion, justifying their decisions and challenging those of others. They also begin to ask questions and think critically about the inferences that could be made based on the operational definitions they have proposed.  For instance, a group once suggested that a particular speech pattern or the use of specific words could constitute a variable to assess confidence. This suggestion led to a discussion of the relationship between language and existing standardized assessments like IQ or potential bias against non-native English speakers, making students question whether the proposed operational definition would fairly and accurately reflect someone’s confidence instead of another potentially related trait.

    Overall, I have found this activity to be a great way to engage students in the discussion of important principles of research design, while promoting critical thinking about the role of operational definitions and measurement procedures in data collection and subsequent interpretation.


  • 29 Sep 2017 5:00 PM | Anonymous member (Administrator)

    By Teresa Ober, The Graduate Center CUNY

    Dr. Jon E. Grahe is Professor of Psychology at Pacific Lutheran University. His research interests include interpersonal perception, undergraduate research, and crowd-sourcing science. The GSTA editing team recently had a chance to talk with Dr. Grahe about his views on how innovations in undergraduate education can be used to address some of the current problems facing psychological science. In speaking with him, we learned quite a lot about Open Science, the Replication Crisis, and the Collaborative Replication and Education Project. Here are some of our burning questions about these topics and an edited summary of Dr. Grahe’s responses.

    Teresa Ober: Let’s start with a simple question. What exactly is “Open Science”?

    Jon Grahe: There are two levels here when we talk about “Open Science.” At one level, we might be referring to open-access resources, which is not my primary focus, but it refers to making sure everyone can read all publications. At another level, we are talking about transparency in the research process. Transparency in the research process can be manifested in at least three ways, including: 1) sharing hypotheses and analysis plans, 2) sharing information about the data collection procedures; and 3) sharing data and the results of the research process.

    There are certain tools available today that allow researchers to conduct open science at the second level mentioned. Many of these tools are being developed by the people at the Center for Open Science, which was formed by Brian Nosek and Jeffrey Spies to encourage more scientific transparency. One of their products is the Open Science Framework, an electronic file cabinet with interactive features that makes it easier for researchers to be transparent during the research process and serves as a hub where researchers can document, store, and share content related to their research projects.

    TO: Why is Open Science so important?

    JG: When we learn about science as children, we are taught that replication and reproducibility are a big part of the scientific process. To make replication possible, accurate documentation and transparency are necessary parts of the methods. Transparency is mainly what open science is about, and it is important because it allows us to test and retest our hypotheses. It is fundamental to the scientific process of iterative hypothesis testing and theory development.

    TO: There has been discussion of the transparency of “Open Science” as a kind of revolution in the philosophy of science. What are your thoughts about this? Do you view it as a radical transformation, or as a natural continuation, given technological advancements and changed worldviews that make people more disposed toward sharing information in ways not previously possible?

    JG: The recent interest in openness in the scientific process likely emerged from calls to improve the quality of science, which reached a critical juncture after the replication crisis. Transparency in science also became more feasible with advances in technology that allowed researchers to document and share research materials with relative ease. Before digital storage was cheap, it was very difficult to share such information. Social networking platforms also encourage more distant connections and allow for better working relationships between people who never meet face to face. The digital age allows us to experience this revolution.

    TO: Tell us a little more about the “Replication Crisis.”

    JG: When we talk about the replication crisis, it is important that we recognize that it affects psychological science, but not exclusively. Though the field of psychology emerged as the center of attention for this issue, other scientific disciplines are likewise affected, and in some ways, the crisis of replication happened to affect psychology sooner.

    The Replication Crisis in psychology seemed to emerge around 2011 as a result of three events. The first was a set of serious accusations against a researcher who had reportedly fabricated data in multiple studies. The second was the publication of findings that seemed outrageous and rested on a misuse of statistical procedures. The third was a general swelling in the volume of research shown to fail to replicate: when many doctoral students and other researchers attempted to replicate published, and supposedly established, research findings, they were unable to do so. Since then, similar scrutiny has spread to other fields as well. These issues have led some researchers to speculate that as many as half of all published findings are false.

    TO: How are “Open Science” initiatives such as the Open Science Framework attempting to address this issue?

    JG: Promoting transparency in the scientific process makes replication more feasible. In my own experience, I approached the replication crisis as a research methods instructor who saw a wasted resource in the practical training that nearly all undergraduate students must undertake. Before the crisis, my colleagues and I had been arguing for large-scale collaborative undergraduate research that was practical and involved students in efforts to replicate previously published research findings (see Grahe et al., 2012; School Spirit Study Group, 2004).

    TO: We’ve talked about how “Open Science” is good for research, but I am wondering if you could elaborate on how such initiatives can help prepare undergraduate and graduate students as researchers?

    JG: Over 120,000 students graduate each year with a psychology degree, of whom approximately 90-95% must take a research methods class to fulfill their degree requirements. Of those, it is estimated that approximately 70-80% also complete a capstone or honors project, and about 50% collect actual data to complete the project. Thus, there are tens of thousands of such projects involving data collection each year in the U.S. alone. As a research methods instructor, I am concerned with making sure that my students have practical training that will help them professionally and allow them to learn about the research process more meaningfully. Further, by having class projects contribute to science, my work as an instructor was more clearly valued in tenure and promotion. In my classes, participating in “authentic” research projects is always a choice, and in my experience, many students embrace the chance to conduct an actual study and collect data and are also excited to receive training in conducting open science.

    TO: This sounds very interesting. Could you tell us more about the Collaborative Replication and Education Project (CREP)?

    JG: CREP is actually the fourth project I have undertaken to engage undergraduates in “authentic” research experiences within a pedagogical context. The CREP is specifically geared toward replication, whereas the earlier projects were oriented toward getting students to contribute to science while learning to conduct research.

    As far as I know, the first-ever crowd-sourced study in psychology was published in a 2004 issue of Teaching of Psychology (School Spirit Study Group, 2004; http://www.tandfonline.com/doi/abs/10.1207/s15328023top3101_5). That project’s leader found collaborators by inviting them to measure school spirit at both an institutional level and an individual level. Students could use the individual data for their class papers, and the different units of analysis made for interesting classroom examples.

    The same year this was published, the project leader, Alan Reifman, invited us again to collectively administer a survey, this time about emerging adulthood and politics (Reifman & Grahe, 2016). Because the primary hypothesis was not supported, no one bothered to examine the data from about 2005 until about 2012. However, when I was starting to focus on increasing participation in these projects, I saw this data set (over 300 variables from over 1,300 respondents at 10 different locations) as a valuable demonstration of the projects' potential. We organized a special issue of Emerging Adulthood in which nine different authors each answered a distinct research question using the data set. A follow-up study called the EAMMi2 collected similar data from over 4,000 respondents gathered by researchers at 32 different locations. Both of these survey studies demonstrate that students can effectively help answer important research questions.

    In another undergraduate-focused survey project, which occurred just before the CREP was launched, Psi Chi collaborated with Psi Beta on their National Research Project (Grahe, Guillaume, & Rudmann, 2013). For this project, contributors administered the research protocol from David Funder's International Situations Project to respondents in the United States.

    In contrast to these projects, the CREP focuses on students completing experimental projects, and students take greater responsibility for the project management. While I had one earlier attempt at this type of project, it didn't garner much interest until the replication crisis occurred. At that point, there was greater interest from others in the argument that students could help test the reproducibility of psychological science. Of note, one of the earliest contributors was a graduate student teaching research methods. As we have developed over the past five years and learned how best to manage the system, I'm now curious to see if there are potential partners in other sciences. There is nothing in the name that says psychology, and the methods should generalize well to other disciplines.

    TO: Why is the logo for the CREP a bunch of grapes?

    JG: The logo for the CREP is a grape, which helps prime people to say the acronym so it rhymes with “grape,” but it is also a useful metaphor for replication studies in science. When you think of replications, you can think about a bunch of grapes. Even though the grapes in a bunch share the same genetic material, there is some variation in the size and shape of each grape. Each grape in a bunch is like the result of a replication study. Just as grapes of the same genetics can differ in relative size, replications examining the same question will vary in sample size, yielding different-sized confidence intervals. And replications can't be exact; they are only close. So while grapes on the same vine may have slight differences in taste due to variability in ripeness, replication studies can have subtle differences in their conclusions while striving to test the same underlying phenomenon. Replication studies can only be close, never exact, because of differences in the participants or researchers conducting the study, research contexts of time and location, slight variations in materials, and so forth. These differences can produce vastly different results even if the effect is still there. Conducting a successful replication study doesn't mean you're guaranteed to find the same effect. And of course, there are varieties of grapes, just as there are varieties of replications. Close replications and conceptual replications address different questions, just as different varieties have different flavors. The CREP has a Direct+ option where contributors can add their own questions to the procedure, as long as they come after the original study's materials or are collected as additional conditions. This more advanced option provides different flavors of research for the CREP. There are many comparisons that make grapes a good metaphor for replication science, and I hope that the CREP continues to help students contribute to science while learning its methods.
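The point about sample size and confidence intervals can be illustrated with a few lines of arithmetic. The sketch below (in Python; the helper name `ci_half_width` and the numbers are hypothetical, and it assumes a simple normal approximation for a sample mean) shows how two replications of the same effect, with the same standard deviation, produce very different confidence-interval widths when their sample sizes differ:

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a sample mean."""
    return z * sd / math.sqrt(n)

# Two hypothetical replications of the same effect, both with sd = 10:
small_study = ci_half_width(sd=10, n=25)   # classroom-sized sample
large_study = ci_half_width(sd=10, n=400)  # large multi-site sample

print(round(small_study, 2))  # 3.92
print(round(large_study, 2))  # 0.98
```

Like grapes of different sizes on the same vine, both intervals target the same underlying effect; the smaller sample simply yields a wider, fuzzier estimate.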

    TO: If I can ask a follow-up question: what could be considered a “successful replication”?

    JG: For researchers, a successful replication is one that, to the best of the researcher's abilities, stays as close as possible to the methods of the original study. It is not about whether a finding comes out a certain way. When considering students, a successful replication study is further demonstrated when the student understands the hypothesis and why the study was designed to test it. Can they reproduce the results correctly, and can they interpret the findings appropriately? In other words, did they learn to be good scientists while generating useful data?

    TO: If you are working with students on a CREP replication study, do you allow them the freedom to choose a paper to replicate, or should instructors be a little more direct in this process?

    JG: The selection process for choosing replications is straightforward. Each year we identify a pool of highly cited articles, about 36 studies in total. We then code them for the feasibility of undergraduate replication and select those that are most feasible. We do this not based on the materials that are available, because the original researchers are often willing to provide these, but rather to identify important studies that students can complete during a class.

    In my classes, students have complete choice over which studies they want to conduct, and there are often options beyond the CREP. However, I know others who provide students a list of studies that will be replicated or limit choice in other ways. There are many methods, and instructors should find the system they like best.

    TO: How can graduate student instructors become more involved in the CREP initiative?

    JG: The CREP website gives instructions on what to do. In my chapter in the recent GSTA handbook, I discuss the conditions needed for authentic research to be successful. If there is no IRB approval in place for conducting the research with undergraduates, then it simply cannot happen. The institution, the department, and any supervising research staff need to be on board. When considering authentic research opportunities, it is always a good idea to talk to the department chair.

    For graduate students who would like to get involved with CREP, we are always looking for reviewers. The website contains some information about how to apply as a reviewer.

    Another thing that graduate student instructors can do is take the CREP procedures and incorporate them into their courses. The Open Science Framework is a great tool, and even if an instructor cannot use the CREP for whatever reason, they could use the Open Science Framework to mimic the open science trajectory. Even if data never leave the class, there is information on the CREP website about workflows and procedures.

    TO: What sorts of protections are there for intellectual property under the Open Science Framework?  Can you briefly explain how the Creative Commons license protects the work of researchers who practice “Open Science”?

    JG: The Open Science Framework allows you to choose licenses for your work. In terms of further protections, the Open Science Framework itself doesn't really protect intellectual property; rather, the law does. Even if a research measure is licensed and published, nothing protects it except the law. In any case, following APA guidelines and reporting research means that you are willing and interested in sharing what you do and what you find.

    TO: We see that you just recently came back from the “Crisis Schmeisis” Open Science Tour. Tell us how that went and about the music part.

    JG: Earlier this year, I agreed to conduct a workshop in southern Colorado. Because I'm on sabbatical, I decided to drive instead of fly and scheduled a series of stops across several states. These travels became the basis of the “Crisis Schmeisis” tour (https://osf.io/zgrax). In total, there were 13 meetings, workshops, or talks at 7 different institutions. I had the chance to speak with provosts and deans, as well as with students in research methods classes and at Psi Chi social events. During these visits, I showed how to use the Open Science Framework for courses or research, or gave talks about the CREP or EAMMi2 projects as demonstrations of ways to interface with the Open Science Framework.

    I somewhat jokingly called this the “Crisis Schmeisis” tour to help explain that even if someone doesn’t believe there is a replication crisis, the tools that emerged are beneficial and worthwhile to all. Throughout the year, I will continue the tour by extending existing trips to visit interested institutions.

    TO: The Crisis Schmeisis tour almost looks like a musical tour. Is that intentional?


    JG: It is. I am also planning to write a series of songs about scientific transparency. Because it is an “Open Science Album,” I'm putting the songs on the OSF (https://osf.io/y2hjc/). There is a rough track of the first song, titled “Replication Crisis.” The lyrics convey the basic issues of the crisis, and I'm hoping that other open scientists will add their own tracks so that there is a crowd-sourced recording. I'm currently practicing “Go Forth and Replicate” and have a plan for a song about preregistration. My goal is to complete 12 songs and to play them live at the Society for the Improvement of Psychological Science conference next summer (http://improvingpsych.org/).

    TO: What happened in your career as a researcher or teacher that inspired you to become an advocate for the OSF?

    JG: During my first sabbatical, I was very unhappy with my place as a researcher and scholar. Did you know that the modal number of citations for published manuscripts is exactly zero? That means the most common fate of a published paper is to never be cited, even once. As a researcher, I was frustrated about working on something that might not matter, and as a teacher, I was concerned that students were not getting good practical training.

    At one point during my first sabbatical, I became frustrated while revising a manuscript after receiving feedback from an editor. Instead of being angry about a manuscript that might never get cited anyway, I thought about areas where I was passionate and might be able to make a difference. I decided there was a better way to involve undergraduates in science, and that there were likely many research methods instructors like me who were also feeling underused and undervalued. After that point, my career changed direction. At the time I was formulating these ideas, it was not about open science, per se; it was really about undergraduates making contributions and gaining experience from them.

    TO: Beyond replicability, what is the next crisis facing psychological science, and how can we prepare?

    JG: I would like to see more interest in expressive behaviors rather than the keystrokes that typically define the research process. So much of the research conducted in a psychological lab is pretty far removed from daily interactions, and I would like to see psychologists work harder to demonstrate meaningful effect sizes in authentic settings. Some of the effects we find in research are quite small, and it seems that we spend a lot of time talking about effect sizes that explain less than 3% of the variability in a given outcome variable.
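To make the 3% figure concrete: for a correlation-type effect size, the proportion of variability explained is the square of the correlation coefficient. A quick sketch (in Python; the example value r = .17 is hypothetical, chosen only to land near that threshold):

```python
def variance_explained(r):
    """Proportion of variance in an outcome explained by a correlation of r."""
    return r ** 2

# A correlation of .17, respectable-looking on its own, explains under 3%:
print(round(variance_explained(0.17), 4))  # 0.0289
```

In other words, even effects that clear conventional significance thresholds can leave more than 97% of the variability in the outcome unaccounted for.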

    TO: Any final thoughts?

    JG: Just a note about the distinction between preregistration and preregistered reports, which often get confused in the open science discourse. Preregistration is the act of date-stamping hypotheses and research plans. Preregistered reports are a type of manuscript in which the author submits an introduction, methods, and preregistered analysis plan; the editors make a decision to publish based on this information, because the study is important regardless of how the results turn out. There is also the possibility of writing and submitting an entire manuscript that has a preregistration as part of it. I see a lot of confusion about this topic.


    References

    Bhattacharjee, Y. (2013, April). The mind of a con man. The New York Times [Online]. Retrieved from http://www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html

    Carey, B. (2011, January). Journal’s paper on ESP expected to prompt outrage. The New York Times [Online]. Retrieved from http://www.nytimes.com/2011/01/06/science/06esp.html

    Grahe, J. E. (2017). Authentic research projects benefit students, their instructors, and science. In R. Obeid, A. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How we teach now: The GSTA guide to student-centered teaching (pp. 352-368). Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/

    Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7(6), 605-607.

    Hauhart, R. C., & Grahe, J. E. (2010). The undergraduate capstone course in the social sciences: Results from a regional survey. Teaching Sociology, 38(1), 4-17.

    Hauhart, R. C., & Grahe, J. E. (2015). Designing and teaching undergraduate capstone courses. John Wiley & Sons.

    Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

    Pashler, H., & Wagenmakers, E. J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.

    School Spirit Study Group. (2004). Measuring school spirit: A national teaching exercise. Teaching of Psychology, 31(1), 18-21.

  • 28 Sep 2017 5:00 PM | Anonymous member (Administrator)

    By Regan A. R. Gurung, University of Wisconsin-Green Bay

    There are many ways to learn. I like to think that, armed with a curious mind and the right resources and motivation, anyone can learn on their own. Of course, when we think of learning we don't usually think of the solo pursuits of motivated individuals; we tend to think of schools and colleges. While master teachers can inspire with their passion and masterfully deliver content, most students rely heavily on the course materials faculty assign (though students may not always read all of them) to solidify content acquisition. Consequently, the quality of course material is of paramount importance. For years, faculty required students to buy textbooks. Students mostly bought them (and sometimes read them). Now there are a variety of free resources available. How do they compare to the expensive versions? Are they all created equal?

    Once upon a time, you could rely on the simple heuristic that “pricey equals quality.” After all, standard textbooks (STBs) have the backing of major publishing companies that invest large sums of money to ensure quality products. The development editors, the slew of peer reviewers examining every draft of every chapter, and the focus groups should ensure a quality product. Then there are the bells and whistles. STBs are packed with pictures and cartoons, and come with a wide array of textbook technology supplements (online quizzes, etc.; Gurung, 2015). Many believe that because an STB is put out by a publisher with a recognizable name, it must be good. If a familiar author writes an STB, it must be good. In fact, these are all empirical questions that are never really tested. The market research that big publishers cite and the student and faculty endorsements peppering the back covers and promotional materials of STBs rarely (if ever) represent true empirical comparisons of learning. To be fair, true comparisons of learning are difficult: a variety of factors (the student, the teacher, the textbook) all influence learning, which makes such research hard to do.

    Are all STBs equal? In one study I did some years ago, students rated a number of the most widely adopted textbooks in the introductory psychology market (Gurung & Landrum, 2012). Students did differentiate between texts, rating some books better than others, but does student preference matter? In a number of national studies, colleagues and I had students using different textbooks take a common quiz (ours) so that we had a common measure of learning (Gurung, Daniel, & Landrum, 2012; Gurung, Landrum, & Daniel, 2012). Quiz scores did not vary. Students seem to learn similarly from different textbooks, regardless of the company. But now for the big question: Given that STBs are extremely expensive (and students complain), what about textbooks that are free?

    Enter Open Educational Resources (OERs). OERs provide students and faculty with free electronic materials. For a great review of the growth of the OER movement, see Jhangiani and Biswas-Diener (2017). The OER movement sprouted from the creation of MERLOT by California State University in 1997. MERLOT provided access to curriculum materials for higher education, and Open Access and the Budapest Open Access Initiative further fueled the rise of the OER movement. OER strode into the public consciousness when MIT, with funding from the Mellon and Hewlett foundations, created OpenCourseWare, online courses designed to be shared for free. Are OERs better than STBs?

    The best studies using standardized or similar exams show no differences in exam scores between OER users and STB users. Sadly, the bulk of the available studies are fraught with limitations and validity issues. In an attempt to transcend the limitations of extant work, I recently published a comparison of OER users and STB users (Gurung, 2017). In two large, multi-site studies, I compared students using OERs with students using STBs, measuring key student variables such as study techniques, time spent studying, ratings of the instructor, and ratings of the quality and helpfulness of the textbook. All students completed a standardized test using a subset of items from a released Advanced Placement exam.

    In both studies, students using an OER scored lower on the test after controlling for ACT scores. Study 2 also compared book format (hard copy or electronic) and showed that OER hard copy users scored lowest. The book used predicted significant variance in learning over and above ACT scores and student variables. The results provide insight into the utility of OERs and the limitations of current attempts to assess learning in psychology. On the upside, students using an OER rated the material as more applicable to their lives.
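The phrase “over and above ACT scores” refers to incremental variance explained: fit a baseline model predicting test scores from ACT alone, then add the textbook variable and compare the two R² values. Here is a minimal sketch of that logic with entirely simulated data (this is not Gurung's actual analysis or dataset; all numbers and variable names are made up for illustration):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    return 1 - residuals.var() / y.var()

rng = np.random.default_rng(0)
n = 500
act = rng.normal(22, 4, n)               # simulated ACT scores
stb = rng.integers(0, 2, n)              # 0 = OER, 1 = STB (hypothetical coding)
score = 0.8 * act + 2.0 * stb + rng.normal(0, 5, n)  # simulated test scores

baseline = r_squared(act.reshape(-1, 1), score)        # ACT only
full = r_squared(np.column_stack([act, stb]), score)   # ACT + book type
print(full > baseline)  # book type explains variance over and above ACT
```

The difference `full - baseline` is the incremental R² attributable to book type after ACT is accounted for; in the published studies, that increment was significant.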

    When we talk about quality in higher education, we tend to rely on the credibility of authors and the peer review process. While my findings urge caution in using OERs, they also shed light on how little learning outcome data there is for the use of STBs. Faculty still adopt these books, requiring students to pay thousands of dollars a year in textbook costs.

    Well-curated OERs, those whose writing and content are monitored, reviewed by peers, and contributed by credible sources, deserve to bask in the same glory as STBs. While OERs are ready for their time in the spotlight, scholars of teaching and learning need to work to assess the true quality of all educational resources. OERs present the opportunity for every member of the public to learn at no cost. We all need to pay attention to what we can get for free, but also ensure that materials are tested for effectiveness.


    References

    Gurung, R. A. R. (2015). Three investigations of the utility of textbook teaching supplements. Psychology of Learning and Teaching, 1, 48-59.

    Gurung, R. A. R. (2017). Predicting learning: Comparing an open educational resource and standard textbooks. Scholarship of Teaching and Learning in Psychology, 3, 233-248. http://dx.doi.org/10.1037/stl0000092

    Gurung, R. A. R., Daniel, D.B., & Landrum, R. E. (2012). A multi-site study of learning: A focus on metacognition and study behaviors. Teaching of Psychology, 39, 170-175. doi:10.1177/0098628312450428

    Gurung, R. A. R., & Landrum, R. E. (2012). Comparing student perceptions of textbooks: Does liking influence learning? International Journal of Teaching and Learning in Higher Education, 24, 144-150.

    Gurung, R. A. R., Landrum, R. E., & Daniel, D. B. (2012). Textbook use and learning: A North American perspective. Psychology of Learning and Teaching, 11, 87-98.

    Jhangiani, R. S., & Biswas-Diener, R. (Eds.). (2017). Open: The philosophy and practices that are revolutionizing education and science. Retrieved from http://dx.doi.org/10.5334/bbc

  • 28 Sep 2017 10:00 AM | Anonymous member (Administrator)

    By Jessica Murray, The Graduate Center CUNY

    The relentless forward march of technology can be overwhelming at times, for students and teachers alike. It doesn't help that some public universities fall behind on the latest technology because of limited financial resources, or choose proprietary tools that become familiar, only to replace them with cheaper options later on. The Futures Initiative started a few years ago with a mission to reshape higher education. One of its key aims was to use network and communications tools to build community and foster greater access to technology. At the time, the CUNY Academic Commons (built in WordPress) was available only to graduate students, so the Futures Initiative created a new WordPress multisite, or network of sites, where graduate students could develop course sites to use with their undergraduate students. As part of my role as a fellow for the Futures Initiative, I maintain this website and teach people how to create their own sites on our network. Many schools now host platforms like ours and the CUNY Academic Commons. If your school doesn't offer a place for you to create your own website, you can also create one on WordPress.com. This post offers a brief introduction to WordPress, but more importantly, encouragement, or what I'm calling my "pep-talking points." Hopefully, by the time you finish reading, I will have convinced you to create your own course website on WordPress.

    WordPress is a free and open-source content management system that has grown from a small blogging platform in 2003 into the most widely used website creation platform in the world, powering more than 25% of all websites on the internet. For those unfamiliar with the lingo, open source means that the core software, the more than 50,000 plugins that extend its functionality, and the thousands of themes that control the look and feel of WordPress sites are developed by a community of programmers around the world. A content management system does the work of putting a website together (managing and displaying different types of content) and, more importantly, is designed so that people without coding experience can create and edit their site's content in a web browser. Before WordPress and other content management systems, we had to create static web pages in HTML, placing text, images, and hyperlinks into the appropriate spots, styling the pages with CSS, uploading all of the files via FTP, and testing to see how they displayed on different browsers. Back then, if your web designer went on vacation, you might have had to wait for their return just to get a typo fixed; today, you can log in to your site, fix the error, and publish the changes in a few minutes without special software. This may be appreciated most by people who remember the old way of doing things (myself included), but it also demonstrates pep-talking point number one: technology is getting easier, not harder. Once you get started, you'll see how easy it really is.

    The Futures Initiative now hosts more than 50 course websites, some of which have more than 30 users, which illustrates pep-talking point number two: if hundreds of people at CUNY can create dozens of course websites in only a few years' time, you can, too! Here at CUNY, some teachers have chosen to use WordPress instead of Blackboard because it can do all of the same things. One major benefit is that teachers keep control over the site once their class is finished. Sharing documents securely, having a place for your syllabus, and creating discussion forums are some of the functions that can be replicated on WordPress. There are also some things that WordPress can do that Blackboard can't, a major one being the opportunity for your students to write public posts. This is directly related to pep-talking point number three: creating content with WordPress is empowering! I have witnessed the undeniable look of satisfaction on the faces of many a workshop participant when they figure out how to add a header image to a page, publish their first test post, and see their changes happen in real time. Once that happens, they're hooked. Making a website doesn't have to be daunting, and it won't be once you start creating your content. And while you're doing it (pep-talking point number four), you and your students are learning valuable, marketable skills that will not only be a great addition to your CV, but also give you the tools to create your own online identity without costing you a dime. If I still haven't convinced you, and you don't know where to start, let me give you pep-talking point number five: start anywhere! WordPress has a pretty limited number of menu options. The key is to realize that you won't break anything that you can't fix, and the very best way to learn any software is to try things and see what happens. If you get stuck, Google your question and you'll find countless resources from a massive online community.
There is a lot you can do with WordPress, but the most important thing is to publish that first post, bask in the glow of satisfaction that can only come from creating your own little sliver of the internet, and plan to inspire that confidence in your students by creating a shared course website in WordPress. 

  • 15 Sep 2017 10:00 AM | Anonymous member (Administrator)

    By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver (Email: arichmo3@msudenver.edu, Twitter: @AaronSRichmond)

    Yes, it is that time of year: the first week of classes and the start of a new semester. Many of us struggle with what to do. We could easily be the person who, as Gannon (2016) suggests, starts the semester the absolute worst way: yes, it's Syllabus Day! Most of our students expect this and have a script for the first day. It usually follows this formula: come to class, sit down, take roll, maybe do an ice-breaker that every other teacher does, get the syllabus, listen as the teacher reads the syllabus (or, if they are daring, presents it in a PowerPoint), engage in a brief discussion, and then be dismissed, hopefully early. I'm here to implore you, nay, challenge you, to break this mold and do something different. There are so many things you can do to engage your students and make a strong, positive first impression, and I hope that after reading this blog you will step out of your comfort zone and try a few of them.


    First Impressions are Important!

    As Legg and Wilson (2009) have demonstrated, first impressions, even in an email, can lead to students having more positive beliefs about you as a person. So, what can you do to create a strong, positive first impression? Legg and Wilson would suggest sending out an email before the start of the semester introducing yourself and the class. The email should be less formal and more about you and who you are as a teacher. You can include the syllabus so that students can read it beforehand. Lyons et al. (2003) suggested that you arrive early to class on the first day and informally talk to students. Likewise, linger after class, answer any questions, and talk to your students. Additionally, dress professionally while remaining comfortable and true to who you are (Gurung et al., 2014). If you can, change the physical environment of the classroom by rearranging desks to be more inclusive (e.g., circles or U shapes, if possible, as opposed to rows). Weimer (2015) suggests that you discuss your commitment to teaching. Why do you do it? What do you love about it? To be a great teacher, what do you need from students (i.e., expectations)? Lastly, share your story. It is important to humanize yourself and let your students know that you are a person like anyone else. For example, on the first day of class, I show a picture of my three girls, lovely wife, dogs, bunnies, horses, and all the other creatures on our farm. I do this to convey that I, like them, have a life outside the classroom, and that I will be flexible and respectful of the fact that sometimes life just happens, to both me and my students. Remember, the first day can truly set the tone and culture of the class for the rest of the semester, so make it count.


    Be Active! Activities for the First Day

    The godmother of teaching and learning, Maryellen Weimer (2017), suggests that being active on the first day can create a positive and productive climate for learning. She suggests activities such as:

    • Best and worst classes: In this activity, have students write on a piece of paper or on the board what the best class they've taken was and what the teacher did to make it the best. Conversely, have students write about the worst class they've taken and how the teacher made it so horrible. Then discuss, and pledge to your students how you will try to make this course the best class.
    • First day graffiti: In this activity, place flip charts around the room with different sentence stems, for instance, “I learn best in classes where the teacher...” or “Here's something that makes it easy to learn in a course...”. Have students walk around the room and respond to each stem, discuss their answers with one another, and then debrief as a class.
    • Syllabus speed dating: With syllabus in hand, have students sit across from one another and ask each other a question about the syllabus OR a question about themselves. Then have students shift one seat down and ask another classmate one of the two questions.
    • Irritating behaviors, theirs and ours: Put students into groups and have them list five things teachers do that make it difficult to learn, and share their answers with the class. Then, below that list, add the five things that you and your colleagues have found students do that make it difficult to teach. Discuss how teaching is a reciprocal process and what you will do to make the relationship productive, respectful, and enjoyable.

    Additionally, Lyons et al. (2003) suggested several activities you can do to “whet students' appetites for course content.” For example, have students individually list the topics or concepts they think are associated with your textbook's title. Then have them pair up with another student to share their ideas and categorize each idea into chapter- or module-like units. Have the dyads or groups name their chapters and arrange them as a table of contents, and then discuss these tables of contents with your students. This often helps you identify misconceptions about the course, but it also provides an opportunity for students to actively engage with the content of the course and with one another. Another idea is to connect course content to current news. For instance, DACA was a very relevant issue on our campus while I was teaching educational psychology, so I related the social, emotional, and cognitive impacts of DACA on students from kindergarten through higher education. Finally, Linda Nilson (2003) suggests that teachers develop a “common sense inventory” that students complete to highlight common course content or common misconceptions (e.g., right- vs. left-brained thinkers in educational psychology). The moral of this story is that instead of reading the syllabus, engage students in activities that demonstrate the course content, your pedagogical beliefs, and how you will engage students throughout the course.

     

    Set the Tone that will Last All Semester

    If you normally read from your notes and only lecture, then maybe you should just read the syllabus. However, if this isn’t how you teach, why do it? Instead, teach something that is not on the syllabus, in the manner in which you normally teach. That is, if you do a lot of activities—do activities. If you use humor as a pedagogical tool—then try to be funny—no, seriously! If you use experiential learning in your class, do it on the first day! If you use the Socratic method, have a discussion with your students about the course, what they will do, etc. If you use cooperative learning a lot in your course, model this with a jigsaw activity or a think-pair-share. The point is, you get one chance at a first impression, so make it count and make it accurate to who you are as a teacher.


    Additionally, Lyons and colleagues (2003) suggested specific things you can do to set the tone. For instance, establish a culture of feedback. I discussed this earlier, but let students know you are very interested in how they are doing in the course and how you are doing teaching it. Typically, this is done anonymously, but the point is to create a partnership of learning between you and your students. Although some disagree with this, I suggest making homework 0 a mandatory office visit—that is, give students some low-stakes incentive to come and meet with you in your office. In this meeting, don’t necessarily talk about the class. Rather, get to know your students.

     

    Moving Beyond the First Week

    Now that you’ve established a positive, engaged, and productive culture of learning during that first week, what do you do the second week? Joyce Povlacs Lunde (n.d.) has several—in fact, 101—things you can do beyond the first week of class, divided into seven categories:

    • Help students transition into your class. Tell them how much time they will need to study for the course, give sample test questions and answers, and talk to different students each class period to learn a little about them.
    • Direct students’ attention to the class. For example, give low-stakes pretests to reward students for reading, and ask students to write down what they think the important issues are.
    • Challenge students. Have students write down their own expectations for the course and their goals for learning, and engage in problem-based learning.
    • Provide support. You can do this by providing study guides, being redundant, using non-graded feedback, etc.
    • Encourage active learning. Use think-pair-shares, ask a lot of questions and wait for the answers, and use classroom assessment techniques such as muddiest-point papers to understand where students are struggling.
    • Build a community. This is one of my most important goals. Learn their names. I know this is difficult in big classes, but what I do is have students give me a 3 × 5 card with a picture on the back and their name, year in school, major, and something they like to do for fun. I then study the cards like flashcards. I guarantee students will appreciate it and feel like they are part of something special.
    • Get their feedback on your class. There are several ways to do this. You can ask them to provide anonymous feedback on how to improve lessons and assessments, or give them inventories such as the Professor–Student Rapport Scale (Wilson et al., 2010), the Student Course Engagement Questionnaire (Handelsman et al., 2005), or the Learning Alliance Inventory (Rogers, 2015), and then use the results to improve your instruction.

    Never Stop Breaking the Mold!

    Ok, so when do you discuss the syllabus? As I’ve discussed previously, send the syllabus to students before the start of the semester. I promise, they can read—but if you don’t assess them on it, they won’t. So, give a syllabus quiz. My colleagues and I (2016) suggest creating a syllabus quiz that requires students to be the teacher and asks the questions that students typically ask (e.g., “Professor, can I turn in assignments late?”). We also suggest that you revisit the syllabus often. This should not be a one-shot lesson. In fact, I have my students pull out the syllabus at least once a week to check what is due, the reading for next week, etc.

    In the end, it is important to evolve and adapt your instructional practices to new students, new cultures, and new and different courses. You will develop some really great ways to break the mold based on what I have discussed here, but you will likely need to modify what you do on the first day of class next semester or quarter. I would like to leave you with a list of really good reads that further explain and provide more ways to change your script for that very important first day of class.


    References and Must Reads!

    Buirs, B. A. (2016, January 4th). First impressions: Activities for the first day of class. Faculty Focus: Higher Ed Teaching Strategies from Magna Publications. Retrieved from https://www.facultyfocus.com/articles/effective-teaching-strategies/first-impressions-activities-for-the-first-day-of-class/

    Gannon, K. (2016, August 3rd). The absolute worst way to start the semester. The Chronicle of Higher Education. Retrieved from https://chroniclevitae.com/news/1498-the-absolute-worst-way-to-start-the-semester

    Gurung, R. A., Kempen, L., Klemm, K., Senn, R., & Wysocki, R. (2014). Dressed to present: Ratings of classroom presentations vary with attire. Teaching of Psychology, 41, 349-353.

    Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98(3), 184-192.

    Legg, A. M., & Wilson, J. H. (2009). E-mail from professor enhances student motivation and attitudes. Teaching of Psychology, 36(3), 205-211.

    Lyons, R., McIntosh, M., & Kysilka, M. (2003). Teaching college in an age of accountability. Boston: Allyn and Bacon.

    Morris, T., Gorham, J., Cohen, S., & Huffman, D. (1996). Fashion in the classroom: Effects of attire on student perceptions of instructors in college classes. Communication Education, 45, 135-148.

    Nilson, L. (2003). Teaching at its best: A research-based resource for college instructors (2nd ed.). Bolton, MA: Anker Publishing.

    Povlacs Lunde, J. (n.d.). 101 things you can do in the first three weeks of class. Retrieved from http://www.unl.edu/gradstudies/current/teaching/first-3-weeks

    Provitera McGlynn, A. (2001). Successful beginnings for college teaching: Engaging students from the first day. Madison, WI: Atwood Publishing.

    Raiscot, J. (1986). Silent sales. Minneapolis, MN: AB Publications.

    Richmond, A. S., Gurung, R. A. R., & Boysen, G. (2016). An evidence-based guide to college and university teaching: Developing the model teacher. New York, NY: Routledge.

    Rogers, D. T. (2015). Further validation of the learning alliance inventory: The roles of working alliance, rapport, and immediacy in student learning. Teaching of Psychology, 42, 19-25.

    Weimer, M. (2013, August 13th) Five things to do on the first day of class. The Teaching Professor Blog. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/five-things-to-do-on-the-first-day-of-class/

    Weimer, M. (2015, August 9th). The first day of class: A once-a-semester opportunity.  Faculty Focus: Higher Ed Teaching Strategies from Magna Publications. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/the-first-day-of-class-a-once-a-semester-opportunity/

    Weimer, M. (2017, July 19th). The first day of class activities for creating a climate for learning. Faculty Focus: Higher Ed Teaching Strategies from Magna Publications. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/first-day-of-class-activities-that-create-a-climate-for-learning/

    Wilson, J. H., Ryan, R. G., & Pugh, J. L. (2010). Professor–student rapport scale predicts student outcomes. Teaching of Psychology, 37(4), 246-251.


  • 06 Sep 2017 12:00 PM | Anonymous member (Administrator)

    By Janie H. Wilson, Ph.D., Georgia Southern University

    I will begin by admitting that I started teaching 25 years ago as part of my graduate-school assistantship. At that time, I asked the department chair to avoid assigning me to teach statistics because I had seen many years of student struggles, including my own. I agreed to teach research methods, where I could share my passion for experimental and correlational designs. A few weeks into teaching the course, I realized my mistake. I explained to students that the two-group design we were using could be analyzed with a t-test. They stared blankly. I briefly explained that a t-test analyzed two groups when the dependent variable represented interval or ratio data. They glanced around the room at other students, clearly wondering if anyone knew what the heck I was talking about. One student raised her hand and assured me that she “kind of” remembered it.

    Based on the curriculum, I knew research methods had a prerequisite: statistics. How could they not remember a t-test? That term, I had to reteach the t-test, ANOVA, Pearson’s r, simple regression, and chi square. I did not do a good job of teaching the topics – I simply was not prepared to tackle detailed statistics in research methods.

    After the term ended, I gave a lot of thought to teaching statistics. Clearly I would have to teach analyses in research methods, so why not tackle the prerequisite course? To prepare, I looked back on the way I learned statistics. Mainly the focus had been on hand-calculations. Thinking about it now, I believe the approach made sense at the time. After all, when I began college, we typed term papers on a typewriter, not a PC! Computer labs popped up on campus pretty quickly, but even then, undergraduates did not learn statistical software as part of a statistics course. Later when I attended graduate school, hand-calculations remained the focus, including exams covering matrix algebra.

    Based on my undergraduate and graduate training, I prepped my statistics course with a heavy emphasis on hand-calculations. When I taught statistics for the first time, I spent a week helping students work through their math anxiety as best I could. On exams, I graded based on the process rather than the final answer because students usually made minor math errors along the way. Throughout the term, after students struggled through hand-calculations, I showed them the magic of statistical software. When the answer appeared in a matter of seconds, students often asked me why they had learned all of the math. My answer was always the same: If you know the math, you will understand the analysis better.
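    The contrast drawn here—a struggle of hand-calculations versus an answer "in a matter of seconds"—is easy to demonstrate. As a minimal sketch (in Python rather than a dedicated statistics package, with made-up scores; the function name and data are illustrative, not from the post), the pooled-variance t statistic students grind through by hand reduces to a few lines:

```python
import math
from statistics import mean, variance

def t_independent(group_a, group_b):
    """Pooled-variance independent-samples t statistic."""
    n1, n2 = len(group_a), len(group_b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(group_a) + (n2 - 1) * variance(group_b)) / (n1 + n2 - 2)
    # Standard error of the difference between the two means.
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean(group_a) - mean(group_b)) / se

# Hypothetical scores for two groups (not data from the post).
treatment = [5, 6, 7, 8, 9]
control = [1, 2, 3, 4, 5]
print(round(t_independent(treatment, control), 2))  # → 4.0
```

    In practice, statistical software returns the same value instantly, along with the degrees of freedom and a p value—which is exactly the magic the students were shown.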

    As the years passed, I stood by my decision to focus on hand-calculations. Even when my (younger) colleagues urged me to consider focusing on computer software so I could spend class time on theory and more examples, I gave the same response: If they know the math, they will understand the analysis better.

    It turned out that my noble intentions had no substance. When my colleagues taught research methods, students who had taken my statistics course did not remember how to analyze data using – you guessed it – a t-test. The simplest analysis was lost in the fog of a summer or holiday break. I had done nothing to solve the problem of students forgetting statistics. In fact, I had to be honest with myself that I never had any evidence that my students understood analyses better after going through hand-calculations.

    I wish I could say my course immediately changed, but that would be untrue. I can say that my eyes had been opened, and I started watching what was really happening in my classroom. I would go through how to calculate standard deviation by hand, appreciating the “aha!” moment when students understood that we were obtaining an average spread of values. But rather than feel the elation of a job well done, I wondered what the point was. They were never going to work in a lab where they would calculate values by hand. In today’s world, most students have a powerful computer in their pockets.
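    The “aha!” moment described above—standard deviation as an average spread of values—can be spelled out step by step. A minimal Python sketch with made-up exam scores, mirroring the hand calculation:

```python
from math import sqrt

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical exam scores, not from the post

m = sum(scores) / len(scores)                       # 1. the mean
squared_devs = [(x - m) ** 2 for x in scores]       # 2. squared deviations from the mean
mean_squared_dev = sum(squared_devs) / len(scores)  # 3. their average (the variance)
sd = sqrt(mean_squared_dev)                         # 4. square root returns to original units

print(m, sd)  # → 5.0 2.0
```

    Dividing by n − 1 instead of n at step 3 gives the sample estimate that most software reports by default (Python’s `statistics.stdev` versus `statistics.pstdev`).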

    As I continued to take students through hand-calculations, I noticed that about 50% of my class time was used when students worked through examples. Sure, the activity kept them awake, but they often produced the wrong answer and became so bogged down in the math that the big picture was never clear to them. I began to ask them to put down their pencils and talk through an example with me. I explained that whether or not they remembered to take the square root of the final number was not the point; they needed to understand what the number meant for research. No matter what I said, they grabbed their pencils as soon as they could and dove into the problem again, determined to conquer the math.

    Although I have been slow to change the way I teach, the process has begun. And with so much class time free from hand-calculations, I can work with students on research design to provide context for each analysis. We have time to work through more examples, and I have even started incorporating APA style. I remain determined to help students build a solid foundation in our discipline with knowledge of statistics and research methods, the backbone of psychology as a science.

    I am open to change. My next goal is to fully integrate research methods and statistics. Even if the curriculum continues to offer statistics and methods as separate courses, I can integrate methods into statistics for context, and I can integrate statistics into the methods course for repetition and more complete examples. Integration enhances student retention of the information, and I am delighted that Psychology Departments are beginning to rethink the curriculum and abandon sequenced courses in favor of integration. By letting go of hand-calculations, we make room for the important context offered by research methods.


    Recommended Readings

    Barron, K. E., & Apple, K. J. (2014). Debating curricular strategies for teaching statistics and research methods: What does the current evidence suggest? Teaching of Psychology, 41(3), 187-194. doi:10.1177/0098628314537967

    Pliske, R. M., Caldwell, T. L., Calin-Jageman, R. J., & Taylor-Ritzler, T. (2015). Demonstrating the effectiveness of an integrated and intensive research methods and statistics course sequence. Teaching of Psychology, 42(2), 153-156. doi:10.1177/0098628315573139

    Stranahan, S. D. (1995). Sequence of research and statistics courses and student outcomes. Western Journal of Nursing Research, 17(6), 695-699.

    Wilson, J. H. (2017). Teaching challenging courses: Focus on statistics and research methods. In R. Obeid, A. M. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How we teach now: The GSTA guide to student-centered teaching. Society for the Teaching of Psychology e-book. Retrieved from http://teachpsych.org/ebooks/howweteachnow

    Wilson, J. H., & Joye, S. W. (2017). Demonstrating interobserver reliability in naturalistic settings. In J. R. Stowell & W. E. Addison (Eds.), Activities for teaching research methods and statistics in psychology: A guide for instructors. Washington, DC: American Psychological Association.

    Wilson, J. H., & Joye, S. W. (2017). Research methods and statistics: An integrated approach. Thousand Oaks, CA: Sage Publications.

  • 06 Sep 2017 10:00 AM | Anonymous member (Administrator)

    By Jonathan E. Westfall, Ph.D., Delta State University

    The term “deliverable” is not one often heard in education; it is more at home in a project management context. Deliverables are tangible or intangible products that are delivered to customers. The closest thing we may have in education is “learning outcomes.” In a certain sense, a deliverable captures attention and sparks memory and association in ways that we don’t always consider. Over the past five years, I’ve used Open Educational Resources (OER) to create deliverables in my classroom—tangible products that my students can refer to long after our class has concluded. The goal: provide something that keeps the content alive in some way. Below, I discuss three methods using OER.

    The Custom Textbook We Published

    David Wiley, of Lumen Learning, relates a story about a custom textbook. The idea is simply to take an OER textbook that allows derivative works (i.e., one with a Creative Commons license that does not include the “no derivatives” clause) and have students expand the work, customizing it for niche classes that otherwise would not have a specific text. Over a number of semesters, Wiley’s students have created such a book, which becomes the book for the class and which students can download in PDF format.

    However, a PDF can sometimes lack the “realness” and “concreteness” of a book. We hold books to be standards of information, and while the PDF is quickly becoming a similar standard, there is something fulfilling about holding a book or seeing it in print. Several years ago, I challenged students in a Learning & Memory class to write a parenting manual based upon the learning concepts they’d just mastered (e.g., classical conditioning, operant conditioning, modeling, incidental learning, etc.). Students were given sheets of acid-free paper and asked to illustrate their tip or suggestion. These were then scanned, compiled into a PDF, and uploaded to a print-on-demand service. The result was “My Future Parenting Manual: Advice from Childless Me” (http://amzn.to/2vIn4Mt), a collection of work from the class that students could download for free or order in print for a small fee. Indeed, today anyone can order it, as it has an ISBN and is stocked at Amazon.com and other retailers. An added bonus is that such a book can also be used as a fundraiser for a group or class, with profits going toward a group activity.

    The Class Slide Deck

    Students often struggle to remember what, specifically, they learned in each course. Therefore, one method I’ve used is to ask students to create a PowerPoint slide (or several slides) with the big takeaways from the semester, including their photo and name. I then assemble the slides together, and we go over them in class. Most importantly, I make the slides available for download to the entire group. This gives students a tangible memory, in electronic format, from the semester. It also includes the people they took the class with, allowing them to tap into memories not just of the material but also of student reactions. Coupling this activity with an OER resource (for example, material that can be modified, expanded upon, or freely distributed) and a self-publication service can again create a tangible item that students can keep—a modern “yearbook,” only specific to a course, department, or discipline.

    The Learning Tools We Build

    Statistics can be a difficult course because of the many concepts it covers. These concepts tend to be learned best when integrated into examples and visualized. For many years, we’ve depended on publishers to create such examples, or to build visualization engines that allow students to see how, for example, a distribution changes based upon established parameters.

    Today, however, a set of open-source software tools exists that can change that. R, the statistical language (http://www.r-project.org), provides sophisticated statistical operations to anyone at no cost. RStudio (http://www.rstudio.com), along with Shiny (http://shiny.rstudio.com), allows students and instructors alike to create immersive statistical examples. A classic example is the “Old Faithful” dataset app (http://shiny.rstudio.com/gallery/faithful.html), which allows students to change the size of the bins in a histogram to see how this affects the data visualization; the gallery page shows the code that runs the app alongside it. With practice, one can easily create these apps on one’s own. In my classes, I’ve used Shiny to produce analyses that would be too sophisticated for a student to run on their own, but not too sophisticated to interpret. Seeing their data come alive in a series of inferential tests or descriptive plots adds a level of realism to my statistics and upper-level seminar classes. Future plans include prompting students to write their own scripts and apps to show off their research.
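    The core idea of the Old Faithful app—re-binning a histogram to see how the picture changes—does not depend on Shiny. As a rough sketch in Python rather than R (with hypothetical waiting times, not the actual dataset; a Shiny app would add the interactive slider and plot on top of this counting logic):

```python
def hist_counts(data, bins):
    """Count how many values fall into each of `bins` equal-width bins."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:
        i = min(int((x - lo) / width), bins - 1)  # clamp the maximum into the last bin
        counts[i] += 1
    return counts

# Hypothetical waiting times; the real app reads the faithful dataset instead.
waits = [43, 47, 50, 52, 54, 62, 75, 78, 79, 80, 81, 84, 88, 93]
for b in (2, 7):
    print(b, hist_counts(waits, b))  # with 2 bins → [6, 8]
```

    Running this with a handful of bin counts makes the teaching point directly: very few bins flatten the shape of the distribution, while more bins can reveal structure such as the two clusters in these values.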

    Deliverables Revisited

    Through these examples, I hope you’ve seen what I mean by “deliverables” in the classroom. By providing these physical or electronic products to students, we not only make information more memorable but also enhance their skills and backgrounds. Remember that the student who helps build onto an open source textbook is not only your student but also now an author. The student who uses R to analyze her data is not only going to do well in your statistics course but can now also run complex calculations for her employer without the common complaint of “If I only had SPSS installed.” By working together to integrate OER and deliverables into our classes, we enrich our students, our institutions, and our disciplines.


  • 12 Aug 2017 4:00 PM | Anonymous member (Administrator)

    By Stephen L. Chew, Ph.D., Samford University

    Every beginning instructor discovers sooner or later that his first lectures were incomprehensible because he was talking to himself, so to say, mindful only of his point of view. He realizes only gradually and with difficulty that it is not easy to place one’s self in the shoes of students who do not yet know about the subject matter of the course.

    -Jean Piaget (1962)

    I came into the test really confident that I knew the material but it didn't show that on the test.

    -Student Email Message to Me


    This blog post is about egocentrism, on the part of both you the teacher and your students. Both teachers and students are subject to misunderstanding how well the students are comprehending and learning the course concepts. In teaching, we talk about metacognition, which is a more general term than egocentrism. Metacognition is a person’s awareness of their own thought processes. For the purpose of teaching, we can define metacognition as a person’s awareness of their level of understanding of a concept (Ehrlinger & Shane, 2014). Students with good metacognition have an accurate understanding of how well they understand a concept. Students with poor metacognitive awareness lack a true grasp of how well they understand a concept. Typically, students with poor metacognition are grossly overconfident. They believe they have a deep, thorough understanding when their grasp is actually superficial and riddled with gaps. They fail to distinguish between popular beliefs they may have brought to the class with them and the empirically supported concepts presented in class. Egocentrism, then, is a form of poor metacognition.

    A form of egocentrism can affect instructors as well. Teachers often overestimate the level of understanding of the class. This is known as the curse of knowledge (Fisher & Keil, 2016). As far as the teacher is concerned, he or she has explained concepts clearly, carefully, and completely. The teacher, however, no longer remembers how challenging it was to learn the concepts for the first time. Because students lack the expertise of the teacher, the teacher may have gone faster than the students could follow, or left out critical aspects of a concept because it seemed obvious to the teacher. To the teacher, there may have been only one possible interpretation of what he or she said: the correct one. To the students, however, without any prior knowledge of a concept, there may be multiple ways to interpret the class presentation through faulty assumptions or inferences.

    Almost all veteran teachers have experienced the following scenario. The teacher believes he or she has explained the material clearly and the students have understood it well. The students have attended class and studied the material on their own. On the exam, however, the students do poorly. The teacher is disappointed in the students, and may think, “Those lazy students. They must not have studied.” The students are also disappointed. They think, “That sneaky teacher. The test was full of obscure and tricky questions.” Each blames the other, but both teacher and students may be wrong. Neither may have had an accurate awareness of the students’ actual level of understanding.

    So how do we detect this egocentrism on the part of the teacher and student? We use formative assessments to gauge and promote student learning. Formative assessments are brief, low- or no-stakes assessments given before a high-stakes exam (Angelo & Cross, 1993). They reveal the level of student understanding to both student and teacher. Formative assessments come in many varieties, such as think-pair-shares, minute or muddiest-point papers, or so-called “clicker questions” (e.g., Angelo & Cross, 1993; Ritchhart, Church, & Morrison, 2011; Barkley & Major, 2016).

    For example, say you are covering Piaget’s stages of cognitive development. After your presentation of all the stages, you can check on the class’s comprehension using a conceptest (Chew, 2004; Crouch & Mazur, 2001). Present the class with the question below.

    Jean calls Papa Johns and orders a small pizza. “Do you want that cut into 6 slices or 8 slices?” asks the clerk. “Oh 6 slices,” says Jean, “I could never eat 8 slices.” Jean is showing

    1. Egocentrism
    2. Lack of object permanence
    3. Lack of conservation
    4. Assimilation

    Have everyone determine their answer silently. Then, on a signal, have everyone in the class raise their hand with the number of fingers indicating their answer. Both they and you can look around and gauge the frequency of different answers. Next, have them discuss their answer with someone around them, preferably someone who had a different response. After a few minutes of discussion, poll them using hand signals again. Then call on people with different answers and ask them to explain their reasoning. (I’d say the answer is #3.) Conceptests follow the specific procedure above (poll-discuss-poll-explain). Not only do conceptests give both teacher and students a sense of their level of understanding, they have also been shown to be highly effective in promoting student learning, even when students get the answer wrong (Smith et al., 2009). You may recognize the question as a “clicker question” that you can use with a student response system, but the pedagogy ensures that students process and reflect on their answers. You can do conceptests with “clickers,” but often just a show of hands is faster, simpler, and just as effective.

    Here is an example of a Think-Pair-Share you could use.

    I was walking in a parking lot holding my 3-year old son over my shoulder. He was facing backwards and looking behind me. “Watch out, Dad,” he said, “There is a car behind you.” I was very impressed by this statement. Why?

    You can present the item to students and let them think about it individually, then pair off with another student to discuss it, then share as a class. Once again, you get a sense of students’ level of understanding. In this case, the child realized his dad couldn’t see the car behind him—that is, he took another person’s visual perspective, something supposedly egocentric preoperational children cannot do.

    And that’s not all. Formative assessments are useful for achieving many desirable learning goals. Here is a list:

    • Improving metacognition for students and teachers
    • Addressing and countering tenacious student misconceptions
    • Illustrating the desired level of understanding of knowledge for students (especially in preparation for exams)
    • Promoting student learning and understanding through retrieval practice and peer learning
    • Promoting rapport and trust between teacher and student
    • Modeling critical thinking and problem solving

    Teachers who have never tried formative assessments often ask me: If the formative assessments are low stakes, why would students be motivated to do them? I can give several reasons. First, they are engaging, and students find them fun to do. Second, they preview the kinds of questions and problems students will see on exams. Finally, students recognize that this is a learning opportunity that will help them master the material. Some teachers tell me that they have too much to cover to use formative assessments. But what is the value of covering material if students don’t understand it? Formative assessments make learning visible. If you want to learn more about using formative assessments, check out my video series on the Cognitive Principles of Effective Teaching (http://bit.ly/1LDovLp).


    References

    Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

    Barkley, E. F., & Major, C. H. (2016). Learning assessment techniques: A handbook for college faculty. San Francisco, CA: Jossey-Bass.

    Chew, S. L. (2004). Using ConcepTests for formative assessment. Psychology Teacher Network, 14(1), 10-12. Retrieved from http://www.apa.org/ed/precollege/ptn/2004/01/issue.pdf

    Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69, 970-977. doi:10.1119/1.1374249

    Ehrlinger, J., & Shane, E. A. (2014). How accuracy in students’ self-perceptions relates to success in learning. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php

    Fisher, M., & Keil, F. C. (2016). The curse of expertise: When more knowledge leads to miscalibrated explanatory insight. Cognitive Science, 40, 1251-1269. doi: 10.1111/cogs.12280

    Piaget, J. (1962). Comments on Vygotsky’s critical remarks concerning The Language and Thought of the Child and Judgment and Reasoning in the Child. Addendum in L. S. Vygotsky, Thought and Language. Cambridge, MA: MIT Press. Downloaded from https://www.marxists.org/archive/vygotsky/works/comment/piaget.htm

    Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding, and independence for all learners. San Francisco, CA: Jossey-Bass.

    Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323, 122-124. doi:10.1126/science.1165919
