Three years ago, as I was preparing to start as a lecturer in the Department of Psychology at the University of Bath, I reflected on my own undergraduate training. What should I imitate? What would I like to improve? The “reproducibility crisis” was in full swing. Many of the standard research practices that I had been taught had since been found to be flawed, from P-hacking to “HARKing” (hypothesizing after the results are known) and over-reliance on underpowered studies (that is, drawing outsized conclusions from undersized samples).
I was struck by the fact that the final-year research project is almost a boot camp for instilling these bad habits. Large numbers of projects, limited time and resources, small sample sizes, scope for undisclosed analytical flexibility (P-hacking) and a premium placed on novelty: together, a recipe for non-reproducible results.
Most undergraduate theses end up as exercises in reflecting on the limitations of the research design, which is frustrating for student and supervisor alike. However, every year a few students get lucky and publish, which gives them a huge CV advantage. I wondered what lesson this taught. Have we built a culture that rewards chance results rather than robust methods?
In an effort to disrupt this culture, I created the GW4 Undergraduate Psychology Consortium with colleagues from the universities of Bath, Bristol, Cardiff and Exeter. We wanted to integrate rigorous research practices into undergraduate teaching, incorporating procedures such as preregistering study protocols, designing studies with sufficient statistical power, and reporting methods and results transparently.
The difficulty was knowing how. Rigorous research methods often demand more time and resources than a student project allows. Our solution was collaboration: by working together, students could pool their data-collection efforts to achieve sample sizes sufficient for meaningful analyses.
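To make the arithmetic concrete, here is a minimal sketch, in Python using only the standard library, of the kind of a priori power calculation this involves. The effect size (d = 0.3) and the function name are illustrative assumptions, not figures from our studies; the formula is the standard normal approximation for a two-sided, two-sample comparison of means.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means, via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    (d is the assumed standardized effect size, Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A small-to-medium effect (d = 0.3) needs ~175 participants per group --
# far beyond a single student's reach, but feasible when four sites pool data.
total_per_group = n_per_group(0.3)
per_site = math.ceil(total_per_group / 4)  # share across four institutions
print(total_per_group, per_site)
```

In practice, exact calculations are usually done with dedicated tools such as G*Power; the normal approximation above slightly underestimates the required n for small samples, but it shows why pooling across sites makes adequately powered student projects possible.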
The Consortium is now entering its third year. We are still evolving, but we have settled into a productive routine. It works best if a doctoral student or postdoctoral fellow develops the main research question that the undergraduates will tackle, writes a “bare bones” study protocol and manages the study. During the UK summer holiday, this protocol is distributed to the undergraduates (typically two to five students at each institution), each of whom devises a secondary research question and an appropriate method.
At the start of the undergraduates’ final year (the first week of October), we hold the first consortium meeting, where the students present their secondary questions and we decide which will be incorporated into the study. For example, if the main study question is whether impulse-control training reduces unhealthy food choices, an undergraduate might suggest investigating whether the effect is moderated by personality traits such as impulsivity. That student would then propose a measure to assess this trait and an analysis to test the hypothesis. In this way, each student contributes to the design, while the sample size and research integrity of the main project are maintained. In addition, each student can focus on a slightly different question and thus meet the requirements of an individual assessment. The study protocol is publicly preregistered (in our case, on the Open Science Framework at https://osf.io), and data collection spans four months, from November to March.
In April, the students present their findings to the group and collectively interpret the main results of the study. They reach a consensus and draft the results for wider dissemination.
There are costs. Consortium studies take longer to set up and more effort to coordinate than a standard student project. But these costs are a small price to pay for what students gain: the opportunity to network with peers and with researchers at other institutions, exposure to best practice and a sense of being part of a valued team. We academics have an interest in aligning our teaching with our practice.
This is an example of how, with a little creative thinking, we can overcome some of the pitfalls of the current model when it comes to training the next generation to do quantitative experimental research. A handful of publications are in preparation.
The open-science movement and the growth of online platforms for behavioural tasks and questionnaires have made collaboration easier for psychologists across institutions. By using them, we can be sure that we are running identical experimental procedures at every site.
Obviously, this approach is not suitable for all types of research. It could be more difficult for wet-lab studies, for example, where consumables are expensive and idiosyncratic laboratory set-ups make it harder to standardize operating procedures. Yet collaborative working can be even more beneficial where methods are harder to generalize or harmonize, especially because students entering graduate school can sometimes spend years trying to replicate findings from published work before building on them.
Early collaborative training could also make students comfortable and creative in using similar approaches later in their research careers. Although real-world research is increasingly collaborative, it lacks conventions for properly recognizing and rewarding individuals’ contributions. There may be broader lessons in the way we have designed our approach to align rigorous consortium research methods with university requirements for individual assessment.