Our Research Methods

Theory Meets Context-Focused Strategy


Theory and Beyond…

Research across many fields of psychology demonstrates that individuals’ mindsets – their attitudes, beliefs, and perceptions about themselves and the world – play a key role in their ability to fulfill their potential. Psychological theory can help us understand the types of mindsets most likely to be relevant in a given setting, but theory alone is not enough to customize effective solutions for specific individuals and varied contexts. To understand how best to leverage the power of mindsets in a particular setting, we use a variety of key research methods.


Randomized Controlled Trials: A Brief Overview

Often referred to as the “gold standard” of experimental design, randomized controlled trials randomly assign participants to either a treatment condition (i.e., the group that receives an intervention) or a control condition (i.e., the comparison group that does not). The purpose of this method is to test whether a particular intervention or program actually causes differences in a particular outcome.

Randomization weeds out all systematic differences between the treatment and control groups except one: the condition to which participants are assigned. In this way, randomized controlled trials give researchers greater confidence that the outcomes they observe are, in fact, attributable to the intervention.
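To make this concrete, here is a minimal sketch of what an RCT analysis can look like, using simulated data in Python. The sample size, outcome variable, and effect size are all hypothetical; the example is only meant to illustrate random assignment followed by a simple comparison of group means.

```python
# Minimal sketch of a randomized controlled trial analysis on simulated data.
# All numbers and variable names are illustrative, not from an actual study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_students = 200

# Randomly assign each participant to control (0) or treatment (1).
condition = rng.permutation(np.repeat([0, 1], n_students // 2))

# Simulate an outcome (e.g., end-of-term GPA) with a small treatment effect.
outcome = rng.normal(loc=3.0, scale=0.5, size=n_students) + 0.15 * condition

# Because assignment was random, comparing group means estimates the
# causal effect of the intervention.
treated = outcome[condition == 1]
control = outcome[condition == 0]
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"Treatment mean: {treated.mean():.2f}")
print(f"Control mean:   {control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```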




Pragmatic Measurement: A Brief Overview

Because researchers can’t directly observe students’ thoughts and feelings, they measure these constructs by asking many different questions in surveys. The problem is that completing long surveys takes up valuable time that could otherwise be spent on instruction or other types of enrichment. Teachers may be interested in their students’ motivation, but they may not be able or willing to give up 30 minutes of class time for a survey. On the other hand, well-designed surveys that ask more questions – and take longer to complete – usually provide more accurate information.

The pragmatic measurement approach tries to strike a balance between the brevity of short measures and the precision of longer ones. To accomplish this, pragmatic measurement systematically identifies the items that best represent an overall scale, often selecting those that are most strongly related to outcomes or most sensitive to change. For instance, we created a 10-item scale of students’ expectancy, value, and cost that can be administered to middle school students in under seven minutes. Pragmatic measurement helps make surveys feasible to implement in real-world settings while still supporting valid conclusions.
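As a rough sketch of one way items might be shortlisted, the example below ranks simulated survey items by how strongly they correlate with an outcome and keeps the ten strongest. The data, item names, and correlation-based selection rule are illustrative assumptions, not Motivate Lab’s actual procedure.

```python
# Illustrative sketch of pragmatic item selection on simulated survey data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_students = 500

# Simulated responses to a long, 20-item motivation survey (1-5 Likert scale).
items = pd.DataFrame(
    rng.integers(1, 6, size=(n_students, 20)),
    columns=[f"item_{i + 1}" for i in range(20)],
)

# Simulated outcome (e.g., end-of-year math grade) that depends on a few items.
grade = pd.Series(
    rng.normal(80, 10, size=n_students) + 2 * items["item_3"] + 2 * items["item_7"]
)

# Rank items by the strength of their relationship with the outcome and keep
# the ten strongest to form a shorter, quicker-to-administer scale.
item_outcome_corr = items.corrwith(grade).abs().sort_values(ascending=False)
short_form = item_outcome_corr.head(10).index.tolist()

print("Items retained for the short form:", short_form)
```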



Person-Oriented Approach

Conceptual Overview:

An integrative perspective for studying motivation and engagement

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). Key challenges and potential solutions for studying the complexity of motivation in school: A person-oriented integrative motivational perspective. British Journal of Educational Psychology.


A Brief Overview:

One goal of educational research is to understand which factors are most strongly related to student learning in real-life settings, like a middle school classroom. Decades of research suggest that student learning reflects a combination of personal factors (e.g., motivation, gender, background knowledge) and situational factors (e.g., teaching practices, peer engagement, school safety). To understand the motivational dynamics of classrooms, researchers often use a variable-oriented approach, which focuses on how each individual factor relates to an outcome (e.g., Will I get better grades if I feel like I belong in math class?). In reality, however, student learning is likely a complex combination of many factors working together.

A person-oriented approach tries to account for this complexity by considering how a number of factors work together to influence an outcome. Rather than examining how each factor relates to an outcome on its own, person-oriented approaches compare people who share a common combination of factors, or profile, on an outcome of interest (e.g., Will I get better grades if I feel like I belong in math class, think I can succeed, and have a low level of anxiety? Or is it more important for me to feel like I belong in math class, see value in what I’m learning, and have friends who also like math?). Because person-oriented approaches try to explain how multiple factors work together, they can be a useful tool for understanding the complexities of the real world.
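The toy example below contrasts the two approaches on simulated data: a variable-oriented regression that estimates each factor’s own relationship to grades, and a person-oriented analysis that clusters students into profiles and compares grades across them. The variables, data, and use of k-means clustering are illustrative assumptions, not a description of any specific Motivate Lab analysis.

```python
# Sketch contrasting variable-oriented and person-oriented analyses on
# simulated data. Variable names (belonging, efficacy, anxiety) are illustrative.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_students = 300

# Simulated motivation measures for each student.
motivation = pd.DataFrame({
    "belonging": rng.normal(3.5, 0.8, n_students),
    "efficacy": rng.normal(3.2, 0.9, n_students),
    "anxiety": rng.normal(2.8, 1.0, n_students),
})
grades = (70 + 3 * motivation["belonging"] + 2 * motivation["efficacy"]
          - 2 * motivation["anxiety"] + rng.normal(0, 5, n_students))

# Variable-oriented: how does each factor, on its own, relate to grades?
model = LinearRegression().fit(motivation, grades)
print(dict(zip(motivation.columns, model.coef_.round(2))))

# Person-oriented: group students into motivational profiles, then compare
# average grades across the profiles.
profiles = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(motivation)
print(grades.groupby(profiles).mean().round(1))
```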


Intervention Fidelity: A Brief Overview

Researchers often assume that their programs or interventions will be implemented and received the same way each time. That is rarely the case: the way an intervention is implemented usually differs from the way it was designed. Computers stop working, facilitators go off script, and participants draw different conclusions from the same experience. Any one of these factors has the potential to undermine the effectiveness of an intervention.

The extent to which an intervention or program is implemented and received as intended is known as intervention fidelity (a.k.a. program integrity, implementation fidelity). For example, we designed an intervention to encourage high school students to enroll in more math and science courses by educating parents about the value of taking math and science courses. We intended for parents to receive a brochure about the topic, access a companion website with more information, and have conversations with their children about the relevance of math and science to their current and future lives.

When an intervention or program is implemented, it will be effective to the extent that it instigates two types of processes (see Figure 1). Intervention processes are the core components of the intervention theorized to drive changes in participants (e.g., materials distributed to parents explaining why math and science courses are valuable for high school students). These intervention processes are meant to spark psychological processes – changes within participants, such as shifts in knowledge and attitudes (e.g., parents perceiving more value in math and science courses and therefore having more conversations with their teen about their importance). Psychological processes, in turn, lead to the desired outcome (e.g., students taking more math and science courses in high school).

When an intervention succeeds, assessing fidelity can help program developers, researchers, and practitioners identify which aspects of the program were necessary for its outcomes (e.g., whether providing a brochure, a website, or both resources to parents is effective). When an intervention falls short, assessing fidelity can help researchers understand why it did not work as intended (e.g., missing information in the brochures, a broken link to the website, or parents being unable to effectively influence their high school students’ decisions). Without examining intervention fidelity, it is difficult to determine whether unfavorable results are due to an ineffective intervention design or incomplete implementation of the program.
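One simple way to quantify fidelity, sketched below on made-up data, is to log whether each core component (e.g., brochure received, website visited, conversation held) actually occurred, then summarize implementation at both the component level and the participant level. The log structure and values are hypothetical, not records from the parent study described above.

```python
# Hypothetical sketch of summarizing intervention fidelity from an
# implementation log. 1 = component happened as intended, 0 = it did not.
import pandas as pd

fidelity_log = pd.DataFrame({
    "received_brochure": [1, 1, 1, 0, 1],
    "visited_website":   [1, 0, 1, 0, 0],
    "had_conversation":  [1, 1, 0, 0, 1],
})

# Component-level fidelity: how often was each core component delivered/received?
print("Fidelity by component:")
print(fidelity_log.mean().round(2))

# Participant-level fidelity: what share of the intended components did each
# family actually experience? This "dose received" can later be related to outcomes.
fidelity_log["fidelity_score"] = fidelity_log.mean(axis=1)
print(fidelity_log["fidelity_score"].round(2))
```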


Mixed Methods Research: A Brief Overview

To better understand complex issues in education, Motivate Lab employs mixed methods research. This involves collecting and analyzing both qualitative data (e.g., focus group responses) and quantitative data (e.g., numeric survey responses). By integrating both types of data, mixed methods research provides more comprehensive insights than either could provide alone. Because there is no single standard way to conduct mixed methods research, below is an example of how Motivate Lab integrates quantitative and qualitative data to better understand belonging uncertainty at a Tennessee community college.

Belonging uncertainty refers to the extent to which students feel unsure whether they belong in their learning environments. Higher levels of uncertainty undermine student learning outcomes. We use quantitative data from student surveys to better understand average levels of students’ belonging uncertainty, its changes over time, its connections to academic outcomes, and whether certain groups of students experience more belonging uncertainty than others.
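The sketch below illustrates that quantitative strand with simulated survey data: average belonging uncertainty, its change across the term, differences between groups, and its association with GPA. All column names and values are hypothetical and are not drawn from the Tennessee data.

```python
# Illustrative sketch of the quantitative strand of a mixed methods study,
# using simulated belonging-uncertainty survey data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_students = 400

survey = pd.DataFrame({
    "first_generation": rng.integers(0, 2, n_students),
    "belonging_uncertainty_week1": rng.normal(3.0, 0.7, n_students),
    "belonging_uncertainty_week10": rng.normal(2.8, 0.7, n_students),
    "term_gpa": rng.normal(2.9, 0.6, n_students),
})

# Average level of belonging uncertainty and how it changes over the term.
print(survey[["belonging_uncertainty_week1",
              "belonging_uncertainty_week10"]].mean().round(2))

# Do certain groups of students report more belonging uncertainty than others?
print(survey.groupby("first_generation")["belonging_uncertainty_week1"].mean().round(2))

# Is belonging uncertainty related to academic outcomes?
correlation = survey["belonging_uncertainty_week1"].corr(survey["term_gpa"])
print(f"Correlation with term GPA: {correlation:.2f}")
```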

These are important questions to answer, but they don’t give us the full picture. Qualitative data gives voice to our quantitative findings by capturing students’ own perspectives. For example, conducting focus groups can help us better understand students’ individual experiences and how those might contribute to belonging uncertainty.

In this approach to mixed methods research, quantitative data allows us to establish the problem and understand patterns of belonging uncertainty, whereas qualitative data gives us deeper insight into its causes and potential solutions. If we exclusively focus on qualitative or quantitative data, we may miss part of the larger picture. By integrating them, we can develop more nuanced understandings of issues in education and how to address them.


Qualitative Coding: A Brief Overview

Qualitative coding is the process of identifying themes in essay responses, interviews, and other qualitative data sources. While qualitative data on its own may give a researcher more in-depth insights into why a particular phenomenon is happening (see Mixed Methods section), these insights rely on the individual researcher’s interpretation of the data. In the same way that two people can walk away from the same conversation with very different understandings of what was said, two researchers can read the same student essay and interpret it in vastly different ways. Although there are a multitude of approaches to qualitative coding, the purpose is to make the interpretation of text and other non-numeric data systematic, transparent, and replicable.

Put more simply, qualitative coding pushes researchers to come up with explicit rules for interpreting their data. At Motivate Lab, we accomplish this through a rigorous, iterative process of developing a “coding scheme” to categorize qualitative data using both psychological theory and practically informed insights. While this work is time-intensive, qualitative coding allows us not only to interpret our data with greater confidence, but also to use conclusions from the coding to inform future work.


For instance, if we ask students to write about how math is related to their real lives, qualitative coding allows us to interpret their responses more objectively. We might have coding rules for what constitutes a connection between math and real life, what criteria an essay needs to meet to be considered a good connection, and what type of connection the student made. Through the consistent application of these rules, we are better equipped to understand how students think math relates to their lives. By coding qualitative data in a systematic and rigorous way, we can be confident that the information we glean from it is more objective and actionable.
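One common way to check that coding rules are being applied consistently, sketched below with made-up codes, is to have two researchers code the same responses independently and compute a chance-corrected agreement statistic such as Cohen’s kappa. The codes and the use of kappa here are illustrative assumptions, not a description of Motivate Lab’s exact workflow.

```python
# Toy sketch of checking the consistency of a qualitative coding scheme:
# two raters independently code the same essays, then we quantify agreement.
from sklearn.metrics import cohen_kappa_score

# Codes for ten hypothetical essays, labeling the type of real-life
# connection each student made to math.
rater_a = ["personal", "career", "none", "personal", "career",
           "personal", "none", "career", "personal", "none"]
rater_b = ["personal", "career", "none", "career", "career",
           "personal", "none", "career", "personal", "personal"]

# Cohen's kappa corrects raw percent agreement for agreement expected by
# chance; values near 1 suggest the coding rules are applied consistently.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Inter-rater agreement (Cohen's kappa): {kappa:.2f}")
```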