Double responding
A relatively persistent phenomenon observed in decision-making tasks is the occurrence of double responses, where participants, after making an initial response, rapidly make a second response indicating the alternative option. For example, in a classic lexical decision task, participants are asked to judge whether a presented string of letters is a word or a non-word. A double response might occur if a participant first presses the key corresponding to "word" but immediately follows up with a press of the key corresponding to "non-word". In most experiments, double responses are not of primary interest and are typically excluded or not recorded at all.
However, Evans et al. (2020) argued that double responses might provide additional information about the underlying decision process. They proposed modifications to a selection of evidence accumulation models (EAMs) to account for this phenomenon. For most of the models examined by Evans et al. (2020), the analytic likelihood is unknown, requiring the authors to use extensive simulation-based estimation, where the likelihood is approximated via a kernel density estimate on simulated data.
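As a point of reference for this approach, here is a minimal sketch of a KDE-based likelihood approximation. The simulator below is a generic placeholder (an inverse-Gaussian response-time generator), not one of the double-response EAMs from Evans et al. (2020), and the function names are hypothetical:

import numpy as np
from scipy.stats import gaussian_kde

def simulate_rts(params, n_sims=10_000, rng=None):
    # Placeholder simulator: substitute the double-response EAM of interest here.
    rng = np.random.default_rng() if rng is None else rng
    drift, ndt = params
    return ndt + rng.wald(1.0 / max(drift, 1e-3), 1.0, size=n_sims)

def approx_log_likelihood(params, observed_rts):
    # Simulate many trials, smooth them with a kernel density estimate,
    # and evaluate the observed response times under the resulting density.
    simulated = simulate_rts(params)
    kde = gaussian_kde(simulated)
    densities = np.clip(kde(observed_rts), 1e-12, None)
    return np.log(densities).sum()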
In this project, you could select a specific model of double responses proposed by Evans et al. (2020) and implement it in BayesFlow. Following this, you may reproduce the analysis using data from Dutilh et al. (2009). Alternatively, you can implement two versions of one model, one that accounts for double responses and one that does not, and investigate under which circumstances double responses carry additional information about the model parameters (e.g., by checking whether including them increases posterior contraction).
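One common way to quantify posterior contraction is one minus the ratio of posterior to prior variance. A minimal NumPy sketch, assuming you have prior and posterior draws arranged as draws by parameters:

import numpy as np

def posterior_contraction(prior_draws, posterior_draws):
    # 1 - Var(posterior) / Var(prior), computed per parameter.
    # Values near 1 indicate that the data strongly constrain the parameter.
    return 1.0 - np.var(posterior_draws, axis=0) / np.var(prior_draws, axis=0)

# Hypothetical comparison between the two model versions:
# contraction_with = posterior_contraction(prior_draws, posterior_draws_with_dr)
# contraction_without = posterior_contraction(prior_draws, posterior_draws_without_dr)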
Download the data from Dutilh et al. (2009), as reported by Evans et al. (2020), from the OSF repository:
SWIFT model
Cognitive models are widely used in decision-making research, where participants respond to stimuli by making choices. However, their applications extend far beyond this domain (Rasanan et al., 2024).
One area where cognitive models have advanced our understanding is visual processing. These models predict how gaze direction is driven both by the observer's cognitive state and by their environment. However, eye movement behavior is complex, and many models produce intractable likelihoods, making amortized inference a promising approach.
A notable example is SWIFT, a dynamic model of eye movement control during reading. The original version (Engbert et al., 2005) is challenging to fit, but a simplified variant (Engbert & Rabe, 2024) allows for more efficient likelihood calculation, making SWIFT an excellent starting point for implementing models of eye movement behavior with BayesFlow.
In this project, you can explore amortized inference for the SWIFT model using the simplified version from Engbert & Rabe (2024). The OSF repository of the tutorial paper provides relevant datasets, including individual participant data and a language corpus. For instance, you can download data from one participant from https://osf.io/teyd4 and load it into Python:
import pandas as pd

# Read the fixation sequence of one participant (tab-separated).
df = pd.read_csv(
    "fixseqin_PB2expVP10.dat",
    delimiter="\t",
    names=["sentID", "wordID", "duration"],
    usecols=[0, 1, 3],
)
where the sentID and wordID columns indicate the sentence id and word id that were fixated, and the duration column represents the fixation duration (in ms).
To obtain word frequencies, download the corpus data from https://osf.io/nj2mf and load it into Python:
import pandas as pd

# Read the corpus (tab-separated), keeping only the first five columns.
corpus = pd.read_csv("Rcorpus_PB2.dat", delimiter="\t", usecols=range(5))
The relevant columns are sentID and wordID, which allow you to match the fixated word to a word from the corpus, and freq, which gives the word frequency.
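Assuming the column names match as loaded above, the fixation data and the corpus can then be combined with a simple merge:

# Attach each fixated word's corpus frequency by matching on sentID and wordID.
df_with_freq = df.merge(corpus[["sentID", "wordID", "freq"]],
                        on=["sentID", "wordID"], how="left")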