This repository contains the code for the cannon task, which is used to examine adaptive learning under uncertainty and in changing environments.
The confetti-cannon task is the official task of the Research Unit 5389 on "Contextual influences on dynamic belief updating in volatile environments: Basic mechanisms and clinical implications". The research unit is a collaboration between University of Hamburg, UKE Hamburg, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Universität Jena.
The task was also used in:
- Nassar, M.R., Bruckner, R., & Frank, M.J. (2019). Statistical context dictates the relationship between feedback-related EEG signals and learning. eLife, 8:e46975.
The repository is frequently updated in the context of additional studies that use the task. For the version used in the paper, check out the branch "franklabEEG" (commit a0b782e).
For the research unit and related projects, we have different task versions.
The common version is used across most projects and allows us to compare adaptive learning under uncertainty across different populations and with different methods. Currently, the task is compatible with
- Pupillometry
- fMRI (PI: Lars Schwabe)
- EEG (PI: Anja Riesel)
- MEG (PI: Tobias Donner)
To run the common version, we recommend using a local config file. Because this version is shared across many labs, each with its own local settings, every lab maintains a config file of its own. To create yours, use al_commonConfettiConfigExample as a template and update it with your settings. Please store it outside the git path so that it is not pushed to GitHub and remains a local file for your own purposes.
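As a rough sketch of this idea (the actual fields are defined in al_commonConfettiConfigExample; the field names below are hypothetical placeholders, not the real ones), a local config script might look like this:

```matlab
% al_myLabConfig.m -- hypothetical local config sketch.
% Copy al_commonConfettiConfigExample and adapt its real fields instead;
% keep this file outside the git path so it is never pushed to GitHub.
config = struct();
config.dataPath = 'C:\data\confetti';  % hypothetical: where task data are saved
config.screenNumber = 1;               % hypothetical: which monitor the task uses
config.eyeTracker = false;             % hypothetical: toggle pupillometry hardware
```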
There is also a more specific EEG version focusing on social versus monetary feedback. This version is still under construction; a few updates will be implemented in the next couple of weeks. PIs: Anja Riesel and Tania Lincoln.
To run this version, you should also use a config file. Start from the example al_confettiEEGConfigExample and follow the same procedure as explained above.
The asymmetric-reward version provides different kinds of feedback to induce a reward bias. Work in progress. PI: Jan Gläscher.
Run this version using RunAsymRewardVersion. If it is eventually used, we will also create a separate config file.
This version examines the role of different degrees of outcome variability (risk) and of working memory. Also work in progress. PI: Jan Gläscher.
Run this version using RunVarianceWorkingMemoryVersion. If it is eventually used, we will also create a separate config file.
This is a preliminary version that uses a helicopter instead of a cannon and might be used in the future to examine OCD.
Run this version using RunLeipzigVersion. If it is eventually used, we will also create a separate config file.
We combined the cannon task with a sleep-deprivation manipulation in Magdeburg.
Run the version using RunSleepVersion.
We are currently running an fMRI study with the cannon task in Magdeburg.
Run the version using RunMagdeburgFMRIVersion.
This is the very first version of the cannon task, developed in Dresden and at Brown University.
Run the version using RunDresdenVersion.
Two classes implement crucial unit tests:
- al_unittests: tests of important task functions
- al_testTaskDataMain: tests of the class for outcomes and data storage
We currently have two integration tests:
- al_commonConfettiIntegrationTest
- al_sleepIntegrationTest
All research-unit versions will ultimately be covered by tests. You can also run all unit and integration tests at once using al_runAllTests.
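Assuming the test classes above are built on MATLAB's unit-testing framework (matlab.unittest), they can also be run individually with the built-in runtests function, for example:

```matlab
% Run a single test class (must be on the MATLAB path) and
% summarize the results; runtests returns a TestResult array.
results = runtests('al_unittests');
table(results)  % display per-test pass/fail details
```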
Since the research unit aims to compare results across different experiments and task versions, it is crucial that key task parameters are shared. We will document these parameter settings here in the next couple of days.
- Rasmus Bruckner (Freie Universität Berlin, Universität Hamburg)
Over the years, several people have contributed to the task:
Matt Nassar, Ben Eppinger, Lennart Wittkuhn, Owen Parsons, Jan Gläscher, and the research-unit team.
This project is licensed under the MIT License - see the LICENSE file for details.