27 changes: 20 additions & 7 deletions README.md
@@ -11,25 +11,38 @@ conda activate behapy
pip install -e .
```

## Examples
## Preprocessing

There is a MedPC event reading example in the examples subfolder. This example by default assumes MedPC data files are in the same folder as the notebook file.
To use behapy, the proprietary TDT-format source data first needs to be converted to a BIDS-like raw data format. This assumes session_map.csv is in the sourcedata_root folder:

`tdt2bids [session_fn] [experiment_fn] [bidsroot]`

## Preprocessing
Alternatively, you can specify the source data directory with --sourcedata_root (note that this path is relative to session_fn, not the current directory):

Convert TDT source data to BIDS-like raw data format:
`tdt2bids [--sourcedata_root sourcedata_root] [session_fn] [experiment_fn] [bidsroot]`
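The relative-path behaviour is easiest to see with a concrete example (all file and folder names here are hypothetical placeholders): if session_fn is `sourcedata/session_map.csv` and the TDT blocks sit alongside the session map, the source data root relative to the session file is `.`:

```shell
# Hypothetical paths: substitute your own session map and experiment spec.
tdt2bids --sourcedata_root . sourcedata/session_map.csv experiment.csv .
```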

`tdt2bids [session_fn] [experiment_fn] [bidsroot]`
This will output raw data files in
`[bidsroot]/rawdata`

Open the preprocessing dashboard and confirm rejected regions of the recording:
Next, open the preprocessing dashboard and identify regions of the recording to exclude from analysis:

`ppd [bidsroot]`

Write the preprocessed data to the `derivatives/preprocess` tree:
The preprocessing dashboard will open in a browser window. Select a recording by index to bring up an interactive Bokeh plot of raw and normalised fluorescence, then use the box-select tool to exclude time points from analysis.

![example interactive dashboard](data/behapy-ppd.png)

Finally, write the preprocessed data to the `[bidsroot]/derivatives/preprocess` tree:

`preprocess [bidsroot]`
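Putting the steps together, a minimal end-to-end run from the BIDS root might look like the following (the session map and experiment spec names are hypothetical placeholders for your own files):

```shell
# Convert TDT source data to BIDS-like raw data under ./rawdata
tdt2bids session_map.csv experiment.csv .

# Open the dashboard and mark regions to exclude from analysis
ppd .

# Write preprocessed data to ./derivatives/preprocess
preprocess .
```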

## Demo Notebooks

A sample annotated analysis pipeline is available in `examples/analyse.ipynb`.

There is a MedPC event reading example in `examples/showevents.ipynb`.

## Contributors

behapy is maintained by [Chris Nolan](https://github.com/crnolan).
Binary file added data/behapy-ppd.png
232 changes: 232 additions & 0 deletions examples/analyse.ipynb
@@ -0,0 +1,232 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## behapy analysis pipeline\n",
"\n",
"This notebook walks through a basic behapy analysis pipeline using demo data that has been preprocessed with tdt2bids. We perform a linear regression to identify fluoresence signal response to various event types. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load modules"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"import logging\n",
"from pathlib import Path\n",
"import pandas as pd\n",
"import numpy as np\n",
"from behapy.utils import load_preprocessed_experiment\n",
"from behapy.events import build_design_matrix, regress, find_events\n",
"import statsmodels.api as sm\n",
"import seaborn as sns\n",
"sns.set_theme()\n",
"\n",
"logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load example data\n",
"\n",
"This pipeline is designed to be dropped into your experiment folder with structure:\n",
"```\n",
"└── bids_root/\n",
" ├── derivatives\n",
" ├── etc\n",
" ├── rawdata\n",
" ├── scripts/\n",
" │ └── analyse.py\n",
" └── sourcedata\n",
"```\n",
"\n",
"It will also run in-place as-is with demo data from Github."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"BIDSROOT = Path('..')\n",
"pre = load_preprocessed_experiment(BIDSROOT)\n",
"dff_id = ['subject', 'session', 'task', 'run', 'label']\n",
"dff_recordings = pre.recordings.loc[:, dff_id].drop_duplicates()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"# z-score the dff\n",
"dff = pre.dff.copy()\n",
"dff['dff'] = dff.dff.groupby(dff_id, group_keys=False).apply(lambda df: (df - df.mean()) / df.std())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"# Map IPSI and CONTRA\n",
"def _map_ipsi_contra(row):\n",
" r = row.iloc[0]\n",
" if r.label == 'RDMS':\n",
" events = pre.events.loc[(r.subject, r.session, r.task, r.run, r.label)].replace({'rlp': 'ipsilp', 'llp': 'contralp'})\n",
" elif r.label == 'LDMS':\n",
" events = pre.events.loc[(r.subject, r.session, r.task, r.run, r.label)].replace({'rlp': 'contralp', 'llp': 'ipsilp'})\n",
" else:\n",
" raise ValueError(f'Unknown label {r.label}')\n",
" return events.sort_index(level='onset')\n",
"\n",
"\n",
"events = dff_recordings.groupby(dff_id).apply(_map_ipsi_contra)\n",
"events"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"# Map events to individual recordings\n",
"def _get_nonevent(events, sub_events):\n",
" nonevent = events.loc[:, ['duration']].merge(sub_events.loc[:, ['latency']], how='left', left_index=True, right_index=True, indicator=True)\n",
" return nonevent.loc[nonevent._merge == 'left_only', ['duration', 'latency']]\n",
"\n",
"\n",
"REWmag = find_events(events, 'mag', ['pel', 'suc'])\n",
"NOREWmag = _get_nonevent(events.loc[events.event_id == 'mag', :], REWmag)\n",
"first_ipsilp = find_events(events, 'ipsilp', ['ipsilp', 'contralp', 'mag'], allow_exact_matches=False)\n",
"first_ipsilp = first_ipsilp.loc[first_ipsilp.latency < pd.to_timedelta('2s')]\n",
"# first_ipsilp = first_ipsilp.loc[first_ipsilp.latency < 2]\n",
"notfirst_ipsilp = _get_nonevent(events.loc[events.event_id == 'ipsilp', :], first_ipsilp)\n",
"first_contralp = find_events(events, 'contralp', ['ipsilp', 'contralp', 'mag'], allow_exact_matches=False)\n",
"first_contralp = first_contralp.loc[first_contralp.latency < pd.to_timedelta('2s')]\n",
"# first_contralp = first_contralp.loc[first_contralp.latency < 2]\n",
"notfirst_contralp = _get_nonevent(events.loc[events.event_id == 'contralp', :], first_contralp)\n",
"new_events = pd.concat([REWmag, NOREWmag, first_ipsilp, notfirst_ipsilp, first_contralp, notfirst_contralp],\n",
" keys=['REWmag', 'NOREWmag', 'first_ipsilp', 'notfirst_ipsilp', 'first_contralp', 'notfirst_contralp'],\n",
" names=['event_id'])\n",
"new_events = new_events.reset_index('event_id').loc[:, ['duration', 'event_id']]\n",
"events = pd.concat([events, new_events]).sort_index()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"plot_meta = {'Magazine': ['REWmag', 'NOREWmag']}\n",
"# plot_meta = {'Magazine': ['REWmag', 'NOREWmag'],\n",
"# 'Reward': ['pel', 'suc'],\n",
"# 'First press': ['first_ipsilp', 'first_contralp'],\n",
"# 'Other press': ['notfirst_ipsilp', 'notfirst_contralp']}\n",
"event_ids_of_interest = sum(plot_meta.values(), [])\n",
"events_of_interest = events.loc[events.event_id.isin(event_ids_of_interest), :]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"def _build_design_matrix(row):\n",
" r = row.iloc[0]\n",
" return build_design_matrix(\n",
" dff.loc[(r.subject, r.session, r.task, r.run, r.label), :],\n",
" events_of_interest.loc[(r.subject, r.session, r.task, r.run, r.label), :],\n",
" (-1, 2))\n",
"design_matrix = dff_recordings.groupby(dff_id).apply(_build_design_matrix).fillna(False).astype(bool)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"dm_filt filters on dff task"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"idx = pd.IndexSlice\n",
"dm_filt = design_matrix.loc[idx[:, :, ['FI15', 'RR5', 'RR10'], :, :, :], :].sort_index()\n",
"\n",
"\n",
"def _regress(df):\n",
" return regress(df, dff.loc[df.index, 'dff'], min_events=25)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"r1 = dm_filt.loc[:, idx[sum(plot_meta.values(), []), :]].groupby(level=('subject', 'task'), group_keys=True).apply(_regress)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%\n",
"# s1 = r1.stack(0).stack()\n",
"s1 = r1.copy()\n",
"s1.name = 'beta'\n",
"s1 = s1.reset_index()\n",
"s1['event_type'] = s1.event.map({v: k for k, l in plot_meta.items() for v in l})\n",
"sns.relplot(data=s1, x='offset', y='beta', hue='event', row='event_type',\n",
" col='task',\n",
" kind='line', hue_order=sum(plot_meta.values(), []), aspect=2)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "behapy",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
11 changes: 5 additions & 6 deletions examples/showevents.ipynb
@@ -6,7 +6,9 @@
"source": [
"# MedPC event reading example\n",
"\n",
"Demonstration of loading a set of subjects data from a MedPC format and plotting by subject and event."
"Demonstration of loading a set of subjects data from a MedPC format and plotting by subject and event.\n",
"\n",
"This example assumes MedPC data files are in the same folder as the notebook file."
]
},
{
@@ -127,11 +129,8 @@
}
],
"metadata": {
"interpreter": {
"hash": "98bbee547cde878ca2d72f8442bd9530f0cc78c2a794beb5e8b4d2804654d41b"
},
"kernelspec": {
"display_name": "Python 3.9.12 ('behapy')",
"display_name": "behapy",
"language": "python",
"name": "python3"
},
@@ -145,7 +144,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.12.7"
},
"orig_nbformat": 4
},
Empty file added poisson_test.py
Empty file.