This Python package streamlines the processing and analysis of behavioral data (primarily voice and speech, but also text and video) and promotes open-science best practices through robust, reproducible pipelines and utilities.
```python
from senselab.audio.data_structures import Audio
from senselab.audio.tasks.preprocessing import resample_audios
from senselab.audio.tasks.features_extraction import extract_features_from_audios
from senselab.audio.tasks.speech_to_text import transcribe_audios

audio = Audio(filepath='path_to_audio_file.wav')
print(audio.sampling_rate)
# ➡️ 44100

[resampled_audio] = resample_audios([audio], resample_rate=16000)
print(resampled_audio.sampling_rate)
# ➡️ 16000

audio_features = extract_features_from_audios([audio])
print(audio_features[0].keys())
# ➡️ dict_keys(['opensmile', 'praat_parselmouth', 'torchaudio', 'torchaudio_squim', ...])

transcript = transcribe_audios([audio])
print(transcript)
# ➡️ "The quick brown fox jumps over the lazy dog."
```

For more detailed information, check out our Documentation and our Tutorials.
💡 Tip: Many tutorials include Google Colab badges, so you can try them instantly without installing anything on your local machine.
- Modular design: Easily integrate or use standalone transformations for flexible data manipulation.
- Pre-built pipelines: Access pre-configured pipelines to reduce setup time and effort.
- Reproducibility: Ensure consistent and verifiable results with fixed seeds and version-controlled steps.
- Easy integration: Seamlessly fit into existing workflows with minimal configuration.
- Extensible: Modify and contribute custom transformations and pipelines to meet specific research needs.
- Comprehensive documentation: Detailed guides, examples, and documentation for all features and modules.
- Performance optimized: Efficiently process large datasets with optimized code and algorithms.
- Interactive examples: Jupyter notebooks provide practical examples for deriving insights from real-world datasets.
- senselab AI: Interact with your data through an AI-based chatbot. The AI agent generates and runs senselab-based code for you, making exploration easier and giving you both the results and the code used to produce them (perfect for quick experiments or for users who prefer not to code).
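As a minimal illustration of the fixed-seed idea behind the reproducibility point above (plain Python's `random` module here; `draw_with_seed` is a hypothetical helper for illustration, not part of senselab):

```python
import random

def draw_with_seed(seed: int, n: int = 3) -> list:
    """Hypothetical helper: seed an isolated generator, then draw n numbers."""
    rng = random.Random(seed)  # isolated generator; global random state untouched
    return [rng.random() for _ in range(n)]

# With a fixed seed, repeated runs produce identical, verifiable results.
assert draw_with_seed(42) == draw_with_seed(42)
```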
On macOS, this package requires an ARM64 (Apple Silicon) architecture, since PyTorch 2.2.2+ dropped support for x86-64 on macOS.
❌ Unsupported systems include:
- macOS (Intel x86-64)
- Other platforms where dependencies are unavailable
To check your system compatibility, please run:

```sh
python -c "import platform; print(platform.machine())"
```

If the output is:

- `arm64` → ✅ Your system is compatible.
- `x86_64` → ❌ Your system is not supported.
If you attempt to install this package on an unsupported system, the installation or execution will fail.
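The check above can also be scripted, for example in a setup script (a sketch; `macos_is_supported` is our name, not a senselab function, and it only encodes the macOS rule — on Linux, x86_64 is fine):

```python
import platform

SUPPORTED_MACOS_ARCHS = {"arm64"}  # Intel (x86_64) Macs are not supported

def macos_is_supported(machine=None):
    """Return True when a macOS machine architecture can run senselab."""
    machine = machine or platform.machine()
    return machine.lower() in SUPPORTED_MACOS_ARCHS

print(macos_is_supported("arm64"))   # ➡️ True
print(macos_is_supported("x86_64"))  # ➡️ False
```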
- FFmpeg is required by some audio and video dependencies (e.g., torchaudio). Please make sure you have FFmpeg properly installed on your machine before installing and using senselab (see here for detailed platform-dependent instructions).
- CUDA libraries matching the CUDA version expected by the PyTorch wheels are required for GPU use (e.g., the latest PyTorch 2.8 expects CUDA 12.8). To install them with conda, run:

  ```sh
  conda config --add channels nvidia
  conda install -y nvidia/label/cuda-12.8.1::cuda-libraries-dev
  ```
- Docker is required and must be running for some video models (e.g., MediaPipe-based estimators). Please follow the official installation instructions for your platform: Install Docker.
- Some functionalities rely on Hugging Face models, and an increasing number of models require authentication and signed license agreements. Instructions for generating a Hugging Face access token can be found here: https://huggingface.co/docs/hub/security-tokens
- You can provide your Hugging Face token either by exporting it in your shell:

  ```sh
  export HF_TOKEN=your_token_here
  ```

  or by adding it to your `.env` file (see `.env.example` for reference).
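As a sketch of how such a token is typically picked up at runtime (a plain environment-variable lookup; senselab's internal handling may differ, and `get_hf_token` is our name for illustration):

```python
import os

def get_hf_token():
    """Read the Hugging Face token from the environment, if set."""
    return os.environ.get("HF_TOKEN")

token = get_hf_token()
if token is None:
    print("HF_TOKEN not set: gated Hugging Face models will be unavailable")
```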
Install this package via:

```sh
pip install 'senselab[all]'
```

Or get the newest development version via:

```sh
pip install 'git+https://github.com/sensein/senselab.git#egg=senselab[all]'
```

If you want to install only the audio dependencies, run:

```sh
pip install 'senselab'
```

To install the articulatory, video, text, and senselab-ai extras, run:

```sh
pip install 'senselab[articulatory,video,text,senselab-ai]'
```

To start senselab AI, either install and launch it with poetry:

```sh
poetry install --extras "senselab-ai"
poetry run senselab-ai
```

or with pip:

```sh
pip install 'senselab[senselab-ai]'
senselab-ai
```

Once started, you can open the provided JupyterLab interface, set up the agent, chat with it, and let it create and execute code for you.
For a walkthrough, see: tutorials/senselab-ai/senselab_ai_intro.ipynb.
We welcome contributions from the community! Before contributing, please review our CONTRIBUTING.md.
senselab is mostly supported by the following organizations and initiatives:
- McGovern Institute ICON Fellowship
- NIH Bridge2AI Precision Public Health (OT2OD032720)
- Child Mind Institute
- ReadNet Project
- Chris and Lann Woehrle Psychiatric Fund
senselab builds on the work of many open-source projects. We gratefully acknowledge the developers and maintainers of the following key dependencies:
- PyTorch, Torchvision, Torchaudio: deep learning framework and audio/vision extensions
- Transformers, Datasets, Accelerate, Hugging Face Hub: training and inference utilities plus (pre-)trained models and datasets
- Scikit-learn, UMAP-learn: machine learning utilities
- Matplotlib: visualization toolkit
- Praat-Parselmouth, OpenSMILE, SpeechBrain, SPARC, Pyannote-audio, Coqui-TTS, NVIDIA NeMo, Vocos, Audiomentations, Torch-audiomentations: speech and audio processing tools
- NLTK, Sentence-Transformers, Pylangacq, Jiwer: text and language processing tools
- OpenCV, Ultralytics, MediaPipe, Python-ffmpeg, AV: computer vision and pose estimation
- Pydantic, Iso639, PyCountry, Nest-asyncio: validation and general utilities
- Ipywidgets, IPykernel, Nbformat, Nbss-upload, Notebook-intelligence: Jupyter and notebook-related tools
We are thankful to the open-source community for enabling this project! 🙏
