▶︎ Click or tap the video above to play the demo.
PhD Research Project Proposal
MishMash WP1: AI for Artistic Performances
Focus: Human-AI Interaction in Live Music Performance
BrainJam is a real-time musical performance system exploring human-AI co-performance through brain-computer interfaces. Unlike traditional AI music generation, BrainJam positions AI as a responsive co-performer rather than an autonomous generator, emphasizing performer agency and expressive control.
- How can AI act as a responsive co-performer rather than an autonomous generator?
- Can brain signals serve as expressive control inputs while maintaining performer agency?
- What interaction patterns emerge when humans and AI collaborate musically in real-time?
- Hybrid Adaptive Agent: Combines symbolic logic (for reliability) with optional ML (for personalization)
- Real-time Performance: <30ms total latency for live performance
- Performer-Led Design: The AI never generates autonomously; all outputs modulate performer input (see the sketch after this list)
- BCI as Control: EEG/fNIRS/EMG signals are treated as expressive control inputs, not as targets for semantic decoding
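A minimal sketch of that performer-led constraint, assuming a simple note-dictionary representation; the function and field names here are illustrative, not the project's API:

```python
def modulate(performer_notes, agent_gain, agent_tension):
    """Illustrative performer-led modulation: agent output is always a
    function of performer input; with no input there is nothing to play."""
    if not performer_notes:
        return []  # no performer input -> no autonomous generation
    return [
        {
            "pitch": n["pitch"],  # pitch stays performer-owned
            "velocity": max(0.0, min(1.0, n["velocity"] * agent_gain)),
            "duration": n["duration"] * (1.0 + 0.5 * agent_tension),
        }
        for n in performer_notes
    ]

# The agent can only colour what the performer already played.
print(modulate([{"pitch": 60, "velocity": 0.7, "duration": 0.5}],
               agent_gain=1.2, agent_tension=0.4))
```

Because the agent only rescales velocity and duration of notes the performer already played, silencing the performer silences the system.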
┌──────────────────────────────────────────────────────────────────────────────┐
│ BrainJam Architecture (v0.2) │
├──────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌── Biofeedback ──────┐ ┌── Real-Time Pipeline ──┐ ┌── Output ──────┐ │
│ │ EEG (EEGNet) │ │ EventBus (pub/sub) │ │ Piano Synth │ │
│ │ EMG │──►│ StreamingPipeline │──►│ Guitar Synth │ │
│ │ HRV (IBI→features) │ │ PerformanceState │ │ Beat Generator │ │
│ │ GSR (tonic/phasic) │ └────────┬────────────────┘ │ Spatial Audio │ │
│ └─────────────────────┘ │ └────────────────┘ │
│ ▼ │
│ ┌── Embodied Interaction ─┐ ┌── AI Co-Performer ──────┐ │
│ │ GestureController │ │ HybridAdaptiveAgent │ │
│ │ SpatialAudioMapper │─►│ InteractionModeManager │ │
│ │ InteractionModes │ │ MultimodalFusion │ │
│ └─────────────────────────┘ └──────────────────────────┘ │
│ │ │
│ ┌──────▼───────────┐ │
│ │ Evaluation │ │
│ │ • Synchrony │ │
│ │ • Responsiveness │ │
│ │ • SessionLogger │ │
│ └──────────────────┘ │
└──────────────────────────────────────────────────────────────────────────────┘
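A rough sketch of the per-stage latency tracking implied by the real-time pipeline in the diagram, assuming plain Python callables as stages; the stage names and `run_stages` helper are hypothetical, not the actual `StreamingPipeline` interface:

```python
import time

LATENCY_BUDGET_MS = 30.0  # end-to-end target stated above

def run_stages(sample, stages):
    """Run a control sample through named stage functions, timing each one."""
    timings = {}
    for name, fn in stages:
        t0 = time.perf_counter()
        sample = fn(sample)
        timings[name] = (time.perf_counter() - t0) * 1000.0
    total = sum(timings.values())
    if total > LATENCY_BUDGET_MS:
        print(f"over budget: {total:.1f} ms ({timings})")
    return sample, timings

# Illustrative stages standing in for feature extraction, agent response, synthesis.
stages = [
    ("features", lambda s: {**s, "arousal": 0.6}),
    ("agent",    lambda s: {**s, "note_density": 0.4}),
    ("synth",    lambda s: s),
]
out, timings = run_stages({"eeg": [0.1, 0.2]}, stages)
print(out, timings)
```

In the full system, per-stage timings like these could feed the centralized PerformanceState so the <30ms budget is monitored during a live performance.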
- Hybrid Adaptive Agent 🧠: Three behavioral states (calm/active/responsive), <5ms inference
- Sound Engines 🎵: DDSP Piano, Guitar, Beat Generator
- Agent Memory 💭: GRU-based dialogue learning (JSB Chorales)
- EEG Mapper 🔬: EEGNet architecture, OpenMIIR compatible
- Real-Time Pipeline ⚡: Event-driven streaming with pub/sub EventBus, configurable pipeline stages with latency tracking, centralized PerformanceState with synchrony computation
- Biofeedback Fusion 🫀: Multimodal physiological signal integration (EEG, EMG, HRV, GSR) with staleness detection, three mapping strategies (direct, affective circumplex, adaptive)
- Embodied Interaction 🤲: Gesture classification (sustained/percussive/sweeping/tremolo), 3D spatial audio mapping, four interaction modes (responsive/collaborative/autonomous/mirroring)
- Evaluation Framework 📊: Real-time interaction quality metrics (synchrony, responsiveness, adaptation rate, musical coherence), structured session logging with JSON export
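As one concrete example of these metrics, synchrony between performer and agent can be computed as a windowed Pearson correlation over paired feature streams; this is a common formulation and an assumption here, not necessarily how `InteractionMetrics` implements it:

```python
import numpy as np

def windowed_synchrony(performer, agent, window=20):
    """Pearson correlation between recent performer and agent features,
    mapped from [-1, 1] to [0, 1]. Illustrative formulation only."""
    n = min(len(performer), len(agent), window)
    if n < 2:
        return 0.5  # not enough data -> neutral synchrony
    p = np.asarray(performer[-n:], dtype=float)
    a = np.asarray(agent[-n:], dtype=float)
    if p.std() == 0 or a.std() == 0:
        return 0.5  # correlation undefined for constant signals
    r = float(np.corrcoef(p, a)[0, 1])
    return (r + 1.0) / 2.0

# Example: agent note density loosely tracking performer arousal.
performer_arousal = [0.2, 0.3, 0.5, 0.6, 0.7, 0.6, 0.5]
agent_density     = [0.3, 0.3, 0.4, 0.6, 0.8, 0.7, 0.5]
print(f"synchrony ≈ {windowed_synchrony(performer_arousal, agent_density):.2f}")
```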
# Clone and install
git clone https://github.com/curiousbrutus/brainjam.git
cd brainjam
pip install -r requirements.txt
# Run interactive GUI
streamlit run streamlit_app/app.py

from performance_system.agents import HybridAdaptiveAgent
from performance_system.sound_engines import DDSPPianoSynth, BeatGenerator
from performance_system.realtime import EventBus, StreamingPipeline, PipelineStage
from performance_system.biofeedback import MultimodalFusion, ModalityConfig, HRVProcessor
from performance_system.embodied import GestureController, InteractionModeManager, InteractionMode
from performance_system.evaluation import InteractionMetrics, SessionLogger
# Initialize real-time pipeline
bus = EventBus()
pipeline = StreamingPipeline(event_bus=bus)
agent = HybridAdaptiveAgent()
metrics = InteractionMetrics()
# Configure biofeedback fusion
fusion = MultimodalFusion()
fusion.add_modality(ModalityConfig(name='eeg', weight=1.0, n_features=2))
fusion.add_modality(ModalityConfig(name='hrv', weight=0.8, n_features=1))
# Performance loop with real-time interaction
# signal_stream: an iterable of control-feature dicts (not defined in this snippet)
for controls in signal_stream:
    fusion.update('eeg', {'arousal': controls['arousal'], 'focus': controls['focus']})
    physio_state = fusion.get_fused_state()
    response = agent.respond(controls)
    metrics.record_performer(arousal=physio_state.get('arousal', 0.5), valence=0.5)
    metrics.record_agent(density=response['note_density'], tension=response['harmonic_tension'])

brainjam/
├── performance_system/ # Core system
│ ├── agents/ # Hybrid adaptive agent, memory (GRU)
│ ├── sound_engines/ # DDSP synths, beat generator, parametric
│ ├── mapping_models/ # EEGNet, MLP, linear, expressive mappers
│ ├── feature_shaping/ # Autoencoder, PCA, temporal smoother
│ ├── realtime/ # Event bus, streaming pipeline, performance state
│ ├── biofeedback/ # Multimodal fusion, HRV, GSR, physio mapper
│ ├── embodied/ # Gesture controller, spatial audio, interaction modes
│ └── evaluation/ # Interaction metrics, session logger
├── streamlit_app/ # Interactive GUI (8 pages)
├── examples/ # Usage demos
├── tests/ # Unit tests (70 tests)
├── docs/ # Documentation
│ ├── architecture/ # Technical design
│ └── research/ # Research context, evaluation methods
├── models/ # Model info
└── literature/ # Academic references
Research Focus: Human-AI collaboration in creative contexts
Key Questions:
- How to maintain performer agency with AI assistance?
- Can BCIs enable expressive musical control?
- What makes AI "feel" like a musical partner?
- Performer-Led Systems (Tanaka, 2006): AI responds, never overrides
- Interactive ML (Fiebrink, 2011): Real-time adaptation with user control
- BCMIs (Miranda & Castet, 2014): Brain signals as expressive input
- Hybrid Agent Architecture: Symbolic + ML with guaranteed agency
- Real-time BCI Integration: <30ms latency, graceful fallbacks
- Musical Co-Performance: Learned dialogue patterns from Bach chorales
- Multimodal Biofeedback Pipeline: Unified fusion of EEG, EMG, HRV, and GSR with staleness-aware weighting and affective circumplex mapping (see the circumplex sketch after this list)
- Embodied Interaction Framework: Gesture classification, spatial audio feedback, and four interaction modes for exploring human-AI agency balance
- Research Evaluation Toolkit: Real-time synchrony, responsiveness, adaptation rate metrics with structured session logging
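A minimal sketch of the affective-circumplex mapping strategy referenced above, following Russell's (1980) arousal/valence model; the specific musical targets and ranges are illustrative assumptions rather than the project's calibrated mapping:

```python
def circumplex_to_music(arousal, valence):
    """Map a point on the arousal/valence circumplex (both in [0, 1])
    to coarse musical parameters. Ranges are illustrative."""
    tempo_bpm    = 60.0 + 80.0 * arousal        # calmer -> slower, aroused -> faster
    note_density = 0.2 + 0.7 * arousal          # more events when arousal is high
    brightness   = valence                      # positive valence -> brighter harmony
    tension      = max(0.0, arousal - valence)  # high arousal + low valence -> tense
    return {"tempo_bpm": tempo_bpm, "note_density": note_density,
            "brightness": brightness, "harmonic_tension": tension}

# Relaxed/positive vs. agitated/negative physiological states.
print(circumplex_to_music(arousal=0.3, valence=0.8))
print(circumplex_to_music(arousal=0.9, valence=0.2))
```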
- Fully functional prototype
- Evaluation framework for agency/flow/responsiveness
- Comprehensive documentation and demos
Planned User Studies:
- Agency Assessment (SAM + custom scales)
- Flow State (FSS-2 questionnaire)
- Performance Quality (expert + audience ratings)
- Learning Curve (longitudinal study)
See docs/research/interaction_measures/ for details.
For Researchers: docs/research/ - Ethics, limitations, evaluation
For Developers: docs/architecture/ - Technical design, components
For Users: QUICK_START.md, examples/
Project Status:
- LIMITATIONS.md - Key limitations and appropriate use cases
- IMPROVEMENTS.md - Suggested improvements and development roadmap
- Python 3.9+, NumPy/SciPy, PyTorch (optional)
- Streamlit GUI, scikit-learn
- Performance: <30ms latency, 44.1kHz audio, 10Hz control rate
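For context on those numbers: at a 10Hz control rate each cycle has a 100ms period, so a <30ms processing budget leaves headroom for audio I/O and scheduling jitter. A hedged sketch of such a control loop (the `process` stand-in is hypothetical, not project code):

```python
import time

CONTROL_RATE_HZ = 10.0
PERIOD_S = 1.0 / CONTROL_RATE_HZ  # 100 ms between control updates
BUDGET_S = 0.030                  # <30 ms processing target

def process(controls):
    """Stand-in for feature shaping + agent response + synth parameter update."""
    return {"note_density": 0.5 * controls.get("arousal", 0.5)}

def control_loop(control_source):
    for controls in control_source:
        start = time.perf_counter()
        _ = process(controls)
        elapsed = time.perf_counter() - start
        if elapsed > BUDGET_S:
            print(f"cycle exceeded budget: {elapsed * 1000:.1f} ms")
        # sleep off the remainder of the 100 ms control period
        time.sleep(max(0.0, PERIOD_S - elapsed))

control_loop([{"arousal": 0.4}, {"arousal": 0.6}, {"arousal": 0.8}])
```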
- Hybrid Adaptive Agent with 3 states
- DDSP Piano/Guitar + Beat Generator
- Agent Memory (GRU) + EEG Mapper (EEGNet)
- Interactive GUI + documentation
- Real-time event-driven streaming pipeline (EventBus, StreamingPipeline)
- Multimodal biofeedback fusion (EEG, EMG, HRV, GSR)
- Physiological-to-musical mapping (direct, affective, adaptive strategies)
- Embodied gesture controller (sustained, percussive, sweeping, tremolo)
- Spatial audio mapping from 3D performer position
- Interaction mode manager (responsive, collaborative, autonomous, mirroring)
- Research evaluation metrics (synchrony, responsiveness, adaptation, coherence)
- Session logger with JSON export for post-hoc analysis (see the sketch after this list)
- 70 unit tests across all modules
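A hedged sketch of what a structured session log with JSON export might contain; the event schema and field names here are assumptions for illustration, not the `SessionLogger` format:

```python
import json
import time

events = []

def log_event(event_type, **payload):
    """Append a timestamped interaction event (illustrative schema)."""
    events.append({"t": time.time(), "type": event_type, **payload})

log_event("performer", arousal=0.62, valence=0.48)
log_event("agent", note_density=0.4, harmonic_tension=0.2, mode="responsive")
log_event("metric", synchrony=0.71, responsiveness=0.58)

# Export the whole session as JSON for post-hoc analysis.
print(json.dumps({"session_id": "demo", "events": events}, indent=2))
```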
- User study design
- Model training (JSB Chorales, OpenMIIR)
- Real EEG hardware integration
BCI Music: Tanaka (2006), Miranda & Castet (2014)
Interactive ML: Fiebrink (2011), Lawhern et al. (2018)
Audio Synthesis: Engel et al. (2020), Karplus & Strong (1983)
Physiological Computing: Fairclough (2009), Picard (1997)
Embodied Interaction: Dourish (2001), Leman (2007)
Synchrony & Evaluation: Zamm et al. (2018), Jordà (2005)
Affective Computing: Russell (1980), Eerola & Vuoskoski (2013)
See literature/ for detailed summaries.
Project: BrainJam - AI-Mediated Musical Performance
Purpose: PhD Research Application
eyyub.gvn@gmail.com
Academic research project for PhD application. Contact for usage permissions.
Built with 🧠 + 🎵 + 🤖 for exploring human-AI musical collaboration
