A collection of conversational AI applications built using the CLTL (Computational Lexicology & Terminology Lab) framework. This repository demonstrates the modular, event-driven architecture of the CLTL framework through complete application examples for building communication robots and interactive agents.
Each application in this repository showcases how to compose sophisticated conversational agents from modular CLTL components. The apps share a common architecture with pluggable components for:
- Speech processing: Automatic Speech Recognition (ASR), Voice Activity Detection (VAD)
- Conversational AI: Dialogue management with different implementations
- Multimodal interaction: Text and voice-based interfaces
- Data storage: EMISSOR framework for structured multimodal interaction data
- Flexible deployment: Local Python applications or Docker Compose with distributed messaging
A conversational AI application implementing the classic ELIZA chatbot with modern speech recognition capabilities.
Key Features:
- Pattern-based conversation using the `cltl-eliza` component
- Classic ELIZA-style responses with rule-based matching
- Voice and text input support
- Serves as a simple introduction to the CLTL framework
Use Case: Educational demonstrations, testing the framework, simple rule-based conversations
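To give a flavor of what pattern-based conversation means, here is a minimal sketch of ELIZA-style rule matching. The rule set and function names are illustrative only; they are not the actual `cltl-eliza` API.

```python
import re
import random

# Illustrative ELIZA-style rules: a regex mapped to response templates;
# "{0}" is filled with the first captured group from the user's utterance.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r".*"),
     ["Please tell me more.", "I see. Go on."]),
]

def respond(utterance: str) -> str:
    """Return a response from the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please tell me more."

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```

The rules are tried in order, so the catch-all `.*` pattern at the end guarantees the agent always has something to say.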
A sophisticated conversational AI application powered by Large Language Models (LLMs) including Llama and Qwen.
Key Features:
- Natural, context-aware conversations using the `cltl-llm` component
- Support for multiple LLM backends (Ollama, local GGUF models)
- Configurable system prompts, temperature, and conversation history
- Advanced dialogue capabilities with state-of-the-art language models
Use Case: Advanced conversational agents, research applications, production deployments requiring natural language understanding
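As a rough illustration of what an Ollama backend call involves, the sketch below builds a request for Ollama's `/api/generate` HTTP endpoint with a configurable system prompt and temperature. The helper names are hypothetical and this is not how `cltl-llm` wires its backends; it only shows the shape of the underlying request.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str, system: str, temperature: float) -> dict:
    """Assemble a payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,                           # e.g. "llama3" or "qwen2"
        "prompt": prompt,
        "system": system,                         # configurable system prompt
        "options": {"temperature": temperature},  # sampling temperature
        "stream": False,                          # one complete response
    }

def generate(payload: dict) -> str:
    """POST the payload and return the generated text (needs a running Ollama)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_request("llama3", "Hello!", "You are a helpful robot.", 0.7)
```

Swapping the model name in the payload is all it takes to move between Llama and Qwen when both are pulled into the local Ollama instance.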
Both applications share the same modular, event-driven architecture:
```
┌─────────────────────────────────────────────┐
│           User Interaction Layer            │
│     (Voice Input / Web Chat Interface)      │
└─────────────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────┐
│               Event Bus Layer               │
│     (In-Memory or RabbitMQ Distributed)     │
└─────────────────────────────────────────────┘
                      │
       ┌──────────────┼──────────────┐
       ▼              ▼              ▼
  ┌─────────┐  ┌──────────┐  ┌──────────────┐
  │   ASR   │  │   VAD    │  │Conversational│
  │ (Speech)│  │ (Voice)  │  │    Module    │
  └─────────┘  └──────────┘  └──────────────┘
                                     │
                               ┌─────┴─────┐
                               ▼           ▼
                           ┌────────┐  ┌─────┐
                           │ Eliza  │  │ LLM │
                           └────────┘  └─────┘
```
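The event-bus pattern in the diagram can be sketched in a few lines. This is a minimal in-memory publish/subscribe bus for illustration; the class and topic names are invented here and do not reflect the framework's actual interfaces.

```python
from collections import defaultdict
from typing import Any, Callable

class InMemoryEventBus:
    """Minimal publish/subscribe bus: handlers register per topic, and
    publish() fans an event out to every subscriber of that topic."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        for handler in self._handlers[topic]:
            handler(event)

# Wiring mirrors the diagram: a speech component publishes transcripts,
# the conversational module subscribes and publishes replies.
bus = InMemoryEventBus()
replies = []
bus.subscribe("text_in", lambda e: bus.publish("text_out", f"You said: {e}"))
bus.subscribe("text_out", replies.append)
bus.publish("text_in", "hello")
print(replies)  # ['You said: hello']
```

Because components only share topics, not references to each other, the in-memory bus can be swapped for a distributed one (e.g. RabbitMQ) without changing the components themselves.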
The key difference:
- Eliza App uses pattern-based responses (`cltl-eliza`)
- LLM App uses language models for natural conversations (`cltl-llm`)
Each application can be run in two ways:

Local Python application:

```bash
cd eliza-app/py-app  # or llm-app/py-app
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python app.py
```

Docker Compose deployment:

```bash
cd eliza-app/docker-app  # or llm-app/docker-app
docker-compose up --build
```

See individual app READMEs for detailed setup instructions and configuration options.
- Python: 3.8 to 3.10
- System Libraries: portaudio, libsndfile, ffmpeg (for audio processing)
- Docker: (Optional) For containerized deployment
- LLM Models: (For llm-app only) Ollama or local GGUF model files
See individual app READMEs for complete prerequisites and installation instructions.
```
cltl-apps/
├── eliza-app/          # ELIZA-based conversational app
│   ├── py-app/         # Local Python application
│   ├── docker-app/     # Docker Compose deployment
│   └── README.md       # Detailed documentation
│
├── llm-app/            # LLM-powered conversational app
│   ├── py-app/         # Local Python application
│   ├── docker-app/     # Docker Compose deployment
│   └── README.md       # Detailed documentation
│
└── README.md           # This file
```
Contributions are welcome! Each application follows the coding conventions outlined in its CLAUDE.md file. When contributing:
- Follow Clean Code principles
- Maintain consistency with existing code style
- Update documentation for any new features
- Test both local and Docker deployments
- CLTL Combot Framework - Core framework for building conversational agents
- CLTL LLM Module - LLM integration for the CLTL framework
- EMISSOR Framework - Multimodal interaction data representation
- Leolani Platform - Complete conversational robot platform
Distributed under the MIT License. See individual app LICENSE files for more information.
If you use these applications or the EMISSOR framework in your research, please cite:
@inproceedings{emissor:2021,
    title     = {EMISSOR: A platform for capturing multimodal interactions as Episodic Memories and Interpretations with Situated Scenario-based Ontological References},
    author    = {Selene Baez Santamaria and Thomas Baier and Taewoon Kim and Lea Krause and Jaap Kruijt and Piek Vossen},
    url       = {https://mmsr-workshop.github.io/programme},
    booktitle = {Proceedings of the MMSR workshop "Beyond Language: Multimodal Semantic Representations", IWCS 2021},
    year      = {2021}
}