Turn LLMs into boring, effective semantic processors.
This repository demonstrates architectural patterns for building reliable LLM applications using Apache Camel. Rather than relying on brittle prompt engineering alone, these examples show how to orchestrate interactions to ensure structured outputs, correct routing, and contextual integrity.
📖 Read the companion article: Making LLMs Boring: From Chatbots to Semantic Processors
- Architectural Patterns
- Prerequisites
- Quick Start
- Repository Structure
- How to Run
- Using Adapters
- Recommended Models
- Developer Tips
- Learn More
- Contributing
## Architectural Patterns

- **Generative Parsing**: Constraining LLM output to valid formats (JSON, XML, POJOs) for seamless integration with downstream systems.
- **Semantic Routing**: Directing message flow based on the intent of the user's prompt rather than static headers.
- **Grounded Pipelines**: Injecting retrieved context to ensure the LLM responds based on specific data rather than hallucinating.
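As an illustration of the first pattern, a generative-parsing route can be sketched in Camel YAML DSL. This is a minimal, hypothetical sketch, not one of this repository's examples: the `direct:llm` endpoint and the prompt text are placeholders for whatever model endpoint you configure.

```yaml
# Hypothetical generative-parsing sketch: pipe stdin through an LLM call,
# then fail fast unless the reply parses as JSON.
- route:
    id: classify-sketch
    from:
      uri: "stream:in"
      steps:
        - setBody:
            simple: "Classify this request and answer with JSON only: ${body}"
        - to: "direct:llm"   # placeholder for the configured model endpoint
        - unmarshal:
            json: {}         # rejects non-JSON output before it reaches downstream systems
        - marshal:
            json: {}
        - to: "stream:out"
```

The `unmarshal` step is what makes the route "boring": malformed model output fails the exchange instead of propagating garbage downstream.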
## Prerequisites

Before running the examples, ensure you have the following:
- Java 17 or 21
- Inference Server: Any server exposing OpenAI-compatible endpoints.
You also need the Camel CLI to run these examples; see the Camel Launcher documentation.

**Linux / macOS**

```shell
wget https://repo1.maven.org/maven2/org/apache/camel/camel-launcher/4.17.0/camel-launcher-4.17.0-bin.zip
unzip camel-launcher-*-bin.zip
cd camel-launcher-*/
chmod +x bin/camel.sh
mkdir -p $HOME/.local/bin
ln -sf "$PWD/bin/camel.sh" "$HOME/.local/bin/camel"
```

**Windows**
- Download and unzip the package.
- Add the `camel-launcher/bin` directory to your system `PATH`.

**Verify Installation**
```shell
camel --version
```

## Quick Start

1. Configure your environment:
```shell
export OPENAI_API_KEY=your-api-key
export OPENAI_BASE_URL=http://localhost:11434/v1  # Ollama example
export OPENAI_MODEL=ministral-3:8b
```

Note: If using the real OpenAI API, set `OPENAI_BASE_URL` to `https://api.openai.com/v1`.
2. Run your first example:
```shell
cd generative-parsing/classify-leaf-node
echo "I lost my credit card and need to block it immediately" | camel run --source-dir=./
```

3. See structured output:
```json
{
  "rationale": "The user explicitly states they lost their credit card and requests immediate blocking, which is a critical security action to prevent unauthorized use. This falls under the highest priority category of 'Security_and_Access' due to the urgency and potential fraud risk associated with a lost card.",
  "path": "Security_and_Access > Fraud_and_Disputes > Report_Lost_or_Stolen_Card",
  "confidence": 1.0,
  "status": "ACCEPTED"
}
```

## Repository Structure

The project is organized by pattern. Each directory contains a standalone quickstart with its own README and runnable Camel YAML files.
```
├── generative-parsing/    # Pattern 1: Structured Data Extraction
│   ├── classify-leaf-node/    # Deep taxonomy classification
│   ├── entity-resolution/     # Fuzzy matching to canonical IDs
│   └── pii-redaction/         # Identify and mask PII
├── semantic-routing/      # Pattern 2: Intent-based Routing
│   ├── detect-gaps/           # Compliance gap analysis
│   ├── moderation-policy/     # Content safety filtering
│   └── risk-scoring/          # Quantitative risk assessment
├── grounded-pipelines/    # Pattern 3: Context Injection
│   └── database-query/        # Air-gapped SQL querying
└── adapters/              # Pluggable Input/Output definitions
```
## How to Run

Navigate to a specific pattern directory and follow its `README.md` to run it with the `camel run` command.
**Example: Running the Leaf Node Classification example**
```shell
cd generative-parsing/classify-leaf-node
echo "I noticed a charge from a vendor in London that I never visited." | camel run --source-dir=./
```

**Quiet Mode (No Logging)**
If you want to focus on the output without Camel logs:
```shell
camel run --source-dir=./ --logging-level=OFF
```

## Using Adapters

By default, all examples use the Console Adapter (standard input/output) for simple CLI interactivity.
You can switch the interface to HTTP, Kafka, or File by replacing the adapter route in the `*.camel.yaml` file. See the `adapters/README.md` for detailed instructions.
| Adapter | Use Case | Endpoint |
|---|---|---|
| Console | CLI testing, piped input | `stream:in` / `stream:out` |
| HTTP | REST API integration | `platform-http:/api/...` |
| Kafka | Event-driven streaming | `kafka:topic-name` |
| File | Batch processing | `file:data/inbox` |
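Switching adapters only touches the route's input and output endpoints. As a hedged sketch (the `direct:process` endpoint and the `/api/classify` path are illustrative, not taken from the examples), the same processing pipeline can sit behind either the console or an HTTP adapter:

```yaml
# Console adapter (default): read from stdin, write the result to stdout.
- route:
    from:
      uri: "stream:in"
      steps:
        - to: "direct:process"   # the pattern's actual logic lives behind this endpoint
        - to: "stream:out"

# HTTP adapter: same pipeline, exposed as a REST endpoint instead.
# - route:
#     from:
#       uri: "platform-http:/api/classify"
#       steps:
#         - to: "direct:process"
```

Because the processing logic is reached via an internal endpoint, the business route never changes when the transport does.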
## Recommended Models

These patterns work with any OpenAI-compatible model. For cost-effective local processing, we recommend:
| Model | Size (total / active params) | Notes |
|---|---|---|
| Ministral-3-8B | 8B | Excellent for structured output tasks |
| Qwen3-VL-8B | 8B | Strong reasoning, multilingual |
| Granite-4.0-H-Small | 32B/9B | IBM's enterprise-focused model |
## Developer Tips

Don't just write YAML by hand! Use Kaoto to design your Camel routes visually, or leverage AI coding assistants (Claude, Cursor, Copilot) with prompts like:
```
Create a Camel 4.17 YAML route that monitors a folder for new text files,
sends small files (<5KB) to the camel-openai component for summarization,
and saves the response to an output folder.
```
Convert these examples to Maven/Gradle projects for Quarkus or Spring Boot:

```shell
camel export --runtime=quarkus --directory=./my-project
```

## Learn More

- Camel JBang Guide: Official Documentation
- Testing: How to write tests for Camel JBang
- Kubernetes: Deploying these routes to K8s
- Exporting: Convert these scripts to Maven
## Contributing

Contributions are welcome! Please read the Contributing Guide for details on:
- Setting up your development environment
- Submitting bug reports and feature requests
- Creating new patterns
## License

This project is licensed under the MIT License - see the LICENSE file for details.