A Python-based AI service for generating contextual replies using LangGraph workflows.
- User Summary Workflow: Analyzes user data to generate keyword profiles and embeddings
- Reply Generation Workflow: Three-step process for intelligent reply generation
  - Intent Analysis (GPT-4)
  - Content Discovery (GPT-4)
  - Reply Generation (GPT-3.5)
- Embeddings Workflow: Generates text embeddings with optional preprocessing
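The reply workflow's three steps pass a shared state from node to node, which is the model LangGraph uses. The sketch below shows that data flow in plain Python with a stubbed model call; the function and field names (`call_model`, `intent`, `relevant_feeds`) are illustrative, not the service's actual code, which lives in `app/workflows/reply_generation.py`:

```python
# Illustrative sketch of the three-step reply pipeline as state-passing
# nodes. Names and state fields are hypothetical stand-ins.

def call_model(prompt: str) -> str:
    # Stand-in for an LLM call (GPT-4 or GPT-3.5 in the real service).
    return f"[model output for: {prompt}]"

def analyze_intent(state: dict) -> dict:
    state["intent"] = call_model(f"Classify intent of: {state['cast_text']}")
    return state

def discover_content(state: dict) -> dict:
    # Placeholder filter; the real step uses GPT-4 over available_feeds.
    state["relevant_feeds"] = [f for f in state["available_feeds"] if f.get("text")]
    return state

def generate_reply(state: dict) -> dict:
    state["reply"] = call_model(
        f"Reply to '{state['cast_text']}' given intent {state['intent']}"
    )
    return state

def run_reply_workflow(state: dict) -> dict:
    # LangGraph would express this as a StateGraph; for a linear graph,
    # chaining the node functions gives the same data flow.
    for node in (analyze_intent, discover_content, generate_reply):
        state = node(state)
    return state
```

Calling `run_reply_workflow({"cast_text": "gm", "available_feeds": []})` returns a state dict populated with `intent` and `reply` fields.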
- Install Poetry (package manager):

  ```bash
  curl -sSL https://install.python-poetry.org | python3 -
  ```

- Install dependencies:

  ```bash
  poetry install
  ```

- Create a `.env` file with your OpenAI API key:

  ```
  OPENAI_API_KEY=your_api_key_here
  ```

Run the example script:

```bash
poetry run python example.py
```

This will demonstrate all three workflows with sample data.

Start the API server:

```bash
poetry run uvicorn app.main:app --reload
```

The API will be available at http://localhost:8000.
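Once the server is running, the endpoints can be exercised from Python with nothing but the standard library. This sketch builds a request for `/generate-reply`; the payload values and URL contents are made-up sample data, shaped to match the endpoint schema:

```python
import json
import urllib.request

# Sample payload matching the /generate-reply request schema.
payload = {
    "cast_text": "What do you all think about onchain social apps?",
    "available_feeds": [
        {
            "text": "A post about decentralized social protocols",
            "url": "https://example.com/post/1",  # illustrative URL
            "author": "alice",
            "timestamp": "2024-01-01T00:00:00Z",
        }
    ],
}

req = urllib.request.Request(
    "http://localhost:8000/generate-reply",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server up, send it like so:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```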
POST /user-summary

```json
{
  "user_data": {
    "username": "string",
    "bio": "string",
    "recent_casts": ["string"],
    "interests": ["string"],
    "engagement_stats": {
      "avg_replies": number,
      "avg_likes": number,
      "top_channels": ["string"]
    }
  }
}
```

POST /generate-reply
```json
{
  "cast_text": "string",
  "available_feeds": [
    {
      "text": "string",
      "url": "string",
      "author": "string",
      "timestamp": "string"
    }
  ]
}
```

POST /generate-embeddings
```json
{
  "input_data": {
    "title": "string",
    "content": "string",
    "tags": ["string"]
  }
}
```

Project structure:

```
.
├── app/
│   ├── __init__.py
│   ├── main.py                  # FastAPI application
│   ├── nodes.py                 # LangGraph node implementations
│   ├── prompts.py               # Centralized prompt management
│   └── workflows/               # Workflow implementations
│       ├── __init__.py
│       ├── user_summary.py
│       ├── reply_generation.py
│       └── embeddings.py
├── example.py                   # Example usage script
├── pyproject.toml               # Poetry configuration
└── README.md
```
- Uses Poetry for dependency management
- FastAPI for the REST API
- LangGraph for workflow orchestration
- OpenAI's GPT-4 and GPT-3.5 for different processing steps
- text-embedding-3-small for embeddings generation
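Embeddings such as those from text-embedding-3-small are typically compared with cosine similarity, for instance to match a cast against a user's keyword-profile embedding. A minimal, dependency-free version (the service may well use NumPy or a vector store instead; this is just the underlying formula):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two embedding vectors:
    # 1.0 means identical direction, 0.0 means orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```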
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request