A production-ready customer support auto-reply system built with LangGraph, OpenAI, and Pylon integration.
- Intelligent Auto-Replies: Uses OpenAI GPT-5-nano to generate contextual customer support acknowledgments
- Pylon Integration: Automatically posts draft replies to your Pylon support system
- Production Ready: Comprehensive error handling, retry logic, and logging
- Type Safe: Fully typed graph state using Python's TypedDict
- Well Tested: Complete test suite with integration and unit tests
- Python 3.9+
- OpenAI API key
- Pylon API token (optional, for posting replies)
- UV package manager (recommended) or pip
- Clone and install dependencies:

```bash
git clone <repository-url>
cd langgraph-auto-reply
uv sync
# or with pip: pip install -e .
```

- Set up environment variables:

```bash
cp .env.example .env
# Edit .env with your API keys
```

Required environment variables:
```bash
OPENAI_API_KEY=your_openai_api_key_here
PYLON_API_TOKEN=your_pylon_token_here       # Optional
LANGSMITH_API_KEY=your_langsmith_key_here   # Optional, for tracing
```

Optional configuration:

```bash
PYLON_API_URL=https://api.usepylon.com      # Default
HTTP_TIMEOUT=30                             # Default
MAX_RETRIES=3                               # Default
```

- Start the development server:

```bash
langgraph dev
```

- Open LangGraph Studio and test your workflow visually

Deploy using LangGraph Cloud:

```bash
langgraph deploy --name customer-support-agent
```

The agent expects the following state structure:
```python
{
    "issue_id": "ticket_123",
    "contact_first_name": "John",
    "latest_message_text": "I need help with installation",
    "messages": [],
    "draft": ""
}
```

Response includes:
```python
{
    "issue_id": "ticket_123",
    "contact_first_name": "John",
    "latest_message_text": "I need help with installation",
    "messages": [...],  # Conversation history
    "draft": "Hello John, Thank you for contacting...",
    "error": None,  # or an error message if something failed
}
```
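For local experimentation you can also invoke the compiled graph directly from Python. A minimal sketch, assuming the compiled graph is exported as `graph` from `src/agent/graph.py` (adjust the import to match your package layout):

```python
from agent.graph import graph  # compiled LangGraph workflow (assumed export name)

state = {
    "issue_id": "ticket_123",
    "contact_first_name": "John",
    "latest_message_text": "I need help with installation",
    "messages": [],
    "draft": "",
}

# Compiled LangGraph graphs expose invoke()/ainvoke(); use the variant
# that matches how the nodes are implemented (sync vs. async).
result = graph.invoke(state)

print(result["draft"])       # generated acknowledgment
print(result.get("error"))   # None on success
```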
The system implements a simple two-node workflow (a minimal construction sketch follows the diagram below):

- `compose`: Generates an acknowledgment reply using OpenAI
- `post_pylon`: Posts the draft to the Pylon support system
```mermaid
graph LR
    A[Input State] --> B[Compose Reply]
    B --> C[Post to Pylon]
    C --> D[Final State]
```
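The wiring in `src/agent/graph.py` follows the standard LangGraph pattern. A minimal sketch under that assumption; the node bodies below are placeholders, not the project's actual implementations:

```python
from typing import Optional, TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict, total=False):
    issue_id: str
    contact_first_name: str
    latest_message_text: str
    messages: list
    draft: str
    error: Optional[str]


def compose(state: State) -> dict:
    # Placeholder: the real node calls OpenAI to draft an acknowledgment.
    name = state.get("contact_first_name", "there")
    return {"draft": f"Hello {name}, thank you for contacting support."}


def post_pylon(state: State) -> dict:
    # Placeholder: the real node POSTs the draft to the Pylon API.
    return {}


builder = StateGraph(State)
builder.add_node("compose", compose)
builder.add_node("post_pylon", post_pylon)
builder.add_edge(START, "compose")
builder.add_edge("compose", "post_pylon")
builder.add_edge("post_pylon", END)

graph = builder.compile()
```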
Run the complete test suite:

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=src

# Run specific test categories
uv run pytest tests/unit_tests/
uv run pytest tests/integration_tests/
```

```
├── src/agent/
│   ├── __init__.py
│   └── graph.py             # Main LangGraph workflow
├── tests/
│   ├── integration_tests/   # End-to-end workflow tests
│   └── unit_tests/          # Component unit tests
├── pyproject.toml           # Dependencies and configuration
├── langgraph.json           # LangGraph deployment config
└── README.md
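For reference, an end-to-end test in `tests/integration_tests/` might look like the sketch below. The import path and export name are assumptions based on the layout above, not the project's actual test code:

```python
# Illustrative only: tests/integration_tests/test_graph.py
from agent.graph import graph  # assumed export of the compiled workflow


def test_workflow_drafts_reply():
    """End-to-end run; requires OPENAI_API_KEY in the environment."""
    result = graph.invoke(
        {
            "issue_id": "ticket_123",
            "contact_first_name": "John",
            "latest_message_text": "I need help with installation",
            "messages": [],
            "draft": "",
        }
    )
    assert result["draft"], "expected a non-empty acknowledgment draft"
    assert result.get("error") is None
```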
The system is highly configurable through environment variables:
| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | Required | OpenAI API key for GPT model |
| `PYLON_API_TOKEN` | Optional | Pylon support system token |
| `PYLON_API_URL` | `https://api.usepylon.com` | Pylon API endpoint |
| `HTTP_TIMEOUT` | `30` | HTTP request timeout in seconds |
| `MAX_RETRIES` | `3` | Number of retry attempts for failed requests |
| `LANGSMITH_API_KEY` | Optional | For request tracing and monitoring |
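Defaults are applied when the optional variables are unset. A minimal sketch of how such configuration is typically read; the snippet below is illustrative, not the project's actual code:

```python
import os

# Illustrative configuration loading with the defaults from the table above.
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]                           # required
PYLON_API_TOKEN = os.getenv("PYLON_API_TOKEN")                          # optional
PYLON_API_URL = os.getenv("PYLON_API_URL", "https://api.usepylon.com")
HTTP_TIMEOUT = int(os.getenv("HTTP_TIMEOUT", "30"))
MAX_RETRIES = int(os.getenv("MAX_RETRIES", "3"))
```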
- API keys are loaded from environment variables only
- Input validation on all state fields
- Comprehensive error handling prevents data leakage
- HTTP timeouts prevent hanging requests
- Retry logic with exponential backoff (see the sketch below)
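A minimal sketch of the retry-with-exponential-backoff pattern applied to the Pylon call. The `with_retries` helper and `post_draft_to_pylon` function are hypothetical; the project's actual implementation may differ:

```python
import logging
import time

logger = logging.getLogger("agent.graph")


def with_retries(call, max_retries: int = 3, base_delay: float = 1.0):
    """Run `call()` with exponential backoff: 1s, 2s, 4s, ... between attempts."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:  # narrow this to transport errors in real code
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            logger.warning("Attempt %d failed (%s); retrying in %.1fs", attempt + 1, exc, delay)
            time.sleep(delay)


# Usage (hypothetical helper that POSTs the draft to Pylon):
# with_retries(lambda: post_draft_to_pylon(issue_id, draft), max_retries=MAX_RETRIES)
```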
The system includes comprehensive logging:

```python
import logging

# Enable debug logging for the workflow module
logging.getLogger("agent.graph").setLevel(logging.DEBUG)
```

Integration with LangSmith for request tracing (optional):
```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=your_key_here
```

The system gracefully handles:
- Missing or invalid input data
- OpenAI API failures
- Pylon API connectivity issues
- Network timeouts
- Authentication errors
All errors are logged and included in the response state for debugging.
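A minimal sketch of that pattern inside a node, assuming the state shape shown earlier (the actual nodes in `graph.py` may structure this differently):

```python
import logging

logger = logging.getLogger("agent.graph")


def post_pylon(state: dict) -> dict:
    """Post the draft to Pylon; on failure, record the error instead of raising."""
    try:
        # ... call the Pylon API with state["draft"] here ...
        return {"error": None}
    except Exception as exc:
        logger.exception("Posting draft to Pylon failed")
        # Surface the failure in the response state rather than crashing the run.
        return {"error": f"post_pylon failed: {exc}"}
```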
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make changes and add tests
- Run the test suite: `uv run pytest`
- Run linting: `uv run ruff check`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: LangGraph Documentation
- Issues: GitHub Issues
- Community: LangChain Discord
- Initial release with OpenAI GPT-5-nano integration
- Pylon API integration
- Complete test suite
- Production error handling and monitoring