# LangGraph Auto Reply System

A production-ready customer support auto-reply system built with LangGraph, OpenAI, and Pylon integration.

## 🚀 Features

  • Intelligent Auto-Replies: Uses OpenAI GPT-5-nano to generate contextual customer support acknowledgments
  • Pylon Integration: Automatically posts draft replies to your Pylon support system
  • Production Ready: Comprehensive error handling, retry logic, and logging
  • Type Safe: Full TypeScript-style typing with Python TypedDict
  • Well Tested: Complete test suite with integration and unit tests

## 📋 Requirements

  • Python 3.9+
  • OpenAI API key
  • Pylon API token (optional, for posting replies)
  • UV package manager (recommended) or pip

## 🛠️ Installation

1. Clone the repository and install dependencies:

   ```bash
   git clone <repository-url>
   cd langgraph-auto-reply
   uv sync
   # or with pip: pip install -e .
   ```

2. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   ```

Required environment variables:

```bash
OPENAI_API_KEY=your_openai_api_key_here
PYLON_API_TOKEN=your_pylon_token_here      # Optional
LANGSMITH_API_KEY=your_langsmith_key_here  # Optional, for tracing
```

Optional configuration:

```bash
PYLON_API_URL=https://api.usepylon.com  # Default
HTTP_TIMEOUT=30                         # Default
MAX_RETRIES=3                           # Default
```

## 🎯 Usage

### Development with LangGraph Studio

1. Start the development server:

   ```bash
   langgraph dev
   ```

2. Open LangGraph Studio and test your workflow visually.

### Production Deployment

Deploy using LangGraph Cloud:

```bash
langgraph deploy --name customer-support-agent
```

### API Usage

The agent expects the following state structure:

```python
{
    "issue_id": "ticket_123",
    "contact_first_name": "John",
    "latest_message_text": "I need help with installation",
    "messages": [],
    "draft": ""
}
```

The response includes:

```python
{
    "issue_id": "ticket_123",
    "contact_first_name": "John",
    "latest_message_text": "I need help with installation",
    "messages": [...],  # Conversation history
    "draft": "Hello John, Thank you for contacting...",
    "error": None  # or an error message if something failed
}
```
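
The state fields above can be expressed as a Python `TypedDict`, matching the "Type Safe" feature noted earlier. This is an illustrative sketch from the documented fields; the actual class name and exact definition in `src/agent/graph.py` may differ:

```python
from typing import Optional, TypedDict


class SupportState(TypedDict, total=False):
    """Workflow state passed between graph nodes (hypothetical name)."""

    issue_id: str                # Pylon ticket identifier
    contact_first_name: str      # used to personalize the draft
    latest_message_text: str     # most recent customer message
    messages: list               # conversation history
    draft: str                   # generated acknowledgment reply
    error: Optional[str]         # set when a node fails
```

`total=False` keeps every field optional, so partial states (for example, input without `error`) still type-check.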

## 🏗️ Architecture

The system implements a simple two-node workflow:

1. `compose` - Generates an acknowledgment reply using OpenAI
2. `post_pylon` - Posts the draft to the Pylon support system

```mermaid
graph LR
    A[Input State] --> B[Compose Reply]
    B --> C[Post to Pylon]
    C --> D[Final State]
```
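
Conceptually (ignoring the LangGraph plumbing), the two-node pipeline behaves like the sketch below. The node names match the list above, but the bodies are stand-in stubs, not the project's real OpenAI and Pylon calls:

```python
def compose(state: dict) -> dict:
    """Generate an acknowledgment draft (stub; the real node calls OpenAI)."""
    name = state.get("contact_first_name", "there")
    state["draft"] = (
        f"Hello {name}, thank you for contacting support. "
        f"We're looking into: {state['latest_message_text']}"
    )
    return state


def post_pylon(state: dict) -> dict:
    """Post the draft (stub; the real node calls the Pylon API)."""
    if not state.get("draft"):
        state["error"] = "compose produced no draft"
    return state


def run(state: dict) -> dict:
    # Linear two-node pipeline: compose -> post_pylon
    for node in (compose, post_pylon):
        state = node(state)
        if state.get("error"):
            break  # stop on the first failure; the error stays in state
    return state


result = run({
    "issue_id": "ticket_123",
    "contact_first_name": "John",
    "latest_message_text": "I need help with installation",
})
```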

## 🧪 Testing

Run the complete test suite:

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=src

# Run specific test categories
uv run pytest tests/unit_tests/
uv run pytest tests/integration_tests/
```

## 📁 Project Structure

```
├── src/agent/
│   ├── __init__.py
│   └── graph.py           # Main LangGraph workflow
├── tests/
│   ├── integration_tests/ # End-to-end workflow tests
│   └── unit_tests/        # Component unit tests
├── pyproject.toml         # Dependencies and configuration
├── langgraph.json         # LangGraph deployment config
└── README.md
```

## ⚙️ Configuration

The system is configurable through environment variables:

| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | Required | OpenAI API key for the GPT model |
| `PYLON_API_TOKEN` | Optional | Pylon support system token |
| `PYLON_API_URL` | `https://api.usepylon.com` | Pylon API endpoint |
| `HTTP_TIMEOUT` | `30` | HTTP request timeout in seconds |
| `MAX_RETRIES` | `3` | Number of retry attempts for failed requests |
| `LANGSMITH_API_KEY` | Optional | Enables request tracing and monitoring |

## 🔒 Security

  • API keys are loaded from environment variables only
  • Input validation on all state fields
  • Comprehensive error handling prevents data leakage
  • HTTP timeouts prevent hanging requests
  • Retry logic with exponential backoff
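
Retry with exponential backoff typically looks like the sketch below (illustrative, with jittered delays of roughly 1s, 2s, 4s, ...; the project's actual retry parameters may differ):

```python
import random
import time


def with_retries(fn, max_retries=3, base_delay=1.0):
    """Call fn, retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of attempts: surface the error to the caller
            # Delay doubles each attempt, with up to 100 ms of jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```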

## 📊 Monitoring

The system includes comprehensive logging:

```python
import logging

# Enable debug logging for the agent workflow
logging.getLogger("agent.graph").setLevel(logging.DEBUG)
```

Optional integration with LangSmith for request tracing:

```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=your_key_here
```

## 🚨 Error Handling

The system gracefully handles:

  • Missing or invalid input data
  • OpenAI API failures
  • Pylon API connectivity issues
  • Network timeouts
  • Authentication errors

All errors are logged and included in the response state for debugging.
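
A typical way to get both behaviors — logging the error and carrying it in the response state — is to wrap each node. This is an illustrative pattern, not the project's actual implementation:

```python
import logging

logger = logging.getLogger("agent.graph")


def safe_node(fn):
    """Wrap a graph node so failures are logged and recorded in state
    instead of crashing the workflow."""
    def wrapper(state):
        try:
            return fn(state)
        except Exception as exc:
            logger.error("node %s failed: %s", fn.__name__, exc)
            state["error"] = str(exc)
            return state
    return wrapper


@safe_node
def post_pylon(state):
    # Simulate a Pylon connectivity failure
    raise ConnectionError("Pylon API unreachable")


result = post_pylon({"issue_id": "ticket_123"})
```

The workflow keeps running (or stops cleanly), and the caller can inspect `result["error"]` for debugging.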

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make changes and add tests
4. Run the test suite: `uv run pytest`
5. Run linting: `uv run ruff check`
6. Commit changes: `git commit -m 'Add amazing feature'`
7. Push to the branch: `git push origin feature/amazing-feature`
8. Open a Pull Request

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🆘 Support

## 🔄 Changelog

### v1.0.0

  • Initial release with OpenAI GPT-5-nano integration
  • Pylon API integration
  • Complete test suite
  • Production error handling and monitoring
