Add Claude AI integration with tools, agents, and REST API #15
Conversation
Co-authored-by: lippytm <65956507+lippytm@users.noreply.github.com>
Pull request overview
This pull request adds comprehensive Anthropic Claude integration alongside the existing OpenAI support, introducing a unified AI abstraction layer with autonomous agents, REST API endpoints, and a TypeScript frontend client. The PR successfully implements dual AI provider support with chat completions, streaming, template-based generation, and specialized AI agents for Web3 development tasks.
Changes:
- Added Claude AI provider support with unified AIToolsManager interface for OpenAI and Claude
- Implemented REST API endpoints for chat, streaming, template generation, and AI agent execution
- Created TypeScript AI client with support for all backend endpoints
- Added configuration validation for new AI provider settings on both frontend and backend
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 16 comments.
Summary per file:
| File | Description |
|---|---|
| backend/app/settings.py | Added Anthropic API key, Claude model name, and AI provider selection with validation |
| backend/app/ai_tools.py | New AIToolsManager class providing unified interface for OpenAI and Claude with chat, streaming, and template generation |
| backend/app/ai_routes.py | New REST API endpoints for AI operations including chat, streaming, template generation, and agent execution |
| backend/app/ai_agents.py | New Web3AIAgent class and AIToolkit with specialized agents for code analysis, blockchain analysis, and development assistance |
| backend/app/main.py | Integrated AI routes and exposed AI configuration in API info endpoint |
| backend/requirements.txt | Added langchain-anthropic, langchain-core, langchain-community, and anthropic SDK dependencies |
| backend/.env.example | Added example configuration for Anthropic API key, Claude model, and AI provider selection |
| frontend/lib/config.ts | Added validation for Claude model name and AI provider configuration |
| frontend/lib/ai-client.ts | New TypeScript client for all AI endpoints with streaming support |
| frontend/.env.example | Added frontend environment variables for Claude model and AI provider |
| backend/tests/test_config_validation.py | Added tests for Claude model name and AI provider validation |
| backend/tests/test_ai_tools.py | Comprehensive tests for AIToolsManager with both providers |
| backend/tests/test_ai_routes.py | Tests for all AI API endpoints including error cases |
| backend/README.md | Updated documentation with AI endpoints, usage examples, and project structure |
| README.md | Enhanced documentation with comprehensive AI/LLM configuration guide |
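Based on the settings and `.env.example` changes listed above, the new backend environment variables can be sketched as follows (the model name and key format shown here are illustrative; the exact defaults live in `backend/.env.example` in the PR):

```shell
# Hypothetical sketch of the backend AI configuration added by this PR.
# Values are placeholders, not the PR's actual defaults.
ANTHROPIC_API_KEY=sk-ant-...
CLAUDE_MODEL_NAME=claude-3-5-sonnet-latest
AI_PROVIDER=both   # one of: openai | claude | both
```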
```python
class ChatRequest(BaseModel):
    """Chat request model."""

    messages: list[ChatMessage] = Field(..., description="List of chat messages")
```

Missing validation to ensure the messages list is not empty. An empty messages array could cause unexpected behavior or errors in the AI model. Consider adding `min_length=1` to the `Field` constraints so at least one message is required.

Suggested change:

```python
    messages: list[ChatMessage] = Field(
        ..., description="List of chat messages", min_length=1
    )
```
```python
class TemplateRequest(BaseModel):
    """Template generation request."""

    template: str = Field(..., description="Prompt template with variables")
```

Missing validation to ensure the template string is not empty. An empty template could cause unexpected behavior. Consider adding `min_length=1` to the `Field` constraints so a valid template is required.

Suggested change:

```python
    template: str = Field(
        ..., description="Prompt template with variables", min_length=1
    )
```
```typescript
export interface ChatMessage {
  role: string;
```

The `role` field accepts any string value but should be validated to only allow valid message roles. Consider using a union type of string literals (`'user' | 'assistant' | 'system'`) to enforce valid role values and improve type safety.

Suggested change:

```typescript
export type ChatMessageRole = 'user' | 'assistant' | 'system';

export interface ChatMessage {
  role: ChatMessageRole;
```
```python
        response = await ai_tools.generate_with_template(
            template=request.template,
            variables=request.variables,
```

The template string is passed directly from user input to `ChatPromptTemplate.from_template` without sanitization. While the PR mentions security patches for LangChain (0.3.81), consider adding validation to restrict template complexity or implementing a whitelist of allowed templates to prevent potential template injection attacks. At minimum, add rate limiting and consider restricting this endpoint to authenticated users only.
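One way to act on this comment is a small guard applied before the template reaches LangChain. This is a sketch only: the variable whitelist and length cap below are illustrative assumptions, not values from the PR.

```python
# Hypothetical guard for user-supplied prompt templates (not part of the PR).
# Restricts templates to a maximum length and a known set of variable names,
# shrinking the surface for template injection via ChatPromptTemplate.
import re

ALLOWED_VARIABLES = {"code", "language", "question"}  # assumed whitelist
MAX_TEMPLATE_LENGTH = 2000


def validate_template(template: str) -> bool:
    """Return True only if the template is non-empty, within the length cap,
    and references exclusively whitelisted {variable} names."""
    if not template or len(template) > MAX_TEMPLATE_LENGTH:
        return False
    variables = set(re.findall(r"\{([a-zA-Z_][a-zA-Z0-9_]*)\}", template))
    return variables <= ALLOWED_VARIABLES
```

The endpoint could call this before `generate_with_template` and return a 400 on failure; rate limiting and authentication would still be needed on top.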
```python
            verbose=True,
            handle_parsing_errors=True,
```

The `AgentExecutor` is initialized with `verbose=True`, which will log detailed execution information to stdout. This could expose sensitive information (API keys, user inputs, internal reasoning) in production logs. Consider making `verbose` configurable via settings or defaulting to `False` in production environments.
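A minimal way to make verbosity configurable, as the comment suggests, is an environment-driven flag that defaults to off. `AGENT_VERBOSE` is a hypothetical setting name, not one defined in the PR:

```python
# Sketch: gate AgentExecutor verbosity behind an environment flag instead of
# hard-coding verbose=True. The AGENT_VERBOSE name is illustrative.
import os


def agent_verbose() -> bool:
    """Enable verbose agent logging only when explicitly requested; off by default."""
    return os.getenv("AGENT_VERBOSE", "false").lower() in {"1", "true", "yes"}
```

The executor would then be constructed with `verbose=agent_verbose()`, so production deployments stay quiet unless the flag is set.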
```python
class ChatRequest(BaseModel):
    """Chat request model."""

    messages: list[ChatMessage] = Field(..., description="List of chat messages")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")
```

The `provider` field accepts any string value but should be validated against available providers. Consider using a `Literal` type or adding validation to restrict values to `'openai'` or `'claude'` only, preventing invalid provider names from being accepted by the API.
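The `Literal` approach mentioned in the comment can be sketched as below. `ProviderRequestSketch` is an illustrative stand-in, not the PR's model; only the `provider` field is shown:

```python
# Sketch of the suggested fix: constrain `provider` with typing.Literal so
# Pydantic rejects any value other than "openai" or "claude" at validation time.
from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class ProviderRequestSketch(BaseModel):
    """Illustrative model (not the PR's code) demonstrating the constraint."""

    provider: Literal["openai", "claude"] = Field(
        default="claude", description="AI provider (openai/claude)"
    )
```

With this in place, a request carrying `provider="gemini"` fails validation and FastAPI returns a 422 automatically, instead of the invalid name reaching `AIToolsManager`.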
```python
    template: str = Field(..., description="Prompt template with variables")
    variables: dict[str, Any] = Field(..., description="Variables to fill template")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")
```

The `provider` field accepts any string value but should be validated against available providers. Consider using a `Literal` type or adding validation to restrict values to `'openai'` or `'claude'` only, preventing invalid provider names from being accepted by the API.
```typescript
    // Use configured AI provider or default to Claude
    const config = getConfig();
    this.defaultProvider = defaultProvider || (config.NEXT_PUBLIC_AI_PROVIDER === 'openai' ? 'openai' : 'claude');
```

The `defaultProvider` logic doesn't handle the `'both'` configuration properly. When `NEXT_PUBLIC_AI_PROVIDER` is `'both'`, the code defaults to `'claude'`, but it should either select one of the available providers intelligently or throw an error requiring explicit provider selection. Consider documenting this behavior or changing the logic to be more explicit about how `'both'` is handled.

Suggested change:

```typescript
    // Use explicitly provided provider, or derive from configuration when valid
    const config = getConfig();
    if (defaultProvider) {
      this.defaultProvider = defaultProvider;
    } else {
      const configuredProvider = config.NEXT_PUBLIC_AI_PROVIDER;
      if (configuredProvider === 'openai' || configuredProvider === 'claude') {
        this.defaultProvider = configuredProvider;
      } else {
        throw new Error(
          `Invalid NEXT_PUBLIC_AI_PROVIDER "${configuredProvider}". Expected "openai" or "claude". ` +
          'When using a different value (e.g., "both"), you must select a provider explicitly when creating AIClient.'
        );
      }
    }
```
```python
class AgentRequest(BaseModel):
    """Agent request model."""

    input: str = Field(..., description="User input for the agent")
```

Missing validation to ensure the input string is not empty. An empty input could cause unexpected behavior when running the agent. Consider adding `min_length=1` to the `Field` constraints so valid input is required.

Suggested change:

```python
    input: str = Field(..., min_length=1, description="User input for the agent")
```
```python
router = APIRouter(prefix="/api/ai", tags=["AI"])


class ChatMessage(BaseModel):
    """Chat message model."""

    role: str = Field(..., description="Message role (user/assistant/system)")
    content: str = Field(..., description="Message content")


class ChatRequest(BaseModel):
    """Chat request model."""

    messages: list[ChatMessage] = Field(..., description="List of chat messages")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")
    system_prompt: str | None = Field(None, description="Optional system prompt")
    stream: bool = Field(default=False, description="Whether to stream the response")


class ChatResponse(BaseModel):
    """Chat response model."""

    response: str = Field(..., description="AI response")
    provider: str = Field(..., description="Provider used")


class TemplateRequest(BaseModel):
    """Template generation request."""

    template: str = Field(..., description="Prompt template with variables")
    variables: dict[str, Any] = Field(..., description="Variables to fill template")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")


class AgentRequest(BaseModel):
    """Agent request model."""

    input: str = Field(..., description="User input for the agent")
    agent_type: str = Field(
        default="general",
        description="Agent type (general/code_analysis/blockchain_analyst/developer_assistant)",
    )
    provider: str = Field(default="claude", description="AI provider (openai/claude)")
    chat_history: list[dict[str, str]] | None = Field(
        None, description="Optional chat history"
    )


class AgentResponse(BaseModel):
    """Agent response model."""

    output: str = Field(..., description="Agent output")
    intermediate_steps: list | None = Field(None, description="Intermediate reasoning steps")


class ProvidersResponse(BaseModel):
    """Available providers response."""

    providers: list[str] = Field(..., description="List of available providers")


@router.get("/providers", response_model=ProvidersResponse)
async def get_providers():
    """Get available AI providers.

    Returns:
        List of configured AI providers
    """
    providers = ai_tools.get_available_providers()
    return ProvidersResponse(providers=providers)


@router.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    """Send chat messages to AI model.

    Args:
        request: Chat request with messages and settings

    Returns:
        AI response

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    if request.stream:
        raise HTTPException(
            status_code=400,
            detail="Streaming not supported in this endpoint. Use /api/ai/chat/stream instead",
        )

    try:
        # Convert messages to dict format
        messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

        # Get response
        response = await ai_tools.chat(
            messages=messages,
            provider=request.provider,
            system_prompt=request.system_prompt,
        )

        return ChatResponse(response=response, provider=request.provider)

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"AI request failed: {str(e)}")


@router.post("/chat/stream")
async def chat_stream(request: ChatRequest):
    """Stream chat messages to AI model.

    Args:
        request: Chat request with messages and settings

    Returns:
        Streaming response with AI output

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    try:
        # Convert messages to dict format
        messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

        async def generate():
            try:
                async for chunk in ai_tools.stream_chat(
                    messages=messages,
                    provider=request.provider,
                    system_prompt=request.system_prompt,
                ):
                    yield chunk
            except Exception as e:
                yield f"Error: {str(e)}"

        return StreamingResponse(generate(), media_type="text/plain")

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"AI request failed: {str(e)}")


@router.post("/generate", response_model=ChatResponse)
async def generate_with_template(request: TemplateRequest):
    """Generate response using a prompt template.

    Args:
        request: Template request with template and variables

    Returns:
        AI response

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    try:
        response = await ai_tools.generate_with_template(
            template=request.template,
            variables=request.variables,
            provider=request.provider,
        )

        return ChatResponse(response=response, provider=request.provider)

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Template generation failed: {str(e)}")


@router.post("/agent", response_model=AgentResponse)
async def run_agent(request: AgentRequest):
    """Run AI agent with tools and reasoning.

    Args:
        request: Agent request with input and settings

    Returns:
        Agent response with output and reasoning steps

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    try:
        # Create agent based on type
        if request.agent_type == "code_analysis":
            agent = AIToolkit.create_code_analysis_agent(provider=request.provider)
        elif request.agent_type == "blockchain_analyst":
            agent = AIToolkit.create_blockchain_analyst_agent(provider=request.provider)
        elif request.agent_type == "developer_assistant":
            agent = AIToolkit.create_developer_assistant_agent(provider=request.provider)
        else:
            # General agent
            from app.ai_agents import Web3AIAgent

            agent = Web3AIAgent(provider=request.provider)

        # Run agent
        result = await agent.run(input_text=request.input, chat_history=request.chat_history)

        return AgentResponse(
            output=result.get("output", ""),
            intermediate_steps=result.get("intermediate_steps", []),
        )

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Agent execution failed: {str(e)}")
```
All AI endpoints are publicly accessible without authentication or authorization. This exposes the OpenAI and Claude API keys to potential abuse and could result in significant costs. Consider implementing authentication (e.g., API keys, JWT tokens) and rate limiting to protect these endpoints from unauthorized access.
Adds Anthropic Claude support alongside the existing OpenAI integration with a unified abstraction layer, autonomous AI agents, and production-ready REST endpoints.

Backend
- AI Tools (`ai_tools.py`): `AIToolsManager`, a unified interface for OpenAI/Claude with provider selection
- AI Agents (`ai_agents.py`): `Web3AIAgent`, LangChain-based agents with tool integration
- REST API (`ai_routes.py`)
- Configuration: `AI_PROVIDER` (`openai` | `claude` | `both`); `ANTHROPIC_API_KEY` and `CLAUDE_MODEL_NAME` added to settings

Frontend
- AI Client (`ai-client.ts`)
- Configuration: `NEXT_PUBLIC_AI_PROVIDER` and `NEXT_PUBLIC_CLAUDE_MODEL_NAME` added

Security

Testing
28 tests covering AI tools, routes, and config validation, all passing.

Usage Example
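A minimal client-side sketch of the chat flow described in this PR (the base URL and use of the standard library HTTP client are assumptions; the request and response shapes follow the `ChatRequest`/`ChatResponse` models reviewed above):

```python
# Sketch of calling POST /api/ai/chat from Python. Not code from the PR:
# base_url is a local-development assumption.
import json
import urllib.request


def build_chat_payload(content: str, provider: str = "claude") -> dict:
    """Build the JSON body expected by POST /api/ai/chat."""
    return {
        "messages": [{"role": "user", "content": content}],
        "provider": provider,
    }


def chat_once(content: str, base_url: str = "http://localhost:8000") -> str:
    """Send one user message and return the `response` field of the reply."""
    req = urllib.request.Request(
        f"{base_url}/api/ai/chat",
        data=json.dumps(build_chat_payload(content)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The frontend `ai-client.ts` wraps the same endpoints in TypeScript; this Python version is only for illustrating the wire format.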
Original prompt