Add Claude AI integration with tools, agents, and REST API#15

Open
Copilot wants to merge 5 commits into main from copilot/add-ai-tools-and-toolkits

Conversation


Copilot AI commented Jan 24, 2026

Adds Anthropic Claude support alongside existing OpenAI integration with a unified abstraction layer, autonomous AI agents, and production-ready REST endpoints.

Backend

AI Tools (ai_tools.py)

  • AIToolsManager - unified interface for OpenAI/Claude with provider selection
  • Chat completions, streaming, template-based generation
  • Type-safe response handling for multi-part LangChain content
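The provider-selection pattern these bullets describe can be sketched as follows. This is an illustrative stand-in, not the real `AIToolsManager` (which wraps LangChain chat models); the stub backends and class name are assumptions for the sketch:

```python
# Minimal sketch of a unified provider-dispatch interface like AIToolsManager.
# The real class delegates to LangChain chat models; backends are stubbed here.
from typing import Callable


class ToolsManagerSketch:
    def __init__(self) -> None:
        # Map provider names to callables that take a message list.
        self._backends: dict[str, Callable[[list[dict]], str]] = {
            "openai": lambda msgs: f"[openai] {msgs[-1]['content']}",
            "claude": lambda msgs: f"[claude] {msgs[-1]['content']}",
        }

    def get_available_providers(self) -> list[str]:
        return sorted(self._backends)

    def chat(self, messages: list[dict], provider: str = "claude") -> str:
        # Fail fast on unknown providers instead of silently falling back.
        if provider not in self._backends:
            raise ValueError(f"Unknown provider: {provider}")
        return self._backends[provider](messages)


manager = ToolsManagerSketch()
print(manager.chat([{"role": "user", "content": "hi"}], provider="claude"))
```

The real manager adds streaming and template generation on top of this same dispatch shape.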

AI Agents (ai_agents.py)

  • Web3AIAgent - LangChain-based agents with tool integration
  • Pre-configured agents: code analysis, blockchain analyst, developer assistant
  • Placeholder tools with TODO markers for Web3 integration
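The "placeholder tools with TODO markers" pattern can be sketched like this; the tool name and registry are hypothetical, standing in for the LangChain tool bindings in `ai_agents.py`:

```python
# Sketch of placeholder agent tools awaiting real Web3 integration.
def balance_tool(address: str) -> str:
    # TODO: replace with a real web3 balance lookup
    return f"balance({address}) = <not yet implemented>"


# Registry mapping tool names to callables; the real agent binds these
# as LangChain tools instead of dispatching by hand.
TOOLS = {"balance": balance_tool}


def run_tool(tool_name: str, arg: str) -> str:
    tool = TOOLS.get(tool_name)
    if tool is None:
        return f"No such tool: {tool_name}"
    return tool(arg)


print(run_tool("balance", "0xabc"))
```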

REST API (ai_routes.py)

POST /api/ai/chat          # Chat completions
POST /api/ai/chat/stream   # Streaming responses
POST /api/ai/generate      # Template-based generation
POST /api/ai/agent         # Run autonomous agents
GET  /api/ai/providers     # List available providers
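A request to the chat endpoint above can be assembled as below. The base URL is an assumption for local development; the payload fields mirror the `ChatRequest` model reviewed further down:

```python
# Sketch of building a POST /api/ai/chat request body.
import json

BASE_URL = "http://localhost:8000"  # assumed dev server address


def build_chat_request(content: str, provider: str = "claude") -> dict:
    """Return the URL and JSON body for a chat request."""
    return {
        "url": f"{BASE_URL}/api/ai/chat",
        "body": json.dumps({
            "messages": [{"role": "user", "content": content}],
            "provider": provider,
        }),
    }


req = build_chat_request("Explain Ethereum")
print(req["url"])
```

Send it with any HTTP client (`httpx`, `requests`, `fetch`); the response is JSON with `response` and `provider` fields.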

Configuration

  • AI_PROVIDER: openai | claude | both
  • ANTHROPIC_API_KEY, CLAUDE_MODEL_NAME added to settings
  • Runtime validation via Pydantic
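The runtime validation of `AI_PROVIDER` can be sketched without Pydantic; the real settings module uses a Pydantic validator, so this standalone function is illustrative only:

```python
# Sketch of validating AI_PROVIDER against the three documented values.
VALID_PROVIDERS = {"openai", "claude", "both"}


def validate_ai_provider(value: str) -> str:
    """Normalize and validate the configured provider name."""
    v = value.strip().lower()
    if v not in VALID_PROVIDERS:
        raise ValueError(
            f"AI_PROVIDER must be one of {sorted(VALID_PROVIDERS)}, got {value!r}"
        )
    return v


print(validate_ai_provider("Claude"))
```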

Frontend

AI Client (ai-client.ts)

  • TypeScript client for all AI endpoints
  • Configurable default provider from environment
  • Streaming support with async iterators

Configuration

  • NEXT_PUBLIC_AI_PROVIDER, NEXT_PUBLIC_CLAUDE_MODEL_NAME added
  • Zod schema validation

Security

  • Patched LangChain template injection vulnerabilities (0.3.28 → 0.3.81)
  • Patched LangChain XXE vulnerability (community 0.3.13 → 0.3.27)
  • CodeQL: 0 alerts

Testing

28 tests covering AI tools, routes, and config validation; all passing.

Usage Example

```python
# Backend - chat with Claude
from app.ai_tools import ai_tools

response = await ai_tools.chat(
    messages=[{"role": "user", "content": "Explain Ethereum"}],
    provider="claude",
)
```

```typescript
// Frontend - run blockchain analyst agent
import { aiClient } from '@/lib/ai-client';

const result = await aiClient.runAgent({
    input: "Analyze transaction 0x123...",
    agent_type: "blockchain_analyst",
    provider: "claude",
});
```
Original prompt

Add an AI full Stack of AI and AI tools and toolkits for Claude



Copilot AI and others added 4 commits January 24, 2026 11:06
Co-authored-by: lippytm <65956507+lippytm@users.noreply.github.com>
Copilot AI changed the title [WIP] Add AI full stack and toolkits for Claude Add Claude AI integration with tools, agents, and REST API Jan 24, 2026
Copilot AI requested a review from lippytm January 24, 2026 11:15
@lippytm lippytm requested review from Copilot and removed request for lippytm February 5, 2026 20:32
@lippytm lippytm marked this pull request as ready for review February 5, 2026 20:32

Copilot AI left a comment


Pull request overview

This pull request adds comprehensive Anthropic Claude integration alongside the existing OpenAI support, introducing a unified AI abstraction layer with autonomous agents, REST API endpoints, and a TypeScript frontend client. The PR successfully implements dual AI provider support with chat completions, streaming, template-based generation, and specialized AI agents for Web3 development tasks.

Changes:

  • Added Claude AI provider support with unified AIToolsManager interface for OpenAI and Claude
  • Implemented REST API endpoints for chat, streaming, template generation, and AI agent execution
  • Created TypeScript AI client with support for all backend endpoints
  • Added configuration validation for new AI provider settings on both frontend and backend

Reviewed changes

Copilot reviewed 15 out of 15 changed files in this pull request and generated 16 comments.

| File | Description |
| --- | --- |
| backend/app/settings.py | Added Anthropic API key, Claude model name, and AI provider selection with validation |
| backend/app/ai_tools.py | New AIToolsManager class providing unified interface for OpenAI and Claude with chat, streaming, and template generation |
| backend/app/ai_routes.py | New REST API endpoints for AI operations including chat, streaming, template generation, and agent execution |
| backend/app/ai_agents.py | New Web3AIAgent class and AIToolkit with specialized agents for code analysis, blockchain analysis, and development assistance |
| backend/app/main.py | Integrated AI routes and exposed AI configuration in API info endpoint |
| backend/requirements.txt | Added langchain-anthropic, langchain-core, langchain-community, and anthropic SDK dependencies |
| backend/.env.example | Added example configuration for Anthropic API key, Claude model, and AI provider selection |
| frontend/lib/config.ts | Added validation for Claude model name and AI provider configuration |
| frontend/lib/ai-client.ts | New TypeScript client for all AI endpoints with streaming support |
| frontend/.env.example | Added frontend environment variables for Claude model and AI provider |
| backend/tests/test_config_validation.py | Added tests for Claude model name and AI provider validation |
| backend/tests/test_ai_tools.py | Comprehensive tests for AIToolsManager with both providers |
| backend/tests/test_ai_routes.py | Tests for all AI API endpoints including error cases |
| backend/README.md | Updated documentation with AI endpoints, usage examples, and project structure |
| README.md | Enhanced documentation with comprehensive AI/LLM configuration guide |


class ChatRequest(BaseModel):
    """Chat request model."""

    messages: list[ChatMessage] = Field(..., description="List of chat messages")

Copilot AI Feb 5, 2026


Missing validation to ensure the messages list is not empty. An empty messages array could cause unexpected behavior or errors in the AI model. Consider adding a Field validator with min_length=1 to ensure at least one message is provided.

Suggested change
    messages: list[ChatMessage] = Field(..., description="List of chat messages")
    messages: list[ChatMessage] = Field(
        ..., description="List of chat messages", min_length=1
    )

class TemplateRequest(BaseModel):
    """Template generation request."""

    template: str = Field(..., description="Prompt template with variables")

Copilot AI Feb 5, 2026


Missing validation to ensure the template string is not empty. An empty template could cause unexpected behavior. Consider adding a Field validator with min_length=1 to ensure a valid template is provided.

Suggested change
    template: str = Field(..., description="Prompt template with variables")
    template: str = Field(
        ..., description="Prompt template with variables", min_length=1
    )

Comment on lines +7 to +8
export interface ChatMessage {
  role: string;

Copilot AI Feb 5, 2026


The role field accepts any string value but should be validated to only allow valid message roles. Consider using a union type of string literals ('user' | 'assistant' | 'system') to enforce valid role values and improve type safety.

Suggested change
export interface ChatMessage {
  role: string;
export type ChatMessageRole = 'user' | 'assistant' | 'system';

export interface ChatMessage {
  role: ChatMessageRole;

Comment on lines +172 to +174
        response = await ai_tools.generate_with_template(
            template=request.template,
            variables=request.variables,

Copilot AI Feb 5, 2026


The template string is passed directly from user input to ChatPromptTemplate.from_template without sanitization. While the PR mentions security patches for LangChain (0.3.81), consider adding validation to restrict template complexity or implementing a whitelist of allowed templates to prevent potential template injection attacks. At minimum, add rate limiting and consider restricting this endpoint to authenticated users only.
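One way to implement the whitelist suggestion: serve only named, server-defined templates rather than accepting raw template strings from clients. The template names and bodies below are hypothetical examples:

```python
# Sketch of a server-side template whitelist replacing raw user templates.
ALLOWED_TEMPLATES = {
    "summarize": "Summarize the following text: {text}",
    "explain": "Explain {topic} to a {audience}.",
}


def render_template(name: str, variables: dict) -> str:
    """Render a whitelisted template; reject unknown names."""
    template = ALLOWED_TEMPLATES.get(name)
    if template is None:
        raise ValueError(f"Unknown template: {name}")
    return template.format(**variables)


print(render_template("explain", {"topic": "Ethereum", "audience": "beginner"}))
```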

Comment on lines +75 to +76
        verbose=True,
        handle_parsing_errors=True,

Copilot AI Feb 5, 2026


The AgentExecutor is initialized with verbose=True which will log detailed execution information to stdout. This could expose sensitive information (API keys, user inputs, internal reasoning) in production logs. Consider making verbose configurable via settings or defaulting to False in production environments.
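Making `verbose` configurable could look like this; the `AGENT_VERBOSE` environment variable is a hypothetical setting name, not one the PR defines:

```python
# Sketch: read agent verbosity from the environment, defaulting to False
# so production logs never expose agent reasoning unless opted in.
import os


def agent_verbose(default: bool = False) -> bool:
    return os.getenv("AGENT_VERBOSE", str(default)).strip().lower() in {"1", "true", "yes"}


os.environ["AGENT_VERBOSE"] = "false"
print(agent_verbose())
```

The returned flag would then be passed as `verbose=agent_verbose()` when constructing the AgentExecutor.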

    """Chat request model."""

    messages: list[ChatMessage] = Field(..., description="List of chat messages")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")

Copilot AI Feb 5, 2026


The provider field accepts any string value but should be validated against available providers. Consider using a Literal type or adding validation to restrict values to 'openai' or 'claude' only, preventing invalid provider names from being accepted by the API.
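The `Literal`-style restriction suggested here can be sketched as a standalone check (in the actual model it would be the field's annotation, which Pydantic validates automatically):

```python
# Sketch: restrict provider values to the two supported names.
from typing import Literal, get_args

Provider = Literal["openai", "claude"]


def check_provider(value: str) -> str:
    """Reject provider names outside the supported set."""
    if value not in get_args(Provider):
        raise ValueError(f"provider must be one of {get_args(Provider)}, got {value!r}")
    return value


print(check_provider("openai"))
```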


    template: str = Field(..., description="Prompt template with variables")
    variables: dict[str, Any] = Field(..., description="Variables to fill template")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")

Copilot AI Feb 5, 2026


The provider field accepts any string value but should be validated against available providers. Consider using a Literal type or adding validation to restrict values to 'openai' or 'claude' only, preventing invalid provider names from being accepted by the API.

Comment on lines +55 to +57
    // Use configured AI provider or default to Claude
    const config = getConfig();
    this.defaultProvider = defaultProvider || (config.NEXT_PUBLIC_AI_PROVIDER === 'openai' ? 'openai' : 'claude');

Copilot AI Feb 5, 2026


The defaultProvider logic doesn't handle the 'both' configuration properly. When NEXT_PUBLIC_AI_PROVIDER is 'both', the code defaults to 'claude', but it should either select one of the available providers intelligently or throw an error requiring explicit provider selection. Consider documenting this behavior or changing the logic to be more explicit about how 'both' is handled.

Suggested change
    // Use configured AI provider or default to Claude
    const config = getConfig();
    this.defaultProvider = defaultProvider || (config.NEXT_PUBLIC_AI_PROVIDER === 'openai' ? 'openai' : 'claude');
    // Use explicitly provided provider, or derive from configuration when valid
    const config = getConfig();
    if (defaultProvider) {
      this.defaultProvider = defaultProvider;
    } else {
      const configuredProvider = config.NEXT_PUBLIC_AI_PROVIDER;
      if (configuredProvider === 'openai' || configuredProvider === 'claude') {
        this.defaultProvider = configuredProvider;
      } else {
        throw new Error(
          `Invalid NEXT_PUBLIC_AI_PROVIDER "${configuredProvider}". Expected "openai" or "claude". ` +
          'When using a different value (e.g., "both"), you must select a provider explicitly when creating AIClient.'
        );
      }
    }

class AgentRequest(BaseModel):
    """Agent request model."""

    input: str = Field(..., description="User input for the agent")

Copilot AI Feb 5, 2026


Missing validation to ensure the input string is not empty. An empty input could cause unexpected behavior when running the agent. Consider adding a Field validator with min_length=1 to ensure valid input is provided.

Suggested change
    input: str = Field(..., description="User input for the agent")
    input: str = Field(..., min_length=1, description="User input for the agent")

Comment on lines +12 to +224
router = APIRouter(prefix="/api/ai", tags=["AI"])


class ChatMessage(BaseModel):
    """Chat message model."""

    role: str = Field(..., description="Message role (user/assistant/system)")
    content: str = Field(..., description="Message content")


class ChatRequest(BaseModel):
    """Chat request model."""

    messages: list[ChatMessage] = Field(..., description="List of chat messages")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")
    system_prompt: str | None = Field(None, description="Optional system prompt")
    stream: bool = Field(default=False, description="Whether to stream the response")


class ChatResponse(BaseModel):
    """Chat response model."""

    response: str = Field(..., description="AI response")
    provider: str = Field(..., description="Provider used")


class TemplateRequest(BaseModel):
    """Template generation request."""

    template: str = Field(..., description="Prompt template with variables")
    variables: dict[str, Any] = Field(..., description="Variables to fill template")
    provider: str = Field(default="claude", description="AI provider (openai/claude)")


class AgentRequest(BaseModel):
    """Agent request model."""

    input: str = Field(..., description="User input for the agent")
    agent_type: str = Field(
        default="general",
        description="Agent type (general/code_analysis/blockchain_analyst/developer_assistant)",
    )
    provider: str = Field(default="claude", description="AI provider (openai/claude)")
    chat_history: list[dict[str, str]] | None = Field(
        None, description="Optional chat history"
    )


class AgentResponse(BaseModel):
    """Agent response model."""

    output: str = Field(..., description="Agent output")
    intermediate_steps: list | None = Field(None, description="Intermediate reasoning steps")


class ProvidersResponse(BaseModel):
    """Available providers response."""

    providers: list[str] = Field(..., description="List of available providers")


@router.get("/providers", response_model=ProvidersResponse)
async def get_providers():
    """Get available AI providers.

    Returns:
        List of configured AI providers
    """
    providers = ai_tools.get_available_providers()
    return ProvidersResponse(providers=providers)


@router.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    """Send chat messages to AI model.

    Args:
        request: Chat request with messages and settings

    Returns:
        AI response

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    if request.stream:
        raise HTTPException(
            status_code=400,
            detail="Streaming not supported in this endpoint. Use /api/ai/chat/stream instead",
        )

    try:
        # Convert messages to dict format
        messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

        # Get response
        response = await ai_tools.chat(
            messages=messages,
            provider=request.provider,
            system_prompt=request.system_prompt,
        )

        return ChatResponse(response=response, provider=request.provider)

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"AI request failed: {str(e)}")


@router.post("/chat/stream")
async def chat_stream(request: ChatRequest):
    """Stream chat messages to AI model.

    Args:
        request: Chat request with messages and settings

    Returns:
        Streaming response with AI output

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    try:
        # Convert messages to dict format
        messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

        async def generate():
            try:
                async for chunk in ai_tools.stream_chat(
                    messages=messages,
                    provider=request.provider,
                    system_prompt=request.system_prompt,
                ):
                    yield chunk
            except Exception as e:
                yield f"Error: {str(e)}"

        return StreamingResponse(generate(), media_type="text/plain")

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"AI request failed: {str(e)}")


@router.post("/generate", response_model=ChatResponse)
async def generate_with_template(request: TemplateRequest):
    """Generate response using a prompt template.

    Args:
        request: Template request with template and variables

    Returns:
        AI response

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    try:
        response = await ai_tools.generate_with_template(
            template=request.template,
            variables=request.variables,
            provider=request.provider,
        )

        return ChatResponse(response=response, provider=request.provider)

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Template generation failed: {str(e)}")


@router.post("/agent", response_model=AgentResponse)
async def run_agent(request: AgentRequest):
    """Run AI agent with tools and reasoning.

    Args:
        request: Agent request with input and settings

    Returns:
        Agent response with output and reasoning steps

    Raises:
        HTTPException: If provider is not configured or request fails
    """
    try:
        # Create agent based on type
        if request.agent_type == "code_analysis":
            agent = AIToolkit.create_code_analysis_agent(provider=request.provider)
        elif request.agent_type == "blockchain_analyst":
            agent = AIToolkit.create_blockchain_analyst_agent(provider=request.provider)
        elif request.agent_type == "developer_assistant":
            agent = AIToolkit.create_developer_assistant_agent(provider=request.provider)
        else:
            # General agent
            from app.ai_agents import Web3AIAgent

            agent = Web3AIAgent(provider=request.provider)

        # Run agent
        result = await agent.run(input_text=request.input, chat_history=request.chat_history)

        return AgentResponse(
            output=result.get("output", ""),
            intermediate_steps=result.get("intermediate_steps", []),
        )

    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Agent execution failed: {str(e)}")

Copilot AI Feb 5, 2026


All AI endpoints are publicly accessible without authentication or authorization. This exposes the OpenAI and Claude API keys to potential abuse and could result in significant costs. Consider implementing authentication (e.g., API keys, JWT tokens) and rate limiting to protect these endpoints from unauthorized access.
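The shared-secret check the comment asks for can be sketched as below; this is a standalone illustration that could be wired into FastAPI as a dependency on the `/api/ai` router, and the `AI_API_KEY` variable name is an assumption:

```python
# Sketch: constant-time API-key check to protect the AI endpoints.
import hmac
import os


def require_api_key(presented: str) -> bool:
    """Return True only if the presented key matches the configured secret."""
    expected = os.environ.get("AI_API_KEY", "")
    # hmac.compare_digest avoids timing side channels; an unset key rejects all.
    return bool(expected) and hmac.compare_digest(presented, expected)


os.environ["AI_API_KEY"] = "secret-token"
print(require_api_key("secret-token"), require_api_key("wrong"))
```

In FastAPI this would typically be raised to a 401 inside a dependency (e.g. via `APIKeyHeader`), combined with rate limiting at the proxy or middleware layer.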
