A production-ready TypeScript framework for building AI agents that actually work.
📚 Documentation • 🚀 Getting Started • 💼 LinkedIn
Most AI agent frameworks break down when you try to ship them to production. AgentForge was designed from the ground up with production patterns: type safety, fault tolerance, observability, and clean architecture.
- 🏗️ Modular Architecture — Provider-agnostic design that scales
- 🔒 Type-Safe APIs — Runtime validation with Zod, full TypeScript inference
- ⚡ Streaming Systems — Async iterators for real-time responses
- 🔗 Middleware Pipelines — Extensible request/response processing
- 🛡️ Production Patterns — Circuit breakers, retry logic, graceful degradation
- 📊 Observability — Distributed tracing, metrics, structured logging
- 📚 Complete Documentation — Guides, API reference, and real-world examples
- Type-Safe Tools — Define tools with Zod schemas for full TypeScript inference
- Multi-Provider — OpenAI, Anthropic, Azure, or custom providers
- Streaming — Built-in async iterator support for real-time responses (see the streaming sketch just after this list)
- React Hooks — First-class integration with `useAgent`, `useChat`, and `useStreamingAgent`
- Circuit Breakers — Prevent cascading failures with configurable thresholds
- Request Deduplication — Coalesce identical concurrent requests
- Retry with Backoff — Exponential backoff with jitter for transient failures (pattern sketched under Production Resilience)
- Graceful Degradation — Feature flags and fallback responses
- Distributed Tracing — OpenTelemetry-compatible spans and metrics
- Conversation Persistence — Pluggable storage adapters (memory, file, custom)
- Token Management — Accurate counting per model, budget tracking, smart truncation
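Streaming responses are consumed as an async iterable. The snippet below is a minimal sketch of that pattern, assuming a hypothetical `agent.stream()` method that yields chunks with a `delta` text field; the actual method name and chunk shape are defined by the framework, so check the Streaming docs.

```ts
import { Agent, OpenAIProvider } from 'agentforge';

const agent = new Agent({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY }),
  systemPrompt: 'You are a helpful assistant.',
});

// Assumed API: stream() returns an async iterable of incremental chunks.
// Verify the real entry point and chunk type against the Streaming docs.
for await (const chunk of agent.stream('Give me a one-sentence forecast for Boston.')) {
  process.stdout.write(chunk.delta);
}
```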
## Quick Start

```bash
npm install agentforge zod
```

```ts
import { Agent, OpenAIProvider, defineTool } from 'agentforge';
import { z } from 'zod';

// Define a type-safe tool
const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get the current weather for a location',
  parameters: z.object({
    location: z.string().describe('City name'),
    unit: z.enum(['celsius', 'fahrenheit']).default('fahrenheit'),
  }),
  execute: async ({ location, unit }) => {
    return { temperature: 72, condition: 'sunny', location, unit };
  },
});

// Create an agent
const agent = new Agent({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY }),
  tools: [weatherTool],
  systemPrompt: 'You are a helpful weather assistant.',
});

// Run the agent
const response = await agent.run("What's the weather in Boston?");
console.log(response.content);
```

## Production Resilience

```ts
const agent = new Agent({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY }),

  // Circuit breaker prevents cascading failures
  circuitBreaker: {
    enabled: true,
    failureThreshold: 5,
    resetTimeoutMs: 30000,
  },

  // Deduplicate identical concurrent requests
  deduplication: { enabled: true },

  // Limit concurrency
  concurrency: { maxConcurrent: 10 },

  // Timeouts
  timeouts: {
    requestMs: 30000,
    toolExecutionMs: 10000,
  },
});

// Check health status
const health = agent.getHealth();
console.log(health.circuitBreaker?.state); // 'closed' | 'open' | 'half-open'
```
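The retry logic mentioned in the feature list follows the standard exponential-backoff-with-jitter pattern. The helper below is a framework-independent sketch of that pattern; the `retryWithBackoff` name and its options are illustrative and not part of the AgentForge API.

```ts
// Illustrative "full jitter" backoff: wait a random time between 0 and
// min(cap, base * 2^attempt) before each retry, rethrowing once retries run out.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  { retries = 3, baseMs = 200, capMs = 5_000 } = {}
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      const delayMs = Math.random() * Math.min(capMs, baseMs * 2 ** attempt);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```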
## Observability

```ts
import { initTelemetry, createConsoleExporter } from 'agentforge';

// Initialize telemetry
initTelemetry(createConsoleExporter());
// Telemetry automatically tracks:
// - Request/response spans with timing
// - Token usage metrics
// - Error rates
// - Tool execution duration
```

## Multi-Provider Failover

```ts
import { createFailoverProvider } from 'agentforge';

// Automatic failover between providers
const provider = createFailoverProvider(
  process.env.OPENAI_API_KEY,
  process.env.ANTHROPIC_API_KEY,
  { primaryModel: 'gpt-4-turbo', fallbackModel: 'claude-3-sonnet-20240229' }
);
```

## React Integration

```tsx
import { useAgent, AgentProvider } from 'agentforge/react';

function ChatInterface() {
  const { messages, sendMessage, isLoading, error } = useAgent({
    tools: [weatherTool],
    systemPrompt: 'You are a helpful assistant.',
  });

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id} className={msg.role}>
          {msg.content}
        </div>
      ))}
      <input
        onKeyDown={(e) => {
          if (e.key === 'Enter') {
            sendMessage(e.currentTarget.value);
            e.currentTarget.value = '';
          }
        }}
        disabled={isLoading}
        placeholder="Type a message..."
      />
    </div>
  );
}
```
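The React example imports `AgentProvider` but does not show it in use. Presumably it is a React context provider that supplies shared agent configuration to `useAgent`, `useChat`, and `useStreamingAgent`; the wrapper below is a hedged sketch of that wiring (shown without props because this snippet does not document them), so consult the React Integration docs for the actual options.

```tsx
import { AgentProvider } from 'agentforge/react';

// Assumed usage: mount AgentProvider above any component that calls the hooks,
// so they can read shared agent configuration from context.
export function App() {
  return (
    <AgentProvider>
      <ChatInterface />
    </AgentProvider>
  );
}
```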
## Middleware

```ts
import {
  createRateLimitMiddleware,
  createCacheMiddleware,
  createCostTrackingMiddleware,
  loggingMiddleware
} from 'agentforge';

const agent = new Agent({
  provider: openai,
  middleware: [
    loggingMiddleware,
    createRateLimitMiddleware({ maxRequestsPerMinute: 60 }),
    createCacheMiddleware({ ttlMs: 300000 }),
    createCostTrackingMiddleware({
      onCost: (cost) => console.log(`Request cost: $${cost.total.toFixed(4)}`),
    }),
  ],
});
```
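Custom middleware plugs into the same pipeline. The sketch below assumes the common shape of a middleware function, a request context plus a `next` continuation; the exact types and signature are defined by the framework, so treat this as illustrative rather than the actual API.

```ts
// Illustrative only: times each request by running the rest of the pipeline
// (remaining middleware plus the provider call) inside next().
// Reuses Agent, openai, and loggingMiddleware from the snippet above.
const timingMiddleware = async (ctx: unknown, next: () => Promise<void>) => {
  const startedAt = Date.now();
  await next();
  console.log(`Agent request completed in ${Date.now() - startedAt}ms`);
};

const timedAgent = new Agent({
  provider: openai,
  middleware: [timingMiddleware, loggingMiddleware],
});
```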
## Documentation

Visit mpalmer79.github.io/agentforge for:

- Getting Started — Installation and first agent
- Core Concepts — Architecture overview
- Tools — Defining type-safe tools
- Providers — OpenAI, Anthropic, custom
- Middleware — Extending the pipeline
- React Integration — Hooks and components
- API Reference — Complete API docs
## Examples

| Example | Description |
|---|---|
| Basic Agent | Simple tool-using agent |
| Customer Support | Multi-tool support agent with escalation |
| Data Analyst | Agent with database query tools |
| React Chat | Full React chat interface |
## Contributing

Contributions are welcome! Please read our Contributing Guide for details.
## License

MIT © Michael Palmer
Built with TypeScript for production AI systems