A node-based interface for interacting with locally hosted LLMs (with Ollama) and other LLM endpoints (OpenAI, Google, Claude) using React Flow. Create branching conversations, combine chat histories, and visualize your AI interactions in a powerful graph-based UI.
- Node-Based Interface: Visual conversation flow using React Flow
- Multiple LLM Support: Ollama, OpenAI, Google Gemini, and Claude
- Branching Conversations: Create response branches and explore different conversation paths
- History Combination: Merge multiple conversation histories into a single prompt
- Graph Management: Export/import conversation graphs, cleanup orphaned nodes
- Real-time Updates: Live response generation with loading states
- Provider Management: Easy configuration of multiple LLM providers
- Node.js 16+ and npm
- For Ollama: A local Ollama installation running at http://localhost:11434
- For other providers: Valid API keys
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd node-chat
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start the development server:

  ```bash
  npm start
  ```

- Open http://localhost:3000 in your browser
- Click the "Add" button in the Provider Panel (left sidebar)
- Select your provider type (Ollama, OpenAI, etc.)
- Configure the required settings:
- Name: A friendly name for your provider
- Base URL: API endpoint (auto-filled for most providers)
- API Key: Required for OpenAI, Google, and Claude
- Model: Select from available models
- Temperature: Controls randomness (0-2)
- Max Tokens: Maximum response length
```
Name: Ollama Local
Type: Ollama
Base URL: http://localhost:11434
Model: llama2
Temperature: 0.7
Max Tokens: 2048
```

```
Name: OpenAI GPT-4
Type: OpenAI
API Key: sk-your-api-key-here
Model: gpt-4
Temperature: 0.7
Max Tokens: 2048
```

```
Name: Google Gemini
Type: Google
API Key: your-google-api-key
Model: gemini-2.5-pro
Temperature: 0.7
Max Tokens: 2048
```

```
Name: Claude
Type: Anthropic
API Key: your-anthropic-api-key
Model: claude-3-sonnet-20240229
Temperature: 0.7
Max Tokens: 2048
```
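For reference, these settings correspond roughly to one configuration object per provider. The sketch below is illustrative only; the field names are assumptions, not the project's actual type definitions:

```typescript
// Hypothetical shape of a provider configuration; field names are illustrative.
interface ProviderConfig {
  name: string;                                       // e.g. "Ollama Local"
  type: 'ollama' | 'openai' | 'google' | 'anthropic';
  baseUrl?: string;                                   // required for Ollama, auto-filled for others
  apiKey?: string;                                    // required for OpenAI, Google, and Claude
  model: string;                                      // e.g. "llama2" or "gpt-4"
  temperature: number;                                // 0-2, controls randomness
  maxTokens: number;                                  // maximum response length
}
```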
- Add a Prompt Node: Click the "Prompt" button in the toolbar
- Write Your Prompt: Click the edit icon on the prompt node to enter your message
- Send the Message: Click "Send" to generate a response
- Create Branches: Click "Branch" on any response node to create new conversation paths
Prompt nodes:
- Contain user messages/prompts
- Can be sent to generate responses
- Support editing and branching
- Show loading states during generation

Response nodes:
- Display AI-generated responses
- Show metadata (response time, tokens used)
- Support copying, regeneration, and branching
- Display provider information
- Drag: Move nodes around the canvas
- Select: Click nodes to select them (Ctrl/Cmd+click for multi-select)
- Delete: Select nodes and press Delete key
- Export: Save your conversation graph as JSON
- Import: Load previously saved graphs
- Reset: Clear the entire graph
- Ctrl/Cmd + Enter: Save node edits
- Escape: Cancel node edits
- Delete: Delete selected nodes
- Ctrl/Cmd + Click: Multi-select nodes
The application uses a graph structure to store conversation history:

```typescript
interface ConversationGraph {
  nodes: FlowNode[];                // All nodes in the graph
  edges: FlowEdge[];                // Connections between nodes
  history: Map<string, Message[]>;  // Message history per node
}
```

Messages flow through the graph as follows:

- User Input: A prompt node receives the user's message
- History Building: The system traces back through the graph to build the conversation context (see the sketch after this list)
- LLM Request: Combined history sent to selected provider
- Response: AI response stored and displayed in response node
- Branching: Users can create new prompt nodes from any response
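A minimal sketch of how that history-building step could work, assuming a hypothetical helper name (buildHistory) and a simple single-parent traversal; the real implementation also has to handle merged histories from the combination feature:

```typescript
// Illustrative sketch only: walk incoming edges from a node back to the root,
// collecting each node's messages so the oldest turns come first.
// Assumes the conversation graph is acyclic.
function buildHistory(
  nodeId: string,
  edges: FlowEdge[],
  history: Map<string, Message[]>
): Message[] {
  const messages: Message[] = [];
  let current: string | undefined = nodeId;

  while (current) {
    messages.unshift(...(history.get(current) ?? []));
    // Follow a single incoming edge to the parent node, if one exists.
    const parentEdge = edges.find((e) => e.target === current);
    current = parentEdge?.source;
  }

  return messages;
}
```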
The LLM service uses a factory pattern to support multiple providers:
```typescript
// Abstract base class
abstract class BaseLLMService {
  abstract generateResponse(messages: Message[]): Promise<LLMResponse>;
  abstract validateConfig(): boolean;
}

// Specific implementations
class OllamaService extends BaseLLMService { ... }
class OpenAIService extends BaseLLMService { ... }
class AnthropicService extends BaseLLMService { ... }
class GoogleService extends BaseLLMService { ... }
```
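A minimal sketch of what such a factory could look like, assuming a ProviderConfig shape like the one sketched earlier and constructors that accept it; these names are not taken from the actual source:

```typescript
// Hypothetical factory sketch: choose a concrete service by provider type.
function createLLMService(config: ProviderConfig): BaseLLMService {
  switch (config.type) {
    case 'ollama':
      return new OllamaService(config);
    case 'openai':
      return new OpenAIService(config);
    case 'anthropic':
      return new AnthropicService(config);
    case 'google':
      return new GoogleService(config);
    default:
      throw new Error(`Unsupported provider type: ${config.type}`);
  }
}
```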
```
src/
├── components/   # React components
│   ├── nodes/    # Custom React Flow nodes
│   └── sidebar/  # Sidebar components
├── services/     # LLM service implementations
├── store/        # Zustand state management
├── types/        # TypeScript type definitions
└── utils/        # Utility functions
```
- React 18: UI framework
- TypeScript: Type safety
- React Flow: Node-based interface
- Zustand: State management
- Tailwind CSS: Styling
- Axios: HTTP client
- Lucide React: Icons
- Create a new service class extending BaseLLMService (see the sketch below)
- Implement the generateResponse() and validateConfig() methods
- Add the provider type to the factory
- Update the UI provider form
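As a rough illustration, a new provider class might look like the sketch below. The constructor argument, endpoint path, request payload, and response mapping are all placeholders, not the project's real API:

```typescript
import axios from 'axios';

// Hypothetical provider sketch extending the abstract base class.
class MyProviderService extends BaseLLMService {
  constructor(private config: ProviderConfig) {
    super();
  }

  validateConfig(): boolean {
    return Boolean(this.config.baseUrl && this.config.apiKey && this.config.model);
  }

  async generateResponse(messages: Message[]): Promise<LLMResponse> {
    // Placeholder endpoint and payload; adapt to the provider's actual API.
    const { data } = await axios.post(
      `${this.config.baseUrl}/chat`,
      { model: this.config.model, messages, max_tokens: this.config.maxTokens },
      { headers: { Authorization: `Bearer ${this.config.apiKey}` } }
    );
    return { content: data.content } as LLMResponse;
  }
}
```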
```bash
npm run build
```

This creates an optimized build in the build/ directory.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
This application handles CORS (Cross-Origin Resource Sharing) issues automatically when running in development mode. External APIs like Google Gemini, OpenAI, and Anthropic don't allow direct browser requests due to CORS restrictions.
The app includes a setupProxy.js file that configures proxy routes for external APIs:
- Google API: /api/google/* → https://generativelanguage.googleapis.com/v1beta/*
- OpenAI API: /api/openai/* → https://api.openai.com/v1/*
- Anthropic API: /api/anthropic/* → https://api.anthropic.com/v1/*
- Ollama: Direct connection to http://localhost:11434 (no proxy needed)
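A rough sketch of what such a setupProxy.js can look like using http-proxy-middleware, the standard Create React App proxy mechanism; the project's actual file may differ in its details:

```js
// setupProxy.js — illustrative sketch only.
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api/google',
    createProxyMiddleware({
      target: 'https://generativelanguage.googleapis.com',
      changeOrigin: true,
      pathRewrite: { '^/api/google': '/v1beta' },
    })
  );
  app.use(
    '/api/openai',
    createProxyMiddleware({
      target: 'https://api.openai.com',
      changeOrigin: true,
      pathRewrite: { '^/api/openai': '/v1' },
    })
  );
  app.use(
    '/api/anthropic',
    createProxyMiddleware({
      target: 'https://api.anthropic.com',
      changeOrigin: true,
      pathRewrite: { '^/api/anthropic': '/v1' },
    })
  );
};
```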
When deploying to production, you'll need to:
- Set up a backend server to handle API requests (a sketch follows this list)
- Update the API endpoints in src/services/llmService.ts to point to your backend
- Implement proper API key management on your backend
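For illustration only, such a backend could look something like the Express sketch below; Express itself is an assumption here, and the route path and error handling are placeholders:

```js
// Hypothetical production proxy sketch: the API key never reaches the browser.
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

app.post('/api/openai/chat/completions', async (req, res) => {
  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      req.body,
      { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` } }
    );
    res.json(response.data);
  } catch (err) {
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

app.listen(3001);
```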
CORS Errors (Cross-Origin Request Blocked)
- This happens when calling external APIs (Google, OpenAI, Anthropic) directly from the browser
- The app automatically handles this with built-in proxy configuration
- If you see CORS errors, make sure you're running in development mode: npm start
- For production builds, you'll need to set up a backend API to proxy requests
Ollama Connection Failed
- Ensure Ollama is running: ollama serve
- Check that the URL is correct: http://localhost:11434
- Verify the model is installed: ollama list
API Key Issues
- Double-check your API key format
- Ensure the key has proper permissions
- Check for any trailing spaces
Graph Not Loading
- Try refreshing the page
- Check browser console for errors
- Verify the imported JSON format
- Keep conversation branches manageable
- Use the cleanup function to remove orphaned nodes
- Export/import graphs instead of keeping everything in memory
- Consider using smaller models for faster responses
- Custom node types
- Plugin system for additional providers
- Advanced graph analytics
- Voice input/output
- Mobile responsive design
- Conversation templates
- Search and filter capabilities