Node Chat - LLM Flow Interface

A node-based interface for interacting with locally hosted LLMs (via Ollama) and hosted LLM APIs (OpenAI, Google Gemini, and Anthropic Claude), built with React Flow. Create branching conversations, combine chat histories, and visualize your AI interactions in a graph-based UI.

Features

  • Node-Based Interface: Visual conversation flow using React Flow
  • Multiple LLM Support: Ollama, OpenAI, Google Gemini, and Claude
  • Branching Conversations: Create response branches and explore different conversation paths
  • History Combination: Merge multiple conversation histories into a single prompt
  • Graph Management: Export/import conversation graphs, cleanup orphaned nodes
  • Real-time Updates: Live response generation with loading states
  • Provider Management: Easy configuration of multiple LLM providers

Getting Started

Prerequisites

  • Node.js 16+ and npm
  • For Ollama: Local Ollama installation running on http://localhost:11434
  • For other providers: Valid API keys

Installation

  1. Clone the repository:
git clone <repository-url>
cd node-chat
  2. Install dependencies:
npm install
  3. Start the development server:
npm start
  4. Open http://localhost:3000 in your browser

Configuration

Setting Up Providers

  1. Click the "Add" button in the Provider Panel (left sidebar)
  2. Select your provider type (Ollama, OpenAI, etc.)
  3. Configure the required settings:
    • Name: A friendly name for your provider
    • Base URL: API endpoint (auto-filled for most providers)
    • API Key: Required for OpenAI, Google, and Claude
    • Model: Select from available models
    • Temperature: Controls randomness (0-2)
    • Max Tokens: Maximum response length
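
Internally these settings map onto a provider configuration object. The sketch below is illustrative only; the field names are assumptions, not the app's actual type definitions.

// Illustrative sketch of a provider configuration; names are assumptions.
type ProviderType = 'ollama' | 'openai' | 'google' | 'anthropic';

interface ProviderConfig {
  name: string;        // Friendly display name, e.g. "Ollama Local"
  type: ProviderType;  // Which LLM backend to use
  baseUrl?: string;    // API endpoint; auto-filled for most providers
  apiKey?: string;     // Required for OpenAI, Google, and Claude
  model: string;       // Model identifier, e.g. "llama2" or "gpt-4"
  temperature: number; // Randomness, 0-2
  maxTokens: number;   // Maximum response length
}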

Provider Examples

Ollama (Local)

Name: Ollama Local
Type: Ollama
Base URL: http://localhost:11434
Model: llama2
Temperature: 0.7
Max Tokens: 2048

OpenAI

Name: OpenAI GPT-4
Type: OpenAI
API Key: sk-your-api-key-here
Model: gpt-4
Temperature: 0.7
Max Tokens: 2048

Google Gemini

Name: Google Gemini
Type: Google
API Key: your-google-api-key
Model: gemini-2.5-pro
Temperature: 0.7
Max Tokens: 2048

Claude

Name: Claude
Type: Anthropic
API Key: your-anthropic-api-key
Model: claude-3-sonnet-20240229
Temperature: 0.7
Max Tokens: 2048

Usage

Creating Conversations

  1. Add a Prompt Node: Click the "Prompt" button in the toolbar
  2. Write Your Prompt: Click the edit icon on the prompt node to enter your message
  3. Send the Message: Click "Send" to generate a response
  4. Create Branches: Click "Branch" on any response node to create new conversation paths

Node Types

Prompt Nodes (Blue)

  • Contain user messages/prompts
  • Can be sent to generate responses
  • Support editing and branching
  • Show loading states during generation

Response Nodes (Green)

  • Display AI-generated responses
  • Show metadata (response time, tokens used)
  • Support copying, regeneration, and branching
  • Display provider information

Graph Operations

  • Drag: Move nodes around the canvas
  • Select: Click nodes to select them (Ctrl/Cmd+click for multi-select)
  • Delete: Select nodes and press Delete key
  • Export: Save your conversation graph as JSON
  • Import: Load previously saved graphs
  • Reset: Clear the entire graph

Keyboard Shortcuts

  • Ctrl/Cmd + Enter: Save node edits
  • Escape: Cancel node edits
  • Delete: Delete selected nodes
  • Ctrl/Cmd + Click: Multi-select nodes

Architecture

Data Structure

The application stores conversation history in a directed graph structure:

interface ConversationGraph {
  nodes: FlowNode[];           // All nodes in the graph
  edges: FlowEdge[];          // Connections between nodes
  history: Map<string, Message[]>; // Message history per node
}
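
FlowNode and FlowEdge follow React Flow's node and edge shapes. A rough sketch, assuming the custom data fields below (they are illustrative, not the app's exact definitions):

// Rough sketch assuming FlowNode/FlowEdge wrap React Flow's Node and Edge shapes.
interface FlowNode {
  id: string;
  type: 'prompt' | 'response';        // Custom node type registered with React Flow
  position: { x: number; y: number }; // Canvas coordinates
  data: {
    content: string;                  // Prompt text or generated response
    providerId?: string;              // Provider that produced a response
  };
}

interface FlowEdge {
  id: string;
  source: string; // Parent node id
  target: string; // Child node id
}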

Message Flow

  1. User Input: Prompt node receives user message
  2. History Building: System traces back through the graph to build conversation context
  3. LLM Request: Combined history sent to selected provider
  4. Response: AI response stored and displayed in response node
  5. Branching: Users can create new prompt nodes from any response
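
History building (step 2) amounts to walking parent edges from the current prompt node back to the root and collecting messages along the way. A minimal sketch, using a hypothetical buildHistory helper rather than the app's actual function:

// Minimal sketch of step 2; buildHistory is a hypothetical helper, not the app's API.
interface Message { role: 'user' | 'assistant'; content: string; }
interface Edge { source: string; target: string; }

function buildHistory(
  nodeId: string,
  edges: Edge[],
  history: Map<string, Message[]>
): Message[] {
  const messages: Message[] = [];
  let current: string | undefined = nodeId;

  // Walk parent edges from the selected node back to the root.
  while (current) {
    messages.unshift(...(history.get(current) ?? []));
    current = edges.find((e) => e.target === current)?.source;
  }
  return messages; // Oldest message first, ready to send to the provider
}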

Provider Architecture

The LLM service uses a factory pattern to support multiple providers:

// Abstract base class
abstract class BaseLLMService {
  abstract generateResponse(messages: Message[]): Promise<LLMResponse>;
  abstract validateConfig(): boolean;
}

// Specific implementations
class OllamaService extends BaseLLMService { ... }
class OpenAIService extends BaseLLMService { ... }
class AnthropicService extends BaseLLMService { ... }
class GoogleService extends BaseLLMService { ... }
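
The factory itself can be as small as a switch over the provider type. A sketch, assuming the ProviderConfig shape sketched earlier and that each service takes its config in the constructor (createLLMService is a hypothetical name, not necessarily the app's export):

// Hypothetical factory sketch; the real factory lives in src/services/.
function createLLMService(config: ProviderConfig): BaseLLMService {
  switch (config.type) {
    case 'ollama':    return new OllamaService(config);
    case 'openai':    return new OpenAIService(config);
    case 'anthropic': return new AnthropicService(config);
    case 'google':    return new GoogleService(config);
    default:
      throw new Error(`Unknown provider type: ${config.type}`);
  }
}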

Development

Project Structure

src/
├── components/          # React components
│   ├── nodes/          # Custom React Flow nodes
│   └── sidebar/        # Sidebar components
├── services/           # LLM service implementations
├── store/             # Zustand state management
├── types/             # TypeScript type definitions
└── utils/             # Utility functions

Key Technologies

  • React 18: UI framework
  • TypeScript: Type safety
  • React Flow: Node-based interface
  • Zustand: State management
  • Tailwind CSS: Styling
  • Axios: HTTP client
  • Lucide React: Icons

Adding New Providers

  1. Create a new service class extending BaseLLMService
  2. Implement generateResponse() and validateConfig() methods
  3. Add provider type to the factory
  4. Update the UI provider form
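
For example, a hypothetical provider for an OpenAI-compatible endpoint might look like the sketch below. Class, field, and response-shape details are illustrative assumptions; adapt them to the actual BaseLLMService and LLMResponse definitions.

// Illustrative sketch of steps 1-2; names and response shapes are assumptions.
class MyCustomService extends BaseLLMService {
  constructor(private config: { baseUrl: string; apiKey: string; model: string }) {
    super();
  }

  validateConfig(): boolean {
    return Boolean(this.config.baseUrl && this.config.apiKey && this.config.model);
  }

  async generateResponse(messages: Message[]): Promise<LLMResponse> {
    const res = await fetch(`${this.config.baseUrl}/chat/completions`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${this.config.apiKey}`,
      },
      body: JSON.stringify({ model: this.config.model, messages }),
    });
    const data = await res.json();
    return { content: data.choices[0].message.content }; // Adapt to the app's LLMResponse shape
  }
}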

Building for Production

npm run build

This creates an optimized build in the build/ directory.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

CORS and API Proxy

This application handles CORS (Cross-Origin Resource Sharing) issues automatically when running in development mode. External APIs like Google Gemini, OpenAI, and Anthropic don't allow direct browser requests due to CORS restrictions.

How It Works

The app includes a setupProxy.js file that configures proxy routes for external APIs:

  • Google API: /api/google/* → https://generativelanguage.googleapis.com/v1beta/*
  • OpenAI API: /api/openai/* → https://api.openai.com/v1/*
  • Anthropic API: /api/anthropic/* → https://api.anthropic.com/v1/*
  • Ollama: Direct connection to http://localhost:11434 (no proxy needed)
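
In Create React App, src/setupProxy.js is plain JavaScript and typically uses http-proxy-middleware. A sketch of roughly what the Google route looks like (the other providers follow the same pattern; the exact rewrite values here are illustrative):

// src/setupProxy.js -- CRA requires this file to be plain JavaScript.
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api/google',
    createProxyMiddleware({
      target: 'https://generativelanguage.googleapis.com',
      changeOrigin: true,
      // Map /api/google/* onto /v1beta/* on the Google endpoint
      pathRewrite: { '^/api/google': '/v1beta' },
    })
  );
};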

For Production

When deploying to production, you'll need to:

  1. Set up a backend server to handle API requests
  2. Update the API endpoints in src/services/llmService.ts to point to your backend
  3. Implement proper API key management on your backend
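
A minimal sketch of such a backend route using Express, with the API key kept server-side. The route path and environment variable name are placeholders, not part of this repo, and global fetch assumes Node 18+.

// Minimal Express sketch; route and env names are placeholders.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/openai/chat/completions', async (req, res) => {
  // Forward the request to OpenAI with a key kept on the server.
  const upstream = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3001);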

Troubleshooting

Common Issues

CORS Errors (Cross-Origin Request Blocked)

  • This happens when calling external APIs (Google, OpenAI, Anthropic) directly from the browser
  • The app automatically handles this with built-in proxy configuration
  • If you see CORS errors, make sure you're running in development mode: npm start
  • For production builds, you'll need to set up a backend API to proxy requests

Ollama Connection Failed

  • Ensure Ollama is running: ollama serve
  • Check if the URL is correct: http://localhost:11434
  • Verify the model is installed: ollama list

API Key Issues

  • Double-check your API key format
  • Ensure the key has proper permissions
  • Check for any trailing spaces

Graph Not Loading

  • Try refreshing the page
  • Check browser console for errors
  • Verify the imported JSON format

Performance Tips

  • Keep conversation branches manageable
  • Use the cleanup function to remove orphaned nodes
  • Export/import graphs instead of keeping everything in memory
  • Consider using smaller models for faster responses

Roadmap

  • Custom node types
  • Plugin system for additional providers
  • Advanced graph analytics
  • Voice input/output
  • Mobile responsive design
  • Conversation templates
  • Search and filter capabilities
