NLPatch: LLM-Driven Geometric Editing

A FastAPI-based service that translates natural language instructions into structured geometric edits using GPT-4 with schema validation and constraint preservation.

Overview

NLPatch bridges the gap between natural language and formal geometric representations. It uses large language models (GPT-4) to interpret design intent and generate validated edit operations on geometric scene graphs, enabling intuitive CAD manipulation through conversational interfaces.

Features

  • Natural Language Processing: Interpret complex design instructions in plain English
  • Structured Output Generation: Convert intent to validated JSON patch operations
  • 17-Point Schema Validation: Ensure geometric consistency and constraint preservation
  • Multi-Operation Support: Handle parameter updates, node creation/deletion, constraint management
  • Fast Inference: <2s end-to-end latency for typical queries
  • Production-Ready: FastAPI REST API with async support

Quick Start

Installation

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install fastapi uvicorn openai pydantic python-dotenv jsonschema

# Set OpenAI API key
echo "OPENAI_API_KEY=sk-..." > .env

Start Server

# Development mode with auto-reload
uvicorn main:app --reload --port 8001

# Production mode
uvicorn main:app --host 0.0.0.0 --port 8001 --workers 4

Example Request

curl -X POST http://localhost:8001/edit \
  -H "Content-Type: application/json" \
  -d '{
    "instruction": "Make the box twice as wide and add a distance constraint of 15mm between box1 and sphere1",
    "current_scene": {
      "nodes": {
        "box1": {
          "op": "sdf.box",
          "params": {"width": 10.0, "height": 5.0, "depth": 3.0}
        },
        "sphere1": {
          "op": "sdf.sphere",
          "params": {"radius": 8.0}
        }
      },
      "constraints": []
    }
  }'

Response:

{
  "patch": {
    "actions": [
      {
        "type": "set_param",
        "node_id": "box1",
        "param_name": "width",
        "value": 20.0
      },
      {
        "type": "add_constraint",
        "constraint": {
          "id": "C1",
          "type": "distance",
          "targets": ["box1", "sphere1"],
          "value": 15.0,
          "weight": 1.0
        }
      }
    ]
  },
  "validation": {
    "valid": true,
    "checks_passed": 17
  },
  "latency_ms": 1847
}

Architecture

System Flow

User Input (Natural Language)
    ↓
GPT-4 with System Prompt
    ↓
JSON Patch (Structured Edits)
    ↓
17-Point Schema Validator
    ↓
Validated Patch Output

Patch Operations

The orchestrator supports 5 core operation types:

  1. set_param: Update parameter values
  2. add_node: Create new geometric nodes
  3. add_constraint: Add optimization constraints
  4. delete_node: Remove geometric nodes
  5. delete_constraint: Remove constraints

API Reference

POST /edit

Convert natural language instruction to geometric patch.

Request Body:

{
  "instruction": "string",          // Natural language command
  "current_scene": {                // Current scene graph
    "nodes": {
      "node_id": {
        "op": "string",
        "params": { ... },
        "inputs": ["string"]
      }
    },
    "constraints": [ ... ]
  },
  "conversation_history": [         // Optional: previous exchanges
    {
      "role": "user",
      "content": "string"
    }
  ]
}

Response:

{
  "patch": {
    "actions": [
      {
        "type": "set_param | add_node | delete_node | add_constraint | delete_constraint",
        ...
      }
    ]
  },
  "validation": {
    "valid": true,
    "checks_passed": 17,
    "errors": []
  },
  "reasoning": "string",            // LLM's reasoning (if requested)
  "latency_ms": 1847
}

GET /health

Health check endpoint.

Response:

{
  "status": "healthy",
  "openai_configured": true,
  "version": "1.0.0"
}
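
A quick liveness check from Python (the requests package is assumed to be installed):

import requests

resp = requests.get("http://localhost:8001/health")
print(resp.json()["status"])  # "healthy" when the service is up and the API key is configured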

POST /validate

Validate a patch without LLM inference.

Request Body:

{
  "patch": { ... },
  "current_scene": { ... }
}

Response:

{
  "valid": true,
  "checks_passed": 17,
  "errors": []
}
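
For example, re-validating a hand-written patch from Python before applying it (a sketch using the scene from the /edit example):

import requests

scene = {
    "nodes": {"box1": {"op": "sdf.box", "params": {"width": 10.0, "height": 5.0, "depth": 3.0}}},
    "constraints": []
}
patch = {"actions": [{"type": "set_param", "node_id": "box1", "param_name": "width", "value": 20.0}]}

resp = requests.post("http://localhost:8001/validate", json={"patch": patch, "current_scene": scene})
print(resp.json())  # e.g. {'valid': True, 'checks_passed': 17, 'errors': []}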

Patch Schema

set_param

Update a parameter value on an existing node.

{
  "type": "set_param",
  "node_id": "box1",
  "param_name": "width",
  "value": 20.0
}

add_node

Create a new geometric node in the scene.

{
  "type": "add_node",
  "node_id": "cyl1",
  "op": "sdf.cylinder",
  "params": {
    "radius": 5.0,
    "height": 10.0
  },
  "inputs": []
}

delete_node

Remove a geometric node and clean up dependent references.

{
  "type": "delete_node",
  "node_id": "box1"
}

add_constraint

Add an optimization constraint between nodes.

{
  "type": "add_constraint",
  "constraint": {
    "id": "C1",
    "type": "distance",
    "targets": ["box1", "sphere1"],
    "value": 15.0,
    "weight": 1.0,
    "samples": [
      {
        "node": "box1",
        "point": [0, 0, 0]
      },
      {
        "node": "sphere1",
        "point": [0, 0, 0]
      }
    ]
  }
}

Constraint Types:

  • distance: Maintain distance between geometries
  • tangent: Make surfaces tangent at contact points
  • angle: Maintain angle between surface normals

delete_constraint

Remove an optimization constraint.

{
  "type": "delete_constraint",
  "constraint_id": "C1"
}

Schema Validation

The orchestrator performs 17 validation checks before returning patches:

Structural Checks (1-6)

  1. ✓ Patch has 'actions' array
  2. ✓ Each action has 'type' field
  3. ✓ Action types are valid
  4. ✓ All referenced nodes exist
  5. ✓ No circular dependencies in inputs
  6. ✓ Parameter names match operator schema

Semantic Checks (7-12)

  7. ✓ Parameter values within valid ranges
  8. ✓ Constraint targets have geometry nodes
  9. ✓ Constraint types are valid
  10. ✓ Constraint samples reference valid nodes
  11. ✓ No duplicate node IDs
  12. ✓ No duplicate constraint IDs

Type Safety Checks (13-17)

  13. ✓ Numeric params are numbers
  14. ✓ String params are strings
  15. ✓ Array params are arrays
  16. ✓ Inputs array contains valid node IDs
  17. ✓ Transform parameters have correct dimensionality
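
The validator itself is part of the service; the sketch below only mirrors a few of the structural checks (1-4) to illustrate the kind of inspection each action receives. It is illustrative, not the service's implementation:

VALID_TYPES = {"set_param", "add_node", "delete_node", "add_constraint", "delete_constraint"}

def check_structural(patch, scene):
    # Illustrative only; not the service's validator
    errors = []
    actions = patch.get("actions")
    if not isinstance(actions, list):
        return ["Patch has no 'actions' array"]
    for action in actions:
        a_type = action.get("type")
        if a_type is None:
            errors.append("Action missing 'type' field")
        elif a_type not in VALID_TYPES:
            errors.append(f"Unknown action type: {a_type}")
        if a_type in {"set_param", "delete_node"} and action.get("node_id") not in scene["nodes"]:
            errors.append(f"Referenced node does not exist: {action.get('node_id')}")
    return errors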

Supported Operators

The orchestrator understands 17 geometric operators:

SDF Primitives

  • sdf.box - Axis-aligned box
  • sdf.sphere - Sphere
  • sdf.cylinder - Cylinder
  • sdf.cone - Cone/frustum
  • sdf.torus - Torus
  • sdf.capsule - Capsule/pill shape
  • sdf.halfspace - Infinite plane

Transforms

  • transform.translate - Translation
  • transform.rotate - Rotation
  • transform.scale - Scaling

Boolean Operations

  • boolean.union - Union
  • boolean.intersect - Intersection
  • boolean.subtract - Subtraction
  • boolean.smooth_union - Smooth union

Advanced Operations

  • chart.bspline - B-spline curve
  • chart.extrude - Linear extrusion
  • chart.revolve - Surface of revolution
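
Operators compose through each node's inputs list. The scene below sketches that composition; the parameter names for the translate and union nodes are illustrative assumptions, not taken from the operator schemas:

scene = {
    "nodes": {
        "box1": {"op": "sdf.box", "params": {"width": 10.0, "height": 5.0, "depth": 3.0}},
        "cyl1": {"op": "sdf.cylinder", "params": {"radius": 2.0, "height": 8.0}},
        "cyl1_moved": {"op": "transform.translate", "params": {"offset": [0.0, 5.0, 0.0]}, "inputs": ["cyl1"]},
        "body": {"op": "boolean.union", "params": {}, "inputs": ["box1", "cyl1_moved"]}
    },
    "constraints": []
}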

Example Use Cases

Parametric Design Iteration

import requests

# Initial design
scene = {
    "nodes": {
        "base": {"op": "sdf.box", "params": {"width": 100, "height": 20, "depth": 50}}
    },
    "constraints": []
}

# Iterative refinement
instructions = [
    "Make the base 50% wider",
    "Add a cylinder on top with radius 15 and height 30",
    "Add a distance constraint of 5mm between base and cylinder"
]

for instruction in instructions:
    response = requests.post('http://localhost:8001/edit', json={
        'instruction': instruction,
        'current_scene': scene
    })

    patch = response.json()['patch']
    scene = apply_patch(scene, patch)  # Apply patch to scene

print("Final scene:", scene)

Constraint-Based Design

# Complex constraint setup via natural language
instruction = """
Create a gear assembly:
1. Add a cylinder gear1 with radius 20mm and height 10mm
2. Add a second cylinder gear2 with radius 15mm and height 10mm
3. Add a distance constraint of 35mm between their centers
4. Add a tangent constraint where they mesh
"""

response = requests.post('http://localhost:8001/edit', json={
    'instruction': instruction,
    'current_scene': empty_scene
})

patch = response.json()['patch']
# Patch will contain 2 add_node actions + 2 add_constraint actions

Multi-Step Editing with Context

# Conversational editing with history
history = []

# First edit
response1 = requests.post('http://localhost:8001/edit', json={
    'instruction': "Create a box named housing with dimensions 100x50x30",
    'current_scene': scene,
    'conversation_history': history
})
history.append({"role": "user", "content": "Create a box named housing..."})
history.append({"role": "assistant", "content": response1.json()['reasoning']})

# Second edit with context
response2 = requests.post('http://localhost:8001/edit', json={
    'instruction': "Make it 20% taller",  # "it" refers to housing
    'current_scene': updated_scene,
    'conversation_history': history
})

Configuration

Environment Variables

# Required
OPENAI_API_KEY=sk-...

# Optional
OPENAI_MODEL=gpt-4                  # Default: gpt-4
OPENAI_TEMPERATURE=0.0              # Default: 0.0 (deterministic)
OPENAI_MAX_TOKENS=2000              # Default: 2000
LOG_LEVEL=INFO                      # Default: INFO
PORT=8001                           # Default: 8001
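
One way to load these settings with python-dotenv (installed in the Quick Start step) is sketched below; the names match the list above:

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]               # required
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4")
OPENAI_TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", "0.0"))
OPENAI_MAX_TOKENS = int(os.getenv("OPENAI_MAX_TOKENS", "2000"))
PORT = int(os.getenv("PORT", "8001"))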

System Prompt Customization

Edit system_prompt.txt to customize LLM behavior:

# main.py
with open('system_prompt.txt') as f:
    SYSTEM_PROMPT = f.read()

# Modify prompt to emphasize specific behaviors
SYSTEM_PROMPT += """
Additional Guidelines:
- Always preserve existing constraints unless explicitly asked to remove them
- Prefer smooth_union over sharp union for organic shapes
- Use standard gear modules (1, 1.5, 2, 2.5, 3, 4, 5mm)
"""

Performance Optimization

Caching

import json
from functools import lru_cache

@lru_cache(maxsize=1000)
def cached_llm_call(instruction: str, scene_json: str):
    # lru_cache keys on the string arguments, so identical repeated
    # queries skip the OpenAI round-trip entirely
    return call_openai(instruction, json.loads(scene_json))

# Usage: serialize the scene deterministically so equal scenes produce equal cache keys
scene_json = json.dumps(scene, sort_keys=True)
result = cached_llm_call(instruction, scene_json)

Async Processing

from fastapi import BackgroundTasks

# generate_task_id, process_edit and get_result are application-level helpers
# for task bookkeeping; they are not part of FastAPI

@app.post("/edit_async")
async def edit_async(request: EditRequest, background_tasks: BackgroundTasks):
    task_id = generate_task_id()

    # Run the LLM call after the response has been returned
    background_tasks.add_task(process_edit, task_id, request)

    return {"task_id": task_id, "status": "processing"}

@app.get("/status/{task_id}")
async def get_status(task_id: str):
    result = get_result(task_id)
    return {"status": result.status, "patch": result.patch if result.done else None}

Batch Processing

@app.post("/edit_batch")
async def edit_batch(requests: List[EditRequest]):
    # Process multiple edits in parallel
    tasks = [edit_single(req) for req in requests]
    results = await asyncio.gather(*tasks)
    return {"results": results}

Deployment

Docker

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8001

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8001"]

# Build and run the container
docker build -t nlpatch:latest .
docker run -p 8001:8001 -e OPENAI_API_KEY=sk-... nlpatch:latest

Kubernetes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nlpatch
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orchestrator
  template:
    metadata:
      labels:
        app: orchestrator
    spec:
      containers:
      - name: api
        image: nlpatch:latest
        ports:
        - containerPort: 8001
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: openai-secret
              key: api-key
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: orchestrator-service
spec:
  selector:
    app: orchestrator
  ports:
  - port: 80
    targetPort: 8001
  type: LoadBalancer
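
Apply the manifests (saved here as nlpatch.yaml, a filename chosen for illustration) and check the pods:

kubectl apply -f nlpatch.yaml
kubectl get pods -l app=orchestrator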

AWS Lambda

# lambda_handler.py
from mangum import Mangum
from main import app

handler = Mangum(app)

Deploy with AWS SAM:

# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  OrchestratorFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: lambda_handler.handler
      Runtime: python3.11
      Timeout: 30
      Environment:
        Variables:
          OPENAI_API_KEY: !Ref OpenAIKey
      Events:
        EditAPI:
          Type: Api
          Properties:
            Path: /edit
            Method: post
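
Then build and deploy with the SAM CLI:

sam build
sam deploy --guided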

Error Handling

# Example error response
{
  "error": {
    "type": "ValidationError",
    "message": "Parameter 'width' out of range [0.1, 1000.0]",
    "details": {
      "node_id": "box1",
      "param_name": "width",
      "value": 5000.0,
      "valid_range": [0.1, 1000.0]
    }
  },
  "patch": null,
  "validation": {
    "valid": false,
    "checks_passed": 16,
    "errors": ["Check 7 failed: Parameter values within valid ranges"]
  }
}
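
Clients can branch on the validation block rather than relying on the HTTP status alone (sketch; instruction, scene and apply_patch are the client-side pieces from the earlier examples):

import requests

resp = requests.post("http://localhost:8001/edit", json={"instruction": instruction, "current_scene": scene})
body = resp.json()

if body.get("patch") and body["validation"]["valid"]:
    scene = apply_patch(scene, body["patch"])
else:
    print(body["validation"]["errors"])     # which of the 17 checks failed
    if body.get("error"):
        print(body["error"]["message"])     # structured error details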

Testing

# Run unit tests
pytest tests/

# Run integration tests with live API
pytest tests/integration/ --api-url http://localhost:8001

# Test with mock LLM (no API key required)
pytest tests/ --mock-llm

Building from Source

# Clone repository
git clone https://github.com/dpmorr/NLPatch.git
cd NLPatch

# Create environment
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt  # For testing

# Run tests
pytest

# Start server
uvicorn main:app --reload

Examples

See examples/ directory:

  • basic_editing.py - Simple parameter updates
  • constraint_setup.py - Adding multiple constraints
  • conversational_cad.py - Multi-turn editing session
  • batch_processing.py - Batch edit operations

Limitations

  • LLM Dependencies: Requires active OpenAI API access
  • Latency: 1-3s per request depending on instruction complexity
  • Cost: GPT-4 API usage is roughly $0.03-0.06 per request, depending on instruction and scene size
  • Context Limits: Very large scenes may exceed token limits (use scene summarization)

Citation

If you use NLPatch in academic work, please cite:

@software{nlpatch,
  title={NLPatch: LLM-Driven Geometric Editing},
  author={Your Name},
  year={2025},
  url={https://github.com/dpmorr/NLPatch}
}

License

MIT License - see LICENSE file for details

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

Support
