A FastAPI-based service that translates natural language instructions into structured geometric edits using GPT-4 with schema validation and constraint preservation.
NLPatch bridges the gap between natural language and formal geometric representations. It uses large language models (GPT-4) to interpret design intent and generate validated edit operations on geometric scene graphs, enabling intuitive CAD manipulation through conversational interfaces.
- Natural Language Processing: Interpret complex design instructions in plain English
- Structured Output Generation: Convert intent to validated JSON patch operations
- 17-Point Schema Validation: Ensure geometric consistency and constraint preservation
- Multi-Operation Support: Handle parameter updates, node creation/deletion, constraint management
- Fast Inference: <2s end-to-end latency for typical queries
- Production-Ready: FastAPI REST API with async support
# Create virtual environment
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install fastapi uvicorn openai pydantic python-dotenv jsonschema
# Set OpenAI API key
echo "OPENAI_API_KEY=sk-..." > .env

# Development mode with auto-reload
uvicorn main:app --reload --port 8001
# Production mode
uvicorn main:app --host 0.0.0.0 --port 8001 --workers 4

curl -X POST http://localhost:8001/edit \
-H "Content-Type: application/json" \
-d '{
"instruction": "Make the box twice as wide and add a distance constraint of 15mm between box1 and sphere1",
"current_scene": {
"nodes": {
"box1": {
"op": "sdf.box",
"params": {"width": 10.0, "height": 5.0, "depth": 3.0}
},
"sphere1": {
"op": "sdf.sphere",
"params": {"radius": 8.0}
}
},
"constraints": []
}
}'

Response:
{
"patch": {
"actions": [
{
"type": "set_param",
"node_id": "box1",
"param_name": "width",
"value": 20.0
},
{
"type": "add_constraint",
"constraint": {
"id": "C1",
"type": "distance",
"targets": ["box1", "sphere1"],
"value": 15.0,
"weight": 1.0
}
}
]
},
"validation": {
"valid": true,
"checks_passed": 17
},
"latency_ms": 1847
}

User Input (Natural Language)
↓
GPT-4 with System Prompt
↓
JSON Patch (Structured Edits)
↓
17-Point Schema Validator
↓
Validated Patch Output
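The flow above can be sketched as a single function. Here `llm` and `validate` are injected stand-ins (hypothetical names) for the OpenAI call and the 17-point validator, not the service's actual internals:

```python
import json

def orchestrate(instruction, scene, llm, validate):
    # Build a prompt from the instruction and the serialized scene graph
    prompt = (
        "Return a JSON patch for this edit.\n"
        f"Scene: {json.dumps(scene, sort_keys=True)}\n"
        f"Instruction: {instruction}"
    )
    raw = llm(prompt)          # a chat-completion call in the real service
    patch = json.loads(raw)    # structured output must parse as JSON
    report = validate(patch, scene)
    if not report["valid"]:
        raise ValueError(report["errors"])
    return patch

# Exercise the pipeline with a stubbed LLM (no API key needed):
stub_llm = lambda prompt: (
    '{"actions": [{"type": "set_param", "node_id": "box1",'
    ' "param_name": "width", "value": 20.0}]}'
)
stub_validate = lambda patch, scene: {"valid": True, "errors": []}
patch = orchestrate("make box1 twice as wide",
                    {"nodes": {}, "constraints": []},
                    stub_llm, stub_validate)
```

Injecting the LLM and validator as callables also makes the pipeline testable offline, as in the stubbed call above.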
The orchestrator supports 5 core operation types:
- set_param: Update parameter values
- add_node: Create new geometric nodes
- add_constraint: Add optimization constraints
- delete_node: Remove geometric nodes
- delete_constraint: Remove constraints
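Clients apply returned patches themselves. A minimal `apply_patch` dispatcher over these five action types might look like this (a sketch assuming the action shapes documented below, not the service's own code):

```python
import copy

def apply_patch(scene, patch):
    """Apply a validated patch to a scene dict, returning a new scene."""
    scene = copy.deepcopy(scene)  # leave the caller's scene untouched
    for action in patch["actions"]:
        kind = action["type"]
        if kind == "set_param":
            scene["nodes"][action["node_id"]]["params"][action["param_name"]] = action["value"]
        elif kind == "add_node":
            scene["nodes"][action["node_id"]] = {
                "op": action["op"],
                "params": action["params"],
                "inputs": action.get("inputs", []),
            }
        elif kind == "delete_node":
            scene["nodes"].pop(action["node_id"], None)
        elif kind == "add_constraint":
            scene["constraints"].append(action["constraint"])
        elif kind == "delete_constraint":
            scene["constraints"] = [c for c in scene["constraints"]
                                    if c["id"] != action["constraint_id"]]
        else:
            raise ValueError(f"unknown action type: {kind}")
    return scene

# Usage:
base = {"nodes": {"box1": {"op": "sdf.box", "params": {"width": 10.0},
                           "inputs": []}}, "constraints": []}
patched = apply_patch(base, {"actions": [
    {"type": "set_param", "node_id": "box1", "param_name": "width", "value": 20.0},
]})
```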
Convert natural language instruction to geometric patch.
Request Body:
{
"instruction": "string", // Natural language command
"current_scene": { // Current scene graph
"nodes": {
"node_id": {
"op": "string",
"params": { ... },
"inputs": ["string"]
}
},
"constraints": [ ... ]
},
"conversation_history": [ // Optional: previous exchanges
{
"role": "user",
"content": "string"
}
]
}

Response:
{
"patch": {
"actions": [
{
"type": "set_param | add_node | delete_node | add_constraint | delete_constraint",
...
}
]
},
"validation": {
"valid": true,
"checks_passed": 17,
"errors": []
},
"reasoning": "string", // LLM's reasoning (if requested)
"latency_ms": 1847
}

Health check endpoint.
Response:
{
"status": "healthy",
"openai_configured": true,
"version": "1.0.0"
}

Validate a patch without LLM inference.
Request Body:
{
"patch": { ... },
"current_scene": { ... }
}

Response:
{
"valid": true,
"checks_passed": 17,
"errors": []
}

Update a parameter value on an existing node.
{
"type": "set_param",
"node_id": "box1",
"param_name": "width",
"value": 20.0
}

Create a new geometric node in the scene.
{
"type": "add_node",
"node_id": "cyl1",
"op": "sdf.cylinder",
"params": {
"radius": 5.0,
"height": 10.0
},
"inputs": []
}

Remove a geometric node and clean up dependent references.
{
"type": "delete_node",
"node_id": "box1"
}

Add an optimization constraint between nodes.
{
"type": "add_constraint",
"constraint": {
"id": "C1",
"type": "distance",
"targets": ["box1", "sphere1"],
"value": 15.0,
"weight": 1.0,
"samples": [
{
"node": "box1",
"point": [0, 0, 0]
},
{
"node": "sphere1",
"point": [0, 0, 0]
}
]
}
}

Constraint Types:
- distance: Maintain distance between geometries
- tangent: Make surfaces tangent at contact points
- angle: Maintain angle between surface normals
Remove an optimization constraint.
{
"type": "delete_constraint",
"constraint_id": "C1"
}

The orchestrator performs 17 validation checks before returning patches:
- ✓ Patch has 'actions' array
- ✓ Each action has 'type' field
- ✓ Action types are valid
- ✓ All referenced nodes exist
- ✓ No circular dependencies in inputs
- ✓ Parameter names match operator schema
- ✓ Parameter values within valid ranges
- ✓ Constraint targets have geometry nodes
- ✓ Constraint types are valid
- ✓ Constraint samples reference valid nodes
- ✓ No duplicate node IDs
- ✓ No duplicate constraint IDs
- ✓ Numeric params are numbers
- ✓ String params are strings
- ✓ Array params are arrays
- ✓ Inputs array contains valid node IDs
- ✓ Transform parameters have correct dimensionality
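Most of these checks are simple shape tests; the circular-dependency check needs a graph traversal. A sketch of how such a check might be written (illustrative only), assuming nodes carry an `inputs` list as shown elsewhere in this README:

```python
def has_cycle(nodes):
    """Detect circular dependencies in node `inputs` via depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {nid: WHITE for nid in nodes}

    def visit(nid):
        color[nid] = GRAY
        for dep in nodes[nid].get("inputs", []):
            if color.get(dep) == GRAY:
                return True        # back edge: dep is on the current path
            if color.get(dep) == WHITE and visit(dep):
                return True
        color[nid] = BLACK
        return False

    return any(visit(n) for n in nodes if color[n] == WHITE)
```

Missing `inputs` references are ignored here because a separate check already verifies that all referenced nodes exist.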
The orchestrator understands 17 geometric operators:
SDF Primitives:
- sdf.box: Axis-aligned box
- sdf.sphere: Sphere
- sdf.cylinder: Cylinder
- sdf.cone: Cone/frustum
- sdf.torus: Torus
- sdf.capsule: Capsule/pill shape
- sdf.halfspace: Infinite plane

Transforms:
- transform.translate: Translation
- transform.rotate: Rotation
- transform.scale: Scaling

Booleans:
- boolean.union: Union
- boolean.intersect: Intersection
- boolean.subtract: Subtraction
- boolean.smooth_union: Smooth union

Charts:
- chart.bspline: B-spline curve
- chart.extrude: Linear extrusion
- chart.revolve: Surface of revolution
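These operators compose through the `inputs` field. A small illustrative scene (node ids are made up for this example) that translates a cylinder and subtracts it from a box:

```python
# A plate with an off-center hole: the cylinder is translated, then
# subtracted from the box. "result" is the root of the graph.
scene = {
    "nodes": {
        "plate":  {"op": "sdf.box",
                   "params": {"width": 40.0, "height": 4.0, "depth": 40.0},
                   "inputs": []},
        "hole":   {"op": "sdf.cylinder",
                   "params": {"radius": 3.0, "height": 10.0},
                   "inputs": []},
        "moved":  {"op": "transform.translate",
                   "params": {"offset": [10.0, 0.0, 0.0]},
                   "inputs": ["hole"]},
        "result": {"op": "boolean.subtract",
                   "params": {},
                   "inputs": ["plate", "moved"]},
    },
    "constraints": [],
}
```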
import requests
# Initial design
scene = {
"nodes": {
"base": {"op": "sdf.box", "params": {"width": 100, "height": 20, "depth": 50}}
},
"constraints": []
}
# Iterative refinement
instructions = [
"Make the base 50% wider",
"Add a cylinder on top with radius 15 and height 30",
"Add a distance constraint of 5mm between base and cylinder"
]
for instruction in instructions:
response = requests.post('http://localhost:8001/edit', json={
'instruction': instruction,
'current_scene': scene
})
patch = response.json()['patch']
scene = apply_patch(scene, patch) # Apply patch to scene
print("Final scene:", scene)

# Complex constraint setup via natural language
instruction = """
Create a gear assembly:
1. Add a cylinder gear1 with radius 20mm and height 10mm
2. Add a second cylinder gear2 with radius 15mm and height 10mm
3. Add a distance constraint of 35mm between their centers
4. Add a tangent constraint where they mesh
"""
response = requests.post('http://localhost:8001/edit', json={
'instruction': instruction,
'current_scene': empty_scene
})
patch = response.json()['patch']
# Patch will contain 2 add_node actions + 2 add_constraint actions

# Conversational editing with history
history = []
# First edit
response1 = requests.post('http://localhost:8001/edit', json={
'instruction': "Create a box named housing with dimensions 100x50x30",
'current_scene': scene,
'conversation_history': history
})
history.append({"role": "user", "content": "Create a box named housing..."})
history.append({"role": "assistant", "content": response1.json()['reasoning']})
# Second edit with context
response2 = requests.post('http://localhost:8001/edit', json={
'instruction': "Make it 20% taller", # "it" refers to housing
'current_scene': updated_scene,
'conversation_history': history
})

# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_MODEL=gpt-4 # Default: gpt-4
OPENAI_TEMPERATURE=0.0 # Default: 0.0 (deterministic)
OPENAI_MAX_TOKENS=2000 # Default: 2000
LOG_LEVEL=INFO # Default: INFO
PORT=8001 # Default: 8001

Edit system_prompt.txt to customize LLM behavior:
# main.py
with open('system_prompt.txt') as f:
SYSTEM_PROMPT = f.read()
# Modify prompt to emphasize specific behaviors
SYSTEM_PROMPT += """
Additional Guidelines:
- Always preserve existing constraints unless explicitly asked to remove them
- Prefer smooth_union over sharp union for organic shapes
- Use standard gear modules (1, 1.5, 2, 2.5, 3, 4, 5mm)
"""

from functools import lru_cache
import json

@lru_cache(maxsize=1000)
def cached_llm_call(instruction, scene_json):
    # lru_cache keys on the arguments themselves, so the scene is passed
    # as a canonical JSON string (hashable) rather than a dict
    return call_openai(instruction, json.loads(scene_json))

# Usage
scene_json = json.dumps(scene, sort_keys=True)
result = cached_llm_call(instruction, scene_json)

from fastapi import BackgroundTasks
@app.post("/edit_async")
async def edit_async(request: EditRequest, background_tasks: BackgroundTasks):
task_id = generate_task_id()
background_tasks.add_task(process_edit, task_id, request)
return {"task_id": task_id, "status": "processing"}
@app.get("/status/{task_id}")
async def get_status(task_id: str):
result = get_result(task_id)
return {"status": result.status, "patch": result.patch if result.done else None}

import asyncio
from typing import List

@app.post("/edit_batch")
async def edit_batch(requests: List[EditRequest]):
# Process multiple edits in parallel
tasks = [edit_single(req) for req in requests]
results = await asyncio.gather(*tasks)
return {"results": results}

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8001
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8001"]

docker build -t nlpatch:latest .
docker run -p 8001:8001 -e OPENAI_API_KEY=sk-... nlpatch:latest

apiVersion: apps/v1
kind: Deployment
metadata:
name: nlpatch
spec:
replicas: 2
selector:
matchLabels:
app: orchestrator
template:
metadata:
labels:
app: orchestrator
spec:
containers:
- name: api
image: nlpatch:latest
ports:
- containerPort: 8001
env:
- name: OPENAI_API_KEY
valueFrom:
secretKeyRef:
name: openai-secret
key: api-key
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
name: orchestrator-service
spec:
selector:
app: orchestrator
ports:
- port: 80
targetPort: 8001
type: LoadBalancer

# lambda_handler.py
from mangum import Mangum
from main import app
handler = Mangum(app)

Deploy with AWS SAM:
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
OrchestratorFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: .
Handler: lambda_handler.handler
Runtime: python3.11
Timeout: 30
Environment:
Variables:
OPENAI_API_KEY: !Ref OpenAIKey
Events:
EditAPI:
Type: Api
Properties:
Path: /edit
Method: post

# Example error response
{
"error": {
"type": "ValidationError",
"message": "Parameter 'width' out of range [0.1, 1000.0]",
"details": {
"node_id": "box1",
"param_name": "width",
"value": 5000.0,
"valid_range": [0.1, 1000.0]
}
},
"patch": null,
"validation": {
"valid": false,
"checks_passed": 16,
"errors": ["Check 7 failed: Parameter values within valid ranges"]
}
}

# Run unit tests
pytest tests/
# Run integration tests with live API
pytest tests/integration/ --api-url http://localhost:8001
# Test with mock LLM (no API key required)
pytest tests/ --mock-llm

# Clone repository
git clone https://github.com/YOUR_USERNAME/nlpatch.git
cd nlpatch
# Create environment
python -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt # For testing
# Run tests
pytest
# Start server
uvicorn main:app --reload

See examples/ directory:
- basic_editing.py: Simple parameter updates
- constraint_setup.py: Adding multiple constraints
- conversational_cad.py: Multi-turn editing session
- batch_processing.py: Batch edit operations
- LLM Dependencies: Requires active OpenAI API access
- Latency: 1-3s per request depending on instruction complexity
- Cost: GPT-4 API costs ~$0.03-0.06 per request
- Context Limits: Very large scenes may exceed token limits (use scene summarization)
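One possible shape for the scene-summarization workaround mentioned above: keep only node ids, ops, and dependency wiring, and drop parameter blocks before the scene goes into the prompt. `summarize_scene` is a hypothetical helper, not part of the NLPatch API:

```python
def summarize_scene(scene, keep_params=False):
    # Keep node ids, ops, and the dependency wiring; drop parameter
    # blocks unless requested, since they are usually the bulk of the tokens
    return {
        "nodes": {
            nid: {
                "op": node["op"],
                "inputs": node.get("inputs", []),
                **({"params": node["params"]} if keep_params else {}),
            }
            for nid, node in scene["nodes"].items()
        },
        "n_constraints": len(scene.get("constraints", [])),
    }

# json.dumps(summarize_scene(scene)) is what would go into the prompt
```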
If you use NLPatch in academic work, please cite:
@software{modelspace_orchestrator,
title={NLPatch: LLM-Driven Geometric Editing},
author={Your Name},
year={2025},
url={https://github.com/YOUR_USERNAME/nlpatch}
}

MIT License - see LICENSE file for details
Contributions welcome! See CONTRIBUTING.md for guidelines.
- Issues: https://github.com/YOUR_USERNAME/nlpatch/issues
- Documentation: https://nlpatch.readthedocs.io
- Email: support@example.com