Automatic MCP Server & OpenAI Tools Bridge for apcore.
apcore-mcp turns any apcore-based project into an MCP Server and OpenAI tool provider — with zero code changes to your existing project.
```
┌──────────────────┐
│  django-apcore   │  ← your existing apcore project (unchanged)
│  flask-apcore    │
│  ...             │
└────────┬─────────┘
         │  extensions directory
         ▼
┌──────────────────┐
│    apcore-mcp    │  ← just install & point to extensions dir
└───┬──────────┬───┘
    │          │
    ▼          ▼
  MCP        OpenAI
 Server      Tools
```
- Zero intrusion — your apcore project needs no code changes, no imports, no dependencies on apcore-mcp
- Zero configuration — point to an extensions directory, everything is auto-discovered
- Pure adapter — apcore-mcp reads from the apcore Registry; it never modifies your modules
- Works with any `xxx-apcore` project — if it uses the apcore Module Registry, apcore-mcp can serve it
Install apcore-mcp alongside your existing apcore project:
```
pip install apcore-mcp
```
That's it. Your existing project requires no changes.
Requires Python 3.10+ and apcore >= 0.5.0.
If you already have an apcore-based project with an extensions directory, just run:
```
apcore-mcp --extensions-dir /path/to/your/extensions
```
All modules are auto-discovered and exposed as MCP tools. No code needed.
For tighter integration or when you need filtering/OpenAI output:
```python
from apcore import Registry
from apcore_mcp import serve, to_openai_tools

registry = Registry(extensions_dir="./extensions")
registry.discover()

# Launch as MCP Server
serve(registry)

# Or export as OpenAI tools
tools = to_openai_tools(registry)
```
A typical project layout:
```
your-project/
├── extensions/          ← modules live here
│   ├── image_resize/
│   ├── text_translate/
│   └── ...
├── your_app.py          ← your existing code (untouched)
└── ...
```
No changes to your project. Just run apcore-mcp alongside it:
```
# Install (one time)
pip install apcore-mcp

# Run
apcore-mcp --extensions-dir ./extensions
```
Your existing application continues to work exactly as before. apcore-mcp operates as a separate process that reads from the same extensions directory.
For OpenAI integration, a thin script is needed — but still no changes to your existing modules:
```python
from apcore import Registry
from apcore_mcp import to_openai_tools

registry = Registry(extensions_dir="./extensions")
registry.discover()

tools = to_openai_tools(registry)
# Use with openai.chat.completions.create(tools=tools)
```
For Claude Desktop, add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "apcore": {
      "command": "apcore-mcp",
      "args": ["--extensions-dir", "/path/to/your/extensions"]
    }
  }
}
```
For Claude Code, add to `.mcp.json` in your project root:
```json
{
  "mcpServers": {
    "apcore": {
      "command": "apcore-mcp",
      "args": ["--extensions-dir", "./extensions"]
    }
  }
}
```
For Cursor, add to `.cursor/mcp.json` in your project root:
```json
{
  "mcpServers": {
    "apcore": {
      "command": "apcore-mcp",
      "args": ["--extensions-dir", "./extensions"]
    }
  }
}
```
To serve over HTTP instead of stdio:
```
apcore-mcp --extensions-dir ./extensions \
  --transport streamable-http \
  --host 0.0.0.0 \
  --port 9000
```
Connect any MCP client to `http://your-host:9000/mcp`.
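For example, with the official MCP Python SDK (a minimal sketch; it assumes the server started above is reachable on localhost):
```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Open a Streamable HTTP connection to the running apcore-mcp server
    async with streamablehttp_client("http://localhost:9000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the auto-discovered apcore modules exposed as tools
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```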
Full CLI reference:
```
apcore-mcp --extensions-dir PATH [OPTIONS]
```

| Option | Default | Description |
|---|---|---|
| `--extensions-dir` | (required) | Path to apcore extensions directory |
| `--transport` | `stdio` | Transport: `stdio`, `streamable-http`, or `sse` |
| `--host` | `127.0.0.1` | Host for HTTP-based transports |
| `--port` | `8000` | Port for HTTP-based transports (1-65535) |
| `--name` | `apcore-mcp` | MCP server name (max 255 chars) |
| `--version` | package version | MCP server version string |
| `--log-level` | `INFO` | Logging: `DEBUG`, `INFO`, `WARNING`, `ERROR` |
Exit codes: 0 normal, 1 invalid arguments, 2 startup failure.
To launch the server from Python:
```python
from apcore_mcp import serve

serve(
    registry_or_executor,    # Registry or Executor
    transport="stdio",       # "stdio" | "streamable-http" | "sse"
    host="127.0.0.1",        # host for HTTP transports
    port=8000,               # port for HTTP transports
    name="apcore-mcp",       # server name
    version=None,            # defaults to package version
)
```
Accepts either a Registry or an Executor. When a Registry is passed, an Executor is created automatically.
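For example, serving over Streamable HTTP from Python instead of the CLI:
```python
from apcore import Registry
from apcore_mcp import serve

registry = Registry(extensions_dir="./extensions")
registry.discover()

# Equivalent to: apcore-mcp --extensions-dir ./extensions \
#   --transport streamable-http --host 0.0.0.0 --port 9000
serve(registry, transport="streamable-http", host="0.0.0.0", port=9000)
```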
To export modules as OpenAI tool definitions:
```python
from apcore_mcp import to_openai_tools

tools = to_openai_tools(
    registry_or_executor,     # Registry or Executor
    embed_annotations=False,  # append annotation hints to descriptions
    strict=False,             # OpenAI Structured Outputs strict mode
    tags=None,                # filter by tags, e.g. ["image"]
    prefix=None,              # filter by module ID prefix, e.g. "image"
)
```
Returns a list of dicts directly usable with the OpenAI API:
```python
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Resize the image to 512x512"}],
    tools=tools,
)
```
- Strict mode (`strict=True`): sets `additionalProperties: false`, makes all properties required (optional ones become nullable), removes defaults.
- Annotation embedding (`embed_annotations=True`): appends `[Annotations: read_only, idempotent]` to descriptions.
- Filtering: `tags=["image"]` or `prefix="text"` to expose a subset of modules.
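For instance, exporting only image-tagged modules with strict schemas and embedded annotation hints (the tag is illustrative; substitute your own modules' tags):
```python
from apcore import Registry
from apcore_mcp import to_openai_tools

registry = Registry(extensions_dir="./extensions")
registry.discover()

# Only modules tagged "image", with strict-mode schemas and
# annotation hints appended to each tool description.
tools = to_openai_tools(
    registry,
    tags=["image"],
    strict=True,
    embed_annotations=True,
)
```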
If you need custom middleware, ACL, or execution configuration:
```python
from apcore import Registry, Executor
from apcore_mcp import serve, to_openai_tools

registry = Registry(extensions_dir="./extensions")
registry.discover()

executor = Executor(registry)
serve(executor)
tools = to_openai_tools(executor)
```
- Auto-discovery — all modules in the extensions directory are found and exposed automatically
- Three transports — stdio (default, for desktop clients), Streamable HTTP, and SSE
- Annotation mapping — apcore annotations (readonly, destructive, idempotent) map to MCP ToolAnnotations
- Schema conversion — JSON Schema `$ref`/`$defs` inlining, strict mode for OpenAI Structured Outputs
- Error sanitization — ACL errors and internal errors are sanitized; stack traces are never leaked
- Dynamic registration — modules registered/unregistered at runtime are reflected immediately
- Dual output — same registry powers both MCP Server and OpenAI tool definitions
| apcore | MCP |
|---|---|
| `module_id` | Tool name |
| `description` | Tool description |
| `input_schema` | `inputSchema` |
| `annotations.readonly` | `ToolAnnotations.readOnlyHint` |
| `annotations.destructive` | `ToolAnnotations.destructiveHint` |
| `annotations.idempotent` | `ToolAnnotations.idempotentHint` |
| `annotations.open_world` | `ToolAnnotations.openWorldHint` |
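As a concrete illustration, a hypothetical module annotated `readonly: true, destructive: false, idempotent: true` would surface on the MCP side roughly as (a sketch using the MCP SDK's `ToolAnnotations` type):
```python
from mcp.types import ToolAnnotations

# Hypothetical apcore module annotations:
#   readonly: true, destructive: false, idempotent: true
hints = ToolAnnotations(
    readOnlyHint=True,
    destructiveHint=False,
    idempotentHint=True,
)
```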
| apcore | OpenAI |
|---|---|
| `module_id` (`image.resize`) | `name` (`image-resize`) |
| `description` | `description` |
| `input_schema` | `parameters` |
Module IDs with dots are normalized to dashes for OpenAI compatibility (bijective mapping).
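A minimal sketch of that round trip (illustrative helper functions, not the package's internal API):
```python
def to_openai_name(module_id: str) -> str:
    """OpenAI function names cannot contain dots; map them to dashes."""
    return module_id.replace(".", "-")

def to_module_id(openai_name: str) -> str:
    """Invert the mapping when routing a tool call back to apcore."""
    return openai_name.replace("-", ".")

assert to_module_id(to_openai_name("image.resize")) == "image.resize"
```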
```
Your apcore project (unchanged)
        │
        │  extensions directory
        ▼
apcore-mcp (separate process / library call)
        │
        ├── MCP Server path
        │     SchemaConverter + AnnotationMapper
        │     → MCPServerFactory → ExecutionRouter → TransportManager
        │
        └── OpenAI Tools path
              SchemaConverter + AnnotationMapper + IDNormalizer
              → OpenAIConverter → list[dict]
```
```
git clone https://github.com/aipartnerup/apcore-mcp-python.git
cd apcore-mcp-python
pip install -e ".[dev]"

pytest          # 260 tests
pytest --cov    # with coverage report
```
Source layout:
```
src/apcore_mcp/
├── __init__.py          # Public API: serve(), to_openai_tools()
├── __main__.py          # CLI entry point
├── adapters/
│   ├── schema.py        # JSON Schema conversion ($ref inlining)
│   ├── annotations.py   # Annotation mapping (apcore → MCP/OpenAI)
│   ├── errors.py        # Error sanitization
│   └── id_normalizer.py # Module ID normalization (dot ↔ dash)
├── converters/
│   └── openai.py        # OpenAI tool definition converter
└── server/
    ├── factory.py       # MCP Server creation and tool building
    ├── router.py        # Tool call → Executor routing
    ├── transport.py     # Transport management (stdio/HTTP/SSE)
    └── listener.py      # Dynamic module registration listener
```
Apache-2.0