This is the backend server for the Contentstack Agent SDK. It handles AI queries, fetches Contentstack content types, and streams AI responses to the client.
- Express.js server with REST APIs
- Contentstack integration – fetches and caches content types
- Webhook support – automatically refreshes cached schemas on updates
- AI-powered responses – connects to LLM providers (Google, Groq, etc.)
- Streaming support – sends partial responses in real time via chunks
- MongoDB storage – stores cached Contentstack schemas
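The chunk-based streaming can be pictured with a small sketch. This is not the SDK's actual code; `streamAnswer` and the fake response object are hypothetical stand-ins for the real Express handler, which writes each piece of the answer with `res.write` as it becomes available instead of buffering the whole response.

```javascript
// Split a full answer into fixed-size chunks (illustrative; a real LLM
// stream yields chunks as the model generates them).
function* chunkText(text, size) {
  for (let i = 0; i < text.length; i += size) {
    yield text.slice(i, i + size);
  }
}

// Write each chunk to the response as soon as it is available.
function streamAnswer(answer, res) {
  for (const chunk of chunkText(answer, 8)) {
    res.write(chunk); // in Express this flushes a partial response
  }
  res.end();
}

// Stand-in for an Express `res` object (only write/end are used)
const fakeResponse = {
  chunks: [],
  done: false,
  write(c) { this.chunks.push(c); },
  end() { this.done = true; },
};

streamAnswer("Partial responses stream in real time.", fakeResponse);
console.log(fakeResponse.chunks.join("")); // reassembles the full answer
```

The client simply concatenates the chunks it receives; no JSON framing is involved.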
- Clone the repo

```bash
git clone https://github.com/Parth-Bhovad/contentstack-agent-server.git
cd contentstack-agent-server
```

- Install dependencies

```bash
npm install
# or
yarn install
```

- Environment variables

Create a `.env` file in the root directory with:

```
PORT=8080

# Contentstack credentials
CONTENTSTACK_API_KEY=your_stack_api_key
CONTENTSTACK_DELIVERY_TOKEN=your_delivery_token
CONTENTSTACK_ENVIRONMENT=your_environment

# LLM API keys
GOOGLE_API_KEY=your_gemini_api_key
GROQ_API_KEY=your_groq_api_key

# Database
MONGO_URI=your_mongo_uri
```
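As a sanity check for the variables above, the server can fail fast when a required one is missing. A hypothetical loader (`loadConfig` is illustrative, not part of the SDK; the real server would read these after `dotenv` populates `process.env`):

```javascript
// Hypothetical config loader for the .env variables documented above.
function loadConfig(env = process.env) {
  const required = [
    "CONTENTSTACK_API_KEY",
    "CONTENTSTACK_DELIVERY_TOKEN",
    "CONTENTSTACK_ENVIRONMENT",
    "MONGO_URI",
  ];
  const missing = required.filter((k) => !env[k]);
  if (missing.length) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return {
    port: Number(env.PORT) || 8080, // default matches the example above
    contentstackApiKey: env.CONTENTSTACK_API_KEY,
    deliveryToken: env.CONTENTSTACK_DELIVERY_TOKEN,
    environment: env.CONTENTSTACK_ENVIRONMENT,
    googleApiKey: env.GOOGLE_API_KEY, // optional, provider-dependent
    groqApiKey: env.GROQ_API_KEY,     // optional, provider-dependent
    mongoUri: env.MONGO_URI,
  };
}

// Example: PORT falls back to 8080 when unset
const cfg = loadConfig({
  CONTENTSTACK_API_KEY: "key",
  CONTENTSTACK_DELIVERY_TOKEN: "token",
  CONTENTSTACK_ENVIRONMENT: "production",
  MONGO_URI: "mongodb://localhost:27017/agent",
});
console.log(cfg.port); // 8080
```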
- Run the server

```bash
npm start
```

The server will be available at http://localhost:8080.

To authenticate the Contentstack MCP CLI:

```bash
npx @contentstack/mcp --auth
```

GET /
Health check endpoint.
Response:

```json
{
  "message": "Welcome to the AI Agent Server API"
}
```

POST /api/v1/ask
Ask a question and stream the AI response back.
Request body:

```json
{
  "query": "What is the latest blog?",
  "contentstackApiKey": "your_stack_api_key",
  "llmApiKey": "your_llm_api_key",
  "llmProvider": "google"
}
```

Response: streaming chunks of plain text (not JSON).
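Because the response is a plain-text stream, a client must read it incrementally rather than calling `res.json()`. A sketch using the Fetch API's stream reader (the call to the server is shown commented out; the demo reads from a locally constructed `Response` so the snippet runs without the server):

```javascript
// Read a fetch Response body chunk by chunk and accumulate the text.
async function readStream(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true }); // partial chunk arrives
  }
  return text;
}

// Request body exactly as documented above
const body = JSON.stringify({
  query: "What is the latest blog?",
  contentstackApiKey: "your_stack_api_key",
  llmApiKey: "your_llm_api_key",
  llmProvider: "google",
});

// Against a running server you would call:
// const res = await fetch("http://localhost:8080/api/v1/ask", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });
// const answer = await readStream(res);

// Standalone demo: stream from a locally constructed Response.
readStream(new Response("The latest blog is ...")).then((text) => {
  console.log(text);
});
```

Requires Node 18+ (global `fetch`/`Response`) or any modern browser.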
POST /webhook
Webhook handler for Contentstack events.
When a content type changes, Contentstack calls this endpoint and the server clears the cached schema in MongoDB, so subsequent requests always fetch fresh content types. This happens automatically; you don't need to call it yourself. It is documented here for reference only.
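Internally, the handler amounts to cache invalidation. A sketch of the idea, with illustrative names throughout: the payload shape is assumed (Contentstack webhooks carry `module`/`data` fields), and `schemaCache` is a `Map` standing in for the MongoDB collection the server actually uses.

```javascript
// Stand-in for the MongoDB-backed schema cache
const schemaCache = new Map();

// Hypothetical webhook handler core: invalidate the cached schema for the
// content type that changed, so the next request re-fetches it.
function handleContentstackWebhook(event, cache = schemaCache) {
  // Only content-type changes should invalidate the schema cache;
  // entry or asset events leave it untouched.
  if (event.module === "content_type") {
    cache.delete(event.data.content_type.uid);
    return { invalidated: true };
  }
  return { invalidated: false };
}

// Example: a cached "blog" schema is cleared when its content type changes
schemaCache.set("blog", { title: "Blog", fields: ["title", "body"] });
const result = handleContentstackWebhook({
  module: "content_type",
  data: { content_type: { uid: "blog" } },
});
console.log(result.invalidated, schemaCache.has("blog")); // true false
```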