Add OpenAI ChatGPT integration with REST API endpoints#5
Co-authored-by: lippytm <65956507+lippytm@users.noreply.github.com>
Pull request overview
This PR adds an OpenAI ChatGPT integration to the Web3AI project, exposing RESTful endpoints for programmatic chat interactions and documenting setup/usage. The changes introduce an OpenAIChat helper, an Express server with /health, /chat, and /chat/system endpoints, and supporting configuration/docs.
Changes:
- Add `OpenAIChat` class wrapping the OpenAI Chat Completions API, including basic validation and common response shaping.
- Implement an Express-based API server (`index.js`) with health and chat endpoints, plus environment-based configuration (API key, port, model).
- Provide project metadata (`package.json`), environment template (`.env.example`), `.gitignore` entries, and a detailed README describing installation and API usage.
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `package.json` | Defines project metadata, scripts, and dependencies for the Express/OpenAI-based API server (note: the Express version here currently diverges from the PR description). |
| `openai-chat.js` | Implements the `OpenAIChat` helper class for sending messages (with optional history or system prompt) to OpenAI and normalizing responses/errors. |
| `index.js` | Sets up the Express server, validates the presence of `OPENAI_API_KEY`, and exposes `/health`, `/chat`, and `/chat/system` endpoints on the configured port. |
| `README.md` | Documents setup, environment variables, endpoints, example requests/responses, and project structure for the new ChatGPT integration. |
| `.gitignore` | Ensures `node_modules`, `.env`, and the lockfile are excluded from version control, protecting secrets and reducing repo noise. |
| `.env.example` | Provides a template for required environment variables (API key, port, and model) for the ChatGPT integration. |
```js
// Build messages array with conversation history
const messages = [
  ...conversationHistory,
```
conversationHistory is spread directly into the messages array without validating its type or structure; if a client sends a non-array value (e.g. a string or object), this will be iterated incorrectly and produce a malformed messages payload for the OpenAI API. Consider validating that conversationHistory is an array of { role, content } objects (or defaulting to an empty array when it is not) before constructing messages.
Suggested change:

```js
// Build messages array with validated conversation history
const safeConversationHistory = Array.isArray(conversationHistory)
  ? conversationHistory.filter(msg =>
      msg &&
      typeof msg === 'object' &&
      typeof msg.role === 'string' &&
      typeof msg.content === 'string'
    )
  : [];
const messages = [
  ...safeConversationHistory,
```
```json
"dependencies": {
  "dotenv": "^16.3.1",
  "express": "^4.18.2",
```
The PR description specifies express@4.19.2, but package.json declares "express": "^4.18.2"; this discrepancy can confuse consumers about which version is actually required. Please align the documented dependency version with the one declared here (or update the dependency if the newer version is intended).
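If the newer version is the intended one, the dependency block would be aligned with the versions the PR description lists (`express@4.19.2`, `openai@4.20.0`, `dotenv@16.3.1`). A sketch of that fix, not the file as committed:

```json
"dependencies": {
  "dotenv": "^16.3.1",
  "express": "^4.19.2",
  "openai": "^4.20.0"
}
```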
```js
app.post('/chat', async (req, res) => {
  try {
    const { message, conversationHistory } = req.body;

    if (!message) {
      return res.status(400).json({
        success: false,
        error: 'Message is required'
      });
    }

    const response = await openaiChat.sendMessage(message, conversationHistory);

    if (response.success) {
      res.json(response);
    } else {
      res.status(500).json(response);
    }
```
Validation errors in OpenAIChat.sendMessage (e.g. non-string message) are converted into { success: false, error: ... }, which the /chat endpoint then returns with HTTP 500, even though these are client-side input issues. To keep API semantics clear, consider either performing full input validation in the route handler and returning HTTP 400 for invalid payloads, or distinguishing validation errors from upstream/OpenAI failures so that only true server errors result in 500 responses.
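One way to keep those semantics clear is to have `sendMessage` tag input problems so the route can map them to a 400 rather than a 500. This is a sketch: the `validationError` flag is a hypothetical addition, not part of the PR's response shape.

```javascript
// Sketch: map a { success, validationError } result to an HTTP status.
// validationError is a hypothetical flag sendMessage would set for
// client-side input problems (e.g. a non-string message); everything
// else that fails is treated as an upstream/server error.
function statusFor(response) {
  if (response.success) return 200;
  return response.validationError ? 400 : 500;
}

// Route handler shape (Express, as in the PR):
//   const response = await openaiChat.sendMessage(message, conversationHistory);
//   res.status(statusFor(response)).json(response);
```
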
```js
const completion = await this.client.chat.completions.create({
  model: this.model,
  messages: messages,
});

// Validate response
if (!completion.choices || completion.choices.length === 0) {
  throw new Error('No response received from OpenAI');
}

const response = completion.choices[0].message.content;

return {
  success: true,
  message: response,
  model: this.model,
  usage: completion.usage,
  finishReason: completion.choices[0].finish_reason
};
```
sendMessage and sendMessageWithSystem duplicate most of the request/response handling logic (building the OpenAI request, validating choices, shaping the return object, and error handling), which makes future changes harder to keep in sync. Consider extracting the shared OpenAI call/response logic into a common helper so that differences between the two methods are limited to how the messages array is constructed.
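A shared helper could look roughly like this. It is a sketch: `completeChat` is a hypothetical method name, and the class skeleton stands in for the PR's `OpenAIChat`; only the construction of the `messages` array would remain in the two public methods.

```javascript
// Sketch of the shared request/response logic both sendMessage and
// sendMessageWithSystem could delegate to. The completeChat name is
// hypothetical; client/model mirror the fields used in the PR's class.
class OpenAIChatSketch {
  constructor(client, model) {
    this.client = client;
    this.model = model;
  }

  // Send a prebuilt messages array and shape the result the same way
  // for every caller.
  async completeChat(messages) {
    const completion = await this.client.chat.completions.create({
      model: this.model,
      messages,
    });
    if (!completion.choices || completion.choices.length === 0) {
      throw new Error('No response received from OpenAI');
    }
    return {
      success: true,
      message: completion.choices[0].message.content,
      model: this.model,
      usage: completion.usage,
      finishReason: completion.choices[0].finish_reason,
    };
  }
}
```

With this in place, `sendMessage` would only build `[...history, userMessage]` and `sendMessageWithSystem` would only prepend the system prompt before calling the helper.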
Implements OpenAI ChatGPT integration with RESTful API endpoints for programmatic AI interaction.

Implementation

Core Module (`openai-chat.js`)
- `OpenAIChat` class encapsulating the OpenAI API client
- `sendMessage()` - basic chat with conversation history support
- `sendMessageWithSystem()` - chat with custom system prompts

API Server (`index.js`)
- `GET /health` - server status check
- `POST /chat` - send messages with optional conversation context
- `POST /chat/system` - send messages with system-level instructions
- Validates presence of `OPENAI_API_KEY`

Configuration
- `.env.example` template with `OPENAI_API_KEY`, `PORT`, `OPENAI_MODEL`
- `.gitignore` excludes secrets and dependencies

Dependencies
- `express@4.19.2`, `openai@4.20.0`, `dotenv@16.3.1`

Usage Example

Response includes success status, AI message, token usage, and model metadata.
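The usage example itself did not survive extraction. Based on the documented `/chat` endpoint and the fields the route handler reads from `req.body` (`message`, `conversationHistory`), a request might be sketched as follows; the prompt text and the port are hypothetical.

```javascript
// Hypothetical request body for POST /chat; the field names match what
// the route handler destructures from req.body.
const body = {
  message: 'Summarize what Web3 is in one sentence.',
  conversationHistory: [
    { role: 'user', content: 'Hi' },
    { role: 'assistant', content: 'Hello! How can I help?' }
  ]
};

// With the server running locally (port assumed from .env.example),
// the request could be sent with fetch:
//
//   const res = await fetch('http://localhost:3000/chat', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(body)
//   });
//   const data = await res.json();
//   // data: { success, message, model, usage, finishReason }

console.log(Object.keys(body).join(', '));
```
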
Original prompt
This pull request was created as a result of the following prompt from Copilot chat.