Add OpenAI ChatGPT integration with REST API endpoints#5

Open
Copilot wants to merge 4 commits into main from
copilot/add-openai-chatgpt-integration

Conversation


Copilot AI commented Dec 14, 2025

Implements OpenAI ChatGPT integration with RESTful API endpoints for programmatic AI interaction.

Implementation

Core Module (openai-chat.js)

  • OpenAIChat class encapsulating OpenAI API client
  • sendMessage() - basic chat with conversation history support
  • sendMessageWithSystem() - chat with custom system prompts
  • Input/response validation to prevent runtime errors
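
The module itself is not reproduced in this PR body, so the following is a minimal sketch of what openai-chat.js plausibly looks like. The OpenAI client is injected through the constructor so the class can be exercised without network access; method names match the bullets above, but option names, defaults, and the error-shaping details are assumptions:

```javascript
// Hypothetical sketch of the OpenAIChat helper described above.
// The real openai-chat.js is not shown in the PR body; the injected
// client would normally be `new OpenAI({ apiKey })` from the openai package.
class OpenAIChat {
  constructor(client, model = 'gpt-3.5-turbo') {
    this.client = client; // OpenAI SDK client (or a stub in tests)
    this.model = model;
  }

  async sendMessage(message, conversationHistory = []) {
    // Input validation to prevent runtime errors, as the PR describes.
    if (typeof message !== 'string' || message.trim() === '') {
      return { success: false, error: 'Message must be a non-empty string' };
    }
    const history = Array.isArray(conversationHistory) ? conversationHistory : [];
    const messages = [...history, { role: 'user', content: message }];
    try {
      const completion = await this.client.chat.completions.create({
        model: this.model,
        messages,
      });
      // Response validation before reading choices[0].
      if (!completion.choices || completion.choices.length === 0) {
        throw new Error('No response received from OpenAI');
      }
      return {
        success: true,
        message: completion.choices[0].message.content,
        model: this.model,
        usage: completion.usage,
        finishReason: completion.choices[0].finish_reason,
      };
    } catch (err) {
      return { success: false, error: err.message };
    }
  }

  async sendMessageWithSystem(message, systemPrompt) {
    // Delegates by prepending a system message to the history.
    return this.sendMessage(message, [{ role: 'system', content: systemPrompt }]);
  }
}

module.exports = { OpenAIChat };
```

Dependency injection keeps the helper unit-testable with a stubbed client, which matters given the firewall restrictions noted later in this PR.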

API Server (index.js)

  • GET /health - server status check
  • POST /chat - send messages with optional conversation context
  • POST /chat/system - send messages with system-level instructions
  • Startup validation for required OPENAI_API_KEY
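
Since index.js is not shown in the PR body, here is one hedged sketch of the /chat route logic, factored into a plain handler function (names assumed) so it can be unit-tested without starting Express. In the real server it would be wired up with app.post('/chat', chatHandler(openaiChat)):

```javascript
// Illustrative sketch of the /chat route logic described above.
// chatHandler is a factory so the OpenAIChat instance can be injected;
// the returned function has the standard Express (req, res) signature.
function chatHandler(openaiChat) {
  return async (req, res) => {
    const { message, conversationHistory } = req.body;
    // Reject missing input with 400 before calling OpenAI.
    if (!message) {
      return res.status(400).json({ success: false, error: 'Message is required' });
    }
    const response = await openaiChat.sendMessage(message, conversationHistory);
    // Forward the helper's result, using 500 for upstream failures.
    return response.success ? res.json(response) : res.status(500).json(response);
  };
}

module.exports = { chatHandler };
```

Keeping the handler separate from the app wiring also makes the startup check for OPENAI_API_KEY easy to keep in one place at the top of index.js.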

Configuration

  • .env.example template with OPENAI_API_KEY, PORT, OPENAI_MODEL
  • .gitignore excludes secrets and dependencies
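
A plausible .env.example matching the variables listed above (all values are placeholders, and the default model name is an assumption):

```
# Required: your OpenAI API key
OPENAI_API_KEY=your-api-key-here

# Optional: port for the Express server
PORT=3000

# Optional: model to use for chat completions
OPENAI_MODEL=gpt-3.5-turbo
```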

Dependencies

  • express@4.19.2, openai@4.20.0, dotenv@16.3.1

Usage Example

# Basic chat
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain Web3"}'

# With system prompt
curl -X POST http://localhost:3000/chat/system \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Explain blockchain", 
    "systemPrompt": "You are a technical educator"
  }'

Response includes success status, AI message, token usage, and model metadata.

Warning

Firewall rules blocked me from connecting to one or more addresses.

I tried to connect to the following addresses, but was blocked by firewall rules:

  • api.openai.com
    • Triggering command: /usr/local/bin/node node index.js (dns block)

If you need me to access, download, or install something from one of these locations, you can update the firewall configuration to allow them.

Original prompt

Problem:

You want to enhance the existing Web3AI project by integrating OpenAI ChatGPT capabilities. This involves setting up functionality to interact with OpenAI's GPT models, making it possible to send requests and receive responses programmatically.

Solution:

I will extend the Web3AI project with the following features:

  1. OpenAI ChatGPT Integration:

    • Add Node.js code to set up an API for interacting with OpenAI ChatGPT.
    • Include methods to send prompts to ChatGPT and retrieve generated responses.
  2. Environment Variables:

    • Update the .env and .env.example files to include the OpenAI API key and other configurations.
  3. API Endpoints:

    • Create RESTful API endpoints to handle ChatGPT requests.
    • Examples:
      • /chat: An endpoint to send messages and receive ChatGPT responses.
  4. Documentation:

    • Add detailed instructions in the README file for setting up and using the OpenAI ChatGPT integration.

Deliverables:

  1. Node.js module (openai-chat.js) that handles communication with OpenAI ChatGPT using the openai npm package.
  2. Updated .env and .env.example files with the OpenAI API key configuration.
  3. API endpoints in index.js to manage OpenAI ChatGPT requests.
  4. Updated README file with setup and usage instructions.

This pull request was created as a result of the prompt from Copilot chat shown above.

Copilot AI and others added 3 commits December 14, 2025 17:12
Co-authored-by: lippytm <65956507+lippytm@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add OpenAI ChatGPT integration to Web3AI project" to "Add OpenAI ChatGPT integration with REST API endpoints" on Dec 14, 2025
Copilot AI requested a review from lippytm December 14, 2025 17:20
@lippytm lippytm requested review from Copilot and removed request for lippytm February 1, 2026 04:36
@lippytm lippytm marked this pull request as ready for review February 1, 2026 04:37

Copilot AI left a comment


Pull request overview

This PR adds an OpenAI ChatGPT integration to the Web3AI project, exposing RESTful endpoints for programmatic chat interactions and documenting setup/usage. The changes introduce an OpenAIChat helper, an Express server with /health, /chat, and /chat/system endpoints, and supporting configuration/docs.

Changes:

  • Add OpenAIChat class wrapping the OpenAI Chat Completions API, including basic validation and common response shaping.
  • Implement an Express-based API server (index.js) with health and chat endpoints, plus environment-based configuration (API key, port, model).
  • Provide project metadata (package.json), environment template (.env.example), .gitignore entries, and a detailed README describing installation and API usage.

Reviewed changes

Copilot reviewed 5 out of 6 changed files in this pull request and generated 4 comments.

Summary per file:

  • package.json: Defines project metadata, scripts, and dependencies for the Express/OpenAI-based API server (note: the Express version here currently diverges from the PR description).
  • openai-chat.js: Implements the OpenAIChat helper class for sending messages (with optional history or system prompt) to OpenAI and normalizing responses/errors.
  • index.js: Sets up the Express server, validates presence of OPENAI_API_KEY, and exposes /health, /chat, and /chat/system endpoints on the configured port.
  • README.md: Documents setup, environment variables, endpoints, example requests/responses, and project structure for the new ChatGPT integration.
  • .gitignore: Ensures node_modules, .env, and the lockfile are excluded from version control, protecting secrets and reducing repo noise.
  • .env.example: Provides a template for the required environment variables (API key, port, and model) for the ChatGPT integration.


Comment on lines +32 to +34
// Build messages array with conversation history
const messages = [
...conversationHistory,

Copilot AI Feb 1, 2026


conversationHistory is spread directly into the messages array without validating its type or structure; if a client sends a non-array value (e.g. a string or object), this will be iterated incorrectly and produce a malformed messages payload for the OpenAI API. Consider validating that conversationHistory is an array of { role, content } objects (or defaulting to an empty array when it is not) before constructing messages.

Suggested change
- // Build messages array with conversation history
- const messages = [
-   ...conversationHistory,
+ // Build messages array with validated conversation history
+ const safeConversationHistory = Array.isArray(conversationHistory)
+   ? conversationHistory.filter(msg =>
+       msg &&
+       typeof msg === 'object' &&
+       typeof msg.role === 'string' &&
+       typeof msg.content === 'string'
+     )
+   : [];
+ const messages = [
+   ...safeConversationHistory,

Comment on lines +18 to +20
"dependencies": {
"dotenv": "^16.3.1",
"express": "^4.18.2",

Copilot AI Feb 1, 2026


The PR description specifies express@4.19.2, but package.json declares "express": "^4.18.2"; this discrepancy can confuse consumers about which version is actually required. Please align the documented dependency version with the one declared here (or update the dependency if the newer version is intended).

Comment on lines +39 to +56
app.post('/chat', async (req, res) => {
  try {
    const { message, conversationHistory } = req.body;

    if (!message) {
      return res.status(400).json({
        success: false,
        error: 'Message is required'
      });
    }

    const response = await openaiChat.sendMessage(message, conversationHistory);

    if (response.success) {
      res.json(response);
    } else {
      res.status(500).json(response);
    }

Copilot AI Feb 1, 2026


Validation errors in OpenAIChat.sendMessage (e.g. non-string message) are converted into { success: false, error: ... }, which the /chat endpoint then returns with HTTP 500, even though these are client-side input issues. To keep API semantics clear, consider either performing full input validation in the route handler and returning HTTP 400 for invalid payloads, or distinguishing validation errors from upstream/OpenAI failures so that only true server errors result in 500 responses.
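
One way to realize the separation this comment suggests is to validate input up front and tag failures by kind, so the route can map result kinds to status codes. The function and field names below are illustrative, not taken from the PR:

```javascript
// Illustrative sketch: distinguish client input errors (HTTP 400)
// from upstream/OpenAI failures (HTTP 500). Names are hypothetical.

// Collect all client-side input problems for the /chat payload.
function validateChatInput(body) {
  const errors = [];
  if (typeof body.message !== 'string' || body.message.trim() === '') {
    errors.push('message must be a non-empty string');
  }
  if (body.conversationHistory !== undefined && !Array.isArray(body.conversationHistory)) {
    errors.push('conversationHistory must be an array');
  }
  return errors;
}

// Map a { success, kind } result to an HTTP status code.
function statusForResult(result) {
  if (result.success) return 200;
  return result.kind === 'validation' ? 400 : 500;
}
```

With this split, the route returns 400 whenever validateChatInput reports errors and reserves 500 for genuine upstream failures.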

Comment on lines +90 to +108
const completion = await this.client.chat.completions.create({
  model: this.model,
  messages: messages,
});

// Validate response
if (!completion.choices || completion.choices.length === 0) {
  throw new Error('No response received from OpenAI');
}

const response = completion.choices[0].message.content;

return {
  success: true,
  message: response,
  model: this.model,
  usage: completion.usage,
  finishReason: completion.choices[0].finish_reason
};

Copilot AI Feb 1, 2026


sendMessage and sendMessageWithSystem duplicate most of the request/response handling logic (building the OpenAI request, validating choices, shaping the return object, and error handling), which makes future changes harder to keep in sync. Consider extracting the shared OpenAI call/response logic into a common helper so that differences between the two methods are limited to how the messages array is constructed.
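
A sketch of that refactor, with the OpenAI client injected and a private _complete() helper owning the shared call, validation, and result shaping; names are illustrative:

```javascript
// Sketch of the suggested refactor: one shared helper owns the OpenAI
// call, response validation, and result shaping, so the two public
// methods differ only in how they build the messages array.
class OpenAIChat {
  constructor(client, model = 'gpt-3.5-turbo') {
    this.client = client; // injected OpenAI SDK client (or a stub)
    this.model = model;
  }

  async _complete(messages) {
    try {
      const completion = await this.client.chat.completions.create({
        model: this.model,
        messages,
      });
      if (!completion.choices || completion.choices.length === 0) {
        throw new Error('No response received from OpenAI');
      }
      return {
        success: true,
        message: completion.choices[0].message.content,
        model: this.model,
        usage: completion.usage,
        finishReason: completion.choices[0].finish_reason,
      };
    } catch (err) {
      return { success: false, error: err.message };
    }
  }

  sendMessage(message, history = []) {
    return this._complete([...history, { role: 'user', content: message }]);
  }

  sendMessageWithSystem(message, systemPrompt) {
    return this._complete([
      { role: 'system', content: systemPrompt },
      { role: 'user', content: message },
    ]);
  }
}
```

Any future change to request options or response shaping then touches only _complete().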
