7 changes: 2 additions & 5 deletions Makefile
@@ -42,16 +42,13 @@ schema: ## Generate OpenAPI schema file
 	uv run scripts/generate_openapi_schema.py docs/openapi.json
 
 openapi-doc: docs/openapi.json scripts/fix_openapi_doc.py ## Generate OpenAPI documentation
-	openapi-to-markdown --input_file docs/openapi.json --output_file output.md
-	python3 scripts/fix_openapi_doc.py < output.md > docs/output.md
+	uv run openapi-to-markdown --input_file docs/openapi.json --output_file output.md
+	uv run python3 scripts/fix_openapi_doc.py < output.md > docs/openapi.md
 	rm output.md
 
 generate-documentation: ## Generate documentation
 	scripts/gen_doc.py
 
-openapi-doc: docs/openapi.json ## Generate OpenAPI documentation
-	openapi-to-markdown --input_file docs/openapi.json --output_file docs/output.md
-
 # TODO uv migration
 requirements.txt: pyproject.toml pdm.lock ## Generate requirements.txt file containing hashes for all non-devel packages
 	pdm export --prod --format requirements --output requirements.txt --no-extras --without evaluation
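If it helps to sanity-check the updated target, a minimal local run (assuming `uv` and the project's dependencies are installed) might look like this:

```bash
# Regenerate docs/openapi.json, then render it to markdown via the updated target
make schema
make openapi-doc
# The rendered documentation should now land in docs/openapi.md
```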
42 changes: 21 additions & 21 deletions docs/conversations_api.md
@@ -21,10 +21,10 @@ This document explains how the Conversations API works with the Responses API in
* [Continuing Existing Conversations](#continuing-existing-conversations)
* [Conversation Storage](#conversation-storage)
* [API Endpoints](#api-endpoints)
-   * [Query Endpoint (v2)](#query-endpoint-v2)
-   * [Streaming Query Endpoint (v2)](#streaming-query-endpoint-v2)
-   * [Conversations List Endpoint (v3)](#conversations-list-endpoint-v3)
-   * [Conversation Detail Endpoint (v3)](#conversation-detail-endpoint-v3)
+   * [Query Endpoint (v1)](#query-endpoint-v1)
+   * [Streaming Query Endpoint (v1)](#streaming-query-endpoint-v1)
+   * [Conversations List Endpoint (v1)](#conversations-list-endpoint-v1)
+   * [Conversation Detail Endpoint (v1)](#conversation-detail-endpoint-v1)
Comment on lines +24 to +27

⚠️ Potential issue | 🟠 Major

Fix unordered list indentation in table of contents.

The list items have 3-space indentation before the * marker, but markdown linting expects 2 spaces. Apply this diff to fix the indentation:

 * [API Endpoints](#api-endpoints)
-   * [Query Endpoint (v1)](#query-endpoint-v1)
-   * [Streaming Query Endpoint (v1)](#streaming-query-endpoint-v1)
-   * [Conversations List Endpoint (v1)](#conversations-list-endpoint-v1)
-   * [Conversation Detail Endpoint (v1)](#conversation-detail-endpoint-v1)
+  * [Query Endpoint (v1)](#query-endpoint-v1)
+  * [Streaming Query Endpoint (v1)](#streaming-query-endpoint-v1)
+  * [Conversations List Endpoint (v1)](#conversations-list-endpoint-v1)
+  * [Conversation Detail Endpoint (v1)](#conversation-detail-endpoint-v1)

Based on coding guidelines (markdownlint MD007), unordered lists should use consistent 2-space indentation.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-   * [Query Endpoint (v1)](#query-endpoint-v1)
-   * [Streaming Query Endpoint (v1)](#streaming-query-endpoint-v1)
-   * [Conversations List Endpoint (v1)](#conversations-list-endpoint-v1)
-   * [Conversation Detail Endpoint (v1)](#conversation-detail-endpoint-v1)
+  * [Query Endpoint (v1)](#query-endpoint-v1)
+  * [Streaming Query Endpoint (v1)](#streaming-query-endpoint-v1)
+  * [Conversations List Endpoint (v1)](#conversations-list-endpoint-v1)
+  * [Conversation Detail Endpoint (v1)](#conversation-detail-endpoint-v1)
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

24-24: Unordered list indentation
Expected: 2; Actual: 3

(MD007, ul-indent)


25-25: Unordered list indentation
Expected: 2; Actual: 3

(MD007, ul-indent)


26-26: Unordered list indentation
Expected: 2; Actual: 3

(MD007, ul-indent)


27-27: Unordered list indentation
Expected: 2; Actual: 3

(MD007, ul-indent)

🤖 Prompt for AI Agents
In docs/conversations_api.md around lines 24 to 27, the table-of-contents
unordered list items use 3-space indentation before the '*' markers which
violates markdownlint MD007; change each list item to use exactly 2 spaces of
indentation before the '*' so the bullets align consistently (replace the
leading 3 spaces with 2 spaces on each listed line).
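
For anyone verifying the indentation fix locally, a quick check with markdownlint-cli2 (assuming it is available via `npx`) might look like this:

```bash
# Report MD007 (ul-indent) and any other markdownlint findings for the changed document
npx markdownlint-cli2 "docs/conversations_api.md"
```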

* [Testing with curl](#testing-with-curl)
* [Database Schema](#database-schema)
* [Troubleshooting](#troubleshooting)
@@ -191,9 +191,9 @@ Stores user-specific metadata:

## API Endpoints

-### Query Endpoint (v2)
+### Query Endpoint (v1)

-**Endpoint:** `POST /v2/query`
+**Endpoint:** `POST /v1/query`

**Request:**
```json
@@ -223,11 +223,11 @@ Stores user-specific metadata:
> [!NOTE]
> If `conversation_id` is omitted, a new conversation is automatically created and the new ID is returned in the response.

-### Streaming Query Endpoint (v2)
+### Streaming Query Endpoint (v1)

-**Endpoint:** `POST /v2/streaming_query`
+**Endpoint:** `POST /v1/streaming_query`

-**Request:** Same as `/v2/query`
+**Request:** Same as `/v1/query`

**Response:** Server-Sent Events (SSE) stream

@@ -243,9 +243,9 @@ data: {"event": "turn_complete", "data": {"id": 10, "token": "The OpenShift Assi
data: {"event": "end", "data": {"referenced_documents": [], "input_tokens": 150, "output_tokens": 200}}
```

-### Conversations List Endpoint (v3)
+### Conversations List Endpoint (v1)

-**Endpoint:** `GET /v3/conversations`
+**Endpoint:** `GET /v1/conversations`

**Response:**
```json
@@ -264,9 +264,9 @@ data: {"event": "end", "data": {"referenced_documents": [], "input_tokens": 150,
}
```

-### Conversation Detail Endpoint (v3)
+### Conversation Detail Endpoint (v1)

-**Endpoint:** `GET /v3/conversations/{conversation_id}`
+**Endpoint:** `GET /v1/conversations/{conversation_id}`

**Response:**
```json
@@ -308,7 +308,7 @@ export TOKEN="<your-token>"
To start a new conversation, omit the `conversation_id` field:

```bash
-curl -X POST http://localhost:8090/v2/query \
+curl -X POST http://localhost:8090/v1/query \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
@@ -338,7 +338,7 @@ curl -X POST http://localhost:8090/v2/query \
To continue an existing conversation, include the `conversation_id` from a previous response:

```bash
-curl -X POST http://localhost:8090/v2/query \
+curl -X POST http://localhost:8090/v1/query \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
@@ -351,10 +351,10 @@ curl -X POST http://localhost:8090/v2/query \

### Streaming Query (New Conversation)

-For streaming responses, use the `/v2/streaming_query` endpoint. The response is returned as Server-Sent Events (SSE):
+For streaming responses, use the `/v1/streaming_query` endpoint. The response is returned as Server-Sent Events (SSE):

```bash
-curl -X POST http://localhost:8090/v2/streaming_query \
+curl -X POST http://localhost:8090/v1/streaming_query \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-H "Accept: text/event-stream" \
@@ -381,7 +381,7 @@ data: {"event": "end", "data": {"referenced_documents": [], "input_tokens": 150,
### Streaming Query (Continue Conversation)

```bash
-curl -X POST http://localhost:8090/v2/streaming_query \
+curl -X POST http://localhost:8090/v1/streaming_query \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-H "Accept: text/event-stream" \
@@ -396,15 +396,15 @@ curl -X POST http://localhost:8090/v2/streaming_query \
### List Conversations

```bash
-curl -X GET http://localhost:8090/v3/conversations \
+curl -X GET http://localhost:8090/v1/conversations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN"
```

### Get Conversation Details

```bash
-curl -X GET http://localhost:8090/v3/conversations/0d21ba731f21f798dc9680125d5d6f493e4a7ab79f25670e \
+curl -X GET http://localhost:8090/v1/conversations/0d21ba731f21f798dc9680125d5d6f493e4a7ab79f25670e \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN"
```
@@ -492,7 +492,7 @@ This is expected behavior. The Responses API v2 allows you to change the model/p
### Empty Conversation History

**Symptom:**
-Calling `/v3/conversations/{conversation_id}` returns empty `chat_history`.
+Calling `/v1/conversations/{conversation_id}` returns empty `chat_history`.

**Possible Causes:**
1. The conversation was just created and has no messages yet