Merged
65 changes: 57 additions & 8 deletions docs/openapi.json
@@ -1406,7 +1406,11 @@
"properties": {
"query": {
"type": "string",
"title": "Query"
"title": "Query",
"description": "The query string",
"examples": [
"What is Kubernetes?"
]
},
"conversation_id": {
"anyOf": [
@@ -1417,7 +1421,11 @@
"type": "null"
}
],
"title": "Conversation Id"
"title": "Conversation Id",
"description": "The optional conversation ID (UUID)",
"examples": [
"c5260aec-4d82-4370-9fdf-05cf908b3f16"
]
},
"provider": {
"anyOf": [
@@ -1428,7 +1436,12 @@
"type": "null"
}
],
"title": "Provider"
"title": "Provider",
"description": "The optional provider",
"examples": [
"openai",
"watsonx"
]
},
"model": {
"anyOf": [
@@ -1439,7 +1452,11 @@
"type": "null"
}
],
"title": "Model"
"title": "Model",
"description": "The optional model",
"examples": [
"gpt4mini"
]
},
"system_prompt": {
"anyOf": [
@@ -1450,7 +1467,12 @@
"type": "null"
}
],
"title": "System Prompt"
"title": "System Prompt",
"description": "The optional system prompt.",
"examples": [
"You are OpenShift assistant.",
"You are Ansible assistant."
]
},
"attachments": {
"anyOf": [
@@ -1464,7 +1486,25 @@
"type": "null"
}
],
"title": "Attachments"
"title": "Attachments",
"description": "The optional list of attachments.",
"examples": [
{
"attachment_type": "log",
"content": "this is attachment",
"content_type": "text/plain"
},
{
"attachment_type": "configuration",
"content": "kind: Pod\n metadata:\n name: private-reg",
"content_type": "application/yaml"
},
{
"attachment_type": "configuration",
"content": "foo: bar",
"content_type": "application/yaml"
}
]
},
"no_tools": {
"anyOf": [
@@ -1476,7 +1516,12 @@
}
],
"title": "No Tools",
"default": false
"description": "Whether to bypass all tools and MCP servers",
"default": false,
"examples": [
true,
false
]
},
"media_type": {
"anyOf": [
@@ -1487,7 +1532,11 @@
"type": "null"
}
],
"title": "Media Type"
"title": "Media Type",
"description": "Media type (used just to enable compatibility)",
"examples": [
"application/json"
]
}
},
"additionalProperties": false,
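For reference, a request body exercising the documented fields might look like the sketch below. Only the field names and example values come from the schema above; the /v1/query path, port, and the use of the requests client are assumptions for illustration, not taken from this diff.

import json

import requests  # assumed HTTP client; any client would do

# Minimal sketch of a payload matching the QueryRequest schema above.
payload = {
    "query": "What is Kubernetes?",                              # required
    "conversation_id": "c5260aec-4d82-4370-9fdf-05cf908b3f16",   # optional UUID
    "provider": "openai",                                        # optional
    "model": "gpt4mini",                                         # optional
    "system_prompt": "You are OpenShift assistant.",             # optional
    "attachments": [
        {
            "attachment_type": "log",
            "content_type": "text/plain",
            "content": "this is attachment",
        }
    ],
    "no_tools": False,
    "media_type": "application/json",  # kept only for road-core compatibility
}

# Endpoint URL below is assumed; adjust to the actual service address and path.
response = requests.post("http://localhost:8080/v1/query", json=payload, timeout=60)
print(json.dumps(response.json(), indent=2))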
16 changes: 8 additions & 8 deletions docs/openapi.md
@@ -737,14 +737,14 @@ Example:

| Field | Type | Description |
|-------|------|-------------|
| query | string | |
| conversation_id | | |
| provider | | |
| model | | |
| system_prompt | | |
| attachments | | |
| no_tools | | |
| media_type | | |
| query | string | The query string |
| conversation_id | | The optional conversation ID (UUID) |
| provider | | The optional provider |
| model | | The optional model |
| system_prompt | | The optional system prompt. |
| attachments | | The optional list of attachments. |
| no_tools | | Whether to bypass all tools and MCP servers |
| media_type | | Media type (used just to enable compatibility) |


## QueryResponse
16 changes: 8 additions & 8 deletions docs/output.md
@@ -719,14 +719,14 @@ Example:

| Field | Type | Description |
|-------|------|-------------|
| query | string | |
| conversation_id | | |
| provider | | |
| model | | |
| system_prompt | | |
| attachments | | |
| no_tools | | |
| media_type | | |
| query | string | The query string |
| conversation_id | | The optional conversation ID (UUID) |
| provider | | The optional provider |
| model | | The optional model |
| system_prompt | | The optional system prompt. |
| attachments | | The optional list of attachments. |
| no_tools | | Whether to bypass all tools and MCP servers |
| media_type | | Media type (used just to enable compatibility) |


## QueryResponse
70 changes: 62 additions & 8 deletions src/models/requests.py
@@ -85,16 +85,70 @@ class QueryRequest(BaseModel):
```
"""

query: str
conversation_id: Optional[str] = None
provider: Optional[str] = None
model: Optional[str] = None
system_prompt: Optional[str] = None
attachments: Optional[list[Attachment]] = None
no_tools: Optional[bool] = False
query: str = Field(
description="The query string",
examples=["What is Kubernetes?"],
)

Comment on lines +88 to +92 (Contributor)

🛠️ Refactor suggestion
Enforce non-empty query and document the requirement

query is the only mandatory input, yet an empty string would pass validation and yield undefined behaviour downstream.
Add a minimum length constraint (or strip_whitespace=True) to guard against accidental empty submissions:

-    query: str = Field(
-        description="The query string",
-        examples=["What is Kubernetes?"],
-    )
+    query: str = Field(
+        ...,
+        min_length=1,
+        description="The query string (must contain at least one character)",
+        examples=["What is Kubernetes?"],
+    )
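A minimal sketch of the behaviour this suggestion would produce, assuming Pydantic v2 (min_length on Field rejects an empty string at validation time); the model name here is a hypothetical stand-in, not the real QueryRequest:

from typing import Optional

from pydantic import BaseModel, Field, ValidationError


class QueryRequestSketch(BaseModel):
    # Hypothetical stand-in for QueryRequest with the suggested constraint applied.
    query: str = Field(
        ...,
        min_length=1,
        description="The query string (must contain at least one character)",
    )
    conversation_id: Optional[str] = None


QueryRequestSketch(query="What is Kubernetes?")  # passes

try:
    QueryRequestSketch(query="")  # rejected: shorter than min_length=1
except ValidationError as exc:
    print(exc)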

conversation_id: Optional[str] = Field(
None,
description="The optional conversation ID (UUID)",
examples=["c5260aec-4d82-4370-9fdf-05cf908b3f16"],
)

Comment on lines +93 to +98 (Contributor)

🛠️ Refactor suggestion
Validate conversation_id the same way as in FeedbackRequest

You already have suid.check_suid() available and use it in another model. Re-using that validator here prevents malformed UUIDs from leaking into persistence layers and log correlation.

@@
     conversation_id: Optional[str] = Field(
         None,
         description="The optional conversation ID (UUID)",
         examples=["c5260aec-4d82-4370-9fdf-05cf908b3f16"],
     )
+
+    @field_validator("conversation_id")
+    @classmethod
+    def _validate_conversation_id(cls, v: Optional[str]) -> Optional[str]:
+        if v is None:
+            return v
+        if not suid.check_suid(v):
+            raise ValueError(f"Improper conversation ID {v}")
+        return v
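If the validator above were adopted, malformed IDs would be rejected at parse time. A rough usage sketch, assuming suid.check_suid() simply returns a boolean as the snippet implies; a plain uuid.UUID parse is used here as a stand-in for that helper:

import uuid
from typing import Optional

from pydantic import BaseModel, Field, ValidationError, field_validator


def check_suid(value: str) -> bool:
    # Stand-in for suid.check_suid(); assumed to report whether the value is a valid UUID.
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False


class ConversationSketch(BaseModel):
    conversation_id: Optional[str] = Field(None, description="The optional conversation ID (UUID)")

    @field_validator("conversation_id")
    @classmethod
    def _validate_conversation_id(cls, v: Optional[str]) -> Optional[str]:
        if v is not None and not check_suid(v):
            raise ValueError(f"Improper conversation ID {v}")
        return v


ConversationSketch(conversation_id="c5260aec-4d82-4370-9fdf-05cf908b3f16")  # passes

try:
    ConversationSketch(conversation_id="not-a-uuid")
except ValidationError as exc:
    print(exc)  # reports "Improper conversation ID not-a-uuid"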

provider: Optional[str] = Field(
None,
description="The optional provider",
examples=["openai", "watsonx"],
)

model: Optional[str] = Field(
None,
description="The optional model",
examples=["gpt4mini"],
)

system_prompt: Optional[str] = Field(
None,
description="The optional system prompt.",
examples=["You are OpenShift assistant.", "You are Ansible assistant."],
)

Comment on lines +111 to +116 (Contributor)

🛠️ Refactor suggestion
Add reasonable length cap to system_prompt

system_prompt is free-form text potentially propagated to LLM back-ends. A runaway megabyte-sized prompt could blow up request size or exceed provider limits.
Align with the 4096-character limit you already apply to user_feedback:

-    system_prompt: Optional[str] = Field(
-        None,
-        description="The optional system prompt.",
-        examples=["You are OpenShift assistant.", "You are Ansible assistant."],
-    )
+    system_prompt: Optional[str] = Field(
+        None,
+        max_length=4096,
+        description="The optional system prompt.",
+        examples=["You are OpenShift assistant.", "You are Ansible assistant."],
+    )

attachments: Optional[list[Attachment]] = Field(
None,
description="The optional list of attachments.",
examples=[
{
"attachment_type": "log",
"content_type": "text/plain",
"content": "this is attachment",
},
{
"attachment_type": "configuration",
"content_type": "application/yaml",
"content": "kind: Pod\n metadata:\n name: private-reg",
},
{
"attachment_type": "configuration",
"content_type": "application/yaml",
"content": "foo: bar",
},
],
)

no_tools: Optional[bool] = Field(
False,
description="Whether to bypass all tools and MCP servers",
examples=[True, False],
)

# media_type is not used in 'lightspeed-stack' that only supports application/json.
# the field is kept here to enable compatibility with 'road-core' clients.
media_type: Optional[str] = None
media_type: Optional[str] = Field(
None,
description="Media type (used just to enable compatibility)",
examples=["application/json"],
)

# provides examples for /docs endpoint
model_config = {
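The openapi.json changes earlier in this diff follow from these Field definitions: with Pydantic v2, the description and examples set on each Field are emitted into the model's JSON schema, which FastAPI then folds into /openapi.json. A quick way to confirm that locally is sketched below; the import path is an assumption based on the src/models/requests.py location and may need adjusting.

import json

# Assumed import path; adjust to wherever QueryRequest actually lives in this project.
from models.requests import QueryRequest

schema = QueryRequest.model_json_schema()

# Each property now carries the description/examples supplied via Field(...).
print(json.dumps(schema["properties"]["query"], indent=2))
# Expected to include "description": "The query string" and the "examples" list,
# mirroring the docs/openapi.json hunk at the top of this diff.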