40 changes: 40 additions & 0 deletions docs/How To/Qtype Server/add_feedback_buttons.md
@@ -0,0 +1,40 @@
# Add Feedback Buttons

Collect user feedback (thumbs, ratings, or categories) directly in the QType UI by adding a `feedback` block to your flow. Feedback submission requires `telemetry` to be enabled so QType can attach the feedback to traces/spans.

## QType YAML

```yaml
flows:
- id: my_flow
interface:
type: Conversational

feedback:
type: thumbs
explanation: true

telemetry:
id: app_telemetry
provider: Phoenix
endpoint: http://localhost:6006/v1/traces
```

## Explanation

- **flows[].feedback**: Enables a feedback widget on the flow’s outputs in the UI.
- **feedback.type**: Feedback widget type: `thumbs`, `rating`, or `category`.
- **feedback.explanation**: If `true`, prompts the user for an optional text explanation along with their feedback.
- **feedback.scale**: For `rating` feedback, the maximum score (typically `5` or `10`).
- **feedback.categories**: For `category` feedback, the list of selectable labels.
- **feedback.allow_multiple**: For `category` feedback, allows selecting more than one label.
- **telemetry**: Must be configured for feedback submission; QType records feedback as telemetry annotations.
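For the `rating` and `category` widgets, the extra keys sit directly under the `feedback` block, alongside `type` and `explanation`. A minimal sketch (flow ids and category labels here are illustrative, not required names):

```yaml
flows:
  - id: summarizer_flow          # illustrative flow id
    feedback:
      type: rating
      scale: 10                  # maximum score on the rating widget
      explanation: true          # also prompt for an optional text note

  - id: classifier_flow          # illustrative flow id
    feedback:
      type: category
      categories:                # labels shown as selectable options
        - helpful
        - incorrect
      allow_multiple: true       # users may pick more than one label
      explanation: false
```

As with the thumbs example above, a `telemetry` block must also be configured or feedback submission will not work.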

## See Also

- [Serve Flows as UI](serve_flows_as_ui.md)
- [Use Conversational Interfaces](use_conversational_interfaces.md)
- [TelemetrySink Reference](../../components/TelemetrySink.md)
- [Example: Thumbs Feedback](../../../examples/feedback/thumbs_feedback_example.qtype.yaml)
- [Example: Rating Feedback](../../../examples/feedback/rating_feedback_example.qtype.yaml)
- [Example: Category Feedback](../../../examples/feedback/category_feedback_example.qtype.yaml)
62 changes: 62 additions & 0 deletions examples/feedback/category_feedback_example.qtype.yaml
@@ -0,0 +1,62 @@
id: category_feedback_example
description: Example flow with categorical feedback collection

flows:
- id: code_generator
description: Code generation with multi-category feedback
variables:
- id: requirement
type: text
- id: formatted_prompt
type: text
- id: generated_code
type: text
feedback:
type: category
categories:
- correct
- well_documented
- follows_best_practices
- efficient
- needs_improvement
allow_multiple: true
explanation: true
steps:
- type: PromptTemplate
id: prompt
template: |
Generate Python code for the following requirement:

{requirement}

Provide clean, well-documented code following Python best practices.
inputs:
- requirement
outputs:
- formatted_prompt

- type: LLMInference
id: llm
model: nova
inputs:
- formatted_prompt
outputs:
- generated_code
inputs:
- requirement
outputs:
- generated_code

models:
- id: nova
type: Model
provider: aws-bedrock
model_id: amazon.nova-pro-v1:0
inference_params:
temperature: 0.2
max_tokens: 1000

telemetry:
id: category_feedback_telemetry
provider: Phoenix
endpoint: http://localhost:6006/v1/traces
72 changes: 72 additions & 0 deletions examples/feedback/explode_feedback_example.qtype.yaml
@@ -0,0 +1,72 @@
id: explode_feedback_example
description: Example flow with Explode fan-out and feedback collection

flows:
- id: topic_facts_generator
description: Generate interesting facts for multiple topics with feedback
variables:
- id: topics_json
type: text
- id: topics
type: list[text]
- id: topic
type: text
- id: formatted_prompt
type: text
- id: fact
type: text
feedback:
type: thumbs
explanation: true
steps:
- type: Decoder
id: decode
format: json
inputs:
- topics_json
outputs:
- topics

- type: Explode
id: fan_out
inputs:
- topics
outputs:
- topic

- type: PromptTemplate
id: prompt
template: |
Generate one interesting, concise fact about: {topic}

Keep it to 1-2 sentences and make it engaging.
inputs:
- topic
outputs:
- formatted_prompt

- type: LLMInference
id: llm
model: nova
inputs:
- formatted_prompt
outputs:
- fact
inputs:
- topics_json
outputs:
- fact

models:
- id: nova
type: Model
provider: aws-bedrock
model_id: us.amazon.nova-lite-v1:0
inference_params:
temperature: 0.7
max_tokens: 200

telemetry:
id: explode_feedback_telemetry
provider: Phoenix
endpoint: http://localhost:6006/v1/traces
56 changes: 56 additions & 0 deletions examples/feedback/rating_feedback_example.qtype.yaml
@@ -0,0 +1,56 @@
id: rating_feedback_example
description: Example flow with 1-10 rating scale feedback collection

flows:
- id: document_summarizer
description: Document summarization with quality rating
variables:
- id: document_text
type: text
- id: formatted_prompt
type: text
- id: summary
type: text
feedback:
type: rating
scale: 10
explanation: true
steps:
- type: PromptTemplate
id: prompt
template: |
Summarize the following document in 2-3 sentences:

{document_text}

Summary:
inputs:
- document_text
outputs:
- formatted_prompt

- type: LLMInference
id: llm
model: nova
inputs:
- formatted_prompt
outputs:
- summary
inputs:
- document_text
outputs:
- summary

models:
- id: nova
type: Model
provider: aws-bedrock
model_id: amazon.nova-pro-v1:0
inference_params:
temperature: 0.2
max_tokens: 1000

telemetry:
  id: rating_feedback_telemetry
provider: Phoenix
endpoint: http://localhost:6006/v1/traces
42 changes: 42 additions & 0 deletions examples/feedback/thumbs_feedback_example.qtype.yaml
@@ -0,0 +1,42 @@
id: simple_thumbs_chatbot
description: A minimal chatbot example with thumbs up/down feedback

flows:
- id: simple_chat
description: Simple conversational chatbot with feedback collection
variables:
- id: user_message
type: ChatMessage
- id: assistant_response
type: ChatMessage
interface:
type: Conversational
feedback:
type: thumbs
explanation: false
steps:
- type: LLMInference
id: chat_llm
model: nova
inputs:
- user_message
outputs:
- assistant_response
inputs:
- user_message
outputs:
- assistant_response

models:
- id: nova
type: Model
provider: aws-bedrock
model_id: us.amazon.nova-lite-v1:0
inference_params:
temperature: 0.9
max_tokens: 300

telemetry:
id: chatbot_telemetry
provider: Phoenix
endpoint: http://localhost:6006/v1/traces
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -71,7 +71,7 @@ mcp = [

[dependency-groups]
dev = [
"arize-phoenix>=11.2.2",
"arize-phoenix>=12.35.0",
"boto3>=1.34.0",
"coverage>=7.0.0",
"ipython>=8.37.0",