-
Notifications
You must be signed in to change notification settings - Fork 2
feat: Add ChatGPT-style streaming responses using SSE for Unio AI assistant #380
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
|
The latest updates on your projects. Learn more about Vercel for GitHub.
|
|
Caution Review failedThe pull request is closed. WalkthroughThis PR introduces Server-Sent Events (SSE) streaming support for AI responses. The backend now streams responses from OpenRouter API models while maintaining a non-streaming fallback. The frontend displays typing indicators and progressively updates messages as chunks arrive. Dynamic date computation replaces hard-coded dates, and database logging is added for streamed responses. Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant AIChat as AIChat.tsx
participant Backend as api/ai/route.ts
participant OpenRouter as OpenRouter API
User->>AIChat: Send message
AIChat->>AIChat: Show typing indicator
AIChat->>Backend: POST /api/ai (with prompt)
activate Backend
Backend->>OpenRouter: callOpenRouterAPIStream(prompt)
activate OpenRouter
Backend->>AIChat: Response with<br/>content-type: text/event-stream
deactivate OpenRouter
AIChat->>AIChat: Replace typing with<br/>streaming placeholder
loop Stream chunks arrive
Backend-->>AIChat: SSE chunk (data: {...})
AIChat->>AIChat: Parse JSON<br/>Accumulate content<br/>Update message UI
end
Backend->>Backend: Save complete<br/>response to DB
Backend-->>AIChat: Stream end
deactivate Backend
AIChat->>AIChat: Finalize AI message<br/>with full context
alt Error occurs
Backend-->>AIChat: Error response
AIChat->>AIChat: Remove placeholders<br/>Display error message
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes
Poem
✨ Finishing touches
🧪 Generate unit tests (beta)
📜 Recent review detailsConfiguration used: CodeRabbit UI Review profile: CHILL Plan: Pro 📒 Files selected for processing (3)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
This PR introduces real-time streaming responses for the Unio AI assistant using Server-Sent Events (SSE). Instead of waiting for the full response, users now see messages appear word-by-word, similar to ChatGPT and Claude.
This enhancement significantly improves perceived performance and overall UX.
Key Changes
Backend (SSE + Streaming API Integration)
Files Updated:
app/api/ai/route.tsMajor updates:
Added
callOpenRouterAPIStream()(Lines 156–228)stream: truesupportReadableStreamdirectly from OpenRouterUpdated POST handler (Lines 948–1088)
content-typeImplemented correct SSE chunk parsing:
How it works
Frontend (Real-Time Streaming UI)
Files Updated:
components/ai/AIChat.tsxapp/ai/page.tsxEnhancements:
User Experience Improvements
✅ Immediate “time-to-first-token” (~500–800ms)
✅ Smooth word-by-word updates
✅ No loading delay
✅ Matches modern AI chat UX (ChatGPT-style)
✅ Backwards compatible with non-streaming mode
Testing Done
Verified real-time streaming in both:
/aiTested long messages (1000+ tokens)
Confirmed
ai_training_datalogs full responsesError cases:
Performance
Security
Optional Future Enhancements
▊)Status
✅ Fully implemented
✅ Backwards compatible
🚀 Ready for deployment
Authored by: @akshay0611
Summary by CodeRabbit
New Features
Bug Fixes
✏️ Tip: You can customize this high-level summary in your review settings.