Problem
Pi.el blocks input while the agent is streaming. Users have to wait for the response to complete before they can send follow-up messages or course-correct the agent.
Backend Support
The pi backend already supports message queuing during streaming, with two distinct behaviors:

- Steering interrupts the current response - useful for "stop, do this instead" corrections.
- Follow-up waits until the agent finishes - useful for queuing the next question while reading the response.

The TUI uses Enter for steering and Alt+Enter for follow-up. During streaming, it displays queued messages below the editor (e.g., "Steering: stop and fix the tests first"). On abort, queued messages are restored to the editor so nothing is lost.
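To make the distinction concrete, here is a rough client-side model of a queued message. The type and helper names are made up for illustration (nothing here mirrors actual pi.el or backend code); they just encode the semantics above: each queued message is either a steering interrupt or a follow-up, gets a label below the editor while streaming, and is handed back to the editor on abort.

```typescript
// Illustrative sketch only: these names are hypothetical, not backend API.
type QueueBehavior = "steer" | "followUp";

interface QueuedMessage {
  text: string;
  behavior: QueueBehavior;
}

// Queued messages the client is still waiting on, in send order.
const pendingMessages: QueuedMessage[] = [];

// Label shown below the editor while streaming, mirroring the TUI
// (e.g. "Steering: stop and fix the tests first").
function pendingLabel(msg: QueuedMessage): string {
  const prefix = msg.behavior === "steer" ? "Steering" : "Follow-up";
  return `${prefix}: ${msg.text}`;
}

// On abort, hand queued text back to the editor so nothing is lost.
function restoreOnAbort(setEditorText: (text: string) => void): void {
  const restored = pendingMessages.map((m) => m.text).join("\n");
  pendingMessages.length = 0;
  if (restored) setEditorText(restored);
}
```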
RPC Commands
The backend exposes `steer` and `follow_up` commands, and `prompt` accepts a `streamingBehavior` option. Session state includes a `pendingMessageCount` field for header display. Mode settings control whether queued messages are processed all at once or one at a time.

One gap: `clearQueue` exists in `agent-session.ts` but isn't exposed via RPC. It would be needed for proper abort handling (restoring queued messages to the input buffer).
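For orientation, client calls might look roughly like the sketch below. Only the command names (`steer`, `follow_up`, `prompt`) and the `streamingBehavior` option come from the backend; the request shapes and field names are assumptions and should be checked against `rpc-mode.ts` before relying on them.

```typescript
// Hypothetical request shapes: the envelope and field names are assumptions;
// only the command names and the streamingBehavior option are documented above.

// Interrupt the current response with a correction.
const steerRequest = {
  command: "steer",
  message: "stop and fix the tests first",
};

// Queue a message to run after the current response finishes.
const followUpRequest = {
  command: "follow_up",
  message: "now update the changelog",
};

// prompt accepts a streamingBehavior option, presumably choosing between the
// two queueing behaviors when a prompt arrives mid-stream.
const promptRequest = {
  command: "prompt",
  message: "run the test suite again",
  streamingBehavior: "followUp", // assumed value; check the backend for the accepted set
};

// Not available today: a clear_queue command, which abort handling would need
// in order to drain the backend queue while the client restores the text to its editor.
```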
Implementation Considerations
A minimal version could simply accept input during streaming and queue it via `steer`, showing the count in the header line. Full parity with the TUI would require the missing `clear_queue` RPC command for abort restoration.
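A minimal sketch of that flow, written in TypeScript for brevity (the real change would be Emacs Lisp in pi.el), with `sendRpc`, `updateHeader`, and the streaming flag all hypothetical client-side helpers:

```typescript
// Minimal sketch of the "queue via steer while streaming" flow.
// sendRpc, updateHeader, and isStreaming are hypothetical client helpers.
declare function sendRpc(request: { command: string; message: string }): void;
declare function updateHeader(pendingCount: number): void;

let isStreaming = false;
let pendingCount = 0;
const queuedText: string[] = []; // local copy kept for abort restoration

function submitInput(text: string): void {
  if (isStreaming) {
    // Agent is mid-response: queue the message as a steering correction
    // and surface the count in the header line.
    sendRpc({ command: "steer", message: text });
    queuedText.push(text);
    updateHeader(++pendingCount);
  } else {
    sendRpc({ command: "prompt", message: text });
  }
}
```

Until a `clear_queue` command exists, a local copy like `queuedText` is what would let the client refill the input buffer on abort, at the risk of drifting from the backend's own queue state.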
Reference
- Queue implementation: `packages/coding-agent/src/core/agent-session.ts` (lines 150-160, 708-730, 831-857)
- RPC commands: `packages/coding-agent/src/modes/rpc/rpc-mode.ts` (lines 327-335, 417-425)
- TUI display: `packages/coding-agent/src/modes/interactive/interactive-mode.ts` (`updatePendingMessagesDisplay`, line 1907)
- TUI abort restore: same file, `setupKeyHandlers` (line 993)