
@julianlotzer (Contributor)

Proposed Changes

  • Switched the hand tracking loop to use AsyncStream with a buffer policy of .bufferingNewest(1).
  • Decoupled the data generation loop from the network transmission loop.

Why this is needed

I was running into an issue where latency would start low but get progressively higher the longer the stream ran.

It turns out the previous loop was pushing frames at 200 Hz, but the network connection couldn't always keep up. Since response.write accepts data faster than it actually transmits it, a huge internal queue of old frames was building up. The client was still receiving every single packet, but by the time each one arrived it was stale, so latency grew without bound.

This fix implements conflation (dropping stale frames). The buffer now holds exactly one frame (the size can be increased, but keeping it at one minimizes latency). If the network is still busy sending the previous packet, the generator simply overwrites the buffer with the absolute latest hand pose. This prevents the backlog from building up and keeps latency constant.
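For anyone unfamiliar with the pattern: the actual fix uses Swift's AsyncStream with the .bufferingNewest(1) policy, but the conflation idea can be sketched in a few lines of Python (the ConflatingBuffer class and demo below are purely illustrative, not code from this repo):

```python
import asyncio

class ConflatingBuffer:
    """Single-slot buffer: the producer overwrites the pending value,
    so a slow consumer always sees the most recent frame and stale
    frames are silently dropped (conflated) instead of queued."""

    def __init__(self):
        self._value = None
        self._event = asyncio.Event()

    def publish(self, value):
        # Overwrite whatever is pending; the old frame is dropped.
        self._value = value
        self._event.set()

    async def next(self):
        # Wait until at least one frame has been published, then
        # hand back only the latest one.
        await self._event.wait()
        self._event.clear()
        return self._value

async def demo():
    buf = ConflatingBuffer()
    # Fast producer: pushes frames 0..99 while the consumer is busy.
    for frame in range(100):
        buf.publish(frame)
    # The consumer wakes up once and sees only the newest frame;
    # the 99 stale frames were never queued.
    return await buf.next()

print(asyncio.run(demo()))  # → 99
```

The key property is that publish never blocks and never queues: the producer can run at 200 Hz regardless of network speed, and the sender only ever transmits the freshest pose.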

@younghyopark younghyopark merged commit 4c54990 into Improbable-AI:main Dec 28, 2025
@younghyopark (Collaborator)

Thanks @julianlotzer for the fix! Merged, and updated the PyPI package as well.
