The Together AI API returns both an :eos and a :stop finish_reason for the Llama 3.1 Turbo models, which allows us to support :end_sequence_encountered.
Unfortunately, other models, such as the Llama 3.1 vision models, do not, so I've removed support for :end_sequence_encountered for now, pending clarification from Together AI.
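As a rough sketch of the mapping involved, something like the following normalizes the provider's finish_reason strings into internal stop symbols. The map and method names here are illustrative assumptions, not the library's actual API:

```ruby
# Hypothetical mapping from Together AI finish_reason values to internal
# stop symbols; names are illustrative, not the library's real constants.
FINISH_REASON_MAP = {
  "eos"  => :end_sequence_encountered,
  "stop" => :end_sequence_encountered
}.freeze

def normalize_finish_reason(reason)
  # Fall back to :unknown for reasons we haven't mapped (e.g. from
  # models that never emit "eos").
  FINISH_REASON_MAP.fetch(reason, :unknown)
end
```

With a mapping like this, models that never return "eos" simply fall through to the :unknown branch rather than being misreported as an end-of-sequence stop.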