Hi @bminixhofer, thank you for TokenKit!
I just want to confirm something about the example script llama3_to_byte_tokenizer_gpu.sh: loss_mask_mode is set to none by default, which means the Tulu prompt is not masked out when computing the loss. Also, although an openmath2 masking option is available, it is not enabled in the example script.
Are these the exact experiment settings reported in the paper, or was the loss mask enabled in those experiments? Thanks in advance for your answer.