Un-LOCC: Universal Lossy Optical Context Compression for Vision-Based Language Models. Achieves nearly 3x token compression at over 93% retrieval accuracy using existing Vision-Language Models.
Updated Feb 3, 2026 · Python
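The compression claim above comes from a simple accounting trick: text that would cost roughly one token per word as plain input is instead rendered into images, and a VLM encoder represents each image with a fixed token budget. A minimal sketch of that arithmetic, with all budgets (`tokens_per_image`, `max_text_tokens_per_image`) as illustrative assumptions rather than Un-LOCC's actual numbers:

```python
# Hypothetical sketch of optical-compression accounting: the text is packed
# into images, and each image costs the VLM a fixed number of encoder tokens.
# The specific budgets below are assumptions for illustration only.

def optical_compression_ratio(num_text_tokens: int,
                              tokens_per_image: int = 256,
                              max_text_tokens_per_image: int = 700) -> float:
    """Estimate the token-compression ratio from packing text into images."""
    # Number of rendered images needed to hold all the text (ceiling division).
    num_images = -(-num_text_tokens // max_text_tokens_per_image)
    image_token_cost = num_images * tokens_per_image
    return num_text_tokens / image_token_cost

# Under these assumed budgets, 2,800 text tokens fit into 4 images
# (4 * 256 = 1,024 image tokens), i.e. roughly a 2.7x reduction.
print(round(optical_compression_ratio(2800), 2))
```

The ratio is "lossy" in the paper's sense because retrieval from the rendered image is imperfect (the over-93%-accuracy figure quoted above); the denser the text per image, the higher the ratio but the lower the retrieval accuracy.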
[ICLR 2026] Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs
A next-generation architecture for long-term memory handling in LLMs.