[NeurIPS 2025] Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models

Paper Conference

Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models
Thanh-Dat Truong, Huu-Thien Tran, Thai Son Tran, Bhiksha Raj, and Khoa Luu
Computer Vision and Image Understanding (CVIU) Lab, University of Arkansas

Abstract

Large multimodal models (LMMs) have gained impressive performance due to their outstanding capability in various understanding tasks. However, these models still suffer from some fundamental limitations related to robustness and generalization due to the alignment and correlation between visual and textual features. In this paper, we introduce a simple but efficient learning mechanism for improving the robust alignment between visual and textual modalities by solving shuffling problems. In particular, the proposed approach can improve reasoning capability, visual understanding, and cross-modality alignment by introducing two new tasks: reconstructing the image order and the text order into the LMM's pre-training and fine-tuning phases. In addition, we propose a new directed-token approach to capture visual and textual knowledge, enabling the capability to reconstruct the correct order of visual inputs. Then, we introduce a new Image-to-Response Guided loss to further improve the visual understanding of the LMM in its responses. The proposed approach consistently achieves state-of-the-art (SoTA) performance compared with prior LMMs on academic task-oriented and instruction-following LMM benchmarks.
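The released code does not ship a standalone description of the auxiliary objectives, so the sketch below only illustrates the shuffle-and-reconstruct idea described in the abstract: token order is randomly permuted and an auxiliary head is trained to recover each token's original position. This is a minimal sketch under assumptions; the names `OrderReconstructionHead`, `shuffle_tokens`, and `order_reconstruction_loss` are hypothetical and are not part of the repository's API, and the actual directed-token formulation and Image-to-Response Guided loss follow the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OrderReconstructionHead(nn.Module):
    """Hypothetical auxiliary head: predicts the original position of each
    shuffled token (visual or textual) from its hidden state."""

    def __init__(self, hidden_dim: int, max_positions: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, max_positions)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, num_tokens, hidden_dim)
        return self.classifier(hidden_states)  # (batch, num_tokens, max_positions)


def shuffle_tokens(tokens: torch.Tensor):
    """Randomly permute tokens along the sequence dimension and return the
    permutation indices as the reconstruction target."""
    batch, num_tokens, _ = tokens.shape
    perm = torch.stack([torch.randperm(num_tokens) for _ in range(batch)])  # (batch, num_tokens)
    shuffled = torch.gather(tokens, 1, perm.unsqueeze(-1).expand_as(tokens))
    return shuffled, perm


def order_reconstruction_loss(head, shuffled_hidden, perm):
    """Cross-entropy between predicted and true original positions."""
    logits = head(shuffled_hidden)  # (batch, num_tokens, max_positions)
    return F.cross_entropy(logits.flatten(0, 1), perm.flatten())


if __name__ == "__main__":
    batch, n_vis, dim = 2, 16, 64
    visual_tokens = torch.randn(batch, n_vis, dim)  # stand-in for projected image patches
    shuffled, perm = shuffle_tokens(visual_tokens)
    head = OrderReconstructionHead(hidden_dim=dim, max_positions=n_vis)
    loss = order_reconstruction_loss(head, shuffled, perm)
    loss.backward()
    print(f"auxiliary order-reconstruction loss: {loss.item():.4f}")
```

In practice such an auxiliary loss would be added to the standard language-modeling objective during the pre-training and fine-tuning phases, applied to both shuffled image tokens and shuffled text tokens as the abstract describes.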

Training and Testing

Our implementation follows the training and evaluation scripts of LLaVA v1.5.

Acknowledgements

The codebase of this project is borrowed from LLaVA v1.5.

This work is partly supported by NSF CAREER (No. 2442295), NSF SCH (No. 2501021), NSF E-RISE (No. 2445877), NSF SBIR Phase 2 (No. 2247237), and a USDA/NIFA Award. We also acknowledge the Arkansas High-Performance Computing Center (HPC) for providing GPU servers.

Citation

If you find this code useful for your research, please consider citing:

@article{truong2025directed,
  title={Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models},
  author={Truong, Thanh-Dat and Tran, Huu-Thien and Tran, Thai Son and Raj, Bhiksha and Luu, Khoa},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2025}
}
