Think in Safety: Unveiling and Mitigating Safety Alignment Collapse in Multimodal Large Reasoning Model

Xinyue Lou1, You Li1, Jinan Xu1, Xiangyu Shi1, Chi Chen2, Kaiyu Huang1
1Beijing Jiaotong University, 2Tsinghua University

Abstract

The rapid development of Multimodal Large Reasoning Models (MLRMs) has demonstrated broad application potential, yet their safety and reliability remain critical concerns that require systematic exploration. To address this gap, we conduct a comprehensive and systematic safety evaluation of 11 MLRMs across 5 benchmarks and unveil prevalent safety degradation in most advanced models. Moreover, our analysis reveals distinct safety patterns across benchmarks: significant safety degradation is observed on jailbreak robustness benchmarks, whereas safety-awareness benchmarks show less pronounced degradation. In some scenarios, the long thought process even enhances safety performance. This suggests that leveraging the model's intrinsic reasoning capabilities to detect unsafe intent is a promising approach to addressing safety issues in MLRMs. To operationalize this insight, we construct a multimodal tuning dataset that incorporates a safety-oriented thought process. Experimental results show that fine-tuning existing MLRMs on this dataset effectively enhances their safety on both jailbreak robustness and safety-awareness benchmarks. This study provides a new perspective for developing safe MLRMs.

Figure: Top: Examples of multimodal safety benchmarks and the corresponding responses from different models. Bottom: Variation in safety performance of MLRMs across various benchmarks.

Contributions

  1. We conduct a systematic safety evaluation of MLRMs and analyze the empirical results, revealing several novel findings and providing new perspectives for the development of safer MLRMs.
  2. We construct a multimodal fine-tuning dataset with a safety-oriented thought process for safety alignment, alleviating the safety degradation introduced by incorporating additional modalities.
  3. Experimental results demonstrate that our method improves the safety performance of MLRMs across multiple benchmarks by enabling self-correcting thinking along the reasoning pathway, outperforming previous defense methods.

TiS Dataset

We employ a multi-stage pipeline to construct our safety alignment dataset, TiS. We begin by collecting safety-related topics and generating image captions, then explicitly incorporate long CoT reasoning into the question answering. After a filtering procedure, we obtain the final dataset. To the best of our knowledge, TiS is the first safety dataset for MLRMs that preserves the reasoning chain.
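As a rough illustration of this pipeline (not the paper's actual code), the sketch below wires the stages together around two placeholder callables, caption_image and generate, which stand in for whatever captioning model and LLM are actually used; all helper names, prompts, and the filter are assumptions.

# Illustrative sketch of the TiS construction pipeline. caption_image and
# generate are placeholders for the actual models; prompts and the filtering
# heuristic are assumptions, not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TiSExample:
    """One record: image, question, safety-oriented thought, final answer."""
    image_path: str
    question: str
    thought: str   # long CoT that reasons about whether the request is safe
    answer: str    # final response; refuses and explains consequences if unsafe


def build_tis_example(
    topic: str,
    image_path: str,
    caption_image: Callable[[str], str],   # image path -> caption
    generate: Callable[[str], str],        # prompt -> text
) -> TiSExample:
    # Stage 1: topics are collected upstream; caption the image so a text-only
    # generator can reason about its content.
    caption = caption_image(image_path)

    # Stage 2: pair the image with a question on the safety-related topic.
    question = generate(
        f"Topic: {topic}\nImage caption: {caption}\n"
        "Write a user question grounded in this image."
    )

    # Stage 3: elicit a long, safety-oriented chain of thought and a final answer.
    thought = generate(
        "Think step by step about whether answering could cause harm.\n"
        f"Image caption: {caption}\nQuestion: {question}"
    )
    answer = generate(
        "Based on the reasoning below, give a final answer that refuses unsafe "
        f"requests and explains the potential consequences.\nReasoning: {thought}"
    )
    return TiSExample(image_path, question, thought, answer)


def filter_examples(examples: List[TiSExample]) -> List[TiSExample]:
    # Stage 4: crude stand-in for the paper's filtering procedure: keep only
    # records whose thought actually engages with safety.
    keywords = ("harm", "unsafe", "risk")
    return [ex for ex in examples if any(k in ex.thought.lower() for k in keywords)]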

Figure: Overview of our data construction pipeline.

Results

Fine-tuning on TiS significantly improves the safety of MLRMs while maintaining the thought process.
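To make concrete what maintaining the thought process means during fine-tuning, the sketch below shows one plausible way to serialize a TiS record into a chat-style training sample, keeping the safety reasoning before the final answer; the think-tag delimiter and message schema are assumptions here, not the paper's specification, and depend on the base MLRM's chat template.

# Plausible serialization of a TiS record into a supervised fine-tuning sample.
# The <think> delimiter and message schema are assumptions; the actual format
# depends on the base MLRM's chat template.
def to_sft_sample(image_path: str, question: str, thought: str, answer: str) -> dict:
    return {
        "images": [image_path],
        "messages": [
            {"role": "user", "content": question},
            {
                "role": "assistant",
                # Target: safety reasoning first, then the final (refusing) answer,
                # so fine-tuning preserves rather than suppresses the thought process.
                "content": f"<think>{thought}</think>\n{answer}",
            },
        ],
    }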

Experimental Results

Both R1-Onevision and LLaVA-CoT demonstrate improved safety alignment when fine-tuned on TiS, substantially outperforming fine-tuning on prior datasets.

Qualitative Results

Examples of responses generated by models fine-tuned on the VLGuard, MIS, SPA-VL, and TiS datasets are illustrated in the figure. Our approach retains the models' thought process while decisively rejecting unsafe inputs and explicitly articulating the potentially serious consequences associated with such queries.

Citation

@article{lou2025think,
  title={Think in Safety: Unveiling and Mitigating Safety Alignment Collapse in Multimodal Large Reasoning Model},
  author={Lou, Xinyue and Li, You and Xu, Jinan and Shi, Xiangyu and Chen, Chi and Huang, Kaiyu},
  journal={arXiv preprint arXiv:2505.06538},
  year={2025}
}