When Helpers Become Hazards: A Benchmark for Analyzing Multimodal LLM-Powered Safety in Daily Life

Xinyue Lou1, Jinan Xu1, Jingyi Yin1, Xiaolong Wang2, Zhaolu Kang3, Youwei Liao1, Yixuan Wang1, Xiangyu Shi1, Fengran Mo4,Su Yao2,Kaiyu Huang1
1Beijing Jiaotong University, 2Tsinghua University, 3Peking University, 4University of Montreal

Abstract

As Multimodal Large Language Models~(MLLMs) become an indispensable assistant in human life, the unsafe content generated by MLLMs poses a danger to human behavior, perpetually overhanging human society like a sword of Damocles. To investigate and evaluate the safety impact of MLLMs' responses on human behavior in daily life, we introduce SaLAD, a multimodal safety benchmark which contains 2,013 real-world image–text samples across 10 common categories, with a balanced design covering both unsafe scenarios and cases of oversensitivity. It emphasizes realistic risk exposure, authentic visual inputs, and fine-grained cross-modal reasoning, ensuring that safety risks cannot be inferred from text alone. We further propose a safety-warning-based evaluation framework that encourages models to provide clear and informative safety warnings, rather than generic refusals. Results on 18 MLLMs demonstrate that the top-performing models achieve a safe response rate of only 57.2\% on unsafe queries. Moreover, even popular safety alignment methods limit effectiveness of the models in our scenario, revealing the vulnerabilities of current MLLMs in identifying dangerous behaviors in daily life.

Figure: Up, Unsafe examples of SaLAD. Down, Safe examples of SaLAD. “Q” represents the input query, and “W” denotes the corresponding safety warning.

Contributions

  1. We propose a benchmark SaLAD towards the Safety of Multimodal LLM-Powered Assistant in Human Daily Life and adopt a safety-warning-based evaluation method, focusing on exploring the safety of model’s response regarding human behavior.
  2. Evaluation results of 18 MLLMs demonstrate that existing models face challenges in detecting safety risks in human daily life, highlighting the critical safety vulnerability, especially in scenarios with images.
  3. Further analysis reveals that employing traditional safety defense methods, including safety prompts and safety alignment methods, is not effective in addressing SaLAD, with underlying challenges remaining.

SaLAD

When humans use MLLMs, potential safety risks are often embedded within seemingly benign user queries. Since the user’s original intention is merely to seek the assistance, the sources of danger are not directly expressed in the textual input but instead hidden in the joint visual–textual context. Therefore, as trustworthy AI assistants, MLLMs are expected not only to recognize such hidden risks but also to provide constructive guidance to ensure safe human behavior. And we have proposed a taxonomy of 10 categories to classify the queries encountered in daily human usage of MLLMs.

Figure: Safety taxonomy of SaLAD.

Citation

@misc{lou2026helpershazardsbenchmarkanalyzing,
      title={When Helpers Become Hazards: A Benchmark for Analyzing Multimodal LLM-Powered Safety in Daily Life}, 
      author={Xinyue Lou and Jinan Xu and Jingyi Yin and Xiaolong Wang and Zhaolu Kang and Youwei Liao and Yixuan Wang and Xiangyu Shi and Fengran Mo and Su Yao and Kaiyu Huang},
      year={2026},
      eprint={2601.04043},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.04043}, 
}