Abstract
Figure: Top, unsafe examples from SaLAD; bottom, safe examples from SaLAD. “Q” denotes the input query, and “W” denotes the corresponding safety warning.
Contributions
- We propose SaLAD, a benchmark for the Safety of Multimodal LLM-Powered Assistants in Human Daily Life, and adopt a safety-warning-based evaluation method that focuses on whether a model’s responses keep human behavior safe.
- Evaluation of 18 MLLMs shows that existing models struggle to detect safety risks in human daily life, exposing a critical safety vulnerability, especially in scenarios involving images.
- Further analysis reveals that traditional safety defenses, including safety prompts and safety alignment methods, are not effective on SaLAD, leaving the underlying challenges unresolved.
SaLAD
When humans use MLLMs, potential safety risks are often embedded in seemingly benign user queries. Since the user’s original intention is merely to seek assistance, the source of danger is not expressed directly in the textual input but hidden in the joint visual–textual context. As trustworthy AI assistants, MLLMs are therefore expected not only to recognize such hidden risks but also to provide constructive guidance that keeps human behavior safe. To characterize these cases, we propose a taxonomy of 10 categories covering the queries encountered in daily human use of MLLMs.
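As a rough illustration of the safety-warning-based evaluation, the sketch below scores a model by whether its response to an unsafe image–query pair contains a safety warning. All names here (SaladItem, query_mllm, contains_safety_warning) are illustrative assumptions, not the benchmark’s official evaluation code.

```python
# Minimal sketch of a safety-warning-based evaluation loop (assumed names).
from dataclasses import dataclass

@dataclass
class SaladItem:
    image_path: str   # visual context in which the risk may be hidden
    query: str        # seemingly benign user query ("Q")
    is_unsafe: bool   # whether the joint image-text context carries a risk

def query_mllm(image_path: str, query: str) -> str:
    """Placeholder for calling the MLLM under evaluation."""
    raise NotImplementedError

def contains_safety_warning(response: str) -> bool:
    """Placeholder judge: does the response include a safety warning ("W")?
    In practice this could be an LLM-as-judge or a rule-based check."""
    raise NotImplementedError

def warning_rate(items: list[SaladItem]) -> float:
    """Fraction of unsafe items for which the model issues a safety warning."""
    unsafe = [it for it in items if it.is_unsafe]
    warned = sum(
        contains_safety_warning(query_mllm(it.image_path, it.query))
        for it in unsafe
    )
    return warned / len(unsafe) if unsafe else 0.0
```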
Figure: Safety taxonomy of SaLAD.
Citation
@misc{lou2026helpershazardsbenchmarkanalyzing,
title={When Helpers Become Hazards: A Benchmark for Analyzing Multimodal LLM-Powered Safety in Daily Life},
author={Xinyue Lou and Jinan Xu and Jingyi Yin and Xiaolong Wang and Zhaolu Kang and Youwei Liao and Yixuan Wang and Xiangyu Shi and Fengran Mo and Su Yao and Kaiyu Huang},
year={2026},
eprint={2601.04043},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2601.04043},
}