VLA-Mark: A cross modal watermark for large vision-language alignment model

Kavli Affiliate: Jia Liu

| First 5 Authors: Shuliang Liu

| Summary:

Vision-language models demand watermarking solutions that protect
intellectual property without compromising multimodal coherence. Existing text
watermarking methods disrupt visual-textual alignment through biased token
selection and static strategies, leaving semantic-critical concepts vulnerable.
We propose VLA-Mark, a vision-aligned framework that embeds detectable
watermarks while preserving semantic fidelity through cross-modal coordination.
Our approach integrates multiscale visual-textual alignment metrics, combining
localized patch affinity, global semantic coherence, and contextual attention
patterns, to guide watermark injection without model retraining. An
entropy-sensitive mechanism dynamically balances watermark strength and
semantic preservation, prioritizing visual grounding during low-uncertainty
generation phases. Experiments show 7.4% lower PPL and 26.6% higher BLEU than
conventional methods, with near-perfect detection (98.8% AUC). The framework
demonstrates 96.1% resilience against attacks such as paraphrasing and
synonym substitution while maintaining text-visual consistency, establishing
new standards for quality-preserving multimodal watermarking.
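
The abstract describes the mechanism only at a high level. The minimal Python sketch below illustrates how an entropy-gated, alignment-aware green-list bias of that kind could be wired together: tokens that are strongly grounded in the image keep their original scores, while the watermark bias applied to the remaining green-list tokens grows with the entropy of the next-token distribution. All names (alignment_score, entropy_gate, the threshold, and the simple averaging of the three alignment terms) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- not the VLA-Mark code. Shows one plausible way to
# combine patch affinity, global coherence, and attention context with an
# entropy-scaled green-list bias.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def alignment_score(token_emb, patch_embs, attn_weights):
    """Combine localized patch affinity, global semantic coherence, and
    attention-weighted context into a single visual-grounding score."""
    patch_affinity = max(cosine(token_emb, p) for p in patch_embs)     # local
    global_coherence = cosine(token_emb, patch_embs.mean(axis=0))      # global
    attn_context = cosine(token_emb, attn_weights @ patch_embs)        # contextual
    return (patch_affinity + global_coherence + attn_context) / 3.0

def entropy_gate(logits, delta_max=2.0):
    """Scale the watermark bias by next-token entropy: low entropy
    (confident, visually grounded step) -> weak bias, high entropy -> strong bias."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return delta_max * entropy / np.log(len(p))

def watermarked_logits(logits, token_embs, patch_embs, attn_weights,
                       green_mask, align_threshold=0.5):
    """Bias green-list tokens, but leave strongly vision-aligned
    (semantic-critical) tokens untouched."""
    delta = entropy_gate(logits)
    out = logits.copy()
    for i, is_green in enumerate(green_mask):
        if is_green and alignment_score(
                token_embs[i], patch_embs, attn_weights) < align_threshold:
            out[i] += delta
    return out

if __name__ == "__main__":
    # Toy demo with random embeddings, just to show the call pattern.
    rng = np.random.default_rng(0)
    V, P, D = 8, 4, 16                      # vocab size, patches, embedding dim
    logits = rng.normal(size=V)
    token_embs = rng.normal(size=(V, D))
    patch_embs = rng.normal(size=(P, D))
    attn = np.ones(P) / P
    green = rng.random(V) < 0.5
    print(watermarked_logits(logits, token_embs, patch_embs, attn, green))
```

In this reading, detection would proceed as in standard green-list watermarking (counting biased tokens), while the alignment exemption and entropy gate account for the reported preservation of perplexity, BLEU, and text-visual consistency.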

| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=3
