arXiv:2603.23171v1 Announce Type: cross
Abstract: Large language models (LLMs) can be misused to reveal sensitive information, such as weapon-making instructions, or to write malware. LLM providers rely on \emph{monitoring} to detect and flag unsafe behavior during inference. An open security challenge is \emph{adaptive} adversaries who craft attacks that simultaneously (i) evade detection and (ii) elicit unsafe behavior. Adaptive attackers are a major concern because LLM providers cannot patch their security mechanisms while they remain unaware of how their models are being misused. We cast \emph{robust} LLM monitoring as a security game in which adversaries who know about the monitor try to extract sensitive information, while a provider must accurately detect these adversarial queries at low false positive rates. Our work (i) shows that existing LLM monitors are vulnerable to adaptive attackers and (ii) designs improved defenses through \emph{activation watermarking}, which carefully introduces uncertainty for the attacker during inference. We find that \emph{activation watermarking} outperforms guard baselines by up to 52\% under adaptive attackers who know the monitoring algorithm but not the secret key.
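The abstract does not specify how the watermark is constructed, only that a secret key introduces uncertainty the attacker cannot optimize against. As a minimal illustrative sketch in Python, assuming the watermark is a key-derived pseudorandom direction added to hidden-state activations and checked by the monitor via projection, the idea might look as follows; the function names (keyed_direction, watermark_activations, monitor_score), the key-to-direction derivation, and the eps parameter are hypothetical, not the paper's actual algorithm.

```python
import hashlib
import numpy as np

def keyed_direction(secret_key: bytes, dim: int) -> np.ndarray:
    """Derive a pseudorandom unit vector from the secret key.

    An adaptive attacker may know this derivation procedure, but
    without the key they cannot predict the resulting direction.
    """
    seed = int.from_bytes(hashlib.sha256(secret_key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def watermark_activations(h: np.ndarray, secret_key: bytes,
                          eps: float = 0.1) -> np.ndarray:
    """Perturb a hidden-state activation vector along the keyed direction.

    eps is a hypothetical watermark strength, trading detectability
    against impact on model behavior.
    """
    return h + eps * keyed_direction(secret_key, h.shape[-1])

def monitor_score(h: np.ndarray, secret_key: bytes) -> float:
    """Project an activation onto the keyed direction.

    A real monitor would combine a key-dependent signal like this with
    a learned unsafe-behavior probe; only the keyed component is shown.
    """
    return float(h @ keyed_direction(secret_key, h.shape[-1]))

if __name__ == "__main__":
    key = b"provider-secret"  # hypothetical secret key held by the provider
    h = np.random.default_rng(0).standard_normal(4096)  # stand-in hidden state
    h_wm = watermark_activations(h, key)
    # The shift is ~eps along the keyed direction: visible to the
    # key holder, unpredictable to an attacker without the key.
    print(monitor_score(h_wm, key) - monitor_score(h, key))
```

The point of keying the direction is that even an attacker who knows the full monitoring algorithm faces residual uncertainty about the monitor's decision boundary, which is the property the abstract credits for the robustness gains over guard baselines.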