Mitigating Indirect Prompt Injection via Instruction-Following…
[REF]:
https://arxiv.org/abs/2512.00966
Indirect prompt injection attacks (IPIAs), where large language models (LLMs) follow malicious instructions hidden in input data, pose a critical threat to LLM-powered agents. In this paper, we…