ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts



Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.
The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive


Fonte: Leia a matéria original

No comments:

Post a Comment

A.I. Toy Bear Speaks of Sex, Knives and Pills, Consumer Group Warns

The chatter left startled adults unsure whether they heard correctly. Testers warned that interactive toys like this one could allow childre...