ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts



Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.
The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive


Fonte: Leia a matéria original

No comments:

Post a Comment

Hackers Use LinkedIn Messages to Spread RAT Malware Through DLL Sideloading

Cybersecurity researchers have uncovered a new phishing campaign that exploits social media private messages to propagate malicious payloads...