Users of applications that use ChatGPT-like large language models (LLMs) beware: An attacker that creates untrusted content for the AI system could compromise any information or recommendations from the system, warn researchers.
The attack could allow job applicants to bypass resume-checking applications, allow disinformation specialists to force a news summary bot to only give a specific point of view, or allow bad actors to convert a chatbot into an eager participant in their fraud.
In a session at next month’s Black Hat USA, Compromising LLMs: The Advent of AI Malware, a group of computer…
Read more on google