ChatGPT, Other Generative AI Apps Prone to Compromise, Manipulation

Users of applications built on ChatGPT-like large language models (LLMs) beware: An attacker who creates untrusted content consumed by the AI system could compromise any information or recommendations the system returns, researchers warn.

The attack could allow job applicants to bypass resume-screening applications, disinformation specialists to force a news-summary bot to present only a specific point of view, or bad actors to turn a chatbot into an eager accomplice in their fraud.

In a session at next month's Black Hat USA, "Compromising LLMs: The Advent of AI Malware," a group of computer…



