ChatGPT Plugins Open Security Holes From PDFs, Websites and More

n3Ppxdw7qGsRGV8xg4C9Dd 1200 80

As Microsoft prepares to add support for ChatGPT plugins to its own Bing chatbot, there’s more proof that the existing suite of plugins allows for several different kinds of prompt injection attack. Last week, we reported that doctored YouTube transcripts could insert unwanted instructions into your chat via a plugin. Now, we can report that hidden instructions on web pages and in PDFs can also do prompt injection and, even worse, they can trigger other plugins to perform actions you didn’t ask for.

Security Researcher Johann Rehberger of Embrace the Red recently demonstrated that the…

Read more on google


Up to 30% of Published Neuroscience Papers May Be Faked


Travel’s ChatGPT adventure raises hopes and questions