As Microsoft prepares to add support for ChatGPT plugins to its own Bing chatbot, there’s more proof that the existing suite of plugins allows for several different kinds of prompt injection attack. Last week, we reported that doctored YouTube transcripts could insert unwanted instructions into your chat via a plugin. Now, we can report that hidden instructions on web pages and in PDFs can also do prompt injection and, even worse, they can trigger other plugins to perform actions you didn’t ask for.
Security Researcher Johann Rehberger of Embrace the Red recently demonstrated that the…