
OpenAI releases ChatGPT data leak patch, but the issue isn't completely fixed


We've said it before, and we'll say it again: Don't enter anything into ChatGPT that you don't want unauthorized parties to read.

Since OpenAI launched ChatGPT last year, there have been quite a few occasions where flaws in the AI chatbot could have been weaponized or manipulated by bad actors to access sensitive or private data. And this latest example shows that even after a security patch has been released, problems can still persist.

According to a report by Bleeping Computer, OpenAI has recently rolled out a fix for an issue where ChatGPT could leak users' data to unauthorized third parties. This data could include user conversations with ChatGPT and corresponding metadata like a user's ID and session information.

However, according to security researcher Johann Rehberger, who originally discovered the vulnerability and outlined how it worked, there are still gaping security holes in OpenAI's fix. In essence, the security flaw still exists.

The ChatGPT data leak

Rehberger was able to take advantage of OpenAI's recently launched and much-lauded custom GPTs feature to create his own GPT, which exfiltrated data from ChatGPT. This was a significant finding, as custom GPTs are being marketed as AI apps, akin to how the iPhone revolutionized mobile applications with the App Store. If Rehberger could create this custom GPT, it seems likely that bad actors could soon discover the flaw and create custom GPTs to steal data from their targets.

Rehberger says he first contacted OpenAI about the "data exfiltration technique" way back in April. He contacted OpenAI once again in November to report exactly how he was able to create a custom GPT and carry out the process.

On Wednesday, Rehberger posted an update to his website: OpenAI had patched the leak vulnerability.

"The fix is not perfect, but a step into the right direction," Rehberger explained.

The reason the fix isn't perfect is that ChatGPT is still leaking data through the vulnerability Rehberger discovered. ChatGPT can still be tricked into sending data.

"Some quick tests show that bits of data can steal [sic] leak," Rehberger wrote, further explaining that "it only leaks small amounts this way, is slow and more noticeable to a user." Regardless of the remaining issues, Rehberger said it's a "step in the right direction for sure."

However, the security flaw still remains entirely in the ChatGPT apps for iOS and Android, which have yet to be updated with a fix.

ChatGPT users should remain vigilant when using custom GPTs and should probably pass on these AI apps from unknown third parties.


Written by Bourbiza Mohamed
