ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

It’s possible for threat actors to manipulate artificial intelligence chatbots such as ChatGPT to help them distribute malicious code packages to software developers, according to vulnerability and risk management company Vulcan Cyber. 

The issue is related to hallucinations, which occur when AI, specifically a large language model (LLM) such as ChatGPT, generates factually incorrect or nonsensical information that may look plausible. 

In Vulcan’s analysis, the company’s researchers noticed that ChatGPT — possibly due to its use of older data for training — recommended…
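The risk described above is that a developer may trust a package name an LLM suggests even when no such package legitimately exists, and an attacker can register that name with malicious contents. One mitigation is to never install an AI-recommended dependency directly, but to first check it against a vetted allowlist. The sketch below is a minimal, hypothetical illustration of that idea; the allowlist contents and the hallucinated-looking package name are assumptions for demonstration, not taken from Vulcan's research.

```python
# Hypothetical sketch: gate AI-suggested dependencies behind a vetted
# allowlist instead of installing them on trust. The names below are
# illustrative assumptions, not packages named in the article.

VETTED_PACKAGES = {"requests", "numpy", "flask"}  # assumed internal allowlist

def is_safe_to_install(candidate: str) -> bool:
    """Return True only if the suggested package is on the vetted list."""
    return candidate.lower() in VETTED_PACKAGES

# A real, vetted package passes; an unknown (possibly hallucinated,
# possibly attacker-registered) name is rejected for manual review.
print(is_safe_to_install("requests"))               # True
print(is_safe_to_install("flask-jwt-simple-auth"))  # False
```

In practice such a check would sit in CI or a package-install wrapper, so that any name outside the allowlist triggers human review rather than an automatic install.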
