It’s possible for threat actors to manipulate artificial intelligence chatbots such as ChatGPT to help them distribute malicious code packages to software developers, according to vulnerability and risk management company Vulcan Cyber.
The issue is related to hallucinations, which occur when AI, specifically a large language model (LLM) such as ChatGPT, generates factually incorrect or nonsensical information that may look plausible.
In Vulcan’s analysis, the company’s researchers noticed that ChatGPT — possibly due to its use of older data for training — recommended…
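Because a hallucinated recommendation can name a package that has never actually been published, one practical safeguard for developers is to check any chatbot-suggested package against the registry before installing it. The sketch below is a minimal illustration only, assuming a Python/PyPI workflow and using PyPI's public JSON metadata endpoint; the package name passed in is a hypothetical placeholder, not one from Vulcan's research.

```python
import json
import sys
from urllib.error import HTTPError
from urllib.request import urlopen

# PyPI's public metadata endpoint: returns JSON for published packages, 404 otherwise.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"


def check_package(name: str) -> None:
    """Print basic registry facts for a package name before trusting it."""
    try:
        with urlopen(PYPI_JSON.format(name=name)) as resp:
            meta = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            # A name with no registry entry is a red flag: the chatbot may have hallucinated it.
            print(f"'{name}' is NOT on PyPI -- treat the recommendation with suspicion.")
            return
        raise

    info = meta["info"]
    releases = meta.get("releases", {})
    print(f"'{name}' exists on PyPI")
    print(f"  summary:  {info.get('summary')}")
    print(f"  releases: {len(releases)}")
    print(f"  homepage: {info.get('home_page') or info.get('project_url')}")


if __name__ == "__main__":
    # 'some-suggested-package' is a placeholder for whatever name a chatbot recommends.
    check_package(sys.argv[1] if len(sys.argv) > 1 else "some-suggested-package")
```

Existence alone is not proof of safety, of course: a package with very few releases, no history, or a recent first upload under a name a chatbot keeps suggesting deserves extra scrutiny.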