Attackers can exploit ChatGPT’s penchant for returning false information to spread malicious code packages, researchers have found. This poses a significant risk to the software supply chain, as it can allow malicious code and Trojans to slide into legitimate applications and into package registries and code repositories such as npm, PyPI, and GitHub.
By leveraging so-called “AI package hallucinations,” threat actors can publish malicious code packages under names ChatGPT recommends but that do not actually exist; a developer could then inadvertently download one of these packages when using the chatbot and build it into software that is then used widely.
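The defensive corollary is that a chatbot-suggested package name deserves vetting before installation. The sketch below is an illustration, not part of the researchers’ work: it queries PyPI’s public JSON metadata endpoint (https://pypi.org/pypi/{name}/json) to check whether a suggested name is registered at all and, if so, when its first release appeared. The script and the vet_package helper are hypothetical names chosen for this example.

```python
import json
import sys
import urllib.error
import urllib.request

# PyPI's public JSON metadata endpoint; returns HTTP 404 for unregistered names.
PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"


def vet_package(name: str) -> str:
    """Report whether `name` exists on PyPI and when its first file was uploaded."""
    url = PYPI_JSON_API.format(name=name)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Unregistered: exactly the kind of hallucinated name an attacker could squat.
            return f"{name}: not on PyPI (unregistered; a hallucinated name could be squatted)"
        raise
    # Gather upload timestamps across all released files to find the oldest.
    uploads = [
        f["upload_time"]
        for files in data["releases"].values()
        for f in files
    ]
    first_seen = min(uploads) if uploads else "no released files"
    return f"{name}: on PyPI, first release uploaded {first_seen}"


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(vet_package(pkg))
```

Note that mere existence on the registry proves nothing here: the attack works precisely because threat actors register the hallucinated names themselves. A very recent first-upload date on a chatbot-recommended package is a red flag worth manual review of the maintainer and source.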