Security researchers at cyber risk management company Vulcan.io have published a proof of concept showing how attackers could use ChatGPT (GPT-3.5) to spread malicious code through trusted package repositories.
The research calls attention to the security risks inherent in relying on ChatGPT's suggestions for coding solutions.
Methodology
The researchers collated frequently asked coding questions from Stack Overflow, a coding question-and-answer forum.
They chose 40 coding subjects (such as parsing, math, and scraping technologies) and took the first 100 questions for each of the 40 subjects.
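The article does not say how the researchers gathered the questions, but a collection step like this could be sketched with the public Stack Exchange API. The sketch below only builds the query URLs for the top questions per subject tag; the tag list is a hypothetical stand-in for the study's 40 subjects, and the sort order is an assumption.

```python
from urllib.parse import urlencode

# Stack Exchange's public read API for questions.
API_BASE = "https://api.stackexchange.com/2.3/questions"

def build_query_url(tag: str, pagesize: int = 100) -> str:
    """Build a query URL for the top questions under one subject tag,
    mirroring the 'first 100 questions per subject' step (sort order
    by votes is an assumption; the article does not specify it)."""
    params = {
        "site": "stackoverflow",
        "tagged": tag,
        "sort": "votes",
        "order": "desc",
        "pagesize": pagesize,
    }
    return f"{API_BASE}?{urlencode(params)}"

# Hypothetical subjects standing in for the 40 used in the research.
subjects = ["parsing", "math", "web-scraping"]
urls = [build_query_url(s) for s in subjects]
for u in urls:
    print(u)
```

Each URL would then be fetched (the API returns JSON) to collect the 100 questions that were later filtered.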
The next step was to filter for…