ChatGPT creates mutating malware that evades detection by EDR

A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code that evades endpoint detection and response (EDR) systems.

A recent series of proof-of-concept attacks shows how a benign-seeming executable file can be crafted such that…
