Hackers’ favourite new tool is WormGPT


“While some legitimate AI tools can be used to conduct software code reviews, developers should be discouraged from doing this as their code may be used to train AI models that criminals gain access to, giving them further intelligence into organisational systems.”

Butler said the number of different threat actors would likely escalate as generative AI made it easier for criminals to access cyberattack tools. He said the Tesserent Security Operations Centre had already found an increase in phishing campaigns and malicious email activities targeting Australian organisations, particularly in the months following the emergence of WormGPT and similar tools.

There are now at least six different generative AI tools available to rent or purchase on the dark web, including FraudGPT, EvilGPT, DarkBard, WolfGPT, XXXGPT and WormGPT, with more appearing, according to Butler.

“While most lack the large capacity of public-facing tools like ChatGPT and Bard, they are proliferating quickly, which can make them harder to find and take down.”

Scott Jarkoff, director of intelligence strategy, APJ & META, at CrowdStrike, said cybersecurity activity had risen amid the conflict in the Middle East, meaning businesses should be even more vigilant than usual.

He said hacking groups from the so-called “big four” of Russia, China, North Korea and Iran had been using generative AI tools to craft attacks in perfect English.

ChatGPT has exploded in popularity in the past 12 months. Credit: Reuters/Florence Lo

“The Israel-Hamas conflict is now giving criminals a perfect lure to say ‘hey, visit this site to donate to whichever cause you believe in’, and that means it’s now more important that everyone takes cybersecurity more seriously,” he said.

“We all take safety seriously, why do we not take cyber seriously? We’ve got to get to a point where cyber hygiene is built into everyone’s muscle memory, just as safety is built into everyone’s muscle memory.”

Generative AI is not only being used to create realistic phishing emails. It’s also supercharging social engineering, with bad actors using AI to create realistic fake accounts to spread misinformation, according to Dan Schiappa, chief product officer at cyber vendor Arctic Wolf.

Police in China recently arrested a man for using ChatGPT to fabricate a news story about a train derailment, and he will be far from the last person to use the technology to create chaos, Schiappa said.

“The long-standing ‘arms race’ between cyberattackers and cybersecurity practitioners has left both sides with new opportunities to act faster than ever before using AI,” he said.

The positive for the cybersecurity industry and for Australian businesses is that generative AI tools can be used by security personnel, or “good guys”, to identify new vulnerabilities and defend themselves more quickly.

“‘Good guys’ are leveraging the tech to find anomalies or patterns in system access records, sniffing out intrusion attempts that otherwise might have gone undetected without AI,” Schiappa said.

“As defenders, we need the power to harness that ability to defend organisations without allowing massive corporations to run wild with no restrictions on their research and development.

“Recent research has even noted that organisations using AI to help defend themselves resolved breaches nearly two-and-a-half months quicker than organisations not using AI or automation, and saved $3 million more in breach costs than those not using the technology.”
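The anomaly-hunting Schiappa describes can be illustrated with a toy example. This is a minimal sketch, not any vendor's method: it uses a simple robust statistic (median absolute deviation) rather than a trained AI model, and the account names and login counts are hypothetical.

```python
from statistics import median

def flag_anomalies(login_counts, threshold=3.5):
    """Flag accounts whose daily login count sits far from the norm.

    Uses the median absolute deviation (MAD), a robust measure that
    a single extreme outlier cannot skew the way a mean/stdev can.
    """
    counts = list(login_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        mad = 1  # avoid division by zero when most counts are identical
    return [user for user, c in login_counts.items()
            if abs(c - med) / mad > threshold]

# Hypothetical access-record summary: logins per account in one day.
records = {"alice": 12, "bob": 9, "carol": 11, "dave": 10, "mallory": 240}
print(flag_anomalies(records))  # mallory's 240 logins stand out
```

In a real deployment the same idea would run over many features (source IPs, times of day, resources touched), which is where machine-learning models replace a single hand-picked statistic.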

