
Darkweb trained DarkBertGPT gives cyber criminals a huge new advantage

WHY THIS MATTERS IN BRIEF

A Dark Web trained AI gives criminals a massive advantage, democratising cybercrime by reducing the cost and time it takes to generate and launch malware and scams.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

ChatGPT revolutionised the corporate world and even helped researchers write new ransomware code. Now, new Artificial Intelligence (AI) models trained on Dark Web data could revolutionise – and turbocharge – cybercrime, after the developer behind the FraudGPT malicious chatbot announced that they’re “readying even more sophisticated adversarial tools based on Generative AI and Google’s Bard technology” – one of which will leverage a Large Language Model (LLM) that uses the entirety of the Dark Web itself as its knowledge base.

 


 

An ethical hacker who already had discovered another AI-based hacker tool, WormGPT, tipped off the researchers that the FraudGPT inventor — known on hacker forums as “CanadianKingpin12” — has more AI-based malicious chatbots in the works, according to SlashNext.

 


 

The forthcoming bots — dubbed DarkBART and DarkBERT — will arm threat actors with ChatGPT-like AI capabilities that go much further than existing cybercriminal genAI offerings, according to SlashNext. In a blog post published Aug. 1, the firm warned that the AIs will potentially lower the barrier of entry for would-be cybercriminals to develop sophisticated business email compromise (BEC) phishing campaigns, find and exploit zero-day vulnerabilities, probe for critical infrastructure weaknesses, create and distribute malware, and much more.

 


 

“The rapid progression from WormGPT to FraudGPT and now ‘DarkBERT’ in under a month underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape,” SlashNext researcher Daniel Kelley wrote in the post.

In terms of functionality, DarkBART will be a dark version of Google’s Bard AI, and the hackers say it will be based on an LLM known as DarkBERT, which was created by South Korean data-intelligence firm S2W with the goal of actually fighting cybercrime. Access to S2W’s model is currently limited to academic researchers, which would make malicious access to it notable.

“The threat actor … claims to have gained access to DarkBERT,” Kelley wrote, adding that when contacted via Telegram, CanadianKingpin12 shared a video demonstrating that his version of DarkBERT “underwent specialized training on a vast corpus of text from the Dark Web.”

 


 

“The malicious developer also claims his new bot … can be integrated with Google Lens,” Kelley added. “This integration enables the ability to send text accompanied by images.” That’s notable given that, so far, ChatGPT-like offerings have been text-only.

The second adversarial tool, confusingly also named DarkBERT (but wholly separate from the Korean AI), will go even further by using an LLM trained on the entire Dark Web, giving threat actors access to the hive mind of the hacker underground for carrying out cyber threats. It will also have Google Lens integration, CanadianKingpin12 claims.

Kelley noted that the developers of adversarial AI tools, like their more benevolent counterparts, will likely soon offer Application Programming Interface (API) access to their chatbots, allowing for more seamless integration into cybercriminals’ workflows and code, and further lowering the barriers to entry to cybercrime.

 


 

“Such progress raises significant concerns about potential consequences, as the use cases for this type of technology will likely become increasingly intricate,” Kelley wrote.

This rapid progression also means that defending against these threats will require a proactive approach. In addition to the typical training provided to enterprise employees to help them identify phishing attacks, organizations should also provide BEC-specific training that educates employees on the nature of these attacks and the role AI plays in them, the researchers said. Moreover, enterprises should enhance their email verification measures to combat AI-driven threats, adding strict verification processes and keyword flagging to the measures already in place.
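To make the keyword-flagging idea concrete, here is a minimal illustrative sketch in Python. The keyword list and function names are hypothetical examples, not part of any product SlashNext describes; a real deployment would tune the rules per organisation and combine them with sender verification.

```python
# Illustrative sketch (hypothetical rule set): flag inbound mail whose
# subject or body matches phrases commonly seen in BEC attempts, so a
# human can review the message before any payment or data is released.
import re

# Hypothetical keyword list -- real deployments would tune this per org.
BEC_KEYWORDS = [
    "wire transfer",
    "urgent payment",
    "gift cards",
    "change of bank details",
    "confidential request",
]

def flag_bec_keywords(subject: str, body: str) -> list[str]:
    """Return the BEC-associated keywords found in a message."""
    text = f"{subject}\n{body}".lower()
    return [kw for kw in BEC_KEYWORDS if re.search(re.escape(kw), text)]

# Example: a message demanding an urgent wire transfer gets flagged.
hits = flag_bec_keywords(
    "Re: Invoice",
    "Please handle this urgent payment via wire transfer today.",
)
print(hits)  # ['wire transfer', 'urgent payment']
```

Keyword flagging alone is easy for an attacker to evade, which is why the researchers frame it as one layer on top of employee training and stricter verification processes, not a standalone defence.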

“As cyber threats evolve, cybersecurity strategies must continually adapt to counter emerging threats,” Kelley wrote. “A proactive and educated approach will be our most potent weapon against AI-driven cybercrime.”
