Meet ‘DarkBERT:’ South Korea’s Dark Web AI Could Combat Cybercrime

A team of South Korean researchers has taken the unprecedented step of developing and training artificial intelligence (AI) on the so-called “Dark Web.” The Dark Web-trained AI, called DarkBERT, was unleashed to trawl and index what it could find, with the aim of shedding light on ways to combat cybercrime.

The “Dark Web” is a section of the internet that remains hidden and cannot be accessed through standard web browsers. This part of the web is notorious for its anonymous websites and marketplaces that facilitate illegal activities, such as drug and weapons trading and the sale of stolen data, and it serves as a haven for cybercriminals.

How Does DarkBERT Function?

DarkBERT is still a work in progress. The developers are adapting the model to the distinctive language used on the dark web, and the researchers are training it on text gathered by crawling the Tor network.
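
The paper’s crawler itself is not public, but the basic access pattern behind crawling over Tor is straightforward: route HTTP requests through a local Tor client’s SOCKS proxy. Here is a minimal sketch in Python, assuming a Tor daemon on the default port 9050 and using a placeholder onion address (the `requests[socks]` extra must be installed):

```python
import requests  # needs the "requests[socks]" extra for SOCKS support

# A default Tor client exposes a SOCKS proxy on localhost:9050; the
# "socks5h" scheme resolves DNS inside Tor, which .onion addresses require.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_onion_page(url: str) -> str:
    """Fetch one hidden-service page through the local Tor proxy."""
    resp = requests.get(url, proxies=TOR_PROXY, timeout=60)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # Placeholder address -- not a real hidden service.
    html = fetch_onion_page("http://exampleaddress.onion/")
    print(len(html), "bytes fetched")
```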

It has also been reported that the pretraining data is carefully filtered and deduplicated, and that further data processing is built into the pipeline so the model can identify threats within the sensitive material the dark web is expected to contain.
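
The paper’s exact filtering rules are not spelled out in this report, but deduplicating a crawled corpus commonly comes down to normalizing each page and hashing it. A minimal sketch, assuming simple length-based filtering and exact-duplicate removal (the real pipeline is reported to be more involved, e.g. handling sensitive identifiers):

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so near-identical pages hash alike."""
    return re.sub(r"\s+", " ", text).strip().lower()

def filter_and_dedupe(pages):
    """Drop near-empty pages and exact duplicates from a crawled corpus.

    Illustrative only -- the thresholds and rules here are assumptions,
    not the DarkBERT paper's actual preprocessing.
    """
    seen = set()
    for page in pages:
        text = normalize(page)
        if len(text) < 200:          # discard near-empty pages
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:           # exact-duplicate removal
            continue
        seen.add(digest)
        yield text

if __name__ == "__main__":
    crawled = ["Example onion page text. " * 20] * 2  # two identical pages
    corpus = list(filter_and_dedupe(crawled))
    print(f"{len(corpus)} unique page(s) kept")       # -> 1
```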

According to the team, their LLM was far better at making sense of the dark web than other models trained for similar tasks, including RoBERTa, which Facebook researchers designed in 2019 to “predict intentionally hidden sections of text within otherwise unannotated language examples,” according to its official description.
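
To make that quoted objective concrete, masked-span prediction can be demonstrated in a few lines with the public roberta-base checkpoint; DarkBERT applies the same idea to a dark web corpus, but its weights are access-controlled, so this sketch stands in for them:

```python
from transformers import pipeline

# RoBERTa's pretraining objective is exactly the quoted one: predict
# intentionally hidden (masked) spans of text from their context.
fill = pipeline("fill-mask", model="roberta-base")
for pred in fill("Stolen data is often sold on <mask> web marketplaces."):
    print(f"{pred['token_str'].strip():>12}  score={pred['score']:.3f}")
```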

“Our evaluation results show that DarkBERT-based classification model outperforms that of known pre-trained language models,” the researchers wrote in their paper.

According to the team, DarkBERT could be employed for a range of cybersecurity purposes, including identifying websites that sell ransomware or leak confidential data. It could also scour the many dark web forums updated daily and monitor them for any illegal exchange of information.
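
One way such a page classifier could be wired up is sketched below with the Hugging Face transformers API. Note that the hub ID, label set, and task head are assumptions (the checkpoint is access-controlled and the paper’s fine-tuned heads are not public), so this shows the general pattern rather than the authors’ model; the freshly initialized head would still need fine-tuning on labeled pages before its predictions mean anything:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "s2w-ai/DarkBERT"       # assumed hub ID; access-controlled
LABELS = ["benign", "leak-site"]   # assumed label set for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=len(LABELS)  # adds an untrained classification head
)

def classify_page(text: str) -> str:
    """Return the predicted label for one crawled page of text."""
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_page("All files will be published unless payment is made."))
```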

What’s next?

Development of DarkBERT is far from finished. The researchers plan to incorporate multiple languages into the pretrained model and to continue pretraining it on more recent dark web text, which they expect will improve performance and enable the crawling of additional data.

GPT-4 Can’t Stop Helping Hackers Make Cybercriminal Tools

OpenAI released the latest version of its machine learning software, GPT-4, to great fanfare this week. One of the features the company highlighted was a set of rules meant to protect it from cybercriminal use. Within a matter of days, though, researchers say they have tricked it into making malware and helping them craft phishing emails, just as they had done with the previous iteration of OpenAI’s software, ChatGPT. On the bright side, they were also able to use the software to patch holes in cyber defenses.

Researchers from cybersecurity firm Check Point showed Forbes how they got around OpenAI’s blocks on malware development by simply removing the word “malware” from a request. GPT-4 then helped them create software that collected PDF files and sent them to a remote server. It went further, advising the researchers on how to make the program run on a Windows 10 PC and how to shrink the file so it would run more quickly and stand a lower chance of being spotted by security software.

To have GPT-4 help craft phishing emails, the researchers took two approaches. In the first, they used GPT-3.5, which didn’t block requests to craft malicious messages, to write a phishing email impersonating a legitimate bank. They then requested GPT-4, which had initially refused to create an original phishing message, to improve the language. In the second, they asked for advice on creating a phishing awareness campaign for a business and requested a template for a fake phishing email, which the tool duly provided.

“GPT-4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity,” the Check Point researchers noted in their report, handed to Forbes ahead of publication. “What we’re seeing is that GPT-4 can serve both good and bad actors. Good actors can use GPT-4 to craft and stitch code that is useful to society; but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime.”
