London, May 6 (PTI) Cybercriminals are still struggling to make effective use of AI tools despite widespread experimentation since the launch of ChatGPT, according to a new peer-reviewed study analysing more than 100 million posts from underground cybercrime forums.
Researchers from the University of Edinburgh, the University of Cambridge and the University of Strathclyde have found that many cybercrime actors lack the skills and resources needed to turn AI tools into major new criminal capabilities.
The study found that AI was being used most effectively to hide patterns that cybersecurity systems are designed to detect, and to run automated social media bots linked to harassment and fraud.
The researchers analysed discussions from the CrimeBB database, which contains posts scraped from underground and dark web cybercrime forums. They examined conversations from November 2022 onwards, when ChatGPT was publicly released, to understand how cybercriminals were experimenting with AI tools.
The study found that AI coding assistants were proving most useful for already skilled users, rather than making cybercrime easier for beginners. Researchers said the tools still required significant technical knowledge to use effectively.
They also found some evidence of AI being used in more advanced forms of automation, particularly in social engineering and bot farming.
Because many forms of cybercrime already rely heavily on automated tools and pre-made software, researchers said AI currently appeared to represent “an evolution rather than a revolution” in criminal activity.
Ben Collier, senior lecturer in digital methods at the University of Edinburgh, said: “Cybercriminals are experimenting with these tools, but as far as we can tell it’s not delivering them real benefits in their own work.”
The researchers said safeguards built into major chatbots appeared to be limiting some harmful uses.
However, they also found early signs that cybercrime communities were attempting to manipulate chatbot responses.
The study said some users in cybercrime forums were also expressing concern about losing technology sector jobs because of AI disruption, which researchers said could potentially push more people towards cybercrime.
Daniel Thomas from the department of computer and information sciences at Strathclyde said: “The more immediate risk is the rapid adoption of poorly secured AI systems by organisations and individuals, which could create new vulnerabilities that criminals can exploit.”