The Use of AI Tools by Cybercrime Groups

April 19, 2023
Blog

4 min read

Tech giants like Google and Microsoft are racing to add artificial intelligence tools such as “smart chatbots” to their products, setting off a media frenzy around ChatGPT and similar AI tools. Microsoft is gradually rolling out and testing ChatGPT, an advanced AI system built on natural language processing (NLP) and machine learning (ML), in its Bing search engine, while Google is doing the same with Bard, a technology of its own.

Using AI to improve search engine results is not a new idea, but it feels new because NLP, along with ML, has advanced so far and drawn so much attention in 2023. Combined, these two AI technologies produce a chatbot application that is conversational, intelligent, and even grammatically correct. The ML component cannot be ignored, because it will keep improving NLP until its output looks and sounds human. That is certainly good news for applications such as virtual assistants, but there are valid concerns about how cybercrime groups and malicious hackers can leverage AI in their cybercrime activities.

Many aspects of AI are worth discussing with regard to cybercrime and information security, but in this article we focus on NLP, phishing, and social engineering.


Phishing

A 2023 IBM Security Intelligence report found that more than a third of all cybercrimes (41%) started with phishing. Information security researchers have observed cybercrime groups using AI chatbots since 2021, a trend that tracks both the rise in phishing attacks and the release of AI platforms like ChatGPT. Worse still, these platforms let attackers who are not native speakers generate phishing content in fluent, grammatically correct English.

Whatever problems NLP platforms still have, AI chatbots can undeniably produce text and business communications that are almost indistinguishable from human writing. Deception is critical to phishing, and AI makes it easier and more effective, especially for hackers whose limited English previously stopped them from composing emails and text messages that read as legitimate.


AI Chatbots and Manipulating People

Digital media lets phishing operate at scale. Because hackers can cast their lures through email, text messages, social networks, and chat apps, a single campaign can reach a large number of people. They rely on spoofing, automated tools, and outright lies to hook their victims.

Spear phishing is an attack that targets specific individuals of perceived high value, and it is closely related to social engineering. It is more involved than ordinary phishing, which is usually a one-shot attack, because the target may need to be gradually talked into acting.


Cybercrime: Fine-Tuning and Training NLP Models

Hackers and cybercrime groups can fine-tune advanced NLP models through an API so that they write in a particular style. Some models even support templates that adjust sampling parameters and vocabulary to produce text that sounds as if it were written by technicians, executives, clerical workers, customer service representatives, instructors, auditors, and other professionals.
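As a rough illustration of what "adjusting sampling" means, the sketch below shows how a temperature parameter reshapes a model's token distribution. It is a generic, self-contained example, not tied to any specific model or vendor API; the vocabulary and scores are invented for the illustration.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Pick a token index from raw model scores (logits).

    Low temperature sharpens the distribution (predictable,
    formal-sounding text); high temperature flattens it
    (more varied, looser text).
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r <= cumulative:
            return i
    return len(exps) - 1

# Hypothetical next-token candidates for an email sign-off.
vocab = ["regards", "thx", "sincerely", "cheers"]
logits = [2.0, 0.1, 1.5, 0.5]  # the model's raw preference scores

# Near-zero temperature: the top-scoring, most formal token wins.
print(vocab[sample_with_temperature(logits, 0.05, random.Random(0))])
```

Tuning this one knob (along with restricting the vocabulary) is enough to make generated text read consistently like one professional persona or another, which is exactly what makes style templates attractive to attackers.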

The cybersecurity issue with AI chatbots is that hackers can now deploy phishing attacks that are far more finely tuned to deceive the recipient. Not long ago, many phishing attempts could be spotted by their misspellings, typos, and often comical grammar mistakes, made by hackers who did not natively speak or write English. Now those same hackers have AI chatbots to write for them, and they can reshape the text of phishing and spear-phishing messages to trick, persuade, and impersonate.
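The old "look for sloppy writing" heuristic can be sketched as a toy filter. The keyword lists and weights below are invented for this sketch, not a production rule set; the point is to show why AI-written text slips past such checks: a fluently generated message triggers none of the surface signals.

```python
import re

# Toy surface signals (illustrative only, not a real rule set).
MISSPELLINGS = {"recieve", "acount", "verifcation", "securty"}
URGENCY = {"urgent", "immediately", "suspended", "expires"}

def surface_risk_score(message):
    """Rough 0.0-1.0 score from the surface cues defenders once
    relied on: sloppy spelling, pressure words, plain-HTTP links."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    score = 0.0
    if words & MISSPELLINGS:
        score += 0.4  # non-native spelling slips
    if words & URGENCY:
        score += 0.4  # pressure tactics
    if "http://" in message.lower():
        score += 0.2  # unencrypted link
    return min(score, 1.0)

crude = "Urgent: we could not recieve payment, confirm at http://pay.example"
fluent = "Hi Dana, the updated invoice is attached; let me know if the totals look right."
```

The crudely written lure scores high, while a fluent AI-polished version of the same lure would score zero. This is why modern filters lean on learned features such as sender reputation, link analysis, and language-model classifiers rather than surface errors.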


Novatech Can Help

With all of this in mind, AI-powered cybercrime is clearly a real concern, but it does not have to be frightening. Information security experts have been using AI for several years, while hackers are only beginning to adopt it. Attackers can use AI to write their phishing and ransomware messages, but they know that reaching targets and persuading them to read those messages matters even more to their goals.

To that end, our phishing-filtering techniques are being developed with AI support, and we work hard to stay one step ahead of hackers and cybercriminals. To learn more about this kind of advanced IT security, please get in touch with Novatech today. We use these AI tools to protect businesses and to fight those who exploit them.

Written By: Sr. Editor