Artificial intelligence is a powerful tool that can be used to do good or harm. Here are a few of the ways that it is helping scammers and hackers right now.
Readers like you help support MUO. When you make a purchase using links on our site, we may earn an affiliate commission.
Cybercriminals are also turning to this technology for help with their criminal activities. AI is a fascinating field, but it can also give us cause for worry. So in what ways can AI help cybercriminals?
1. Writing Malware
It is no surprise that malware can be written using artificial intelligence, given how advanced the technology has become. Malware (a portmanteau of the words "malicious" and "software") is the umbrella term for the harmful programs used in hacking, and it can take many forms. But before malware can be used, it first has to be written.
Not all cybercriminals are experienced coders, and some simply don't want to spend the time required to write new programs. This is where artificial intelligence can come in handy.
In early 2023, ChatGPT was found to be usable as a tool for malicious attacks. OpenAI's hugely popular chatbot is powered by AI infrastructure, and while it is useful for a variety of legitimate things, it is also being abused by criminals.
In one case, an individual posted on a hacking forum claiming he had created a Python-based malicious program using ChatGPT.
ChatGPT can automate much of the work of writing malicious programs, lowering the bar for novice cybercriminals who don't possess a great deal of technical knowledge.
For now, ChatGPT (or at least its latest version) can only create basic and often buggy malware rather than the sophisticated code that poses serious threats. But that is no reason for complacency: if a current AI chatbot can already produce simple malicious programs, it may not be long before we see far more dangerous software created by AI systems.
2. Cracking Passwords
Passwords are often the only line of defense for our devices and accounts. It is not surprising that cybercriminals are trying to crack passwords to access our private data.
Password cracking is already a popular form of cybercrime. Malicious actors use various techniques to discover a victim's password, some more successful than others, and AI can increase the chances of cracking it.
AI password crackers are not science fiction. ZDNet reported that cybersecurity researchers found over half of all commonly used passwords could be cracked within a minute. The article cited a Home Security Heroes report which found that an AI-powered tool called PassGAN cracked 51 percent of common passwords in under a minute and 71 percent in less than a day.
These numbers show just how dangerous AI password cracking can be. It’s impossible to know what cybercriminals could do with a tool that can crack regular passwords in under 24 hours.
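For context on why weak passwords fall so quickly, a back-of-the-envelope brute-force calculation helps. Note that tools like PassGAN do not brute force at all; they succeed even faster by guessing likely passwords first, so treat the guess rate below as an illustrative assumption, not a measured figure for any real tool:

```python
import math

def brute_force_time(length: int, alphabet_size: int,
                     guesses_per_second: float = 1e10) -> float:
    """Worst-case seconds to exhaust every password of the given
    length over the given alphabet at the given guess rate."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second

# An 8-character lowercase-only password: 26^8 ≈ 2.1e11 guesses.
print(f"{brute_force_time(8, 26):.0f} seconds")  # ≈ 21 seconds

# Mixing upper case, digits, and symbols (~94 characters) and going
# to 12 characters grows the keyspace to 94^12 ≈ 4.8e23 guesses.
years = brute_force_time(12, 94) / (3600 * 24 * 365)
print(f"{years:,.0f} years")  # ≈ 1.5 million years
```

The exponential gap between those two results is why length and character variety matter far more than swapping a letter for a symbol in a short password.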
3. Conducting Social Engineering
Social engineering is a cybercrime technique used all over the world, claiming thousands of victims each week. It relies on manipulation to pressure victims into complying, sometimes without them even realizing they are being targeted.
AI could aid social engineering attacks by generating the malicious content used in communications such as phishing texts and emails. Even at its current level, a chatbot could produce a convincing script that cybercriminals then use to manipulate their victims. This threat has not gone unnoticed, and many people are worried about what's to come.
AI can also make malicious communications look more official and professional by correcting spelling and grammar errors, which are often cited as possible red flags of malicious activity. Being able to write social-engineering content with greater clarity could make cybercriminals considerably more effective.
4. Finding Software Vulnerabilities
To hack software, cybercriminals need to exploit security vulnerabilities, which are often caused by bugs in the code. A vulnerability becomes a serious problem when the bug goes unfixed or when users fail to install the updates that patch security flaws.
Cybercriminals know this and are always on the lookout for vulnerabilities. Existing tools, such as exploit kits, can already identify them, and AI could help a malicious actor uncover even more, some capable of doing a great deal of damage.
On the flip side, this application of AI can also help cybersecurity vendors find vulnerabilities before they are exploited. The faster a vulnerability is patched, the smaller the window malicious actors have to exploit it, which helps mitigate attacks.
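Automated scanning for risky code patterns is the simplest form of this idea; real analyzers (and the ML-based scanners going beyond them) are far more sophisticated. A toy sketch, assuming we just search C source lines for calls that static analyzers commonly flag:

```python
import re

# Known-dangerous C functions that static analyzers commonly flag.
RISKY_CALLS = {
    "strcpy": "no bounds check; prefer strncpy/strlcpy",
    "gets": "removed in C11; always unsafe",
    "sprintf": "can overflow the buffer; prefer snprintf",
}

def scan_source(source: str) -> list:
    """Return (line_number, function, reason) for each risky call."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, reason in RISKY_CALLS.items():
            # \b keeps e.g. snprintf( from matching sprintf(.
            if re.search(rf"\b{call}\s*\(", line):
                findings.append((lineno, call, reason))
    return findings

code = 'char buf[8];\nstrcpy(buf, user_input);\n'
for lineno, call, reason in scan_source(code):
    print(f"line {lineno}: {call}: {reason}")
```

A pattern list like this catches only the most obvious bugs; the promise (and the danger) of AI-assisted discovery is finding the vulnerabilities that no fixed pattern describes.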
5. Analyzing Stolen Data
Data can be worth more than gold. Sensitive information is constantly being sold on dark-web marketplaces, and some malicious actors are willing to pay very high prices for data that is valuable enough.
Before that data can be put up for sale, it must first be stolen. Small amounts can be taken from lone victims, while more sophisticated hacks can yield entire databases. The cybercriminal then has to determine which information in the database is actually valuable.
AI could streamline this process by highlighting useful information, reducing the time it takes malicious actors to decide what is worth selling or, on the other hand, exploiting directly. Because AI is built on learning, it may one day be easy to use an AI-powered tool to pick out valuable sensitive data.
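On the defensive side, this kind of triage is exactly what data-loss-prevention (DLP) scanners do. A minimal sketch of the idea, assuming we only look for email addresses and payment-card numbers (the card number below is a standard test value, not a real one):

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment-card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])  # undoubled digits, from the right
    for d in digits[1::2]:     # every second digit is doubled
        total += d * 2 - 9 if d * 2 > 9 else d * 2
    return total % 10 == 0

def flag_sensitive(text: str) -> dict:
    """Return candidate emails and card numbers found in the text."""
    emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    candidates = re.findall(r"\b(?:\d[ -]?){13,16}\b", text)
    cards = [c.replace("-", "").replace(" ", "") for c in candidates]
    return {
        "emails": emails,
        "cards": [c for c in cards
                  if 13 <= len(c) <= 16 and luhn_valid(c)],
    }

sample = "Contact alice@example.com, card 4539 1488 0343 6467."
print(flag_sensitive(sample))
```

A real DLP product layers many more detectors (names, addresses, credentials) and context scoring on top; the worry in this section is that learned models can do the same triage over stolen data without anyone hand-writing the patterns.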
AI Is Promising but Also Poses Many Threats
As with other technologies, cybercriminals have found ways to exploit artificial intelligence, and it's hard to predict how they will use it to further their attacks in the future. Cybersecurity companies may also increasingly use AI to combat such threats, but only time will tell how this plays out.