The rise of artificial intelligence (AI) has enabled tremendous advances in cybersecurity defence, yet also opened the door to more sophisticated cyberattacks. As AI capabilities grow, so too does the potential for misuse and abuse. This poses complex challenges for cybersecurity professionals tasked with securing data and systems.

On one hand, AI can analyse huge volumes of data to detect breaches and anomalies at machine speed. It can generate threat intelligence, perform triage, and automate responses to contain attacks. According to MarketsandMarkets, the AI cybersecurity market is expected to reach $60.6 billion by 2028, growing at a 22% CAGR as organisations invest heavily in AI-enabled defences. However, threat actors also leverage AI to orchestrate attacks, profile targets, bypass defences, and develop malware. The same technologies fuelling next-generation cybersecurity defences are being co-opted for offence.

This dynamic interplay between AI offence and defence will only accelerate. As attackers find new ways to exploit AI, defenders must counter with advanced systems. An AI “arms race” threatens to ensue, with each side battling for dominance. In this high-stakes environment, cybersecurity teams must take proactive measures to stay ahead of emerging threats.


AI and Machine Learning Basics

AI refers to machines that are programmed to think and act like humans. Machine learning is a subset of AI that allows machines to learn and improve automatically through experience and data. Rather than being explicitly programmed, machine learning algorithms are “trained” on large datasets to find patterns and make predictions or decisions without human intervention.
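The "train on examples, then predict on new inputs" idea above can be sketched in a few lines. This is a toy nearest-centroid classifier over invented login-activity features, not any production algorithm; every number and label here is made up purely for illustration.

```python
# Toy illustration of "learning from data" rather than explicit rules:
# a nearest-centroid classifier that labels activity as normal or
# suspicious from two invented features. All data is fabricated.

def train(samples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Training data: (failed_logins_per_hour, megabytes_uploaded), label
training_data = [
    ((1, 5), "normal"), ((0, 2), "normal"), ((2, 8), "normal"),
    ((40, 300), "suspicious"), ((55, 250), "suspicious"), ((35, 400), "suspicious"),
]

model = train(training_data)
print(predict(model, (3, 6)))     # near the "normal" centroid
print(predict(model, (50, 320)))  # near the "suspicious" centroid
```

The point is the workflow, not the algorithm: the behaviour comes from the training data, so showing the model different examples changes its decisions without rewriting any code.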

Some of the most popular real-world applications of AI and machine learning today include:

  • Image and facial recognition, like in social media and security systems 
  • Speech recognition and natural language processing, like virtual assistants Siri and Alexa 
  • Product recommendations on sites like Amazon and Netflix 
  • Predictive text capabilities in messaging apps and word processing programs 
  • Navigation apps like Google Maps that can route drivers in real-time 
  • Fraud detection and risk assessment in the finance industry


How AI Enables More Sophisticated Cyber Attacks

The use of AI is making cyber attacks more effective and harder to detect. AI tools can automate the creation of targeted phishing campaigns, deepfakes, and other forms of social engineering.

With access to vast troves of data, AI systems can analyse a company’s communication patterns and language use to generate highly persuasive fake emails that seem authentic. Attackers can even clone a CEO’s voice using deepfake technology to bypass voice authentication systems.

AI also enables more sophisticated malware. Algorithms can tailor malware code specifically for a victim’s system environment to avoid detection. Some AI systems have even learned to add junk code to malware to evade signature-based antivirus tools.
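The junk-code trick works because the simplest signature schemes key on a cryptographic hash of the file, and changing even one byte produces a completely different hash. The sketch below illustrates that failure mode from the defender's side, using harmless placeholder bytes rather than any real malware.

```python
import hashlib

# Why padding defeats naive signature matching: a hash-based signature
# identifies one exact byte sequence, so any byte change breaks the match.
# The "payload" below is an inert placeholder string, not real code.

known_bad_hashes = set()

def signature(payload: bytes) -> str:
    """Return the SHA-256 hex digest used as a file signature."""
    return hashlib.sha256(payload).hexdigest()

original = b"simulated-malware-body"
known_bad_hashes.add(signature(original))

# Appending meaningless padding leaves behaviour unchanged,
# but the hash no longer matches the known-bad signature.
mutated = original + b"\x00" * 16

print(signature(original) in known_bad_hashes)  # True
print(signature(mutated) in known_bad_hashes)   # False
```

This is why modern antivirus tools supplement exact-match signatures with heuristics and behavioural analysis, which are harder (though not impossible) to evade with simple code mutation.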

Overall, the application of AI is making attacks stealthier, more precise, and more effective at infiltrating systems. Defenders must stay on top of these evolving threats.


AI’s Dual Use – Cyber Offence and Defence

Artificial intelligence is a dual use technology when it comes to cybersecurity. On one hand, AI can be used to bolster cyber defences and make systems more secure. Machine learning algorithms can analyse massive amounts of data to detect anomalies and identify potential threats. AI-powered systems can also quickly respond to attacks and automatically take actions to protect networks and data.

According to Jason Healey of Columbia University, AI-enabled defensive systems have the potential to reduce the workload for human analysts, allowing them to focus on the most sophisticated threats that require human judgement. AI defences can also operate at speeds difficult for humans to match.

However, AI can also be used by threat actors to launch more advanced and automated cyber attacks. By weaponising AI systems, hackers can identify vulnerabilities and launch exploits more efficiently. AI can power new types of attacks such as deepfakes and persuasion campaigns on social media. And by leveraging AI, attackers may be able to overwhelm AI-based defences.

While AI provides defenders with more advanced tools, it also makes the threats themselves more complex. Currently, it is difficult to assess whether AI favours offence or defence more overall. But as AI capabilities grow on both sides, the cybersecurity landscape will continue to evolve rapidly. Organisations must stay vigilant and continue adopting the latest AI defences while also preparing for more sophisticated AI-powered threats.


Real-World Examples

AI and machine learning have already been leveraged in real-world cyber attacks. Here are some recent examples:

Researchers from CyberArk Labs recently published a report on using AI to produce synthetic voices for social engineering attacks. By combining generative text models with voice-synthesis tools, the researchers were able to quickly create convincing fake voices and launch vishing (voice phishing) campaigns. This demonstrates how AI can be used to efficiently scale up and automate social engineering attacks.

An AI image generation model called DALL-E mini was used to create fake profile pictures on LinkedIn to make social engineering attacks more convincing. The AI-generated faces looked realistic enough to trick people. This shows how AI can be leveraged to enhance more traditional attack methods.

In 2021, a threat actor group launched a spear-phishing campaign targeting energy companies that leveraged AI. The phishing emails used natural language generation to create messages that mimicked each company’s unique communication style. This enabled more convincing social engineering attacks.

These examples demonstrate how AI is already being used by threat actors to improve the speed and effectiveness of attacks like social engineering. As AI capabilities continue to advance, we can expect to see more sophisticated AI-powered cyber attacks targeting businesses and individuals.



Recommendations for Securing Systems

With the rise of AI-powered cyberattacks, organisations need to take proactive steps to secure their systems and data. Here are some key recommendations:

Train employees on AI cybersecurity risks

Educate staff about how threat actors may use AI, so they can be more vigilant about unusual activity that may indicate an attack. Training should also cover best practices for using and securing AI systems properly.

Audit AI systems regularly

Continuously monitor your own AI systems for unusual behaviour or changes that could indicate an attack or misuse. Perform rigorous testing and have ethical hackers attempt to manipulate the systems.

Use AI to enhance defences

Fight fire with fire by employing AI cybersecurity tools that can detect anomalies and emerging threats that humans or traditional software may miss. AI can continuously analyse large volumes of data and traffic patterns.
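At its simplest, anomaly detection means learning what "normal" looks like from a baseline and flagging readings that fall far outside it. The sketch below uses a basic three-standard-deviation rule on invented hourly traffic volumes; real AI tools use far richer statistical and machine-learning models, so treat this only as an illustration of the principle.

```python
import statistics

# Toy anomaly detector over hourly outbound-traffic volumes (MB):
# flag any reading more than `threshold` standard deviations above
# the mean of a baseline window. All numbers are invented.

def find_anomalies(baseline, readings, threshold=3.0):
    """Return readings whose z-score against the baseline exceeds threshold."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [r for r in readings if (r - mean) / stdev > threshold]

baseline = [120, 135, 110, 128, 140, 125, 118, 132]  # typical hours
readings = [122, 131, 990, 127]                      # one large exfiltration-like spike

print(find_anomalies(baseline, readings))  # the 990 MB spike is flagged
```

The design choice worth noting is that nothing here hard-codes what an attack looks like: the detector adapts to whatever baseline it is given, which is exactly the property that lets AI-based defences catch novel threats signature tools would miss.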

Update cyber practices and policies

Review and update standard cyber practices to account for AI-related risks like targeted deep fakes, automated spear phishing at scale, and algorithm manipulation. Update ethics policies for proper use of AI.

Closely manage access

Use strict access controls and code reviews for any AI systems, limiting access only to essential personnel. This reduces the risk of insider threats or manipulated algorithms.

Practice good cyber hygiene

Don’t neglect other good practices like keeping software updated, using encryption, filtering web traffic, backing up data, and testing incident response plans. Strong foundational security remains critical.


Because AI-powered cyberattacks are so adaptive, defending against them requires an equally adaptive, multilayered approach to security. Combining technological solutions, training, vigilance and policies tailored for AI can help mitigate these emerging threats.



Conclusion

The rise of AI is enabling more sophisticated and dangerous cyber attacks that are harder to detect and defend against. AI can generate convincing phishing emails, launch automated hacking attempts at scale, enhance malware and identify vulnerabilities faster than humans. We should expect AI to be increasingly leveraged by threat actors in the near future as the technology becomes more accessible.

At the same time, AI is also being used to bolster cyber defences through intelligent anomaly detection, automated threat hunting and predictive analytics. AI presents opportunities on both sides of the cyber war.

The escalating use of AI for cyber offence means we must be vigilant and proactive about securing our systems. Recommendations include adopting a “security by design” approach, implementing robust identity and access controls, monitoring for anomalies and staying up-to-date on the latest threats. With awareness, preparation and defensive AI, organisations can get ahead of the risks posed by malicious use of AI. But we must keep pace with this rapidly evolving threat landscape.