Cyberattacks are on the rise, and AI is both part of the threat and part of the solution. In particular, some newer AI techniques, such as adversarial attacks, could become a significant new attack vector.
For example, deepfakes can be used to create highly realistic videos, audio or photos that defeat biometric security systems or help infiltrate social networks.
We will cover adversarial attacks in more detail in the following post. In this post, we summarise the overall threats and defence mechanisms for AI in cybersecurity.
The top areas in which AI can be used as a threat in cybersecurity include:
- Impersonation and spear phishing attacks
- Misinformation and undermining data integrity
- Disruption of remote workers
Source: MIT Technology Review
Cybersecurity investment is expected to rise exponentially to meet these emerging threats. The AI security market is forecast to grow from USD 8 billion in 2019 to USD 38 billion by 2026, at a CAGR of 23.3% (source: TD Ameritrade).
The top four uses of AI to mitigate these threats are:
- Network threat analysis
- Malware detection
- Security analyst augmentation, including automating repetitive tasks and focussing analyst time on complex threats
- AI-based threat mitigation, including detecting and countering threats
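To make the network-threat-analysis idea concrete, here is a minimal, hypothetical sketch of statistical anomaly detection on network traffic: it flags hosts whose request rate deviates strongly from a robust baseline. The host names, rates, and threshold are invented for illustration; production systems use far richer features and learned models.

```python
import statistics

# Hypothetical requests-per-minute per host (illustrative data, not real traffic).
baseline = {"10.0.0.2": 12, "10.0.0.3": 15, "10.0.0.4": 11,
            "10.0.0.5": 14, "10.0.0.6": 13, "10.0.0.7": 210}

def flag_anomalies(rates, z_threshold=3.0):
    """Flag hosts whose rate sits more than z_threshold robust
    standard deviations above the median baseline."""
    values = list(rates.values())
    med = statistics.median(values)
    # Median absolute deviation is robust to the very outliers we want to catch.
    mad = statistics.median(abs(v - med) for v in values)
    scale = mad * 1.4826 or 1.0  # ~= std dev for normal data; avoid divide-by-zero
    return [host for host, v in rates.items()
            if (v - med) / scale > z_threshold]

print(flag_anomalies(baseline))  # → ['10.0.0.7']
```

Real deployments would add per-port and per-protocol features and replace the fixed threshold with a learned model, but the core idea of scoring deviation from a baseline is the same.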
We are currently not seeing many adversarial attacks in the wild because few deep learning systems are in production today. Using adversarial attacks, however, autonomous driving systems can be fooled with subtly altered road signs. You can read more in "A survey on adversarial attacks and defences", which we will cover in the next post.
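To show the mechanics behind such attacks, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial-attack techniques: it nudges every input feature by a small amount in the direction that increases the model's loss. The tiny logistic-regression "classifier" and its weights are invented for illustration; FGSM was originally demonstrated against deep image classifiers.

```python
import numpy as np

# Hypothetical trained logistic-regression classifier (weights are illustrative).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, eps=0.2):
    """FGSM: shift each feature by eps in the sign of the gradient of the
    cross-entropy loss w.r.t. x. For logistic regression that gradient
    is (p - y_true) * w."""
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, -0.5])      # a benign input classified as class 1
x_adv = fgsm_perturb(x, y_true=1.0)
print(predict(x), predict(x_adv))   # confidence drops after the perturbation
```

Even this toy example shows the core weakness: a perturbation of only 0.2 per feature, invisible as noise, measurably degrades the model's confidence, and against deep networks the same trick can flip the predicted class entirely.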
Image source: Flickr