AI, a double-edged sword for IT security?
Artificial intelligence (AI) is a major technological advancement that inspires both enthusiasm and concern. While it offers promising possibilities in many fields, it also raises questions about its impact on cybersecurity. AI can be both an opportunity and a threat to security, which is why it is crucial to understand both aspects in order to assess its role in this domain.
Artificial intelligence in a few words
William RITCHIE, CTO of Custocy and PhD in Artificial Intelligence, defines AI as “an agent capable of perceiving its environment and adapting to a specific objective.”
In cybersecurity, AI is experiencing significant growth as it is used to combat increasingly frequent and complex cyberattacks. According to a study published by Grand View Research, the global market for AI applied to cybersecurity was estimated at $16.48 billion in 2022 and is expected to reach $93.75 billion by 2030, a compound annual growth rate of 24.3%.
However, as the use of AI expands, it raises questions about the security of business IT infrastructures.
AI, a powerful detection technology
Corporate cybersecurity is generally handled by small teams of security analysts who must monitor and analyze hundreds of thousands of network flows every day. Given the volume of traffic, they need specialized software solutions to help them detect and prevent attacks.
Machine learning, in particular, holds great promise for intrusion detection systems (IDS). Instead of relying on manually defined attack rules, a model is trained on large amounts of network traffic and learns to detect abnormal behaviors with high precision.
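To make this concrete, here is a minimal sketch of anomaly-based detection on network flow features using an Isolation Forest. The features (bytes, packets, duration), the synthetic data, and the parameters are assumptions made purely for illustration and do not describe any particular vendor's implementation.

```python
# Minimal sketch: anomaly detection on network flow features.
# Features and data are synthetic assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes sent, packet count, duration in seconds]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),   # typical payload sizes
    rng.normal(40, 10, 1_000),         # typical packet counts
    rng.normal(2.0, 0.5, 1_000),       # typical durations
])

# Train on traffic assumed to be mostly benign
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# New flows to score, including one that looks like bulk exfiltration
new_flows = np.array([
    [5_200, 42, 2.1],        # looks normal
    [900_000, 6_000, 55.0],  # unusually large transfer
])

# predict() returns +1 for inliers and -1 for anomalies
for flow, label in zip(new_flows, model.predict(new_flows)):
    verdict = "anomalous" if label == -1 else "normal"
    print(f"flow {flow} -> {verdict}")
```

In practice, the features would come from flow exporters or packet capture, and the model would be retrained regularly as legitimate traffic patterns evolve.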
Reinforcement learning is another interesting aspect of AI in the field of cybersecurity. It allows the system to learn from its mistakes through feedback and corrections provided by security analysts. This way, the IDS can continuously improve and refine its detection capabilities.
Different learning models can be updated and adjusted as new threats emerge.
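As a rough illustration of this feedback loop, the sketch below shows one simplified way analyst verdicts could be folded back into a detector: flows the model flagged are reviewed by an analyst, and the corrected labels are added to the training set before the model is refit. Strictly speaking this is supervised retraining rather than reinforcement learning, and the data, labels, and function names are hypothetical.

```python
# Toy sketch of an analyst-feedback loop (data, labels, and names are hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Initial labelled flows: three features per flow, 0 = benign, 1 = malicious
X_train = rng.normal(0.0, 1.0, size=(500, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.5).astype(int)

detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def incorporate_feedback(X_train, y_train, flagged_flows, analyst_labels):
    """Append analyst-corrected verdicts to the training set and refit the detector."""
    X_new = np.vstack([X_train, flagged_flows])
    y_new = np.concatenate([y_train, analyst_labels])
    refit = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_new, y_new)
    return refit, X_new, y_new

# Flows the detector flagged as malicious, re-reviewed by an analyst (0 = false positive)
flagged = rng.normal(1.0, 0.3, size=(5, 3))
analyst_verdicts = np.array([0, 0, 1, 0, 1])

detector, X_train, y_train = incorporate_feedback(X_train, y_train, flagged, analyst_verdicts)
```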
Moreover, the use of artificial intelligence offers several advantages in cybersecurity:
- Detection beyond human capacity – By analyzing massive amounts of data and factors in real time, AI can identify abnormal behaviors well beyond what human analysts or rule-based approaches can cover.
- Detection precision – AI can learn from historical data and patterns of malicious behavior. It can also adapt to the organization by learning the behaviors of its users and assets, enabling highly accurate detection of unusual activities and threats.
- Continuous adaptation – AI continually adapts to new malicious behaviors by training and learning from new data. This makes it a superior technology for protecting against sophisticated and previously unknown (zero-day) attacks.
- Reduced cognitive load – In certain solutions like Custocy’s NDR, AI enables cybersecurity analysts to focus on higher-value tasks. It can provide recommendations for prioritizing threats and actions to be taken, allowing for faster, more efficient responses and simpler day-to-day security management.
- Difficult to bypass – AI is much harder for attackers to circumvent than fixed rules. Moreover, certain types of AI can adapt to their adversaries’ behavior and even attempt to anticipate potential attack vectors. In this space, adversarial AI is emerging as a key technology for the future.
Therefore, AI enables companies to gain efficiency and speed in detection. It does not replace security teams but proves to be a true assistant. However, this powerful technology also falls into the hands of cybercriminals, who see it as an unprecedented opportunity to develop sophisticated techniques aimed at circumventing security systems.
AI, the new weapon of attackers
Advancements in AI have opened up new possibilities for hackers seeking to attack businesses.
For example, they now use large language models (LLMs) to create highly convincing phishing campaigns. Until recently, these campaigns were easy to identify: the emails were poorly written and riddled with spelling mistakes. That is no longer the case, making them harder to detect and more deceptive to users.
Attackers also exploit these language models in a different way by performing code analysis to identify vulnerabilities and compromise the security of companies.
Likewise, they can use machine learning techniques to analyze weaknesses in an IDS and develop new attacks to deceive it.
Adversarial models, on the other hand, can generate attacks of various categories, such as brute-force attacks, distributed denial-of-service (DDoS) attacks, or malicious scans, in order to evade detection by less robust systems. Manually configured traditional IDS are particularly susceptible to these attacks, as AI can easily identify their limitations and bypass them.
So, how can we counter the development of these AI-based attacks?
Certain AI-based systems have the advantage of being more dynamic. Unsupervised models, for example, can detect anomalies in network behavior and could therefore help identify suspicious activity generated by AI.
To address these threats, adversarial techniques can be used to reinforce IDS by training detection models to be robust against attacking models.
In this scenario, two AI models would be used, one playing the role of the defender (IDS) and the other as the attacker. By exposing the IDS to AI-generated attacks, it would be possible to strengthen its detection capabilities and develop more effective countermeasures.
This approach would allow the IDS to adapt to emerging adversarial techniques and better protect against AI-based attacks.
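As a rough sketch of this attacker-versus-defender idea, the toy example below pits an "attacker" that nudges malicious flow features across a linear model's decision boundary against a "defender" that is then retrained on the evasive samples it missed. All data and parameters are synthetic assumptions for illustration, not a production adversarial-training pipeline.

```python
# Toy attacker-vs-defender loop on synthetic flow features (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic flows: benign centered at 0, malicious centered at 2 (four features each)
X_benign = rng.normal(0.0, 1.0, size=(500, 4))
X_malicious = rng.normal(2.0, 1.0, size=(500, 4))
X = np.vstack([X_benign, X_malicious])
y = np.concatenate([np.zeros(500), np.ones(500)])

defender = LogisticRegression(max_iter=1000).fit(X, y)

def attacker(samples, detector, step=0.25, max_steps=20):
    """Shift malicious samples against the model's weight vector until they evade detection."""
    w = detector.coef_[0]
    direction = -w / np.linalg.norm(w)  # direction that lowers the "malicious" score
    evasive = samples.copy()
    for i, sample in enumerate(samples):
        candidate = sample.copy()
        for _ in range(max_steps):
            if detector.predict(candidate.reshape(1, -1))[0] == 0:
                break  # the detector now sees this flow as benign
            candidate = candidate + step * direction
        evasive[i] = candidate
    return evasive

for round_ in range(3):
    # Attacker crafts evasive variants of known malicious flows
    evasive = attacker(X_malicious, defender)
    before = defender.predict(evasive).mean()

    # Defender retrains with the evasive samples labelled as malicious
    X = np.vstack([X, evasive])
    y = np.concatenate([y, np.ones(len(evasive))])
    defender = LogisticRegression(max_iter=1000).fit(X, y)

    after = defender.predict(evasive).mean()
    print(f"round {round_}: evasive flows detected before retraining {before:.2f}, after {after:.2f}")
```

Each round, the defender closes the gap the attacker just exploited, which is the essence of this adversarial hardening idea.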
Generative AI, a growing threat
Generative AI tools such as ChatGPT have become a growing concern in our interconnected society. Powered by advanced large language models (LLMs), they are evolving rapidly.
We are likely to face an avalanche of cyber intrusions, because attackers can leverage these generative AI tools to launch large-scale attacks at lower cost. That is why it is becoming crucial to implement robust defense strategies.
On the network side, generative AI models primarily rely on existing data. At the time of writing, the attacks they generate are not, to our knowledge, as complex as those of a talented human attacker. However, these models are evolving rapidly and becoming increasingly sophisticated; as they improve, they may generate attacks that are more realistic and harder to detect.
It is therefore crucial to take this evolution into account and prepare for the rise of large-scale adversarial attacks generated by AI.
In conclusion
Cybercriminals have found in AI a powerful weapon to compromise the security of businesses. However, in the face of digital transformation and the proliferation of interconnected devices, cybersecurity solutions that lack artificial intelligence will not be able to cope with these new forms of AI-driven attacks. We are entering the game of “AI vs. AI,” but who will prevail?
It is a reality that attackers will continue to use AI to launch their attacks. These cyberattacks will become increasingly complex and difficult to detect. Therefore, to defend themselves, it is essential for companies to adopt a multi-layered security strategy and, above all, equip themselves with specialized solutions built around artificial intelligence.
Curious to discover our NDR solution? Book your demo slot, it’s 100% free! 👉 HERE.