NDR solutions require rigorous Data Science, not hyped AI models
Network Detection and Response (NDR) solutions play a critical role in safeguarding information systems against emerging threats. In an increasingly connected world exposed to sophisticated cyber-attacks, businesses are turning to specialized solutions that integrate AI to strengthen their ability to detect malicious behavior.
However, it is crucial to distinguish between a solution built on its own AI models, regularly trained and fine-tuned by data scientists, and a solution that merely incorporates pre-packaged, overhyped AI models.
In this article, we shed light on why NDR solutions must be built upon a solid foundation of Data Science.
The Effectiveness of NIDS
Network Intrusion Detection Systems (NIDS), such as Suricata or Snort, leverage community knowledge to define signatures and data schemas that represent potential threats. In terms of threat coverage and volume of data analyzed, they excel in their domain.
However, despite their high performance, they are not infallible.
If an adversary manages to infiltrate the network, they can use legitimate or semi-legitimate means to perform nefarious tasks such as exfiltrating data without being seen. This is where AI can make a crucial difference.
AI for Detecting Anomalous Behaviors
Malicious activities may hide behind legitimate protocols and commands, but their behavior is generally unusual. For example, a data exfiltration might go unnoticed if an attacker acts patiently over hours, days, or weeks, regularly transferring small chunks of data to stay under the thresholds set by a NIDS or a firewall.
However, a regular employee would not behave in such a manner. This is where AI comes into play: by analyzing network behavior, it can detect these unusual, repetitive transfers and expose the attackers' malicious objectives.
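To make this concrete, here is a minimal sketch of how such a low-and-slow transfer pattern might be surfaced from flow records using pandas. The column names, thresholds, and file name are hypothetical, not taken from any particular product:

```python
import pandas as pd

# Hypothetical flow records: one row per outbound connection.
# Columns assumed: "timestamp", "src_host", "bytes_out".
flows = pd.read_csv("outbound_flows.csv", parse_dates=["timestamp"])

# Aggregate outbound volume per host per hour. Each individual
# transfer may stay under a per-connection threshold, but the
# cumulative pattern over a long window is what gives it away.
hourly = (
    flows.set_index("timestamp")
         .groupby("src_host")["bytes_out"]
         .resample("1h")
         .sum()
         .reset_index(name="bytes_per_hour")
)

# Flag hosts that send a small-but-steady volume in almost every
# hour: high regularity, low variance. Thresholds are illustrative.
stats = hourly.groupby("src_host")["bytes_per_hour"].agg(["mean", "std", "count"])
active_share = (
    hourly.assign(active=hourly["bytes_per_hour"] > 0)
          .groupby("src_host")["active"].mean()
)
suspects = stats[
    (active_share > 0.9)                    # transfers nearly every hour
    & (stats["std"] < 0.2 * stats["mean"])  # suspiciously regular volume
    & (stats["count"] >= 24)                # observed for at least a day
]
print(suspects)
```

A per-connection signature or threshold would pass each individual transfer; it is only the aggregate regularity over time that stands out.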
A Rigorous Data Science Approach to Detection
AI offers tremendous potential to enhance threat detection when used rigorously.
Network data comprises a complex mix of quantitative elements, such as payload size or connections per minute, and qualitative data, such as header types or the ports used. This information is often specific to each business's network and the applications it runs, which means the AI must be trained on data specific to the network it protects. However, it is not feasible to capture data from every client's network and run every type of attack on it to train the AI.
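As an illustration of handling this mixed data, here is a minimal preprocessing sketch with scikit-learn; the field names and values are hypothetical:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical mixed flow records: quantitative and qualitative fields.
flows = pd.DataFrame({
    "payload_size": [512, 1460, 64, 900],      # quantitative
    "conns_per_min": [2.0, 3.5, 40.0, 1.0],    # quantitative
    "protocol": ["tls", "dns", "tls", "smb"],  # qualitative
    "dst_port": ["443", "53", "443", "445"],   # qualitative (categorical)
})

# Scale the numeric features and one-hot encode the categorical ones,
# so a single model can consume both kinds of information.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["payload_size", "conns_per_min"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["protocol", "dst_port"]),
])
X = preprocess.fit_transform(flows)
print(X.shape)  # rows x (2 scaled numeric + one-hot columns)
```

Note the `handle_unknown="ignore"` setting: categories seen in one environment may be absent in another, which foreshadows the transfer problems discussed below.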
Considering these limitations, we have at least two possible approaches: anomaly detection and knowledge transfer.
Anomaly Detection
Rather than training AI to recognize specific behaviors, we can train AI models only on the normal behavior of a client's network and then flag outliers. This approach, known as anomaly detection, is backed by an extensive body of research. However, it is effective only if the AI model has complete visibility of the network's normal behavior, so that it can accurately identify anomalies.
This implies that network assets must be well defined, and that the applications used and the authorized behaviors must be specified and either remain constant over time or follow measurable patterns that can be considered normal. Even under these conditions, however, anomaly detection can generate many false positives that cannot all be investigated given time and staffing constraints.
To cope with this influx of false alerts, the predictions made through this approach can be contextualized. Understanding what makes a data point abnormal makes it possible to assess its severity, or to correlate it with events from other sources so that alerts can be dismissed or prioritized.
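As an illustration, here is a minimal anomaly detection sketch using scikit-learn's IsolationForest, trained only on normal traffic features and then asked to score new flows; the features and their values are synthetic stand-ins for real flow statistics:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-ins for numeric features extracted during a period
# considered "normal": payload size, connections per minute, and
# number of distinct destination ports.
rng = np.random.default_rng(0)
X_normal = rng.normal(loc=[500.0, 3.0, 2.0], scale=[50.0, 1.0, 0.5],
                      size=(10_000, 3))

# Train only on normal behavior; the model learns its boundaries
# and flags anything that falls outside them.
model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
model.fit(X_normal)

# New traffic: the last row mimics a host opening many connections.
X_new = np.array([
    [510.0, 3.2, 2.0],    # looks normal
    [480.0, 2.8, 2.0],    # looks normal
    [505.0, 40.0, 15.0],  # unusual connection rate and port spread
])
scores = model.decision_function(X_new)  # lower = more anomalous
labels = model.predict(X_new)            # -1 = anomaly, 1 = normal
for s, l in zip(scores, labels):
    print(f"score={s:+.3f} -> {'ANOMALY' if l == -1 else 'normal'}")
```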
Knowledge Transfer
Another approach involves pre-training AI models in a laboratory on many types of attacks and then transferring this knowledge to a client's network for further training and adaptation to the network's specifics.
However, this requires that the knowledge actually transfers from the laboratory environment to the client's network. If the laboratory data lacks specific network protocols or applications that the client's network uses, much of the learned knowledge may not be transferable. As a result, all input data must be meticulously scrutinized and evaluated for how informative it remains across different network environments.
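For instance, a simple distribution check can flag features that behave very differently in the two environments. Below is a sketch using a two-sample Kolmogorov-Smirnov test on synthetic stand-in data; the feature names and the decision threshold are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical feature matrices: rows are flows, columns are the same
# engineered features computed in the lab and at the client.
feature_names = ["payload_size", "conns_per_min", "distinct_ports"]
rng = np.random.default_rng(1)
lab_features = rng.normal([500.0, 3.0, 2.0], [50.0, 1.0, 0.5], size=(5_000, 3))
client_features = rng.normal([200.0, 3.1, 2.0], [30.0, 1.0, 0.5], size=(5_000, 3))

# Features whose distribution shifts drastically between environments
# are poor candidates for carrying knowledge from the lab to the client.
for i, name in enumerate(feature_names):
    stat, p_value = ks_2samp(lab_features[:, i], client_features[:, i])
    verdict = "shifted, inspect before transfer" if stat > 0.1 else "comparable"
    print(f"{name}: KS statistic={stat:.3f} ({verdict})")
```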
It is crucial to strike a balance in knowledge transfer. The AI model must have learned enough in its original environment to handle multiple types of attacks, but it must also adapt sufficiently to the client's network. If the model clings too tightly to its original training, it will fail to recognize attacks as they appear in the client's network. Conversely, if the AI learns too much in its new environment, it will forget what it learned about the attacks it was originally trained on, a phenomenon known as catastrophic forgetting.
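As a rough illustration of that balance, here is a minimal fine-tuning sketch in PyTorch, assuming a hypothetical pre-trained detector: the first layer is frozen to retain the lab knowledge, and the rest is updated with a deliberately small learning rate. The architecture, checkpoint name, and data are all made up:

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained detector: a feature extractor learned in
# the lab on many attack types, plus a classification head.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),  # feature extractor (lab knowledge)
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),              # head: normal vs malicious
)
# model.load_state_dict(torch.load("lab_pretrained.pt"))  # hypothetical checkpoint

# Freeze the first layer to retain lab knowledge...
for param in model[0].parameters():
    param.requires_grad = False

# ...and fine-tune the rest with a small learning rate, so adaptation
# to the client's network does not erase what was learned about
# attacks (catastrophic forgetting).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on (hypothetical) client data.
x_client = torch.randn(64, 32)         # batch of client flow features
y_client = torch.randint(0, 2, (64,))  # labels gathered at the client
optimizer.zero_grad()
loss = loss_fn(model(x_client), y_client)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.4f}")
```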
Expertise in AI Required
In both approaches, a profound understanding of how the input data affects our AI models is essential. It is equally important to know how that data evolves across environments and how the models use it to make predictions. Gaining this understanding requires combining descriptive statistics on the input data with model explainability techniques. In addition, regularly retraining the models makes it possible to detect prediction drift and to monitor how the importance of input features evolves over time.
For instance, if a model starts relying on only a few input features instead of drawing on many for its predictions, this could indicate that it has latched onto features specific to a particular protocol or application. Such over-focus can make the model “narrow-minded”, causing it to learn network-specific peculiarities rather than to distinguish between normal and malicious behavior.
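One way to watch for this, sketched below, is to track how concentrated the model's feature importances become across retraining cycles, for example via their entropy. The importance values here are made up, standing in for something like the `feature_importances_` of a tree-based model:

```python
import numpy as np

def importance_entropy(importances: np.ndarray) -> float:
    """Shannon entropy of normalized feature importances.

    High entropy: the model spreads its decisions across many features.
    Low entropy: it leans on just a few, which may signal that it has
    latched onto a protocol- or application-specific quirk.
    """
    p = importances / importances.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical importances from two retraining cycles of the same model.
before = np.array([0.12, 0.10, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11, 0.12])
after = np.array([0.55, 0.30, 0.05, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01])

h_before, h_after = importance_entropy(before), importance_entropy(after)
print(f"entropy before: {h_before:.2f} bits, after: {h_after:.2f} bits")
if h_after < 0.7 * h_before:
    print("WARNING: importance is concentrating on few features; "
          "the model may be learning network-specific quirks.")
```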
These valuable insights into the data and how models use it are not coincidental. They result from meticulous work carried out by a team of data scientists collaborating with experienced security analysts. These highly specialized skills cannot simply be acquired as microservices or prepackaged Docker images. Instead, they require a dedicated lab with expertise in data preprocessing, wrangling, and monitoring.
The AI model itself, whether a convolutional neural network, an XGBoost ensemble, or a transformer, is the culmination of a thorough exploration of the training data and of the model's outcomes. Treating it as a plug-and-play component would undermine its effectiveness, since its performance depends heavily on the quality of the data and on the preliminary work done by the data science team.
In Conclusion
For these reasons, NDR solutions aspiring to integrate AI must adopt a proactive and visionary approach. They need to establish an in-house laboratory dedicated to data exploration and Data Science research. Simply importing pre-packaged AI solutions will not achieve the efficiency and precision required to tackle the growing challenges of cybersecurity.
Ultimately, in the realm of information security, there are no free or ready-made solutions. Embracing rigorous Data Science will allow businesses to truly harness the power of AI and protect their assets and sensitive data effectively.
Curious to discover our NDR solution? Book your demo slot; it’s 100% free! 👉 HERE.