SOAR, UEBA, CASB, EDR and others: which tools do you need for your SOC? (2/3)

After the first article, which covered “Extending the scope of detection to new perimeters” (available here), this second installment continues our summer series about the SOC.

 

Enhancing detection with new approaches

Think identity to detect suspect behaviors: UEBA

User and Entity Behavior Analytics (UEBA, previously known as UBA) technologies are among the latest tools being used to enhance SOCs’ detection arsenals. As their name suggests, they take a specific approach: leaving aside the technical focus of existing solutions (SIEM, etc.) and instead analyzing the behavior of users and entities (terminals, applications, networks, servers, connected objects, etc.).

The principle is simple, but its implementation much less so. To be effective, UEBA approaches require a diversity of sources and a variety of data formats. Traditional sources, such as the SIEM and log managers, are employed and, in addition, certain resources (Active Directory, proxies, databases, etc.) are often queried directly.

But, to perfect their detection capabilities, UEBA solutions also draw on new sources: information on users (HR applications, badge management, etc.), exchanges between employees (chats, video exchanges, emails, etc.), or any other relevant sources (business applications that need to be monitored, etc.).

Taking all this information together, UEBA solutions analyze the behavior of users (and entities) to identify potential threats. They can use static rules, in the form of signatures to be detected (which are often already implemented in SIEM solutions): simultaneous connections from two different locations, or unusual times of use, etc.
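To make the rule-based side concrete, here is a minimal sketch of an “impossible travel” check, one of the static rules mentioned above. It assumes login events that already carry a timestamp and a geolocated source; the event fields, the 50 km tolerance and the 900 km/h speed threshold are illustrative choices, not taken from any particular product.

```python
# Minimal sketch of a static UEBA-style rule: flag "impossible travel",
# i.e. two logins by the same user from locations too far apart for the
# elapsed time (which also covers simultaneous connections).
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    timestamp: datetime
    lat: float
    lon: float

def distance_km(a: Login, b: Login) -> float:
    """Great-circle (haversine) distance between two login locations, in km."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(events: list[Login], max_speed_kmh: float = 900.0) -> list[tuple[Login, Login]]:
    """Flag consecutive logins by the same user that imply an unrealistic travel speed."""
    alerts, by_user = [], {}
    for event in sorted(events, key=lambda e: e.timestamp):
        history = by_user.setdefault(event.user, [])
        if history:
            last = history[-1]
            hours = (event.timestamp - last.timestamp).total_seconds() / 3600
            dist = distance_km(last, event)
            # 50 km tolerance absorbs geolocation noise; near-zero elapsed time
            # handles the "simultaneous connections from two locations" case.
            if dist > 50 and dist / max(hours, 1e-9) > max_speed_kmh:
                alerts.append((last, event))
        history.append(event)
    return alerts

# Example: two logins roughly 9,700 km apart within 40 minutes -> flagged.
alerts = impossible_travel([
    Login("a.martin", datetime(2019, 7, 15, 9, 0), 48.85, 2.35),     # Paris
    Login("a.martin", datetime(2019, 7, 15, 9, 40), 35.68, 139.69),  # Tokyo
])
print(len(alerts))  # 1
```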

But the real strength of UEBA lies in the use of Machine Learning algorithms to detect changes in the behavior of users or services: suspicious business-function operations, access to critical, previously unused applications during holidays, unusual data transfers, etc.
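As an illustration of the Machine Learning side, the sketch below baselines per-user activity and scores a new observation against it. It uses scikit-learn’s IsolationForest as one possible anomaly-detection algorithm; the feature columns (logins, data transferred, distinct applications, off-hours ratio) are invented for the example and do not reflect any specific UEBA product.

```python
# Minimal sketch of ML-based behavioral baselining with IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per user-day: [login_count, mb_transferred, distinct_apps, off_hours_ratio]
# (invented feature set; a real deployment would use many more signals)
history = np.array([
    [12, 40, 5, 0.05],
    [10, 35, 4, 0.00],
    [14, 55, 6, 0.10],
    [11, 42, 5, 0.02],
    [13, 48, 5, 0.04],
    [ 9, 30, 4, 0.01],
])

# Learn what "normal" looks like for this user (or peer group).
model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# Score a new day: large transfer, many applications, mostly off-hours.
today = np.array([[13, 2500, 19, 0.80]])
score = model.decision_function(today)[0]   # lower = more anomalous
flagged = model.predict(today)[0] == -1     # -1 = outlier vs. the learned baseline
print(f"anomaly score = {score:.3f}, flagged = {flagged}")
```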

Although UEBA was initially conceived to counter fraud, its role has gradually broadened to cover areas that typically pose problems for SIEM: data theft, compromise or sharing of application accounts, terminal or server infection, privilege abuse, etc.

Thus, today, UEBA is positioning itself as complementary to SIEM, adding to the latter’s “technical” approach by providing “user” visibility, and bringing an additional layer of intelligence to the analysis.

The market’s view is that, in the coming years, UEBA solutions will probably cease to exist in their present form. Instead, they’ll be integrated into existing solutions (SIEM, EDR, etc.), changing their form from products to functionalities.

Examples of UEBA vendors:

 

Trapping attackers: deceptive security

Deceptive Security can be seen as an evolution of the honeypot approach: decoys, in the form of data, agents, or dedicated environments, are distributed widely throughout all or part of the IS.

Depending on the needs and solutions, Deceptive Security tools can serve two purposes. By diverting the attention of attackers away from real resources and leading them down false trails, they can act as a means of protection.

But above all, monitoring these decoys makes it possible to detect threats spreading within the IS. The decoys have no purpose other than to lure potential attackers or to provide false information; any communication with them is therefore, by definition, suspect.
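As a minimal illustration of this principle, the sketch below implements a network decoy as a bare TCP listener: since the service has no legitimate purpose, any connection to it is logged as suspect. The port number and logging setup are arbitrary choices for the example; real Deceptive Security platforms manage fleets of far richer decoys.

```python
# Minimal sketch of a network decoy: a TCP listener with no legitimate
# purpose, so any connection to it is treated as suspect and logged.
import logging
import socket
from datetime import datetime, timezone

logging.basicConfig(level=logging.WARNING, format="%(asctime)s %(message)s")

def run_decoy(host: str = "0.0.0.0", port: int = 2222) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()
            # Nothing should ever talk to this service: every hit is an alert.
            logging.warning("DECOY HIT from %s:%s at %s", src_ip, src_port,
                            datetime.now(timezone.utc).isoformat())
            conn.close()  # no banner, no data: the decoy only records contact

if __name__ == "__main__":
    run_decoy()
```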

This type of solution isn’t a replacement for existing measures but addresses very specific use cases where conventional detection approaches are ineffective: APTs, which are specifically designed to circumvent them, and, more broadly, lateral movements within the IS.

For more detail on Deceptive Security solutions, read our dedicated article here.

Examples of Deceptive Security vendors:

 

Detecting weak signals on the network: machine learning sensors

Traditional detection sensors (IDPSs), based on traffic analysis and comparisons with known attack signatures, are not particularly effective when it comes to detecting subtle attacks (APTs, etc.) or unknown threats (0-day, etc.). To overcome this problem, new-generation IDPSs integrate Machine Learning capabilities (sometimes presented as Artificial Intelligence) into their detection arsenals.

Depending on the solution, two uses of Machine Learning can be distinguished. On the one hand, these algorithms can be used in supervised mode to learn to recognize the behavior of certain attacks, or of attack phases (typically the active phases): command and control, scans, lateral movements, data leakage, etc.

On the other hand, once the sensor has been deployed, Machine Learning algorithms are also used to adjust detection thresholds to the client’s context (something many traditional IDPS solutions already do).
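The sketch below illustrates the supervised approach on toy data: a classifier trained on flows labeled with an attack phase, plus an adjustable alert threshold standing in for the context-specific tuning just described. The flow features, labels and threshold value are all illustrative, and RandomForest is simply one possible choice of algorithm.

```python
# Minimal sketch of the supervised approach: learn to recognize attack
# phases from labeled flows, then alert above a tunable probability threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per flow: [bytes_out, duration_s, dst_port, packets_per_s] (illustrative)
X = np.array([
    [500,   2,  443,  10],   # benign
    [300,   1,   80,   8],   # benign
    [ 80, 600, 8080,   1],   # command-and-control beaconing
    [ 90, 550, 8080,   1],   # command-and-control beaconing
    [ 50,   5,   22, 200],   # scan
    [ 40,   4,   23, 220],   # scan
])
y = np.array(["benign", "benign", "c2", "c2", "scan", "scan"])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

ALERT_THRESHOLD = 0.7  # raise or lower to adapt sensitivity to the client context
new_flow = np.array([[85, 580, 8080, 1]])
probs = dict(zip(clf.classes_, clf.predict_proba(new_flow)[0]))
suspicious = {k: p for k, p in probs.items() if k != "benign" and p >= ALERT_THRESHOLD}
if suspicious:
    print("Alert:", suspicious)
```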

This mode of operation enables rapid deployment (solutions that can be used out-of-the-box with shorter learning phases), and a better ability to detect previously characterized attacks. Conversely, the detection of attacks that have not been subject to learning, or are completely unknown, remains difficult.

In contrast to this approach, some solutions rely on unsupervised learning to detect attacks. Here, during deployment, sensors are positioned on the network to observe the traffic and learn how to recognize what constitutes legitimate traffic.

Once the learning phase is over, the sensors can detect anomalies and raise alerts when suspicious behavior occurs. This approach enables the detection of unknown attacks, but generally requires a longer learning phase if it is to be effective and achieve an acceptable false alert rate.
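To illustrate the unsupervised idea, the sketch below learns a simple per-host baseline of outbound traffic during a learning phase and then flags observations that deviate strongly from it. The z-score model and threshold are deliberately simplistic stand-ins for the far richer features and models used by real sensors.

```python
# Minimal sketch of the unsupervised idea: baseline "legitimate" traffic per
# host during a learning phase, then flag strong deviations at detection time.
import statistics

class TrafficBaseline:
    def __init__(self, z_threshold: float = 4.0):
        self.z_threshold = z_threshold
        self.learned: dict[str, tuple[float, float]] = {}  # host -> (mean, stdev)

    def learn(self, samples: dict[str, list[float]]) -> None:
        """Learning phase: record mean/stdev of outbound MB per host per hour."""
        for host, values in samples.items():
            self.learned[host] = (statistics.mean(values), statistics.pstdev(values) or 1.0)

    def check(self, host: str, outbound_mb: float) -> bool:
        """Detection phase: True if the observation deviates from the baseline."""
        if host not in self.learned:
            return True  # host never seen during learning: worth a look
        mean, stdev = self.learned[host]
        return abs(outbound_mb - mean) / stdev > self.z_threshold

baseline = TrafficBaseline()
baseline.learn({"app-server-01": [120, 110, 130, 125], "workstation-17": [5, 7, 6, 4]})
print(baseline.check("workstation-17", 900))  # True: sudden large outbound transfer
```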

In both cases, “Machine Learning” sensors make it possible to enhance a SOC’s arsenal (which, today, is mostly aimed at detecting known attacks) with detection capabilities that can discern complex or unknown attacks, or those designed to circumvent conventional security approaches.

Initial feedback from the field shows that these technologies can indeed detect threats that bypass conventional security measures. False positives, however, are very common (the learning curve varies widely, depending on solutions and contexts), and it remains difficult to judge how comprehensively threats are being detected.

“Machine Learning” sensors therefore have a definite future among SOC tools, even if they need to further mature to reach their full potential.

Examples of Machine Learning sensor vendors:

 

You can find our third, and final, article in this series here.
