How to improve your cyber detection by moving to the Cloud

Cloud is on everyone’s lips, especially in these unusual times of remote work. Many organisations are reviewing the way they design and implement their activities in order to move to Cloud Service Providers (CSPs). But this “Move to Cloud” trend might also be an opportunity for security teams to take back control and detect incidents better than ever!

Over the past year, I have had the chance to work with different organisations on their Cloud transformations, and each of them has provided our team of Wavestone consultants with insights and key lessons on what Cloud-based detection systems can and cannot bring to an organisation.

For this article, bear in mind that we will consider any configuration change that degrades the security level to be an incident. While this may not fit the usual definition of a security incident, the misconfiguration of a Public Cloud service (where resources and data can be directly accessible from the internet) is too serious an issue not to raise an immediate alert for the security of the information system.

 

Embrace the quick wins

When using Public Cloud from the main providers (Amazon Web Services, Microsoft Azure and Google Cloud Platform), it is fairly easy to turn on the native detection features and kickstart a basic, yet effective detection capability. Most providers offer a central security service that enables you to detect misconfigurations in the infrastructure you have deployed, score your compliance level against a given standard and raise alerts when the most typical incidents occur (see below). There is virtually no reason to skip this feature, which is sometimes free to enable (either for a trial period or permanently).
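To make this concrete, here is a minimal sketch, assuming an AWS environment where Security Hub is already enabled and boto3 credentials are configured, of how a security team could list the compliance checks that are currently failing:

```python
# Minimal sketch: list active, failed compliance findings from AWS Security Hub.
# Assumes Security Hub is enabled in the account/region and boto3 credentials
# are available (e.g. through an IAM role or environment variables).
import boto3

securityhub = boto3.client("securityhub", region_name="eu-west-1")

filters = {
    "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
}

for page in securityhub.get_paginator("get_findings").paginate(Filters=filters):
    for finding in page["Findings"]:
        # Each finding carries the failed control and the affected resource.
        resource = finding["Resources"][0]["Id"]
        severity = finding["Severity"]["Label"]
        print(f"{finding['Title']} -> {resource} (severity: {severity})")
```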

Additionally, logging becomes virtually a non-issue in your security roadmap. Cloud providers will typically let you stream the logs from your virtual machines (through agents), your PaaS components (via a handful of clicks, or a couple of parameters in your Infrastructure as Code templates) and the management plane of your subscription (enabled out of the box). This enables your security team to swiftly understand the ongoing activity on the platform and start building on the logs to get some alerts. Moreover, some Cloud providers’ SIEM systems (such as Azure Sentinel) have ready-to-plug connectors for appliances and external data sources, which parse the logs and remove some of the heavy lifting otherwise required to bring the logs home to the SIEM.
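As a hedged illustration of how little effort management-plane logging requires, the sketch below (assuming an AWS account and boto3 credentials with read access to CloudTrail) simply checks that every trail is actually delivering logs:

```python
# Minimal sketch: verify that management-plane logging (AWS CloudTrail) is active.
# Assumes boto3 credentials with cloudtrail:DescribeTrails and GetTrailStatus rights.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

for trail in cloudtrail.describe_trails()["trailList"]:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    state = "logging" if status["IsLogging"] else "NOT logging"
    print(f"{trail['Name']}: {state}")
```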

 

Take the opportunity to improve security right away

Once you have learned the basics of the native Cloud detection tools, it is time to build your own expertise so you can rely on your own tooling! You can also leverage third-party solutions, such as Cloud Security Posture Management (CSPM) platforms, and configure them to cover your needs.

As hinted above, the native features from Cloud Providers offer some basic alerts which can go a long way. With Amazon GuardDuty, you can detect the compromise of EC2 access tokens or abnormal access to S3 buckets, while Azure Security Center will notify you when potentially malicious activity is detected on a virtual machine, or when Azure AD accounts are likely to have been taken over… If you need to become capable of detecting attacks quickly, you can leverage the native, ready-to-use alerts available (although some of them might require a premium license after a free trial).
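As an illustration, the following sketch, assuming Amazon GuardDuty is already enabled and boto3 credentials are configured, retrieves the high-severity findings produced by these native alerts:

```python
# Minimal sketch: retrieve high-severity Amazon GuardDuty findings.
# Assumes GuardDuty is enabled and boto3 credentials are configured.
import boto3

guardduty = boto3.client("guardduty", region_name="eu-west-1")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    # Keep only findings with severity >= 7 (GuardDuty's "high" range).
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]

    if finding_ids:
        findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
        for finding in findings["Findings"]:
            print(f"{finding['Type']} on {finding['Resource']['ResourceType']} "
                  f"(severity {finding['Severity']})")
```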

One of the key perks of Cloud detection is that you can act on alerts right away with automatic remediation! For example, misconfigurations are a real source of concern for security teams, as the terabytes of data leaked through accidentally exposed S3 buckets will testify. So why not automatically reconfigure any exposed bucket, unless it has specifically been placed on an “Allow List”? Automation will let you detect the exposure pattern, launch a serverless function that fixes the misconfiguration and even notify the resource owner or the security team.
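As a sketch of what such a serverless remediation could look like, assuming AWS, a trigger event that carries the bucket name, and a hypothetical allow list of intentionally public buckets, a Lambda function could simply re-apply the public access block:

```python
# Minimal sketch of an auto-remediation Lambda: re-block public access on an
# exposed S3 bucket unless it is explicitly allow-listed.
# Assumes the event carries the bucket name (shape shown below is an assumption)
# and that the function role has s3:PutBucketPublicAccessBlock permissions.
import boto3

# Hypothetical allow list of buckets that are intentionally public.
ALLOW_LIST = {"public-website-assets"}

s3 = boto3.client("s3")


def lambda_handler(event, context):
    bucket = event["detail"]["requestParameters"]["bucketName"]  # assumed event shape

    if bucket in ALLOW_LIST:
        return {"bucket": bucket, "action": "skipped (allow-listed)"}

    # Re-apply the full public access block on the exposed bucket.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    # A real function could also notify the resource owner or the security team here.
    return {"bucket": bucket, "action": "public access blocked"}
```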

This can be done for misconfigurations, but also for malicious activity: if you detect an EC2 token being stolen from the metadata of an instance, you can temporarily remove its access rights. If you notice logging being disabled, re-enable it and lock the responsible user accounts. This will drastically improve your time to react to security incidents.
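As a hedged sketch of this kind of response, assuming AWS CloudTrail as the logging service and an IAM user as the offending account (the policy name is a hypothetical example):

```python
# Minimal sketch: react to logging being disabled by restarting the trail and
# quarantining the offending IAM user with an explicit deny-all inline policy.
# Assumes the triggering alert provides the trail name and the user name.
import json

import boto3

cloudtrail = boto3.client("cloudtrail")
iam = boto3.client("iam")

DENY_ALL_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
}


def respond(trail_name: str, user_name: str) -> None:
    # Turn logging back on for the trail that was stopped.
    cloudtrail.start_logging(Name=trail_name)

    # Lock the responsible account until the incident is investigated.
    iam.put_user_policy(
        UserName=user_name,
        PolicyName="SecurityQuarantineDenyAll",  # hypothetical policy name
        PolicyDocument=json.dumps(DENY_ALL_POLICY),
    )
```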

Of course, you still need to work on the overall incident management process: both on how to avoid the misconfiguration of services (through developer training and controls in the CI/CD pipelines, where they exist) and on how to manage misconfigurations once they occur (the operating model is tackled further below).

 

Get closer to the business and continuously improve

Moving to the Cloud is usually a time when applications and workloads have to go through a security review again, to ensure their architecture and design are sound and safe. But it is also an opportunity to make security detection more relevant to each application.

To make it count, my advice would be:

  • Go through the process of “Service Enablement” for new services: as moving to the cloud allows business and IT teams to use hundreds of new features and components, it is important to bring architects and security teams together to assess the main risks of each new technology, find countermeasures to limit those risks and start thinking about the alerts that will need to be implemented in the SIEM;
  • Build an alert catalog for each typical risk scenario and component, with the logic of the alert already pre-defined and only the business specifics left to be customised (a sketch of such an entry follows this list). The “time to market” for supervision should also drop, as a good share of the components used for cloud operations are common to most applications (virtual machines, databases, serverless applications and functions, decoupling systems);
  • Keep up to date with Cloud-related attacks to understand the latest vulnerabilities and attacker paths, and integrate them into your detection systems.
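
To make the catalog idea more tangible, here is a hedged sketch of what a catalog entry could look like: the detection logic is pre-defined, and only the business specifics are customised per application (all names, queries and thresholds below are hypothetical):

```python
# Minimal sketch of an alert catalog entry: the detection logic is pre-defined,
# only the business-specific parameters are filled in per application.
# All names, queries and thresholds are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class AlertTemplate:
    name: str
    component: str          # e.g. "virtual machine", "database", "serverless function"
    risk_scenario: str      # the typical risk the alert covers
    query: str              # pre-defined detection logic (pseudo SIEM query)
    parameters: dict = field(default_factory=dict)  # business specifics to customise

    def instantiate(self, **business_params) -> str:
        """Render the query for a given application."""
        return self.query.format(**{**self.parameters, **business_params})


# Pre-defined entry from the catalog...
database_exfiltration = AlertTemplate(
    name="Unusual database egress volume",
    component="database",
    risk_scenario="Data exfiltration",
    query="DatabaseLogs | where egress_mb > {threshold_mb} and db_name == '{db_name}'",
    parameters={"threshold_mb": 500},
)

# ...customised with the business specifics of one application.
print(database_exfiltration.instantiate(db_name="payments-prod", threshold_mb=200))
```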

All these application specifics should sit on top of transversal alerts covering your core Cloud functions (IAM, networking, landing zones, etc.). To help you build this core detection capability, you can obviously count on our team, but I would also recommend checking out the ever-growing CloudSec community, which continuously shares its expertise through open-source tooling (as this consolidated view will prove) or on live and online platforms (such as the Cloud Security Forum and its first Fwd:CloudSec conference this year).

 

Not everything is easy though!

Based on everything written above, it might seem effortless to build a solid cloud detect-and-react capability. However, some challenges remain to be tackled.

The first one that comes to mind is pricing. Although cost is often put forward as a selling point of Move to Cloud programs, accurately estimating how much your provider will charge you for Cloud detection is not as easy as it sounds. Over the years, many CSP security solutions have moved to component-based pricing for IaaS and transaction-based pricing for PaaS components. Log storage and alerting are sometimes even more complex, as some solutions will charge you based on log transit and aggregation, while others will charge you for the number of assessments run against your alerts. Significant work is required to build a realistic budget, and not go bankrupt.

The second key point of attention is to understand what your provider does and does not offer in terms of detection. While most solutions will claim to solve all your problems at once, this is unfortunately far from true. For each security use case, a call needs to be made on whether the free option (if it exists) is enough, whether the premium one is required, or whether your security team can build its own. Realistically, you will need to start with the native option until your security team is mature enough, cloud-wise, to move to a homemade process.

Additionally, and maybe most significantly, you need to design an operating model that allows you to work with multiple subscriptions, multiple teams/businesses and possibly multiple Cloud Providers. More and more organisations are parallelising operations by picking different CSPs for different use cases, which increases complexity for security teams: they need to manage incidents on different platforms, with responsibilities divided between DevOps, SecOps and the on-premise teams. This is especially difficult because some misconfigurations lead to immediate security risks, and a choice needs to be made on whether Ops or Security is expected to act. Without a strong division of duties across all providers and teams, there is a fair chance a small misconfiguration will snowball its way into a major data leak.

Finally, remember that monitoring your Cloud applications in the Cloud can also create risks. Besides vendor lock-in, you can lose all security functions along with your applications if everything sits under the same management plane. If the global administration rights of the SIEM tenant are taken over by an attacker, they will be free to tamper with the underlying resources (erasing logs, disabling alerts or removing remediation capabilities). It is worth thinking about this before stacking your SIEM and your critical applications under the same roof.

In the end, to sum it up:

  • Grab the low-hanging fruit: your Cloud Provider will help you collect and consolidate the logs easily, so there are virtually no technical barriers left to using them. In addition, enable the basic security features provided by your CSP to detect the most obvious attacks.
  • Grow your cloud maturity together with cloud teams: the Cloud movement has pushed business and IT teams (SecDevOps) to work more closely than ever. Embrace this philosophy by better understanding the business’s security needs, customising alerts and automating your response so that your capability can scale.
  • Optimise costs and operating models to excel: Virtualisation has made a lot of technical aspects easier for teams, but processes can be hard to adapt. Make sure to carefully design your detection/incident response operating model to ensure all your applications and Cloud Providers are covered. Finally, think about cost optimisation when it comes to log management!