
CCI
Tuesday, 18 April 2017 17:25

Has Defence-in-Depth failed us? By Anthony Perridge, Regional Director, ThreatQuotient

Defence-in-depth is a philosophy that we’re all familiar with: layering security controls throughout IT systems so that if one fails, or a vulnerability is exploited, another is there to prevent an attack.

Having become standard practice for the vast majority, this sounds like a great approach, right? Well, perhaps wrong. If the slew of headlines about compromises and breaches - as well as the velocity at which they occur - are to be believed, it would appear that it has not worked. Therefore, in spite of all its promise, perhaps defence-in-depth has failed us.

But why?

The main issue stems from the fact that each layer of defence has been a point product – a disparate technology that has its own intelligence and works within its own silo, resulting in three key challenges. First, silos can make it difficult to share intelligence – between tools or even teams – in any real way. Second, management complexity grows exponentially as you add additional management consoles for an already stretched security team. And third, these silos of technology act as an obstacle course for the attacker. As the saying goes, “every obstacle is an opportunity”, and attackers capitalise on that, successfully navigating this obstacle course every day until they accomplish their mission, whether it is to steal, disrupt or damage what’s not theirs. Granted, over time adversaries have evolved, as have the technologies to catch them; the architecture, however, has not. So even if the course is harder, it is still a course nonetheless.

As companies layer new products and technologies, they find themselves with numerous security products and vendors in numerous silos. And, since these products aren’t integrated, each layer in the architecture creates its own logs and events, generating a massive amount of data and a massive management challenge. So, where does all this data go, and how can you keep up? Recent ESG research finds that 42 percent of security professionals say their organisation ignores a significant number of security alerts due to the volume, and more than 30 percent say they ignore more than half! In most cases it is the security operators within the Security Operations Center (SOC) who find themselves drowning in this data as they undertake the onerous task of manually correlating logs and events for investigations and other activities.
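The manual correlation work described above amounts to grouping raw events by a shared indicator so that one aggregated alert replaces many duplicates. A minimal sketch, with illustrative field names and event data:

```python
# Minimal sketch of correlating raw events by indicator.
# Field names ("src_ip", "action") and the sample events are
# illustrative assumptions, not any particular product's schema.
from collections import defaultdict

events = [
    {"src_ip": "198.51.100.7", "action": "blocked"},
    {"src_ip": "198.51.100.7", "action": "allowed"},
    {"src_ip": "203.0.113.9", "action": "blocked"},
]

# Group every event under the indicator it shares.
grouped = defaultdict(list)
for e in events:
    grouped[e["src_ip"]].append(e)

# Three raw events collapse into two per-indicator alerts,
# each carrying a count instead of a duplicate entry.
alerts = [{"indicator": ip, "count": len(es)} for ip, es in grouped.items()]
```

Doing this by hand across millions of events per day is exactly the task that overwhelms SOC teams.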

What’s the solution?

In an attempt to overcome the data overload challenge, SIEMs emerged as a way to store all this data and aggregate and correlate logs and events. Whilst this has worked to an extent, even SIEMs have limitations. On the technology front, SIEMs can be complex and can face scale challenges with today’s volumes of data. On the economic front, it can be costly for a company to store everything in the SIEM, so organisations pick and choose what to include and what not to.

The tool of choice for SOCs has been the SIEM and it has certainly helped, but the volume of data is so great that security operators still can’t keep up. They are now looking at ways to mine through the SIEM data to find threats and breaches. One use case is to apply threat data from an outside feed – commercial, industry, government, open source, etc. – directly to the SIEM. Using data on threats found “in the wild”, the goal is to see what indicators of compromise (IoCs) may be hidden in the vast amounts of data. In theory, applying threat feeds directly to the SIEM should work and provide some relief, but in reality this approach creates new and additional challenges for multiple reasons:

1. Lack of Context - SIEMs can only apply limited (if any) context to logs and events. Context comes from correlating events and associated indicators from inside your environment with external data on indicators, adversaries and their methods.

2. False Positives - Without context it is impossible to determine the ‘who, what, where, when, why and how’ of an attack in order to assess its relevance to your environment. As a result, SIEMs generate frequent false positives, and security operators end up wasting valuable resources and time chasing problems that don’t matter.

3. Questionable Relevance - Threat intelligence feeds only offer global risk scores based on the provider’s research and visibility, not within the context of your company’s specific environment. Security operators using these global scores find themselves chasing ghosts.

4. No Prioritisation - Prioritisation based on company-specific parameters is imperative for faster decision making that improves security posture. Intelligence priority must be calculated across many separate sources (both external and internal) and updated as more data and context comes into the system.

5. SIEM Architecture Limitations - As previously mentioned, SIEMs themselves are already overwhelmed by the vast volumes of logs and events defence-in-depth generates. Adding millions and millions of additional data points does not scale in an affordable way. In addition, SIEMs were built as a reactive technology to gather logs and events that previously occurred. Aggregating threat data and intelligence to correlate, contextualise and prioritise in a proactive manner is not a SIEM’s primary design.

The result? Indicators of compromise are missed, scarce resources are squandered and attacks still succeed.  

How can obstacles be turned into opportunities?

SOCs need to take a page from attackers and successfully navigate this obstacle course. By automatically applying context, relevance and prioritisation to threat data prior to applying it to the SIEM, the SIEM becomes more efficient and effective. Customised threat intelligence scores based on parameters you set, coupled with context, allow for prioritisation based on what’s relevant to a specific environment. Now, using a subset of threat data that has been curated into threat intelligence, the additional overlay allows the SIEM to generate fewer false positives and encounter fewer scalability issues.
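Curating before loading amounts to a threshold filter: only indicators scoring above an organisation-defined cut-off are forwarded to the SIEM. A minimal sketch, where the threshold value and record shape are assumptions for illustration:

```python
# Illustrative sketch: curate raw threat data before it reaches the
# SIEM, forwarding only indicators above an organisation-defined
# threshold. The threshold and record shape are assumptions.
SIEM_THRESHOLD = 7.0

def curate(raw_indicators, score_fn):
    """Return only the subset worth loading into the SIEM."""
    return [i for i in raw_indicators if score_fn(i) >= SIEM_THRESHOLD]

feed = [
    {"ioc": "203.0.113.9", "score": 9.2},
    {"ioc": "benign.example.net", "score": 2.1},
    {"ioc": "10.0.0.99", "score": 7.5},
]

# Only the two indicators scoring at or above 7.0 are forwarded,
# so the SIEM ingests a curated subset instead of the whole feed.
for_siem = curate(feed, lambda i: i["score"])
```

Because the SIEM now sees a curated subset rather than the raw feed, both the false-positive rate and the storage cost drop together.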

In addition, companies can make their entire security infrastructure more effective by using this threat intelligence as the glue to integrate layers of point products within a defence-in-depth strategy. By compensating for a lack of information sharing and providing richer insights, this approach helps SOCs to accelerate threat detection and response, and enhance preventative technologies with protection against future threats.
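Using intelligence as "glue" can be pictured as exporting one curated indicator set in the formats the different point products consume. The tool types and export formats below are hypothetical, purely to illustrate the integration pattern:

```python
# Hypothetical sketch of curated intelligence as integration "glue":
# one indicator set exported in the formats different point products
# consume. Tool types and formats are illustrative assumptions.
curated = [
    {"ioc": "198.51.100.7", "type": "ip"},
    {"ioc": "evil.example.com", "type": "domain"},
]

def firewall_blocklist(indicators):
    """Plain IP list for a perimeter firewall."""
    return [i["ioc"] for i in indicators if i["type"] == "ip"]

def dns_sinkhole_zone(indicators):
    """Domain list for a DNS sinkhole."""
    return [i["ioc"] for i in indicators if i["type"] == "domain"]
```

Each layer of the defence-in-depth stack receives the same curated intelligence in its own format, which is what compensates for the lack of sharing between silos.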

With less noise and streamlined operations SOCs can turn obstacles into opportunities. Rather than drowning in data, they can prioritise their investigations on the highest risk threats first, stop attackers from successfully navigating the obstacle course and improve security posture.

