An interesting development marked the conclusion of the Borderless Cyber USA 2017 conference last week. A representative from the National Security Agency (NSA) announced the launch of OpenC2 – a “standardised computer language” that creates a layer of abstraction to facilitate cyber response across product and organisational boundaries at machine speed.

The future of cyber, and of homeland security in general, lies in such layers of abstraction, which introduce machine-to-machine interoperability and seamlessness into a highly fragmented ecosystem. This is probably only the second strategic initiative of its kind driven not by vendors but by standardisation bodies. The first layer of abstraction, the one which paved the way for OpenC2, was STIX-TAXII.
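To make the abstraction concrete: an OpenC2 command pairs an action with a target, and any conformant product is expected to act on it regardless of vendor. Here is a minimal sketch in Python, based on the early draft action/target pattern; the exact schema, the suspect network and the consumer endpoint are all illustrative.

```python
import json
import urllib.request

# A minimal OpenC2-style command: block traffic to a suspect network.
# The action/target pairing follows the early drafts; treat the exact
# field names as illustrative, not normative.
command = {
    "action": "deny",
    "target": {
        "ipv4_net": "198.51.100.0/24"  # hypothetical suspect range
    },
    "args": {
        "response_requested": "ack"
    },
}

# Any conformant firewall, EDR agent or orchestrator could consume the
# same JSON; that is the layer of abstraction.
payload = json.dumps(command).encode("utf-8")
request = urllib.request.Request(
    "https://openc2-consumer.example.org/commands",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # uncomment against a real consumer
```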

But these standards really don’t matter. No, they don’t. What matters is our understanding of how we got here in the first place.

The contemporary enterprise security architecture is dying by a thousand cuts, yet the attack surfaces have remained consistent for the last two decades.

We productised the security controls into what I call the Detection, Prevention & Response Layer – the irony being that if the controls act like little kernels of governance, the products would of course create their own layers and silos. From this incompatibility stemmed the need for a Decision Layer, which has remained stagnant for the last decade, ever since the onset of SIEMs.
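To illustrate the incompatibility the Decision Layer inherits, consider a hedged sketch in which two hypothetical products report related events in their own shapes, and the SIEM-style code must normalise them before it can decide anything. Every product, field and value below is invented for illustration.

```python
from datetime import datetime, timezone

# Two hypothetical products reporting related events in incompatible shapes.
edr_alert = {"ts": 1500302400, "proc": "svchost.exe", "verdict": "malicious"}
fw_alert = {"time": "2017-07-17T14:40:00Z", "src_ip": "203.0.113.7",
            "disposition": "blocked"}

def normalise(alert: dict) -> dict:
    """Coerce a vendor-specific alert into a minimal common schema."""
    if "verdict" in alert:  # the EDR's shape
        return {
            "when": datetime.fromtimestamp(alert["ts"], tz=timezone.utc),
            "subject": alert["proc"],
            "severity": "high" if alert["verdict"] == "malicious" else "low",
        }
    if "disposition" in alert:  # the firewall's shape
        return {
            "when": datetime.fromisoformat(alert["time"].replace("Z", "+00:00")),
            "subject": alert["src_ip"],
            "severity": "medium",
        }
    raise ValueError("unknown vendor shape")

# Every product bolted onto the stack adds another branch here: the silo tax.
for alert in (edr_alert, fw_alert):
    print(normalise(alert))
```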

Enterprises created a one-way street of security governance, one that only gets narrower as we approach the destination. You can’t turn back; there are no milestones or metrics to quantify the progress.

The products focused so heavily on detection, detection and detection that, somewhere between the very bottom of the attack surfaces and the top of the Decision Layer, we lost almost 70% of our telemetry, context, intelligence and situational awareness. SIEMs became these ugly, inflexible, monolithic monstrosities.

Right now, any enterprise worth its salt has around a dozen layers of incompatibility to deal with.

Then arose the question of motivated state actors. Bet your millions, but enterprises will always lose out to them. There’s the foundational insecurity of the internet, the routing edge that is literally a no man’s land, the sneaky technology vendors, and the self-defeating complexity of the security architecture itself.

During my talks, I never fail to highlight this paradox with the example of Dirty Cow – a zero-day vulnerability that remained potent for almost a decade, and was detected not by point tools but by a security administrator capturing all the traffic to his datacentre. After twenty years of evolution in the cybersecurity vendor landscape, we have come full circle.

A similar case drew a lot of media attention last week: an enterprise running Tanium, Cylance and McAfee – and ingesting 138 threat intelligence feeds – was struck by an attack so precise that nothing tripped. The saviour was the good ol’ traffic recorder.
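The traffic recorder is conceptually trivial, which is exactly the point. A minimal sketch using the scapy library (an assumption; any capture stack would do), which needs root or administrator privileges and an interface name valid on the host:

```python
from scapy.all import sniff, wrpcap  # pip install scapy

# Record everything crossing an interface to disk. No signatures, no
# verdicts: just ground truth that can be replayed once you finally
# know what to look for.
packets = sniff(iface="eth0", count=1000)  # "eth0" is illustrative
wrpcap("datacentre-capture.pcap", packets)
```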

The economics of cybersecurity has become completely bogus because of these glitches in the formulae.

To quote Dave Aitel quoting Frank Heidt, “The emergent property of an avalanche is a grain of sand”. It is for this reason that MITRE, DHS, FS-ISAC and the NSA have started acknowledging the emergent nature of cybersecurity. Complex biological systems showcase such behaviour in defence against foreign bodies. The individual components of the architecture don’t make much sense on their own; the system as a whole does. One cell is breached so that the symptoms are relayed and the larger organism survives the intrusion.

And that’s why OpenC2 and STIX mark just the beginning of a new economic incentive, one that encourages the distribution of risk across organisations to save the economy and the nation. The selfish herd survives at the cost of a small sacrifice. It would also de-layer the enterprise architecture by a factor of six.
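In machine terms, distributing that risk reduces to publishing what one organisation learned in a shape every other organisation can ingest. Here is a minimal sketch of the general shape of a STIX 2.x indicator, built as a plain dictionary rather than with any particular library; the name and the hash are made up:

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# The general shape of a STIX 2.x indicator: one organisation's small
# sacrifice, serialised so the rest of the herd can act on it.
indicator = {
    "type": "indicator",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "File hash seen in a precision intrusion",  # illustrative
    "pattern": "[file:hashes.'SHA-256' = '%s']" % ("a" * 64),  # made-up hash
    "valid_from": now,
}

# A TAXII server would carry this object across organisational boundaries.
print(json.dumps(indicator, indent=2))
```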

Written by Pukhraj Singh