(This article will be published in the April 2016 issue of the Quarterly Journal of the Centre for Advanced Strategic Studies. Inspired by Dan Geer’s 2014 Black Hat keynote, ‘Cybersecurity as Realpolitik’).
Your bondage is this: You see the other – not yourself – as the observer.
– Ashtavakra Gita (circa 500-400 BC)
The bias of observation weighs heavily on all strategic initiatives in cyberspace. At a caucus on national security governed by the Chatham House Rule, hell-bent faujis painted a profile of Lieutenant General Naseer Khan Janjua, the recently appointed National Security Advisor of Pakistan.
I saw grey, neatly trimmed heads nodding in agreement to a cautionary note on Janjua, as someone mentioned tactical nuclear weapons (TNW). TNW, kind of like computer exploits, are a chimera, yet they still manage to inspire military doctrines in this part of the world. It seemed to me that the three-star officers who led the panel discussion were clinging to the vestiges of symmetric warfare.
During my turn to speak, I could only share the immensely overpowering feeling of humility that one is subjected to while going through an exfiltrated ‘top secret’ document laying out plans to “defeat India”. Even a mid-level bureaucrat like me knew that Janjua would have felt betrayed if someone had credited him with TNW rather than his brilliance in executing Operation Azm-e-Nau, a powerful specimen of network-centric warfare. It is actually fun to take inspiration from our convenient enemy, Pakistan – it underscores how asymmetry can completely demolish conventional interpretations of dominance.
The invisible battles of the future will be won by exploiting this observational gap between the hawks at the helm and the operators in the trenches. The generals swim on the surface of scale, showcase, formula and precision, while minnows like me hunt in an ocean of subversion, delay, denial and randomness.
For now we see through a glass, darkly; but then face to face.
– 1 Corinthians 13:12, King James Bible
Even Einstein balked at the quantum nature of the universe, remarking that “God does not play dice”. Stephen Hawking needled his deterministic leanings by invoking the famous biblical allusion of “seeing through a glass darkly”. The paradox of the mirror – projecting what we expect to see while revealing nothing of what lies behind it – applies to cyberspace as well. Multiple, verifiable truths can exist there; that is the very foundation of information warfare.
A 15th century rendition of Ouroboros (Source: Wikipedia).
And just like Arthur C. Clarke’s concession that “any sufficiently advanced technology is indistinguishable from magic”, I have often resorted to alchemy to explain the dynamics of cyber intelligence. Take, for example, the alchemical symbol of Ouroboros – the picture of a serpent eating its own tail, also seen as a coil around Shiva’s wild avatar of Nataraja. It’s a spectacular enigma. The snake is ceasing to exist, as it is consuming itself, yet it does exist. It represents what is not obvious, what is not in the picture – the system. The system defines its existence; the object, the painter of the object and the observer came from the very system. Only the sum of it all makes sense. The snake is also destroying and regenerating itself at the same moment; in a way it is transcending space, time, and the system, too. That’s hacking. And in cyber intelligence, the cycle itself makes more sense than do its stages.
Information warfare is mainly an inward discipline. Offence merely an extension of defence. Victory nothing but a mathematical stalemate. The observer a set of statistical anomalies.
Dave Aitel, a hacker who has worked for the National Security Agency (NSA), draws an interesting parallel between biology and cybersecurity. Both of them – like many other complex, chaotic processes – show ’emergent’ behaviour. The components, taken individually, do not make sense, but the system they form does.
Aitel further elaborates on the failing model of defence in cyberspace, which focuses heavily on detecting knowns and unknowns through what we call “signatures”. The human body, by contrast, produces billions of antibodies which cling on to intruders, destroying themselves in the process. Each of these antibodies is only a slight randomisation of another, so no grand engineering blueprint enumerating every type of attack is required.
Like the other subtle games Mother Nature plays, she protects an extremely sophisticated and vulnerable machine with a rapidly orchestrated response mechanism – a perfect balance between brute force and anomaly detection. For all its fragility, the human body keeps a remarkably low ‘signature memory’.
Aitel’s interpretation of cyber threat intelligence is simple: rather than going for the jugular with time- and resource-intensive investigations, it should aid in quickly resetting the diseased entities and restoring everything to normal.
But to understand the nuance of his argument, some immersion in the disastrous state of affairs in cybersecurity is required.
Valuations and investments have proven that security is one of the most promising sectors of the technology industry. A mad race has engulfed the enterprise market over the last two decades. Every week, a new vendor emerges promising definitive detection of cyber attacks – a pinpointed solution meant to sit on and sanitise one of the enterprise’s interfaces, such as the perimeter, the endpoints or the server farm. The market habitually laps them up, driven more by regulatory pressure than by any actual fear of getting breached.
Such a haphazard pileup has only led to one thing: a broadening of the attack surface. Mudge, a.k.a. Peiter Zatko, the rockstar of hacking, explained the ‘divergent’ nature of the industry with a simple graph in his keynote address at Black Hat 2011. As security products grew increasingly bulky in terms of lines of source code, malware sizes remained eerily consistent. When an exponential curve meets a straight line, nightmares result.
The situation is such that we have multibillion-dollar companies like FireEye building products that simulate every network transaction, parse every packet and execute every transferred file in a sandboxed environment, looking for anomalies before the traffic is cleared for the intended recipient. Setting aside how counterproductive this is to productivity and efficiency – the cornerstones of computerisation – one cannot help but grieve over its regressive effect on cyberspace in general.
Of course, these products carry their own vulnerabilities, not to mention state affiliations which make their disclosures selective. It is a mathematical certainty that every simulator is limited by constraints and assumptions – its very engineering premise – and given the persistence of a hacker, it is bound to give in at some point. That inflection point is where the world comes crashing down and a nation state begins its handiwork. In fact, we have come full circle to Aitel’s veiled argument – that insecurity, too, is an emergent property.
How did we end up levying the burden of insecurity on a puny little enterprise? And how do we even remotely assume that these enterprise-centric solutions can be used for protecting a country’s cyber assets as well? How did the smartest guys in the room end up making such a horrible gamble?
To understand, we’ll have to excavate one more layer of the relic that is the Internet.
Now I am become Death, the destroyer of worlds.
– Robert Oppenheimer, quoting from the Bhagavad Gita after the first atomic test.
After the pilferage at the Office of Personnel Management came to light, Craig Timberg of The Washington Post captured the collective moan of a nation. Dwarfing his Pulitzer-winning coverage of the Snowden affair, Timberg’s spellbinding five-part essay on the broken foundations of cyberspace turned out to be a specimen of great finesse and beauty, melancholic enough to leave any hacker teary-eyed. One by one, the key architects of networking offered their confessions and regretted unleashing the monster upon us.
Much like a Jackson Pollock painting, computer architecture is a stack of abstractions. It is the most resourceful model for an information system: limit the complexity to one layer, and relay only the required data to the next via defined interfaces – Gall’s Law at work. The mainframe was the first attempt at resource sharing; systems managed the allocation of resources and interfaces via processes. Soon, architects realised that resources needed to be shared across physical or logical boundaries, too. That brought on internetworking. Given the nascent hardware, it was decided that resilience would be its benchmark. Naturally, security was antithetical to that premise.
As requirements grew exponentially and networks mushroomed, the processes ballooned as well. Additional support structures were added to scale it. Simplicity, like beauty, lies in the eye of the beholder. It became a labyrinth of layers, a series of inelegant fixes and hacks heaping upon each other.
What we see today is an overexploited, underperforming hotchpotch of standards and interfaces. Going back to the drawing board would require an investment of trillions of dollars and tremendous geopolitical will.
To display a simple JPEG image, your computer loads dozens of routines. Every piece adds its own bit of automation to make the task less tedious; every layer, from the hardware microcontrollers to the software parsing the JPEG format, works towards that goal. Two image viewers share the same JPEG rendering instructions – called libraries – out of common sense. A single line of bad code in that library can break all imaging functionality. To make things worse, the entire process is highly dynamic, with multiple permutations and combinations creating thousands of hypothetical layers and execution paths.
The hacker thrives on this very layer-bound simplicity, latching on to the host like a parasite sharing the same genome. He or she understands that every layer communicates with the others via mutually agreed-upon interfaces. By staying hidden from the strongest layer while working down to the weakest, he or she can eventually control the process. If there is a secret mantra to this dark art, that is it.
Over the years, as the systems introduced one security patch after another in a race against time – barely surviving against that elusive enigma called the “threat landscape” – the exploitation techniques also saw many advances.
Earlier, the hacker was like a circus gymnast, somersaulting across ropes to land on the right spot: a tad bit of manipulation here and there, some trial and error, and a little innate skill. Now, he or she can make the whole circus tent collapse. Hence what are referred to as “exploit primitives”: by leveraging the basic constraints and architectural assumptions of a multi-layered, multi-component computer architecture, one can elegantly force the system into a state where it humbly surrenders control, bypassing all security.
To quote a paper by LANGSEC, a collective revisiting the very fundamentals of computer science, “…at its current sophistication and complexity, exploitation research as a discipline has come full circle to the fundamental questions of computability and language theory.”
Take, say, the recently discovered vulnerability in the GNU C Library (glibc). The languages C and C++ are the building blocks of the Internet, and glibc is the GNU implementation of the C standard library – the runtime that nearly every Linux program links against. No wonder the vulnerability – a stack buffer overflow in the getaddrinfo() resolver, catalogued as CVE-2015-7547 – affected almost everything humming and blipping in cyberspace. An apt comparison would be someone managing to alter the laws of physics to control the universe. A presentation at the 2016 RSA Conference highlighted that, with the advent of mobile ecosystems, the world population is the most vulnerable it has ever been.
— I am The Cavalry (@iamthecavalry) March 4, 2016
Enter a nation state with a billion-dollar cyber offensive program. With the advances in exploitation, one does not even need a set of probabilistic pre-requisites or zero-days (unpatched weaknesses) to compromise a target. By introducing the right set of conditions – geopolitically, or technically as the NSA does – or by triggering the existing complexities in an environment, all forms of sentience could be rigged. In security, everything is made to fail at some point – the nation state knows when and how. The ‘window of vulnerability’ becomes infinite, every bit and byte ‘weaponised’. After a couple of years of sustained targeting, one could subvert the society of an adversarial country so deeply that it figuratively ceases to be sovereign, bleeding it dry.
Any minuscule, inconsequential conduit of cybernetics could trigger that catastrophe waiting to happen. I am reminded of the 2013 raid on The Guardian, right after it published the first few documents obtained from Snowden. The Government Communications Headquarters (GCHQ), the British sidekick of the NSA, made sure that not just the hard disk, but every chip and microcontroller on the motherboard, as well as the keyboard and the trackpad, were ripped apart. This deliberate act demonstrates the level of exploit persistence these agencies have attained.
Whether it’s a grain of sand about to be fabricated into a semiconductor or a mind waiting to transform its ideas into a computer program, it is vulnerable – and an existential threat to national security.
And here we are, with our pathetic public discourse, arguing with each other about Net Neutrality and the like. What about hardware or software neutrality? The Net doesn’t run on ether! Those think tanks, the didactic cesspools – mere extensions of the caste- and status-ridden society that ours has always been – stand divorced from reality, misleading us. The clarions of ‘Digital India’ are so shrill that they have made us tone-deaf. Go down the rabbit hole and you’ll realise that ignorance has become our collective trait and denial our only strategic overture.