Today we delve further into the murky world of cyber-warfare, with insights from former intelligence “spooks” fighting on the front line.
My research team at Frontier Tech Investor spoke to experts from two cybersecurity firms, Darktrace and root9B. Darktrace was founded by former government spooks, people who had significant experience combating cyber-attacks. The man behind American firm root9B is cut from the same cloth.
COO John Harbaugh held a number of prominent roles in the Department of Defense, leading teams at United States Cyber Command at Fort Meade. Few people know as much about cyber-defence as he does.
Harbaugh now works privately, but takes his job as seriously as he did when he was safeguarding the USA’s military networks – that much is clear from the name of his company.
“Root is system-level access. 9b is hexadecimal for 9/11”, Harbaugh explained to Vice. “It’s a nod to the fact that the next 9/11 event is most likely going to be cyber-related.”
Today, most hacks result in the theft of personal data – inconvenient, infuriating, but very rarely fatal. But we’ve already seen cyber-attacks perpetrated by governments. Stuxnet – a worm believed to have been programmed jointly by US and Israeli operatives – was used to send Iranian nuclear centrifuges spinning out of control, damaging them beyond use.
In 2013, Iranian hackers retaliated, infiltrating the controls of a small dam 25 miles north of New York City. They had planned to release a torrent of water from behind the dam, with potentially catastrophic consequences, but the dam’s sluice gate had been manually disconnected for maintenance at the time of the attack.
Nonetheless, the attack served as a warning shot; cyber-attacks were fair game. War was moving online.
root9B specialises in old-school hunting
Harbaugh and co are computer specialists, but they’re also military men. They don’t want to leave the job of cybersecurity exclusively to computers.
Instead, root9B gets human operatives to “patrol” client systems, analysing them in real time, just as a guard would protect a military base. Harbaugh talks about his cyber-defence as if it’s still a military operation: he runs “hunt-operations” from a control room, “pursuing adversaries” on a military-grade console.
“In the physical space, you can have the best cameras, the best locks on the doors, the best alarm systems, the best fences… but at the end of the day a human’s gonna figure out how to get around those things. If they wanna get in, they’ll get in.
“What’s the one thing they put into those physical spaces to augment all that great technology? They put a guard in there. If there’s going to be a human adversary that’s going to defeat your technology, you need to put a human defender in there that knows how to respond to that challenge.”
Essentially, root9B’s operations involve sending an operative into a client’s system, poking around for vulnerabilities and abnormalities in the same way as a hacker would – though without the aim of causing damage. But what does that actually mean? Can most people visualise cyber-defence in those terms?
Probably not. That’s why Harbaugh’s language is so useful. Cybersecurity firms love using the language of the physical world to describe their operations, because it makes the process relatable. Darktrace talks about a computing “immune system” and provides visual representations that accord with that metaphor. Harbaugh’s military-speak does the same job.
Harbaugh’s language also serves to remind us how crucial cybersecurity is – and how much more important it will become.
“Breaches are happening. They’re not going away, they’re getting worse. Go all the way back to any point in history, the human will figure out how to get around the technology. The machine’s not smart enough yet. All these major breaches cost millions of dollars, millions and millions.”
That’s why Harbaugh’s aggression and doggedness are so key. Companies are now targets in a cyberwar that they had nothing to do with. And it’s not necessarily governments’ responsibility to protect them.
“Instead of relying on the government to protect everything, because it can’t, we’re trying to take all the good things about what the government can do from a tech perspective, a human perspective, and an experience perspective, and bring that into the commercial sector.
“I don’t think it’s really important to the victim who’s doing [the protecting], they just want to stop it.”
The former spy keeping businesses safe
Darktrace operates in the same industry. We spoke to Dave Palmer, director of technology at the firm. The company has come up with a totally new system for dealing with digital threats – and thanks to machine learning, it’s improving all the time.
Dave spent ten years working at GCHQ and MI5, and now heads up the technical side of Darktrace’s operation. He explained precisely how his product works.
DP: Perhaps I should introduce myself first?
C&C: Please do!
DP: I’m Darktrace’s director of technology, so I’m much more interested in the technical side of things than the business side of things. Darktrace is essentially an advanced mathematics company interested in cybersecurity. The reason that’s interesting is that most applications of maths sort of come the other way around. There are lots of cybersecurity companies that have been going a while and think “hey, maths might be useful here.”
But maths and machine learning is key to what we do and who we are. We like to think we’re bringing a new level of capability into the conversation.
Essentially what we’re doing is building self-learning immune systems. And what we mean by that is a system that can learn in a really complex way what’s going on with every device, every person in the whole company, and all the relationships between them and the sort of information they exchange and how they exchange that information; and their relationships with the outside world, whether that’s service providers or 3rd parties, or the conventional internet.
Our proposition is this: there are loads of cybersecurity approaches that rely on having seen attacks in the past and being able to spot them if they happen in the future. We think that’s great up to a point, but we know people continue to be successfully attacked with relatively novel approaches or capabilities.
We think there’s room for this immune system idea, or a home-turf advantage, where, because the system will know everything about your company in such detail, no matter how novel an attack is, it will always cause some changes in behaviour; and even if it’s evaded all your defences, you’ll still be alerted to the fact it’s going on before it becomes a business crisis. And that’s what we’re interested in doing.
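The immune-system idea Palmer describes – learn what is normal for each device, then flag anything that deviates, however novel the attack – can be illustrated with a toy sketch. To be clear, this is not Darktrace’s technology; it’s a minimal, hypothetical baseline-and-deviation detector (running mean and variance per device, alerting on large deviations) that shows the general shape of the approach:

```python
from collections import defaultdict
import math

class BehaviourBaseline:
    """Illustrative sketch only: a per-device running mean/variance
    (Welford's algorithm) that flags observations far outside the
    learned norm. Real anomaly-detection systems are far richer."""

    def __init__(self, threshold_sigma=4.0, min_samples=30):
        self.threshold = threshold_sigma
        self.min_samples = min_samples
        # per device: [count, mean, sum of squared deviations]
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, device, value):
        """Update the device's baseline; return True if `value` is anomalous."""
        n, mean, m2 = self.stats[device]
        anomalous = False
        if n >= self.min_samples:  # only alert once a baseline exists
            std = math.sqrt(m2 / (n - 1))
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        # Welford update of the running statistics
        n += 1
        delta = value - mean
        mean += delta / n
        m2 += delta * (value - mean)
        self.stats[device] = [n, mean, m2]
        return anomalous

baseline = BehaviourBaseline()
# learn a month of "normal" traffic volumes for a device
for i in range(100):
    baseline.observe("laptop-7", 49.0 if i % 2 else 51.0)
print(baseline.observe("laptop-7", 50.0))   # ordinary reading: not flagged
print(baseline.observe("laptop-7", 500.0))  # sudden tenfold spike: flagged
```

The key property, echoing Palmer’s point, is that nothing here depends on having seen a specific attack before: any behaviour far enough from the learned norm raises an alert.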
C&C: I was curious, is this system predicated on the idea that keeping one step ahead of hackers isn’t really possible?
DP: Let’s use your company as an example – I imagine you’re quite a diverse group with different roles, from finance, through to the executive board, editors, writers that work for you.
I imagine you’re really mobile, I suspect there’s quite a diversity in the type of data and programs you use – you probably use different laptops and phones. If you’re responsible for protecting all that, the first thing you’ll be worried about is the diversity of the company, and thinking about protecting all those different things in different ways for different reasons.
Not only have the attackers got what we’d call a big attack surface, with all this diversity of ways in and communication mechanisms, but they also only have to be just different enough in their attacks that the antivirus, the firewalls and the next-generation firewalls can’t catch them.
Once they’re in, they won’t be detected for a couple of hundred days. It takes in excess of 200 days for the average attack to be detected, and, if it’s a bit more sophisticated, that number can rise to more than 300 days. And it seems ridiculous, doesn’t it?
We’re used to thinking about protecting our houses with lots of different door and window locks when we’re out of the house. Even then you have a burglar alarm so that if someone’s in your house, no matter how they got in, the alarm will sound and you can do something about it.
That’s essentially what we want to do for digital networks. Rather than focusing on how bad guys get in, or trying to define how you should work or how you should access data, we say it’s getting too complex. We’d rather learn what is normal and highlight the problems as a complement to traditional defences, rather than a replacement for them. We essentially want to give control back to the defence team and say, you’ve got some tools here by which you can respond to attacks when they get through.
C&C: How quickly can your technology detect a threat?
DP: Assuming the learning period has gone through, we call it soft real time – so it’s generally within about a second or two of real life. The system takes about an hour to get up and running, and all the self-learning starts from there. The bot won’t tell you anything for the first week. After that you’ll start getting some simple insights. After the third or fourth week, the system starts getting increasingly complex and capable of detecting more subtle problems. It’s about 80% smart after four weeks, and 99% smart after about six months.
C&C: How much does the system have to be adapted for different clients, or can the algorithms learn in the same way no matter what the system they’re on?
DP: Great question. If you’ll forgive me for talking about how the mathematics works – I realise this is often more my passion than other people’s – just for a few seconds: what we’re essentially doing is using lots and lots of different machine learning techniques, algorithms and approaches, each of which is good at a different type of pattern-of-life analysis of people, or devices, or data. They’re all running all the time, looking at the same data as it comes in, but they’re differently able to spot problems and concerns that might come up.
So effectively they’re competing – different mathematical approaches are competing with different strengths and weaknesses. Then on top of that, what comes through very strongly is the use of probability theory to essentially choose which of the approaches is working better for certain types of people, or devices, or in certain situations, and make sure that even subtle signals coming through the mathematical approaches that are working well are passed on to a human being.
If something’s not working well and would be better suited to another system, we can basically do really smart filtering, in a self-learning way, of what is and isn’t working – so only high-quality stuff comes through to a human being at the end. So it’s very much embracing as many machine learning approaches as possible, when they’re useful and self-learning, which ones are worth listening to and not.
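Palmer’s description – many detectors competing, with probability theory deciding which ones are currently trustworthy enough to escalate to a human – can be sketched in miniature. This is not Darktrace’s mathematics; it’s a hypothetical illustration using a simple Beta-Bernoulli estimate of each detector’s precision, built from analyst feedback, to filter which alerts reach a person:

```python
class DetectorTracker:
    """Illustrative sketch only: tracks how often each competing
    detector's alerts turn out to be real (via analyst feedback) and
    filters out alerts from detectors with low estimated precision."""

    def __init__(self):
        self.hits = {}    # detector -> alerts confirmed as real
        self.misses = {}  # detector -> alerts that were false alarms

    def feedback(self, detector, was_real):
        """Record an analyst's verdict on one of this detector's alerts."""
        d = self.hits if was_real else self.misses
        d[detector] = d.get(detector, 0) + 1

    def precision(self, detector):
        """Posterior mean of a Beta(1 + hits, 1 + misses) distribution."""
        h = self.hits.get(detector, 0)
        m = self.misses.get(detector, 0)
        return (1 + h) / (2 + h + m)

    def escalate(self, detector, min_precision=0.5):
        """Should this detector's latest alert be shown to a human?"""
        return self.precision(detector) >= min_precision

tracker = DetectorTracker()
for _ in range(8):
    tracker.feedback("rare-domain-model", True)   # mostly real problems
for _ in range(2):
    tracker.feedback("rare-domain-model", False)
for _ in range(9):
    tracker.feedback("noisy-model", False)        # mostly false alarms
tracker.feedback("noisy-model", True)

print(tracker.escalate("rare-domain-model"))  # trusted: passed to a human
print(tracker.escalate("noisy-model"))        # filtered out
```

The self-learning filter is the point: the system doesn’t pick one algorithm up front, it keeps them all running and continuously re-weights which ones are worth listening to.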
C&C: Am I right in thinking your background is partly in disaster and emergency response?
DP: Emergency response in business, so how a business works if it loses a building or all of its networks, absolutely.
C&C: Right. The reason I ask is that I was speaking to Dr Stephen Roberts the other week about his work on disaster response and human-agent collectives. His algorithmic technology helps humans and artificial intelligence (AI) to work together to mutual benefit – and it seems your technology works in a similar way. So, my question is this: how does your Enterprise Immune technology deal with threats once it’s detected them? Does it pass the information on to a human agent who works out how to root them out?
DP: How the immune system talks to analysts or investigators in companies is actually quite diverse. Some companies, big global banks for example, will have something that looks like Nasa mission control – lots of people in a dark room with loads of screens, glued to those screens all day every day, getting information and tip-offs to potential problems; and hunting them down and arranging for them to be cleaned up or further investigated – and from our perspective that’s quite a straightforward situation. We’re dealing with experts; it’s easy to give them technical information and they can go off and respond.
But we’re really interested in mid-size and smaller-size companies as well. Our smallest customer is a hedge fund with two people and a lot of money under management. Our next smallest is a shipping insurer with about 14 people in there, and these guys certainly won’t have mission control-style people sat there all day – they probably don’t even have anyone there with security in their job title.
So something we’ve worked really hard on is quite a visual animated storytelling system where you’re essentially shown feedback, a bit like a video game, I suppose, showing how the data moves between different devices and people; who was involved in what; who was logged into the different systems; and how they interacted with the internet.
As you’re aware, there’s a big concern within cybersecurity of a skill shortage of the best trained people. So we really wanted to make it easy for anyone to adopt this. Something that’s been very successful for us and companies we work with, is training up graduates. As long as someone’s smart and interested in technology, then they can grapple with these things and find the system very useful. It doesn’t rely on just having a small number of extremely expensive forensic engineers sitting there seeing the matrix and dealing with very technical data.
At some point we’ll want these smart systems to move away from just highlighting problems to doing something about them. That’s something we’re very interested in.
C&C: I read something about your hiring policies, and I was curious. Somebody from your company, I can’t remember who, was talking about hiring philosophy graduates, for example, and them bringing something new to the table. I’ve seen a lot of this from technical companies, and I often wonder how true it is that people from non-technological backgrounds can bring a lot to a field like cybersecurity. Do you find their input useful?
DP: I think it’s really important to remember that while cybersecurity is dominated by geeks like me, and there’s always been a really high barrier to entry – people think it’s all black screens and green text, and you need to know a million commands, none of which are human-readable – when we’re talking about threats and risks to businesses, we’re talking about human intentions and the value of a business, which are very human things to get one’s head around.
So, while a lot of the technology side can boil down to configuring firewalls, and understanding servers, at a slightly more abstract level it’s about evaluating strategic risks to the company – understanding what your company can and can’t live without, understanding who it is who could gain from your company being harmed or different data you hold, or designs you’ve created, or whatever.
We’re talking about investigation, analysis, strategic understanding of the business, but also an ability to weigh risk and judge what’s worthy of defending and investing in to defend, and what the business can be more laid back about, and fix a problem if and when it comes up. I think those skills are quite different to those of your typical computer science grad. I’m not saying they can’t do that, but I don’t think we’re limited to using highly technical or mathematical degrees in this space.
If there’s anything we’d really like to achieve, it’s reducing the need to have a detailed technical lexicon in order to evaluate risk and do something about it – especially when many of the risks arise from employees within the company, either unwittingly, which is usually the case, or deliberately.
C&C: Speaking of the technical lexicon, I’d like to return to a question I meant to ask earlier. What kind of data is it that you monitor? What constitutes the corpus from which the algorithms learn?
DP: What we’re most interested in is learning from the actual raw communications that go over a company’s digital networks. Of course, these days, digital networks can take a lot of different forms – we’re all using mobile devices, we’re all used to remotely connecting to the company, and of course the company might have its own racks of data, or be running loads of stuff in the cloud, in hosted data centres, or whatever. But actually all of those low-level interactions, the underlying network communications or internet communications that go on are incredibly rich sources of knowledge about behaviour.
So what we’re doing is getting access to as much of that communication as we can at points in the businesses where it matters – we don’t necessarily need blanket coverage – and from the communications we’re just trying to extract as much information or metadata about behaviour.
We’re not reading content, not looking inside emails or anything like that, but we’re looking at how much data people interact with on corporate file servers, or checking whether people normally read data from the GitHub where all our source code is stored. Does that employee normally just read a couple of bits at a time, but one day downloads the entire source code of Darktrace via a device they don’t normally log into? So we’re interested in the lowest level we can get of examples of behaviour within the business.
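Palmer’s GitHub example – an employee who normally reads a couple of bits at a time suddenly downloading everything from an unfamiliar device – lends itself to a small sketch. Again, this is not Darktrace’s product; it’s a hypothetical metadata-only monitor combining two signals (read volume far above the user’s average, and a device the user hasn’t been seen on before), without ever touching content:

```python
class AccessMonitor:
    """Illustrative sketch only: tracks per-user metadata (bytes read,
    devices used), never content, and alerts when an unusually large
    read arrives from a device the user hasn't been seen on before."""

    def __init__(self, bulk_factor=10.0):
        self.bulk_factor = bulk_factor  # "large" = this many times the average
        self.devices = {}    # user -> set of previously seen devices
        self.avg_bytes = {}  # user -> running average read size
        self.counts = {}     # user -> number of reads observed

    def record(self, user, device, bytes_read):
        """Log one access event; return True if it should raise an alert."""
        known = self.devices.setdefault(user, set())
        avg = self.avg_bytes.get(user)
        alert = (avg is not None
                 and bytes_read > self.bulk_factor * avg
                 and device not in known)
        # update the user's behavioural baseline
        known.add(device)
        n = self.counts.get(user, 0)
        self.avg_bytes[user] = ((avg or 0.0) * n + bytes_read) / (n + 1)
        self.counts[user] = n + 1
        return alert

monitor = AccessMonitor()
for _ in range(5):
    monitor.record("alice", "work-laptop", 1_000)      # normal daily reads
print(monitor.record("alice", "unknown-box", 1_000_000))  # bulk pull, new device
```

Because only metadata is inspected, the monitor never needs to read an email or a file – exactly the distinction Palmer draws between behaviour and content.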
C&C: I’ve read a bit about the company’s origins in some senses as a Cambridge University spinoff – do you have a formal agreement with the university? Are there particular departments you work closely with?
DP: We were particularly close to a gentleman called Professor Bill Fitzgerald, a mathematics professor at Christ’s College who was based in the engineering department. He’d done a lot of work over the years applying Bayesian mathematics and other types of mathematics to a field called signals processing. Unfortunately, Bill, who was a close friend of many of ours, passed away unexpectedly, so we haven’t got the day-to-day contact we had at the time. But we still have links to the engineering department, the maths department and to Cambridge generally.
We’re physically based on the university’s research park, so we’re really lucky that we continue to be recommended by the university as a great route for mathematics, physics and engineering postdocs. So there’s a pretty well-trodden path for the people who have done their doctorates, coming just slightly down the road every day to the research park and joining us full time. I don’t know what in legal terms or contractual terms the agreements are, but in terms of mutual support and interaction between the university R&D teams and ours, it has been absolutely super.
Certainly we miss Bill for a lot of different reasons. But in quite a lot of ways, a lot of his life’s work is built into Darktrace and a nearby company doing work into fraud detection. So it’s nice to be able to continue that legacy.
C&C: So, the algorithms that form the basis of Darktrace, were they stumbled upon by happy accident, or were they the result of researchers specifically looking to build such a system?
DP: Well really a couple of groups of people were brought together to form the company. As you’ve probably realised, my background was in UK intelligence services, along with a couple of the other staff here. We’d spent several years trying to deal with cybersecurity problems and seeing what state-of-the-art attacks look like, and what the best-in-breed defences looked like, and I guess what we had was a shopping list of “wouldn’t-it-be-great-ifs” in terms of a mindset and different strategic approaches for the future. That included the relatively controversial view – remember this was in 2012 – that organisations should work on the assumption that they would be hacked, and that there was at any given time a high likelihood of some level of compromise to a company’s system.
So that was one group of people. We were united with the second group, mathematicians at Cambridge and a couple of other places. We’d worked together on some specific projects in the past, but never really in the cyber-defence arena. That was where the idea of the immune system was born: we were thinking about dealing with being hacked by something you’d never seen before with the help of some of these self-learning mathematical approaches.
The third group of people at Darktrace is people who are great at building. As you’re aware, Cambridge has become a bit of a hotbed for people who are very good at growing businesses. But I guess the key to what has gone on, the difference between just having a mathematical algorithm that works in some situations and having a real product, is these advances in probability theory that I talked about before. Because as well as having all the great machine learning stuff, this self-feedback loop on the probability theory really means that we can take on loads of mathematical approaches that work in all kinds of different fields.
Our customers include a Formula One team, a chocolate factory, power stations… we have pharmaceutical companies as well as big banks, but also tiny little companies, media companies, TV stations… without those breakthroughs in probability theory, I think we’d just be a machine learning company, limited by having to rebuild our products for every customer. But it’s that probability theory work we did in conjunction with Bill, and gained from our work with the university, that makes our product truly different.
C&C: What kind of influence has Mike Lynch had on your activities? Has he been hands-on?
DP: We talk to Mike quite a lot – I’ve seen him personally quite a lot. He’s got an incredible ability to push ideas and know how they’ll play out in the future. He’s really good at making sure we’re not just fixing the problems of yesterday and today, but making sure that the R&D we’re doing is genuinely pushing us forwards into something that will be really useful in the future.
A really great example of that is Invoke itself, the strategy arm of Mike’s fund. They won’t invest in something that hasn’t invented a new core science. The probability theory stuff I keep touching on is our key contribution to that. So Mike’s not interested in the next Facebook or Twitter 2 – he, and Invoke, are only interested if you’re fundamentally making a difference to genuine problems that economies have at scale.
So, he’s really good at being an advisor and a sounding board – we might say, hey we’ve got all these ideas, and Mike will direct us to help our thinking, deciding whether we’re truly solving a problem or if we’re pursuing something just because it’d be nice to have, or convenient, or if we’re solving a problem that does exist today but won’t be relevant tomorrow. I’m a bit worried that I sound like everyone does when they talk about their advisors.
But he’s really good at holding us to account about where we’re going, where our development is going, where our research is going, whether we’re still doing research rather than simply executing as a company, making sure our company still exists in ten, fifteen years’ time, and he’s really brilliant at that. It helps that he’s a polymath and understands a great deal of different fields in incredible detail – that always has me jaw-dropped. But I guess the other thing he brings is that he’s really good at bringing in ideas from a completely different block of science or mathematics that we’re not really used to.
Until tomorrow,
Nick O’Connor
Associate Publisher, Capital & Conflict