Matt Walmsley is an EMEA Director at Vectra, a cybersecurity company providing an AI-based network detection and response solution for cloud, SaaS, data centre and enterprise infrastructures.
With the spread of the Coronavirus, cybercriminals have gained more power and become more dangerous, leaving some IT infrastructures at risk. That is why Vectra is offering AI-powered cybersecurity to help secure data centres and protect an organisation's network.
In this interview, Matt explains why data centres represent such a valuable target for cybercriminals and how, despite the vast security measures put in place by enterprises, they are able to infiltrate a data centre system. He also explains the storyline of an attack targeting data centres and how cybersecurity powered by AI can help the security teams to detect anomalous behaviours before it’s too late.
What’s your current role?
I'm an EMEA Director at Vectra. I've been here for five years, since we started the business, and I predominantly do thought leadership, technical marketing and communications. I spend most of my time thinking about how we put AI to use, in our case in cybersecurity, and a big part of that is cloud and data centre.
To get into the core of what you do: you talk about cloud, data centres and AI, but which one is the core driver for your business? Which types of devices is Vectra's AI integrated into, and in what sectors is it applied?
Our perspective on this, as experts in cybersecurity and AI, is a culmination of those two practices: machine learning applied to cybersecurity use cases. So we're using it in an applied manner, i.e. to solve a particular set of complex tasks. In fact, if you look at the way AI is used in the majority of cases today, it's used in a focused manner to do a specific set of tasks.
In cybersecurity practice, using AI to hunt and detect advanced attackers that are active inside the cloud, inside your data, inside your network, is really about doing things at a scale and speed that human beings just aren't able to match. In that respect, cybersecurity is like a healthcare issue: if we find a problem early and resolve it early, we'll have a far more positive outcome than if we leave it festering until there's nothing to be done.
You talk a lot about its ability to scale rapidly to bigger projects. In relation to your work, do you see AI as a way to solve problems in the future, or do you think there's a long way to go with it? Is AI the future? Or do you think humans managing AI is the future?
AI is becoming an increasingly important part of our lives. In cybersecurity practice, it's going to be a fundamental set of tools, but it won't replace human beings. Nobody's building a Skynet for cybersecurity that sorts it all out and turns the tables back on us. What we're doing is building tools at a size and scale to do tasks that human beings can't do. So it's really about augmenting human ability in the context of cybersecurity, and for us it's a touchstone of our business and a fundamental building block for cybersecurity operations now and in the future.
There's a massive skills gap in our industry, so automating some cybersecurity tasks with AI is actually a very rational solution to that immediate resource gap. But it can also do things humans can't do. It's not just taking the weight off your shoulders; it's going to do things like spotting the really weak signals of a threat actor hiding in an encrypted web session. It's impossible to do that by hand, looking at the packets, the bits and bytes; you need a deep neural net that can look at really subtle temporal changes. AI does it faster and more broadly, and it does things we are just not capable of doing at the same level of competency.
It's encouraging that it's going to have such a dramatic effect on our working processes. In terms of data centres, how is AI working to protect them?
Data centres are changing and, as I'm sure you've seen, becoming increasingly hybrid. More is going out to the cloud, even though people still have private data centres and private clouds. One of the main challenges a security team has with a data centre is that, as workloads are spun up, moved or flexed, it's really hard to keep track of them.
Security teams usually have incomplete information: they won't know which VM you have just spun up or what it's running. They don't know all of those things. They are meant to be agile for the business, but that agility comes with a kind of information cost: I have imperfect information, I never quite know what I've got.
I'll give you an example. I was at a very large international financial services provider, talking to their CCO. He had their cloud provider in, telling him where they were with licensing and usage. What he thought he had to cover and what the business actually had were about ten times apart. So there was ten times more workload out there than he and his security team even knew about.
So how can AI help us with that?
Well, if we integrate AI and allow it to monitor virtual conversations, it can automatically watch and, using past observations, spot new workloads coming in and how they interact with other entities. Those behaviours are the signal that tells the AI where to look to find attackers. So it's not about putting software into the workload, just monitoring how it works with other devices.
In doing so, we can then quickly tell the security team: "here are all the workloads we're seeing, and here are the ones with behaviours that are consistent with active attackers", and then we can score and prioritise them. What we're doing is automating the monitoring of attacker behaviours, so as a security team you're getting more signal, less noise and less ambiguity. It's not just headline malware or exploits, which are the ways people get into systems.
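To make the scoring and prioritising idea concrete, here is a minimal sketch. The behaviour names, weights and workload IDs are invented for illustration; they are not Vectra's actual detection categories or scoring model.

```python
# Illustrative only: rank workloads by the attacker-consistent behaviours
# observed on them, so the security team sees the riskiest first.
from collections import defaultdict

# Hypothetical weights: later-stage attack behaviours score higher.
BEHAVIOUR_WEIGHTS = {
    "internal_recon": 3,
    "lateral_movement": 5,
    "data_staging": 7,
    "exfiltration": 9,
}

def score_workloads(detections):
    """detections: iterable of (workload_id, behaviour) tuples."""
    scores = defaultdict(int)
    for workload, behaviour in detections:
        scores[workload] += BEHAVIOUR_WEIGHTS.get(behaviour, 1)
    # Highest-scoring workloads first: the security team's triage queue.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

detections = [
    ("vm-042", "internal_recon"),
    ("vm-042", "lateral_movement"),
    ("vm-007", "exfiltration"),
    ("vm-113", "internal_recon"),
]
print(score_workloads(detections))
# → [('vm-007', 9), ('vm-042', 8), ('vm-113', 3)]
```

The point is the prioritisation, not the arithmetic: a single late-stage behaviour (exfiltration) outranks several early-stage ones.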
What else do you see in threat actor events?
Exactly what happens next in an advanced attack. An advanced attack can play out over days, weeks, months. The attacker has to get inside: he's had to get a credential, had to exploit a user; he's got to do research and reconnaissance; he's going to move around the environment and escalate. We call that lateral movement. Then he'll make a play for the digital jewels, which could be in the data centre or in the cloud.
So, if you can spot those behaviours that are consistent with an attacker, you've got multiple opportunities to find them across the life cycle of the attack. To use that healthcare analogy again: find it early and it will be much better and faster to close it down. If you only find them when they are running for the default gateway with a big chunk of data, doing a big exfiltration, you are almost breached and that's a bit too late.
Using AI, basically, is like being the drone in the sky looking over the totality of the digital enterprise, over the individual devices and how the accounts are talking to each other, looking for the very subtle, hard-to-spot but robust signs of attackers. That's what we're doing. I can see why AI speeds that up so efficiently.
Is there a specific method or security process that Vectra's cybersecurity software implements to help protect data centres at scale?
That's quite an insightful question, because not all AI is built the same. AI is quite a nebulous term; it doesn't tell you what algorithmic approach people are taking. I can't give you a definitive answer for a definitive technology, but I can give you a methodology.
The methodology starts with the understanding that the attacker must manifest. If I got inside your organisation and I wanted to scan and look for devices, there are only so many techniques available to me. That's behaviour, and there are tools and protocols involved that we can spot. So we look at how we can spot the malicious use of those legitimate tools, techniques and procedures, these TTPs.
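As a hedged illustration of "the attacker must manifest": internal scanning is one such behaviour, and it leaves a recognisable pattern in connection logs. The threshold, field names and addresses below are assumptions for the sketch, not a real detection rule.

```python
# Illustrative only: a host that touches many distinct internal host:port
# targets in one window looks like reconnaissance, however legitimate the
# scanning tool itself may be.
from collections import defaultdict

SCAN_THRESHOLD = 100  # assumed: distinct targets before we call it scan-like

def find_scanners(connections, threshold=SCAN_THRESHOLD):
    """connections: iterable of (src_host, dst_host, dst_port) tuples."""
    targets = defaultdict(set)
    for src, dst, port in connections:
        targets[src].add((dst, port))  # count distinct host:port targets
    return [src for src, seen in targets.items() if len(seen) >= threshold]

# A host sweeping port 445 across a subnet stands out from normal traffic:
conns = [("10.0.0.5", f"10.0.1.{i}", 445) for i in range(150)]
conns += [("10.0.0.9", "10.0.1.10", 443)]  # ordinary single connection
print(find_scanners(conns))  # → ['10.0.0.5']
```

A real system would be far subtler than a fixed threshold, but the principle is the same: the behaviour, not the tool, is the signal.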
How does that whole process start?
That starts with a security research team looking for evidence that attackers really do use these behaviours, because it may be a premise that isn't accurate. Once we've done that, we bring in a data scientist to work with this team.
So, let's find some examples of this behaviour manifesting in a benign way and, from an attacker, in a malicious way, and let's look at some regular, known-good data. The data scientist looks at that data, does a lot of analysis and tries to understand it. They look at the attributes, what they call features, and do feature selection: which features would be useful for building a model to find this? There are various ways you can look at data and separate the customer's infrastructure and all of the different structures inside it. Then they'll go off and build a model and train it with the data. Once we've got it to an effective level of performance and we're happy with it, we release it into our Cognito NDR, network detection and response, platform, and that goes off and looks for the individual behaviour.
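The workflow described above, labelled benign and malicious examples, feature selection, training, then detection, can be sketched in miniature. Everything here is invented for illustration: the two features, the toy data and the nearest-centroid classifier stand in for whatever models Vectra actually uses.

```python
# Illustrative only: train on labelled examples of a behaviour, then
# classify new observations. Features are assumed to be
# (connections per minute, distinct ports touched).

def centroid(rows):
    """Mean feature vector of a set of examples."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(benign, malicious):
    return {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(model, features):
    """Label a new observation by its nearest class centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

benign = [[2, 1], [3, 2], [1, 1]]          # quiet, ordinary hosts
malicious = [[40, 30], [55, 25], [60, 40]]  # scan-like activity
model = train(benign, malicious)
print(classify(model, [50, 28]))  # → 'malicious'
print(classify(model, [2, 2]))    # → 'benign'
```

The real work, as the interview says, is in the research and feature selection before any model is built; the classifier itself is the easy part.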
Remote Desktop Protocol (RDP) recon will be completely different from the thing that's looking for hidden HTTPS command-and-control behaviours. So there are different behaviours, different data structures and different algorithmic approaches. However, some of those attacks manifest in the same way in everybody's network, and we can pre-train those algorithms.
Are they aware of those behaviours?
Yes. It's like it's been to school: it's got its certification and it's ready to go as soon as you turn it on. It doesn't have to learn anything else; it's already trained and it knows what to look for. It's a complex job, but we've already done the training.
But there are some things that we could never know in advance. For example, I'll never know how your data centre is architected or what the IP ranges are; there's no way of knowing that in advance, and there are a lot of things we can only learn through observation.
So, we call that an unsupervised model, and we blend these techniques. Some of them are supervised, some are unsupervised, and some use really deep recurrent neural networks. Sometimes it's really challenging: we're looking at a data problem and we just can't figure it out. What are the attributes? What are the features? We know it's in there.
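The unsupervised side, learning what is normal purely from observation and flagging deviations, can be sketched very simply. The host and service names are hypothetical, and a real baseline would model far more than set membership.

```python
# Illustrative only: build a per-host baseline of the services it normally
# talks to, purely by watching, then flag anything novel.
from collections import defaultdict

class Baseline:
    def __init__(self):
        self.seen = defaultdict(set)  # host -> services observed so far

    def observe(self, host, service):
        self.seen[host].add(service)

    def is_anomalous(self, host, service):
        # Only flag novelty once we actually have a baseline for this host.
        return bool(self.seen[host]) and service not in self.seen[host]

b = Baseline()
for _ in range(30):                       # a month of normal behaviour
    b.observe("vm-db-01", "internal-backup")

print(b.is_anomalous("vm-db-01", "internal-backup"))      # → False
print(b.is_anomalous("vm-db-01", "external-paste-site"))  # → True
```

Nothing here was labelled in advance; "normal" was learned entirely from the environment, which is exactly why it cannot be pre-trained.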
But what is it?
We can't figure it out, so we get a neural net to do that for itself, once again doing things at a scale humans could not do effectively. We've got thirty patents pending now, and we've been awarded patents on the different algorithms: the brains we built that do that monitoring and detection.
Do you think there are any precautions people should take to avoid cybercrime during coronavirus?
Our piece of the security puzzle is finding someone who has already penetrated your organisation, quickly; we're not the technology that stops known threats coming in. Think of healthcare, which had to adapt really quickly: the WHO recently called out a massive spike in COVID-related phishing.
That's the threat landscape; that's what's happening out there, what the threat actors are doing. We are actually inside healthcare organisations, and we did not see a particularly large spike in post-intrusion behaviour, so we did not see evidence that more attackers were getting in; they had all done a reasonable job of keeping the wolf from the door.
But what we did see, because we were watching everything, were changes in how users were working. We saw a rapid pivot to external services, generally services associated with cloud adoption, particularly collaboration tools, and we saw a lot of data moving out to those, which created compliance challenges.
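That kind of shift, data flowing to external services a host has never used before, is observable without any knowledge of the attack itself. Here is a hedged sketch; the service names, volumes and the 10 MB threshold are all assumptions.

```python
# Illustrative only: flag large outbound transfers to external services
# a host has not been seen using before.
from collections import defaultdict

NEW_DEST_ALERT_BYTES = 10 * 1024 * 1024  # assumed: 10 MB to a novel service

def flag_new_egress(flows, known_destinations):
    """flows: iterable of (host, external_service, bytes_out) tuples."""
    totals = defaultdict(int)
    for host, dest, nbytes in flows:
        if dest not in known_destinations.get(host, set()):
            totals[(host, dest)] += nbytes
    return [pair for pair, b in totals.items() if b >= NEW_DEST_ALERT_BYTES]

known = {"laptop-17": {"corp-mail", "corp-files"}}
flows = [
    ("laptop-17", "corp-files", 50_000_000),       # large but known service
    ("laptop-17", "new-collab-tool", 25_000_000),  # large and never seen
]
print(flag_new_egress(flows, known))  # → [('laptop-17', 'new-collab-tool')]
```

The flagged flow may be a perfectly legitimate new collaboration tool, which is precisely the compliance question the interview raises: visibility first, judgement after.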
What do you mean?
Sensitive data was suddenly being sent to third parties. That's not to berate health organisations during a really challenging time; their priority was obviously making sure clinical services were delivered. But in doing so, they also opened up the attack surface, and the potential for attackers to get in increased.
It's important to maintain visibility so you can understand your attack surface, and you can then put in the appropriate procedures, policies and controls to minimise your risk there.