An incident always has a timeline, whether we see it clearly or not. Some timelines are visible to the naked eye and some require an electron microscope. Different approaches exist for modeling them, such as the Cyber Kill Chain, but here I'd like to delve into the life of an attack and try to look at it from the analyst's angle.

We can say that at any point in time an attack has a past. Detection technologies, and the many bumps that we security professionals put in the road, will notify us about an event happening, and yet there's more to it. Let's take a C&C communication event detected by our next-gen firewall. This doesn't happen on its own; it has to come from somewhere, and that somewhere is malware, since C&C communication is an integral part of how malware operates. These days almost no malware operates alone without phoning home (barring exceptions like Stuxnet). Therefore we can assume that the event in front of us most probably has an earlier malware event attached to it. Yet we didn't see it in our detections. Assuming we had the necessary mechanisms in place, this means that at some point in the past we missed something. It would be great to go back in time and catch that malicious part, wouldn't it?

Well, we actually can, in a sense at least. We can go through our past logs and look for something that's not right. An event might have slipped through our thresholds at the time, but now that we are looking closely at the activity we will treat it with more suspicion. It's therefore important to have a ledger that keeps track of what's happening in our environment. Log management is a more or less well-defined practice in cyber security, but doing it right still requires a bit of investment. Generic logs are usually the way to go, but if we want to dig deeper we need more.
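For instance, once a new C&C indicator surfaces, we can sweep it against whatever log history we have kept. Below is a minimal retro-hunting sketch in Python; the log file, its JSON-lines layout, and field names like `src_ip` and `dst_ip` are assumptions for illustration, not a reference to any specific product.

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical export of firewall/proxy logs, one JSON object per line, e.g.
# {"ts": "2023-05-01T10:22:31Z", "src_ip": "10.0.3.17", "dst_ip": "203.0.113.45", "action": "allowed"}
LOG_FILE = "firewall_logs.jsonl"
C2_INDICATORS = {"203.0.113.45", "198.51.100.7"}   # indicators we have just learned about
LOOKBACK = timedelta(days=30)

def retro_hunt(path, indicators, lookback):
    """Return past connections to known-bad destinations within the lookback window."""
    cutoff = datetime.now(timezone.utc) - lookback
    hits = []
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            ts = datetime.fromisoformat(event["ts"].replace("Z", "+00:00"))
            if ts >= cutoff and event["dst_ip"] in indicators:
                hits.append(event)
    return hits

for hit in retro_hunt(LOG_FILE, C2_INDICATORS, LOOKBACK):
    print(f'{hit["ts"]} {hit["src_ip"]} -> {hit["dst_ip"]} ({hit["action"]})')
```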


What does this "more" look like? Maybe PowerShell logging, or the seventh layer of hell (essentially the same thing). Endpoint or network logs of almost anything could be useful, or just a waste of space, and since this need is well known, several kinds of solutions have made themselves available, such as EDR and NTA. Although third-party solutions give us an easier time in deployment, the tools to achieve this kind of logging exist by default in modern operating systems. I won't go into detail, but this article might give some insight.
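As one example of what's already built in, Windows can record full PowerShell script blocks once the corresponding policy is enabled. Here is a small, hedged sketch that checks that setting from Python; the registry path is the standard ScriptBlockLogging policy key, but treat the snippet as an illustration rather than a deployment tool.

```python
import winreg  # Windows-only standard library module

# Policy key used by PowerShell Script Block Logging (set via Group Policy or directly in the registry)
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"

def script_block_logging_enabled() -> bool:
    """Return True if the EnableScriptBlockLogging policy value is set to 1."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _type = winreg.QueryValueEx(key, "EnableScriptBlockLogging")
            return value == 1
    except FileNotFoundError:
        # The key or value being absent means the policy was never configured
        return False

if __name__ == "__main__":
    print("Script block logging enabled:", script_block_logging_enabled())
```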

So let's get back to this malware of ours. It needs to do something, otherwise it wouldn't be any kind of "-ware". Since environment and conditions are what give behaviour its meaning, we can't say that any activity is malicious or benign in isolation. We need to set the rules of our environment and make them known; rules that are unknown to the denizens of the environment can't be enforced. Without saying that shorts are not allowed, your employees can't know they are not allowed. That means we need to teach our environment's population the behaviors we expect. In our case it's easier to say what good behavior is than to enumerate everything that is bad, so whitelisting is in most cases the better approach. Security tools have many capabilities, but interestingly most of them are not that flexible in these cases. Let's not go down that rabbit hole, though. In short, our rule definition flows as: Behaviour -> Rules -> Detection.
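To make the whitelisting idea concrete, here is a minimal sketch assuming we maintain an allowlist of expected outbound destinations per host role. The roles, destinations, and event fields are illustrative, not taken from any particular tool.

```python
# Expected (whitelisted) outbound destinations per host role; anything else gets flagged.
ALLOWED_DESTINATIONS = {
    "workstation": {"updates.example.com", "mail.example.com"},
    "web_server": {"db.internal.example.com"},
}

def evaluate(event: dict) -> str:
    """Apply the Behaviour -> Rules -> Detection flow to one connection event."""
    allowed = ALLOWED_DESTINATIONS.get(event["host_role"], set())
    if event["destination"] in allowed:
        return "benign"       # matches a known, expected behaviour
    return "suspicious"       # not on the allowlist: worth an analyst's look

# Example: a workstation talking to an unknown external host gets flagged.
print(evaluate({"host_role": "workstation", "destination": "203.0.113.45"}))
```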

If we get back to our malware, what we need is to attribute the C&C communication to an origin. If there's communication, it has to start somewhere. Most origin determinations are easy, but in our case just knowing where this communication started yields only a small part of the bigger picture. Understanding the overall picture is certainly our goal, but time is relative: the picture might look very different now than it did in the past, so change itself has meaning for our investigation. I'll get back to change in a minute, but let's talk about the origin a bit more.
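One mundane but real consequence of "time is relative": the source IP in an old log line maps to a host only through whatever lease it held at that moment. A small sketch of that lookup, assuming DHCP lease history is available as simple records; the data layout and host names are hypothetical.

```python
from datetime import datetime

# Hypothetical DHCP lease history: which host held which IP, and when.
LEASES = [
    {"ip": "10.0.3.17", "host": "DESKTOP-ALICE", "start": "2023-04-28T08:00:00", "end": "2023-05-02T08:00:00"},
    {"ip": "10.0.3.17", "host": "LAPTOP-BOB",    "start": "2023-05-02T09:15:00", "end": "2023-05-06T09:15:00"},
]

def host_for(ip, when):
    """Resolve an IP to the host that held it at that moment, not the host that holds it now."""
    t = datetime.fromisoformat(when)
    for lease in LEASES:
        if lease["ip"] == ip and datetime.fromisoformat(lease["start"]) <= t <= datetime.fromisoformat(lease["end"]):
            return lease["host"]
    return None

# The same IP points at different machines depending on when the event happened.
print(host_for("10.0.3.17", "2023-05-01T10:22:31"))  # DESKTOP-ALICE
print(host_for("10.0.3.17", "2023-05-03T10:22:31"))  # LAPTOP-BOB
```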

Let's say that our origin is a lowly desktop computer with limited access. This contextual information alone tells us a lot about the impact we are dealing with, but we have yet to determine what we are dealing with. This computer somehow started a communication, but how? An investigator, if willing, always has to go deeper. We could, for instance, start a scan on this computer, find something malicious, clean it, and declare the incident resolved. Is it, though?

Malicious intent doesn't arise by accident, so an event labeled as malicious can't be an accident either. The intent, as we see it, can come from a determined adversary or from an automated attacker (a bot). In both cases, gathering as much as we can to know our adversaries is the best course of action. Let's say we go in and gather some more information. We connect to this computer and look around: maybe the process list will give us something, maybe the connection table will lead to an attribution. The conundrum is that a sufficiently sophisticated attacker is indistinguishable from the expected normal. Hopefully, we'll find some evidence that something is, or was, there. With this evidence we now have our first attribution: a direct relationship between two objects, and such relationships will start to form our investigation scope.
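A minimal sketch of that "look around" step, using Python's psutil library to walk the live connection table and tie each remote endpoint back to its process, flagging anything talking to the suspected C&C address (the indicator IP is of course just a placeholder):

```python
import psutil  # third-party: pip install psutil

SUSPECT_IPS = {"203.0.113.45"}  # placeholder C&C indicator

# Walk the live connection table and tie each remote endpoint back to its process.
for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip in SUSPECT_IPS:
        try:
            proc = psutil.Process(conn.pid) if conn.pid else None
            name = proc.name() if proc else "unknown"
            exe = proc.exe() if proc else "unknown"
        except psutil.Error:
            name, exe = "gone", "gone"
        print(f"PID {conn.pid} ({name}, {exe}) -> {conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")
```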

If hope fails and we cannot find a trace, we have to step back and think about what we do have. This is where change comes in. We might be too late: the attacker may be through with their work and may have removed their footprints. But the cost/benefit ratio applies to our attackers as well. If they are that determined to hide their tracks, are they really just script kiddies? Continuous access is very important for any adversary, so removing your traces and your backdoor means you are confident you can get in again. This line of thinking can bring some insight into what we are dealing with, because your environment can tell you who your adversary might be; a bank and an intelligence agency, for example, have very different opinions on this. If you think your adversaries are probably gunning for your Prod_Srv_A, then looking into the relations it has with the rest of your environment might give you a clue. But in the end, a detective without a clue can't solve a murder, and a detective without a corpse won't even know there is one to solve.

Incident response is an iterative process in which we start over many times to gather evidence, connect the pieces, and try to find out the reasons behind them. There are lots of thoughts about this process, and the threat intelligence pyramid is one that stands out by defining goals within it. It more or less defines how deep you want to go. Are you content with just blocking superficial artifacts like IPs and URLs, or do you want to find out who is behind the attack and strengthen your defense through weak-point detection? (I specifically avoid the term vulnerability detection because that is mostly used for software, whereas weak points can be found throughout your system, including people and processes.)

If we get back to our case, we now need some means to gather this data, see it in a new light, and then make assumptions that lead us to the culprit. The challenges here are most of the time a lack of information or a lack of means to process it. The first is countered by threat intelligence providers, although that front is still growing and, I believe, still finding its place. The second was thought to have been solved by SIEM solutions in the past; currently we think SOARs are the answer that will unify IR. They seem very promising but still have interesting puzzles to solve. For example, in the 1980s and 1990s the US worked on similar aggregation platforms in its intelligence world, trying to determine who was planning to do bad things on US soil. Some of those approaches to gathering connections between people, such as linking phone communication records, are still used in today's security products, but the scale problem, the "big ball of wool" as they call it, isn't solved in today's commercial products (The Watchers by Shane Harris, 2010).
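At its core, the "connecting the pieces" step is graph building. Here is a toy sketch of that idea, assuming we already have observations expressed as (entity, relation, entity) triples; the entities, relations, and indicator values are all illustrative.

```python
from collections import defaultdict

# Observations gathered during the investigation, expressed as simple triples.
OBSERVATIONS = [
    ("DESKTOP-ALICE", "connected_to", "203.0.113.45"),
    ("203.0.113.45", "resolves_from", "bad-domain.example"),
    ("DESKTOP-ALICE", "ran_process", "suspicious.exe"),
    ("LAPTOP-BOB", "connected_to", "203.0.113.45"),
]

# Build a simple adjacency map so we can pivot from any artifact to its neighbours.
graph = defaultdict(list)
for src, relation, dst in OBSERVATIONS:
    graph[src].append((relation, dst))
    graph[dst].append((f"reverse_{relation}", src))

# Pivoting on the C&C address shows every host and domain tied to it.
for relation, neighbour in graph["203.0.113.45"]:
    print(f"203.0.113.45 --{relation}--> {neighbour}")
```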

The past of an incident usually means the initial entry and the events leading up to detection, but uncovering it requires deep investigation. We need to understand our adversaries and learn how they prepare for their purposes and plans. That is the essence of incident response applied to the past.

I'll continue exploring the timeline; the next parts will show how the present and the future relate to IR.


Written by Mustafa Mısır, Director of Product Management.