Finding Beacons...with Ruby!

Searching for beacons in network traffic is not a trivial problem to solve. In fact, network anomalies in general are getting harder and harder to detect. The days of just looking for a 'spike' are long gone.

Before I get started, I wanted to apologize for the messy code segments; Blogger isn't the best engine in the world for posting well-formatted code.

What I've been digging into recently is ways of detecting beacons. Visualization tools may help, but it takes some specific knowledge of what to look for before any real patterns start to make sense.

So first off, what's the classic way of detecting a beacon?
Search for a spike every x minutes?
Grep for traffic to a particular host and then look for the difference in time stamps?

While those may work, these ideas assume that you already know the external site the software is beaconing to. What if you don't know exactly what is talking on your network and which external sites are getting hit? Searching through these by hand is simply not an option. That is, unless you've managed to hire hordes of monkeys... I wouldn't count that option out yet.

If you're at any good company, I'd assume that you have all your web traffic tunneled through proxies and that EVERYTHING is logged. I can't stress that last part enough.

Let's take a look at our ingredients:

  • Ruby for this example (any scripting language will do: Perl, Python, or you can amaze us all and write it in C)
  • Proxy Logs
  • A fast processor
  • A ton of RAM

As a side note, I usually try to keep my scripts pretty lightweight and low on I/O. I've found that when dealing with huge log files, unless I'm creating temp files, this is hard to do. Creating temp files would also violate my second rule.

How does this work?
Well, think of this script as being about as elegant as a brute forcer in cryptography. It gets the job done, but it's going to cost you in terms of time and memory.

The way it works is by taking in all the IPs and the sites they've visited and dumping those into a hash.
Second, we take all the times that each particular site was hit and keep track of them. For easy time calculation, I'm using Ruby's Time class.
What we're looking for is the frequency at which these sites are visited, so we're not really concerned with everything that occurred during any single visit. We take all the times and compute the differences between consecutive hits.
Then we begin computing. Crunch, crunch, crunch. For this one, feel free to go on your lunch break and check the results when you get back.
Once all that is over, we'll add in a threshold and a confidence level. Due to network quirks, latency, or just a bad setup, it's not uncommon for a beacon to fail or to run a little late, so we need to account for these things. One thing to keep in mind, however: when the threshold goes up, our confidence should go down.
Then we see how closely these differences match a particular pattern and report back our findings.
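
To make that concrete before we get to the real code, here's a minimal sketch of the idea, using made-up timestamps for a beacon firing roughly every 300 seconds:

visits = [0, 299, 601, 900, 1202].map { |s| Time.at(s) }

# pairwise differences between consecutive visits, in seconds
diffs = visits.each_cons(2).map { |a, b| (b - a).abs }
# => [299.0, 302.0, 299.0, 302.0]

# count how many intervals land within a tolerance window of the period
interval, tolerance = 300, 5
matches = diffs.count { |d| (d - interval).abs <= tolerance }
puts "#{matches} of #{diffs.length} intervals within #{tolerance}s of #{interval}s"

Widen the tolerance window and more noise matches too, which is exactly why confidence should drop as the threshold goes up.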

This solves the general beacon problem. However, there are ways around it. Malware writers may add some randomness to their beacons, or stretch the interval out so the beacon isn't talking multiple times a day. Trying to track down a beacon that only talks once a week or once a month would be nearly impossible. That's comforting. :)
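
A quick illustration (again, with made-up numbers) of why randomness hurts: jitter spreads the differences out, so the tolerance window has to grow to catch them, and the confidence drops with it.

srand(42) # fixed seed so the example is repeatable
times = []
t = 0
10.times { t += 300 + rand(121) - 60; times << t } # ~300s beacon, +/- 60s jitter
diffs = times.each_cons(2).map { |a, b| b - a }

tight = diffs.count { |d| (d - 300).abs <= 5 }  # few land in a +/- 5s window
loose = diffs.count { |d| (d - 300).abs <= 60 } # all land in a +/- 60s window
puts "tight: #{tight}/#{diffs.length}, loose: #{loose}/#{diffs.length}"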

So let's take a look at the source... (and don't be too harsh on the code, this is alpha quality)

Read the logfile in from STDIN and put it into a hash of IP/site pairs and their visit times. I'll need to keep some of the code generic, since there are a billion different log formats and yours may not match the one I've tailored this to.


data = Hash.new
print "Reading Log File"
while (line = STDIN.gets)
  temp = line.split(" ")
  # make sure the line isn't blank, a comment, or a header
  next if temp.empty? || temp[0].include?("#")
  begin
    # Field positions are log-format specific; I'm assuming a line that
    # starts "date time client-ip url" -- adjust these to your own logs.
    date, clock, ip, url = temp[0], temp[1], temp[2], temp[3]
    year, month, day = date.split("-").map { |s| s.to_i }
    hour, min, sec = clock.split(":").map { |s| s.to_i }
    key = ip.to_s + " : " + url.to_s
    time = Time.local(year, month, day, hour, min, sec)
    if data.has_key?(key)
      data[key].push(time)
    else
      data[key] = Array.new.push(time)
    end
  rescue Exception => e
    next
  end
end
print "...done!\n"

We've got our main data structure established here. It's a large hash of arrays, something of this form:

192.168.0.2 : google.com => time1, time2, time3, ...
192.168.0.2 : yahoo.com => time1, time2, time3, ...
192.168.0.1 : google.com => time1, time2, time3, ...

Now for the second part, we'll compute the differences in the times and store those in a hash similar to our first one.

difference = Hash.new
print "Computing Differences"
data.each_key do |key|
  index = 0
  begin
    difference[key] = Array.new
    while index < data[key].length - 1
      diff = (data[key][index + 1].to_f - data[key][index].to_f).abs
      # anything under 5 seconds apart is treated as one visit -- see the note below
      difference[key].push(diff) unless diff < 5
      index += 1
    end
  rescue Exception => e
    next
  end
end
print "...done!\n"


*The reason for this is that when multiple requests are made for images, scripts, etc., all in relation to the same site, the log contains a lot of redundant entries that lead to false positives. We need to make sure an actual revisit has occurred rather than counting every single request. The 5 can be tweaked as necessary depending on the logs.
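
For example (made-up numbers), a single page load can produce a burst of requests fractions of a second apart; the filter leaves only the real revisit intervals:

# one "visit" = a page plus its images/scripts fetched within seconds
diffs = [0.2, 0.4, 0.1, 300.5, 0.3, 0.2, 299.8, 0.5]

filtered = diffs.reject { |d| d < 5 }
# => [300.5, 299.8]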

Now we set our tolerances and run. For this example the tolerance steps from 0 up to a ceiling of 50, which can be adjusted depending on how loose you'd like to be.


tolerance = 0
while tolerance < 50
  difference.keys.each do |key|
    next if difference[key].empty?
    # treat the first interval as the candidate beacon period
    num = difference[key].first
    lower = num - tolerance
    upper = num + tolerance
    hits = difference[key].select { |d| d >= lower && d <= upper }.length
    if hits > 15 # the number of beacons you'd expect to see
      puts tolerance.to_s + " - " + key.to_s
      # report each pair once, at the lowest tolerance that matches it
      difference.delete(key)
    end
  end
  tolerance += 10
end


That's all there is to it! The number of expected beacons can be changed, but obviously the more verbose something is, the easier it is to catch. Lowering that number and raising your tolerances can greatly change the outcome and the confidence of this script.
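
The script just prints the raw tolerance, but if you wanted an explicit confidence number, one simple mapping (purely hypothetical, not in the script above) is to scale it inversely with the tolerance ceiling:

# confidence: 100 at tolerance 0, falling to 0 at the 50-second ceiling
def confidence(tolerance, ceiling = 50)
  [100 - (tolerance * 100 / ceiling), 0].max
end

puts confidence(0)  # => 100
puts confidence(40) # => 20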

The fun part: Let's see if it works.....

Now let's run it on a big logfile to get an idea of what to expect.

On a logfile that is 82.1 Megs and 273653 lines long, it took:
real 0m49.325s
user 0m46.591s
sys 0m2.416s

Which isn't miserable, but let's start looking at some bigger data sets. On a file that is 3.6 Gigs and 11726177 lines long:
real 158m53.342s
user 155m54.865s
sys 2m10.196s

...ouch. Granted, I don't own a horribly fast computer, and my amount of RAM is less than stellar.

More importantly, did we get any results?
0 – 192.168.0.12 : updates.installshield.com
0 – 192.168.0.229 : update.intervideo.com
0 – 192.168.0.76 : updates.installshield.com
0 – 192.168.0.165 : yahoowidget.weather.com
…..
(There were a little over 200 results, which I didn't paste in to save space.)

I don't know about you, but looking for beacons on the network is a lot easier when there are only ~200 results (for a 3.6 Gig logfile) instead of 11726177 lines!

The first number is our tolerance (the lower the better), then the IP, followed by the site that the IP is beaconing to. In this case there are no surprises: there are computers on the network beaconing for updates.

That's all I've got. If there are any changes or modifications that can or should be made to the code, please let me know and I'll keep updating this script accordingly. I'll also be happy to make changes if you flat out think it's wrong :) Hopefully it can shrink in size and gain in efficiency as more eyes see it.

Until next time.

