May
28

Twenty months ago, I wanted to try something out, and made a proof of concept in just 3 hours (that's including the time it took to buy the domain networktotal.com and point it to the IP!).

The concept was to have a place where one could upload a pcap and have Snort and Suricata (and other tools), with all the different rule sets (VRT-Registered, VRT-Subscription, ET Open and ET PRO), parse through it and display the results. Other engines like Bro could also be added later…

It would be what VirusTotal is for binaries, just for pcaps… I did not announce my project anywhere and just kept it private, but some of my friends knew about the place. During the first year, I counted around 250 pcaps uploaded from different places around the world (not bad for not announcing it!).

My personal usage of networktotal.com was mostly grabbing random pcaps from different places and uploading them to check for infections the easy way.

First off, I liked the NetworkTotal idea, but keeping the IDS engines and the rulesets up to date took some time. Also, the way Suricata and Snort used to work meant that it took 5 to 10 minutes to process a pcap on the old hardware that NT is running on :/ (you needed to start and stop the engines for each pcap, and just starting them up takes some time). When Suricata went stable with its unix-socket support, processing a pcap with Suricata took around a second on my old hardware (still 5 minutes with Snort). My subscription for VRT-Subscriber rules also ran out, so I dropped Snort from this setup. I might add it back one day, but I would need more powerful hardware and a new VRT-S license…

But I started to develop the NetworkTotal backend code more on a host I set up at home. There were many goals with this, among them learning about different NoSQL/document stores, which I find necessary in today's world of data. I started out with Cassandra, but needed full-text search, so I tried ElasticSearch (ES), but had some trouble getting the Ruby Tire client to do what I wanted. I ended up doing a MongoDB setup alongside the ES setup, and kept that for about 8 months. Now I have got ES to behave like I want it to, but I still have lots to learn about ES.

Anyways, I have now ported NetworkTotal.com to only use Suricata with the Emerging Threats PRO rules. Processing a pcap should normally take around a second, and the events are stored in ElasticSearch. For all the malware I process at home with my cuckoo setup, I send the pcap in the form of <md5sum of the malware>.pcap to NetworkTotal.com, so you can search for events generated by a malware sample on NetworkTotal.com. As I don't have the storage to save all the pcaps, I do save some metadata in ElasticSearch, mainly because I want to look up malware I have uploaded based on events that fired, IPs that it talked to, domains it resolved, User-Agents or URIs. This also uses disk space, so I wonder how long I can keep doing that too. Luckily, ElasticSearch scales linearly, so I'm hoping that will save me somehow…
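To illustrate the kind of lookup this enables, here is a sketch of a query body one could send to an ES _search endpoint. The index layout and field names here are invented for the example; the real NetworkTotal schema is not public.

```python
import json

def build_metadata_query(field, value):
    """Build an ElasticSearch bool query over pcap metadata.

    Field names like "domains" or "events" are hypothetical; adjust
    to whatever mapping your own setup uses.
    """
    return {"query": {"bool": {"must": [{"match": {field: value}}]}}}

# This body would be POSTed to something like http://localhost:9200/pcaps/_search
body = build_metadata_query("domains", "example.com")
print(json.dumps(body, indent=2))
```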

VirusTotal has also begun to run pcaps through Snort and Suricata, but not every malware sample produces network traffic, so there will be times when there are no events etc…

Here is an example from VT: https://www.virustotal.com/en/file/a7a526177c4475d47cdd49ea1b3ead2b/analysis/ where there are no Snort or Suricata events from the network traffic. You can search for the same md5sum on NetworkTotal.com, where a rule that tags this as SpyEye fires: http://www.networktotal.com/search.php?q=a7a526177c4475d47cdd49ea1b3ead2b

I made a simple bash script to upload a pcap to NetworkTotal, and one to search for IDS events on NetworkTotal.

Example of the search bash-script output:

$ nt-search.sh a7a526177c4475d47cdd49ea1b3ead2b
[i] Result URL: http://networktotal.com/search.php?q=a7a526177c4475d47cdd49ea1b3ead2b&json=1
[i] Results:
{
  "events": [
    "[1:2012686:4] ET TROJAN SpyEye Checkin version 1.3.25 or later",
    "[1:2010706:8] ET USER_AGENTS Internet Explorer 6 in use - Significant Security Risk",
    "[1:2406376:303] ET RBN Known Russian Business Network IP (189)"
  ],
  "md5": "a7a526177c4475d47cdd49ea1b3ead2b"
}
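For those who prefer Python over bash, a minimal equivalent of the search script could look like this. It assumes only the search.php?q=<md5>&json=1 endpoint and the "events"/"md5" fields shown above; anything beyond that is guesswork.

```python
import json
import urllib.request

def nt_search(md5sum):
    """Query NetworkTotal for IDS events tied to a malware md5sum."""
    url = "http://networktotal.com/search.php?q=%s&json=1" % md5sum
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def format_events(result):
    """Render the JSON result roughly like the bash script output."""
    lines = ["[i] md5: %s" % result["md5"]]
    lines += ["    %s" % event for event in result["events"]]
    return "\n".join(lines)
```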

I still have some more stuff to fix in the backend code, but having started writing this blog post about 18 months ago, I figured I'll just blog it and write updates as they come :)

I'll see if I can make my cuckoo module more generic and share that (it uploads the <malware-md5sum>.pcap to NT and gets back the events that fired).

Again, I hope this is useful for someone out there. Suggestions for improvements are also appreciated :) I'm also open to discussing what more data could be displayed publicly without showing off information from the uploaded pcaps that could be somehow sensitive (like Microsoft registration keys encoded in the URI of a malware checkin, or the name of a company etc.).

Jan
01

I'm happy to announce that my PassiveDNS has reached version 1.0 (stable)!

For those of you who have played with earlier versions, the biggest change in the latest tags is the log output format:

Old:
1341819126||1.2.3.4||8.8.8.8||IN||www.google.com.||A||173.194.32.7||300

New:
1341819126.845527||1.2.3.4||8.8.8.8||IN||www.google.com.||A||173.194.32.7||300||17

I added microseconds to the unix timestamps, and also added a count field (the last field). If you use caching, the count field shows how many times PassiveDNS has seen a query answer since it last printed it. If you run PassiveDNS with -P 0 (no caching), it should always output 1.

Running PassiveDNS with default options, it will look something like this for a domain:

1341500304.265705||1.2.3.4||8.8.8.8||IN||www.facebook.com.||A||69.171.247.21||45||1

1341779965.656576||1.2.3.4||8.8.8.8||IN||www.facebook.com.||A||69.171.247.21||107||11

This means that in the time PassiveDNS was running, a query for www.facebook.com. returned 69.171.247.21 12 times in total. 11 of those entries happened within the configured "print time" ( -P : seconds between printing duplicate DNS info, default 86400 ).

So if you have any custom tools for parsing the output, you probably need to update them before you upgrade to v1.0. pdns2db.pl, which you will find in the tools/ dir, has been patched to handle the change.
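As a sketch of what such an update amounts to, here is a minimal parser for the new v1.0 format shown above (the dict keys are my own labels for the fields, not anything PassiveDNS defines):

```python
def parse_pdns_line(line):
    """Parse one PassiveDNS v1.0 log line (||-delimited, 9 fields)."""
    fields = line.strip().split("||")
    if len(fields) != 9:
        raise ValueError("not a v1.0 passivedns line: %r" % line)
    ts, client, server, klass, query, rr_type, answer, ttl, count = fields
    return {
        "timestamp": float(ts),   # unix time with microseconds
        "client": client,
        "server": server,
        "class": klass,
        "query": query,
        "type": rr_type,
        "answer": answer,
        "ttl": int(ttl),
        "count": int(count),      # the new count field in v1.0
    }

rec = parse_pdns_line(
    "1341819126.845527||1.2.3.4||8.8.8.8||IN||www.google.com.||A||173.194.32.7||300||17"
)
print(rec["query"], rec["count"])
```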

Now that v1.0 is out, I will work on releasing new versions of PassiveDNS. In versions to come, I will make it so that you can customize the output fields via the command line.

BTW, I have also added a bit more statistics printed when passivedns 1.0 exits. It looks something like this:

-- Total DNS records allocated          : 15726
-- Total DNS assets allocated           : 23259
-- Total DNS packets over IPv4/TCP      : 0
-- Total DNS packets over IPv6/TCP      : 0
-- Total DNS packets over TCP decoded   : 0
-- Total DNS packets over TCP failed    : 0
-- Total DNS packets over IPv4/UDP      : 222139
-- Total DNS packets over IPv6/UDP      : 0
-- Total DNS packets over UDP decoded   : 222133
-- Total DNS packets over UDP failed    : 6
-- Total packets received from libpcap  : 463374
-- Total Ethernet packets received      : 463374
-- Total VLAN packets received          : 0

You can download the 1.0 release as a tar.gz or a zip.

Or you can find the project on github.

Version 1.0 has been tested extensively and should be considered stable and production ready. But if you find any issues, please don’t hesitate to report your findings here.

Hacky New Year by the way!

May
23

A great thing about open source software is that you can make something that works for you, and someone else might add stuff that works for them; combined, you might have something all in all more powerful…

pdns.ui – A Minimalistic WebUI for PassiveDNS

phunold (Philipp Hunold) has made a web GUI for my PassiveDNS :) I cloned it from git and have it up and running here at home. I'm not a web coder, so seeing that someone made a GUI for my PassiveDNS makes me happy! (I would have spent more time doing it than it would be worth.) I've emailed with Philipp and I know that pdns-ui is at an early stage, but I would like to let other people know about the UI, so that they can use it instead of making their own, and maybe come with suggestions on how to improve it, come with patches etc.

Right click and view the image to see pdns-ui in a bigger picture.

So for people who want a web frontend to their PassiveDNS DB, try it out and give feedback to Philipp!

Big thanks Philipp :)

Mar
29

I have pushed PassiveDNS version 0.5.0.

According to the roadmap, I have been at 0.5.0 for a while, and have even started to implement stuff for the 1.5.0 version. But my real aim is the 1.0.0 release, and I have started all the activities for it; what I lack are the exit statistics that I set in the roadmap. I have run PassiveDNS against pcaps with DNS attacks, I'm fuzzing pcaps being read by PassiveDNS etc., so a 1.0.0 is hopefully not that far away :)

Some of the changes since my last blog post (v0.2.9):

* Logging of NXDOMAINs (-Xx -L nxdomain.log)
* DNS over UDP/TCP on IPv4 and IPv6 (used to be just IPv4+UDP)
* Logging to stdout (-L - / -l -), both for NXDOMAINs and other DNS records.
* Implemented some hardening, including checking that the client TID matches the server TID etc.
* Other small optimizations and a fix for a small memleak etc.

The way I have implemented NXDOMAINs in PassiveDNS for now makes them compete for the memory pool with "normal" domains/records. So if you have fastflux, or someone just querying for generated b0gus domains on your network, you might push valid domains out of the cache in favor of an NXDOMAIN. The reason I did this is that it was faster than implementing a separate memory pool for the NXDOMAINs, and it gives the possibility to log NXDOMAINs in the current version without too much hassle. If this way of implementing NXDOMAINs turns out to fight for memory more aggressively than one would like, one can always start two instances of PassiveDNS, one just looking for NXDOMAINs, and the other one looking for the regular domains. As I gain more experience with NXDOMAINs in PassiveDNS and get more feedback, I'll reconsider the implementation if needed :)
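The cache competition described above can be illustrated with a toy LRU cache. This is just an illustration of the eviction effect, not PassiveDNS's actual data structure:

```python
from collections import OrderedDict

class ToyDnsCache:
    """Toy LRU cache where NXDOMAIN entries share the pool with valid records."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()

    def add(self, name, record):
        # Inserting when full evicts the least recently used entry,
        # regardless of whether it is a valid record or an NXDOMAIN.
        if name in self.entries:
            self.entries.move_to_end(name)
        self.entries[name] = record
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)

cache = ToyDnsCache(max_entries=2)
cache.add("www.google.com.", "A 173.194.32.7")
cache.add("www.facebook.com.", "A 69.171.247.21")
cache.add("xkqjwz123.example.", "NXDOMAIN")  # a bogus query pushes out a valid entry
print(list(cache.entries))
```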

One note: the current logfile format will be stable until the 1.5.0 release (that is my intention at least). After that, my plan is to implement a customizable log format, and more fields of interest will also be available. If you have any additional data that you want output, and thoughts about how the output for that data should look, don't hesitate to let me know :)

I ran into a security related bug on my Ubuntu 10.04 which might be triggered by running PassiveDNS. I have emailed the Debian package maintainer, reported the bug to security@ubuntu.com and also filed a bug report. The bug was fixed upstream in ldns a long time ago, so hopefully it will be fixed soon in Ubuntu 10.04 too :)

For reporting issues or making feature requests, please do so here.

Happy DNS Archiving :)

Jan
17

PassiveDNS 0.2.9

I have added some features and changes to PassiveDNS. The most important change is that the output now contains the TTL value, so you need to use the current tools/* (if you use them), as they have also been changed to work with the new output format (or update your own tools).

I also added the ability to specify from the command line the DNS record types that you want to log, and I added support for more record types. PassiveDNS should now be able to track: A, AAAA, CNAME, DNAME, NAPTR, SOA, PTR, RP, SRV, TXT, MX and NS.

Support for chroot and dropping privileges has also been added.

I also added some features to tools/pdns2db.pl while I was at it:
1) You can now process a passivedns.log file in “batch” mode, exiting when finished.
2) You can now specify a file with a list of domains or IPs to skip insertion to the DB.
3) You can now specify a file with a list of PCRE (Perl Compatible Regular Expressions) of “domains/IPs” to skip insertion to the DB.
4) You can now specify a file with a list of domains or IPs to alert on!
5) You can now specify a file with a list of PCRE of “domains/IPs” to alert on!
6) You can now specify a file with a list of domains to whitelist and not alert on.
7) You can now specify a file with a list of PCRE of “domains/IPs” to whitelist and not alert on.

The skiplists are checked first; if the domain/IP is found/matched there, the whitelist and blacklist are ignored and insertion into the DB is skipped.

Next, the whitelists are checked; if a domain/IP is found there, or matches a PCRE that you have defined, it will not be checked against the blacklist.

Last, the blacklists are checked; if a domain/IP is found there, or matches a PCRE that you have defined, the PassiveDNS record is written to the alert file that you specify (default: /var/log/passivedns-alert.log).
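The check order described above can be sketched roughly like this (a simplified Python rendition of the pdns2db.pl logic; the list contents are made up):

```python
import re

def classify(domain, skip_pcres, white_pcres, black_pcres):
    """Return what pdns2db.pl would do with a record for this domain.

    Order matters: skiplist first (no DB insert at all), then
    whitelist (insert, never alert), then blacklist (insert + alert).
    """
    if any(re.search(p, domain) for p in skip_pcres):
        return "skip"        # not inserted into the DB
    if any(re.search(p, domain) for p in white_pcres):
        return "insert"      # whitelisted: blacklist is not consulted
    if any(re.search(p, domain) for p in black_pcres):
        return "alert"       # written to the alert file
    return "insert"

skip = [r"\.local\.$"]
white = [r"(^|\.)google\.com\.$"]
black = [r"evil"]
print(classify("www.evil-domain.com.", skip, white, black))
```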

There are different sources for getting lists of known bad domains. Here is one if you want to test the blacklist functionality: http://isc.sans.edu/feeds/suspiciousdomains_High.txt

I'm pretty far along when it comes to planned features at this stage. Please try out PassiveDNS and beat the crap out of it :) I will probably bump the version to 0.5.0 soon, and from there on it is just testing and testing and more testing before it becomes a "one dot O" release.

If you have any issues with PassiveDNS, please submit them here.

Jan
12

The question was brought up to me late last night on IRC, as the p0fv3 RC was recently announced. This is a short answer to that question:

“People that find the PRADS page and already know p0f or pads may be interested in a comparison or essentially arguments why you would use one over the other.”

First off, it's exciting to see Michal Zalewski back with p0fv3 :) I quickly read through his code yesterday and tested it out, and it's rather interesting how he solves things. The fingerprint database is limited at the moment, but expect that to grow in the near future. I also love the informal output in his applications :)

[PRADS vs PADS]
So, back to the question. First off, pads ("Passive Asset Detection System") uses regexp syntax to look for common bytes in the payload to identify the server application. So if the server says "Server: Apache/2.2.3 (Linux/SUSE)", that is collected as the service running on the server port where it was detected. The "rules" can be written more specifically for each piece of server software, but are rather general and few today. Some pads "rules" look for ASCII strings, and some for different bytes in hex etc., to identify things like SSL/TLS. Pads is no longer actively developed by the original author, but I do maintain a fork of the last version with enhancements added.

PRADS extends the way pads does asset detection. We have built IPv6 support into PRADS, so it also detects assets listening on IPv6 addresses. We also have built-in connection tracking, so that we can cut off detection in a stream after a certain number of packets or bytes seen from the client or server. This is to avoid looking for server/client assets in connections that transfer big files or are encrypted etc. Most "banners/identifiers" are in the first packets, so limiting how many packets in a stream to do detection on helps performance.
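The cutoff idea can be sketched like this (a simplified illustration; PRADS itself is written in C, and the thresholds here are invented for the example):

```python
class FlowTracker:
    """Toy per-flow tracker that stops payload inspection after a cutoff."""
    def __init__(self, max_packets=10, max_bytes=16384):
        self.max_packets = max_packets
        self.max_bytes = max_bytes
        self.packets = 0
        self.bytes = 0

    def should_inspect(self, payload_len):
        """Count the packet and say whether detection should still run."""
        self.packets += 1
        self.bytes += payload_len
        return self.packets <= self.max_packets and self.bytes <= self.max_bytes

# After 3 packets, detection is cut off for the rest of the flow.
flow = FlowTracker(max_packets=3, max_bytes=4096)
results = [flow.should_inspect(n) for n in (200, 141, 475, 1460)]
print(results)
```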

To extend pads a bit, we also added detection of client applications, using the same method as for detecting server applications.

My future thoughts on enhancing the pads/PRADS asset rules are to make them more like the Snort/Suricata rule language, and to use fast pattern matching before invoking the pcre engine etc. BTW, pads does no OS fingerprinting per se.

[PRADS vs p0f]
PRADS TCP fingerprinting was based on the p0fv2 approach, as p0f had the fingerprint DB, and we thought that reusing the fingerprints would make it easier for people to migrate if they wanted, instead of recollecting and adding fingerprints. PRADS also added some touches of its own (for IPv6 etc.) in the way we match the fingerprints (and fuzz the matching). We have thought about extending the fingerprints and rewriting them, but that is for the future; right now they are doing a good job. We also apply all the p0fv2 ways of fingerprinting to the whole TCP session, from the syn to the rst/fin, whereas p0fv2 could only use one method at a time, depending on how you started it. PRADS outputs all the info it gathers and leaves the final correlation to the end user/program. Good examples of that are prads-asset-report and prads2snort, which add weight to each type of fingerprint, ranking the syn and syn+ack higher than stray-ack, rst and fin etc. You can also base the final guess on client or server applications too: say, if the User-Agent contains "Linux", "Windows NT 6.1" or "Macintosh; Intel Mac OS X 10.7" etc., or if the Server string of the web server is "Microsoft-IIS 6.0", "Apache 2.2.15 (FreeBSD)" or "Apache 2.2.3 (Red Hat)" etc.
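The weighting idea could look something like this. The weights are invented for illustration; prads-asset-report has its own:

```python
# Hypothetical weights per fingerprint type: syn/syn+ack ranked
# higher than stray-ack, rst and fin, as described above.
WEIGHTS = {"syn": 4, "synack": 4, "ack": 1, "rst": 1, "fin": 1}

def best_os_guess(observations):
    """observations: list of (fingerprint_type, os_guess) for one asset.

    Sum the weight of every fingerprint that voted for an OS and
    return the OS with the highest total score.
    """
    scores = {}
    for fp_type, os_guess in observations:
        scores[os_guess] = scores.get(os_guess, 0) + WEIGHTS.get(fp_type, 0)
    return max(scores, key=scores.get)

obs = [("syn", "Linux 2.6.x"), ("ack", "Windows XP"), ("fin", "Windows XP")]
print(best_os_guess(obs))
```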

The p0fv3 TCP fingerprints are new in the way they are written: a new fingerprint file format that makes it easy to add different types of fingerprints into one and the same file (TCP/HTTP/SMTP etc.). The most significant enhancement in the TCP fingerprints that I see is the MSS and MTU multiplier field. p0fv3 also detects new quirks not measured by p0fv2. The rules are now also more human readable. Example:

# RULE
label = s:unix:Linux:2.6.x
sig = *:64:0:*:mss*4,6:mss,sok,ts,nop,ws:df,id+:0

# Will match:
.-[ X.X.X.X/58435 -> Y.Y.Y.Y/22 (syn) ]-
|
| client = X.X.X.X/58435
| os = Linux 2.6.x
| dist = 9
| params = none
| raw_sig = 4:55+9:0:1460:mss*4,6:mss,sok,ts,nop,ws:df,id+:0
|
`----

The way the TCP fingerprints are matched has also changed a bit, and I believe Michal Zalewski has done this for good reasons and that it will enhance detection.

Besides the new TCP fingerprint changes, p0fv3 also has application layer detection. I looked at the HTTP stuff, and p0fv3 also matches on the HTTP header order and doesn't blindly trust the User-Agent, as we do in PRADS. We have thought about extending the "rule/signature" format in PRADS to be more Snort/Suricata-like, so you can have more content matches etc., but more accuracy can be achieved today using the pcre language, to verify header order etc. before blindly trusting the UA. pcre is way too expensive used alone, I think, so organizing the signatures/rules better internally and having something like a fast_pattern matcher would help a lot. Quick pcre example for a User-Agent with a specific HTTP header order:

# Detects Firefox/3.6.X with HTTP header order to add confidence in the match.
# PRADS rule:
http,v/MFF 3.6.X/$1//,\r\nHost: .*\r\nUser-Agent: Mozilla\/5\.0 (.*Firefox\/3\.6\..*)\r\nAccept: .*\r\nAccept-Language: .*\r\nAccept-Encoding: .*\r\nAccept-Charset:

Running it in PRADS on an old pcap gives me:

# Client IPs redacted just to be kind
[client:MFF 3.6.X (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/10.04 (lucid) Firefox/3.:80:6],[distance:8]
[client:MFF 3.6.X (X11; U; Linux x86_64; en-GB; rv:1.9.2.12) Gecko/20101027 Ubuntu/10.10 (maverick) Firefox:80:6],[distance:11]
[client:MFF 3.6.X (Windows; U; Windows NT 5.1; de; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 ( .NET CLR 3.:80:6],[distance:10]
[client:MFF 3.6.X (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR :80:6],[distance:14]
[client:MFF 3.6.X (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Linux Mint/10 (Julia) Firefox/3:80:6],[distance:15]
[client:MFF 3.6.X (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/10.10 (maverick) Firefox:80:6],[distance:9]
[client:MFF 3.6.X (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12:80:6],[distance:6]
[client:MFF 3.6.X (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6:80:6],[distance:12]
[client:MFF 3.6.X (Windows; U; Windows NT 6.1; es-ES; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12:80:6],[distance:14]
[client:MFF 3.6.X (X11; U; Linux x86_64; en-US; rv:1.9.2.10) Gecko/20101005 Fedora/3.6.10-1.fc14 Firefox/3.:80:6],[distance:8]
[client:MFF 3.6.X (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/10.04 (lucid) Firefox/3.:80:6],[distance:12]

The whole User-Agent is not grabbed, and we need to extend that in the future. But the pcre language makes it possible to match on as much content as you want, to get the confidence you need in your signatures/rules for detecting assets. PRADS looks for client and server applications on all ports, over both UDP and TCP, and for IPv4 and IPv6.

[PRADS vs The World]
Right now we are working on adding DHCP OS fingerprinting and ICMP OS fingerprinting. DHCP is pushed to the git master on github, but is not fully integrated into the PRADS core yet; printing and matching are working though, so you can help add fingerprints if you want :) The ICMP part is tricky, as I want to fingerprint on the protocol layer and also on the payload, so I kind of have to combine the p0f way with the pads way of detecting and matching.

PRADS also has lots of other stuff, like connection tracking/flow gathering with output compatible with cxtracker and sancp. I have also been working on my passivedns project, and I intend to port the relevant functionality over to PRADS, so we can have domain names mapped with assets too.

p0fv3 has an API so you can talk to it to fetch relevant info about the IPs it knows about. I see p0fv3 with this functionality aimed at mail and web servers etc., to determine if it is spam or ham coming their way, but you can use it in lots of cool ways.
I know PRADS is used in much the same way by people I have talked to. An example that Kacper put up can be found at http://prads.delta9.pl/. On the roadmap for upcoming PRADS releases, we have access to assets via shared memory. That will make it easier to extract current info from the running PRADS process. PRADS also ships with prads2db.pl, which parses a prads asset log-file and inserts the info into a DB so you can query it for info.

The PRADS philosophy is something like: "If it can be detected passively, PRADS should probably do it."

So if you are comparing in order to decide which application to go for, I would say use them all, and correlate the knowledge that each tool gives you. You can even add the output from the active fingerprinting tool nmap into the mix.

That said, much of my view on PRADS comes from the fact that I use it in my Network Security Monitoring setup, and from my wish to "know as much as possible about my assets". If you have any wishes or suggestions, good or bad etc., feel free to contact us.

E

Jan
09

I have been looking into malware traffic that is hard to make signatures for in a "regular" way. I'm not a malware reverser, so I don't dig into a malware sample to determine byte-tests and jumps etc. in binary protocols. This led me to use a lot of flowbits at first when making my sigs, but the performance in Snort and Suricata was "crap", to say it nicely. So I talked to Victor Julien, lead programmer of Suricata, discussing implementing packet and byte counting in Suricata. I want to count each packet sent by client and server, and the total number of bytes sent by client and server. Talking back and forth, Victor convinced me that it might be best to go for byte counts on reassembled streams. So I added a feature request to Suricata. I have since updated the feature request to add the packet and byte counters, as I think they will be of great use.

When I talked to Matt Jonkman (Emerging Threats Pro), he pointed me to flowint in Suricata as a way to solve my packet counting. So in Suricata 1.1.1, you can do something like this to initialize the packet counters:

# Initialize the packet counter (Suricata 1.1.1 and some older versions)
#alert ip $HOME_NET any -> $EXTERNAL_NET any (msg:"Generic Client Established Flow IP Packet Counter set"; flow:established,from_client; flowint:client_packet,notset; flowint:client_packet,=,0; flowbits:noalert; classtype:not-suspicious; sid:1; rev:1;)

#alert ip $EXTERNAL_NET any -> $HOME_NET any (msg:"Generic Server Established Flow IP Packet Counter set"; flow:established,from_server; flowint:server_packet,notset; flowint:server_packet,=,0; flowbits:noalert; classtype:not-suspicious; sid:2; rev:1;)

In Suricata 1.2dev (rev 4c1e417), which I did my tests for this blog on, and newer, you don't need to initialize the counter, as it will automagically be initialized to zero, so you don't need sid:1 and sid:2:

## Generic packet counter: (This could be better done internally in Suricata/Snort? and not with rules?)
alert ip $HOME_NET any -> $EXTERNAL_NET any (msg:"Generic Client Established Flow IP Packet Counter"; flow:established,from_client; flowint:client_packet,+,1; flowbits:noalert; classtype:not-suspicious; sid:3; rev:1;)

alert ip $EXTERNAL_NET any -> $HOME_NET any (msg:"Generic Server Established Flow IP Packet Counter"; flow:established,from_server; flowint:server_packet,+,1; flowbits:noalert; classtype:not-suspicious; sid:4; rev:1;)

So, what can you do with packet counters?

First off, let's look at some generic rules I made up to test with, which basically limit detection in streams to the first 29 packets from the client:

# GENERiC GET
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"GENERIC GET (classic)"; flow:from_client,established; content:"GET "; depth:4; content:!"connection: keep-alive"; nocase; http_header; classtype:not-suspicious; sid:5; rev:1;)

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"GENERIC GET (flowint)"; flow:from_client,established; flowint:client_packet,<,30; content:"GET "; depth:4; content:!"connection: keep-alive"; nocase; http_header; classtype:not-suspicious; sid:6; rev:1;)

# GENERiC UA
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"GENERIC User-Agent (classic)"; flow:from_client,established; content:"User-Agent: "; http_header; content:!"connection: keep-alive"; nocase; http_header; classtype:not-suspicious; sid:7; rev:1;)

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"GENERIC User-Agent (flowint)"; flow:from_client,established; flowint:client_packet,<,30; content:"User-Agent: "; http_header; content:!"connection: keep-alive"; nocase; http_header; classtype:not-suspicious; sid:8; rev:1;)

Sids 5 and 6 look for an HTTP GET request that is not an HTTP keep-alive. Sids 7 and 8 look for a User-Agent in non-keep-alive HTTP requests. Common to the flowint versions of the rules is that they are limited to the first 29 packets in an established flow. Running Suricata against 2009-04-20-09-05-46.dmp etc. shows some interesting results:

Num  Rule  Gid  Rev  Ticks       %      Checks  Matches  Max Ticks  Avg Ticks  Avg Match  Avg No Match
---  ----  ---  ---  ----------  -----  ------  -------  ---------  ---------  ---------  ------------
1    4     1    1    1695335708  67.74  510720  510720   6412616    3319.50    3319.50    0.00
2    3     1    1    581354624   23.23  508970  82175    3602972    1142.22    3061.99    772.59
3    7     1    1    135943292   5.43   7900    2352     499972     17208.01   16156.62   17653.74
4    5     1    1    43040648    1.72   3313    2517     199052     12991.44   16247.74   2694.82
5    8     1    1    29172972    1.17   7900    2352     434592     3692.78    6588.51    2465.18
6    6     1    1    17917112    0.72   3313    2517     353684     5408.12    6528.93    1864.06

Sorry for the formatting :)
First, if we look at sids 5 and 6, we see that they were both checked 3313 times and matched 2517 times. If we look at total ticks, sid 5 uses 43040648 ticks and sid 6 (flowint) uses 17917112 ticks, roughly a 2.4x difference. Average ticks are 12991.44 for sid 5 versus 5408.12 for sid 6 (flowint).

Looking at sids 7 and 8, we see that they were both checked 7900 times and matched 2352 times. If we look at total ticks, sid 7 uses 135943292 ticks and sid 8 (flowint) uses 29172972 ticks, roughly a 4.7x difference. Average ticks are 17208.01 for sid 7 versus 3692.78 for sid 8 (flowint).
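As a quick sanity check, the speedups can be computed straight from the total-ticks column of the profiling output above:

```python
# Total ticks from the profiling table above (classic vs flowint rules).
sid5, sid6 = 43040648, 17917112    # GENERIC GET, classic vs flowint
sid7, sid8 = 135943292, 29172972   # GENERIC User-Agent, classic vs flowint

print("GET speedup:        %.2fx" % (sid5 / sid6))
print("User-Agent speedup: %.2fx" % (sid7 / sid8))
```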

A basic conclusion from this test is that the rules with the flowint check are faster and will give you the same alerts.
But if we look at the ticks sids 3 and 4 use to count all the packets, they are high in total, but low on average. So they are not expensive per check, but since they are checked (and possibly incremented) for each packet, the total ticks are relatively high. Having this in the core of Suricata and Snort would probably make it less expensive (hint hint).

So what more c00l stuff can we do with packet counters?

Some malware I stumbled upon will give you an example (mostly seen in the Gheg spam bot, aka Tofsee/Mondera):
b31e4624cdc45655b468921823e1b72b
3c453e40ff63da3c2a914c29b6c62ee0
e8034335afb724d8fe043166ba57cd23

It seems to communicate in a binary (encrypted) way, but looking at the 5 different pcaps I got, I saw a pattern, and my flowint counters came to good use. It seems like the client and server send packets with specific payload sizes in different parts of the communication. I did not see any obvious content to match on, so content matches didn't seem trivial, and this is a great way to demonstrate my point: flowint + packet counters to the rescue! Here is a tcpdump output of the traffic on port 443 (not including the port 22050 traffic, which is much longer, but the start is the same), so you can see the packet sizes and the order they come in, in this short session:

reading from file b31e4624cdc45655b468921823e1b72b.pcap, link-type EN10MB (Ethernet)
03:47:02.571111 IP 192.168.1.10.1031 > 216.246.8.230.443: Flags [S], seq 910650996, win 65535, options [mss 1460,nop,nop,sackOK], length 0
03:47:02.608784 IP 216.246.8.230.443 > 192.168.1.10.1031: Flags [S.], seq 442582883, ack 910650997, win 5840, options [mss 1380,nop,nop,sackOK], length 0
03:47:02.608977 IP 192.168.1.10.1031 > 216.246.8.230.443: Flags [.], ack 1, win 65535, length 0
03:47:02.646959 IP 216.246.8.230.443 > 192.168.1.10.1031: Flags [P.], seq 1:201, ack 1, win 5840, length 200
03:47:02.647342 IP 192.168.1.10.1031 > 216.246.8.230.443: Flags [P.], seq 1:142, ack 201, win 65335, length 141
03:47:02.685098 IP 216.246.8.230.443 > 192.168.1.10.1031: Flags [.], ack 142, win 6432, length 0
03:47:02.718986 IP 216.246.8.230.443 > 192.168.1.10.1031: Flags [P.], seq 201:676, ack 142, win 6432, length 475
03:47:02.718999 IP 216.246.8.230.443 > 192.168.1.10.1031: Flags [F.], seq 676, ack 142, win 6432, length 0
03:47:02.719268 IP 192.168.1.10.1031 > 216.246.8.230.443: Flags [.], ack 677, win 64860, length 0
03:47:02.719584 IP 192.168.1.10.1031 > 216.246.8.230.443: Flags [F.], seq 142, ack 677, win 64860, length 0
03:47:02.757350 IP 216.246.8.230.443 > 192.168.1.10.1031: Flags [.], ack 143, win 6432, length 0

And here is how I sigged it:

# Backdoor:Win32/Tofsee (aka: Gheg / Mondera)
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible Tofsee server Packet 2 (200 Bytes)"; flow:established,from_server; flowint:server_packet,=,2; dsize:200; flowbits:set,Tofsee_SERVER_200; flowbits:noalert; classtype:trojan-activity; sid:9; rev:1;)

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"Possible Tofsee client Packet 3 (141 Bytes)"; flow:established,from_client; flowint:client_packet,=,3; dsize:141; flowbits:isset,Tofsee_SERVER_200; flowbits:set,Tofsee_CLIENT_141; flowbits:noalert; classtype:trojan-activity; sid:10; rev:1;)

alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible Tofsee server Packet 4(475 Bytes)"; flow:established,from_server; flowint:server_packet,=,4; dsize:475; flowbits:isset,Tofsee_CLIENT_141; classtype:trojan-activity; sid:11; rev:1;)

Sid 9 looks only at the 2nd packet in an established flow from the server (C&C), and the packet has to have payload size/dsize 200. It sets the flowbit Tofsee_SERVER_200 if this hits, and the rule has noalert, because this check alone could easily trigger a false positive. So we have to do some more checks. Sid 10 checks only client packet 3; it has to have a payload size/dsize of 141, and the flowbit Tofsee_SERVER_200 has to be set for it to match. Sid 10 is also noalert, as we can still check some more, so as not to be spammed by falses. Then sid 11 checks that server packet 4 has payload size/dsize 475 and that the flowbit Tofsee_CLIENT_141 is set. Now we can give an alert, as this would probably be a unique set of conditions. So testing again with our 2009-04-20-09-05-46.dmp test pcap, we get:

Num  Rule  Gid  Rev  Ticks       %      Checks  Matches  Max Ticks  Avg Ticks  Avg Match  Avg No Match
---  ----  ---  ---  ----------  -----  ------  -------  ---------  ---------  ---------  ------------
1    4     1    1    1727862376  63.39  510720  510720   14059784   3383.19    3383.19    0.00
2    3     1    1    508719672   18.66  508970  82176    3689732    999.51     2830.58    646.95
3    7     1    1    140271824   5.15   7900    2352     1013800    17755.93   18570.93   17410.42
4    9     1    1    101662288   3.73   28419   0        6625384    3577.26    0.00       3577.26
5    11    1    1    84264720    3.09   32938   0        612848     2558.28    0.00       2558.28
6    10    1    1    71553560    2.62   32938   0        576132     2172.37    0.00       2172.37
7    5     1    1    42053248    1.54   3313    2517     805736     12693.40   15831.10   2771.81
8    8     1    1    31547660    1.16   7900    2352     153972     3993.37    7039.04    2702.21
9    6     1    1    17944504    0.66   3313    2517     292508     5416.39    6476.95    2062.83

Overall, sids 9, 10 and 11 did not do that badly here. And the best thing is, they all have 0 matches. I ran this on many of my test pcaps, and I have not come close to a false positive. Sid 10 seems to fire sometimes, but the others don't, so this is a rather unique combination of packets in a stream, I guess, and a way to sig malware like this. We could also add a check for the TCP PUSH flag in sids 9, 10 and 11 etc. to be more accurate if we need to.
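To make the chaining explicit, here is a minimal Python sketch of the same three-stage check as a state machine. The packet tuples and the function name are purely illustrative (this is not how Snort/Suricata evaluate rules internally); it just shows why all three conditions have to line up before anything fires:

```python
# Minimal sketch of the sid 9/10/11 flowbits chain: the Nth packet in each
# direction must have an exact payload size before the final alert fires.
# Packet tuples (direction, payload_size) are hypothetical, for illustration.

def tofsee_match(packets):
    """Return True if the flow matches the 3-stage Tofsee pattern."""
    server_n = client_n = 0          # per-direction packet counters (flowint)
    server_200 = client_141 = False  # flowbits
    for direction, size in packets:
        if direction == "server":
            server_n += 1
            if server_n == 2 and size == 200:                   # sid 9
                server_200 = True
            elif server_n == 4 and size == 475 and client_141:  # sid 11
                return True
        else:
            client_n += 1
            if client_n == 3 and size == 141 and server_200:    # sid 10
                client_141 = True
    return False

# A flow matching the pattern (handshake packets carry no payload):
flow = [("client", 0), ("server", 0), ("client", 0),
        ("server", 200), ("client", 141), ("server", 0), ("server", 475)]
print(tofsee_match(flow))  # True
```

Dropping any one of the three sizes, or reordering the packets, makes the whole chain fail, which is what keeps the false positive rate down.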

So, the proof of the pudding: running it against a pcap of the malware:

Num  Rule  Gid  Rev   Ticks      %  Checks  Matches  Max Ticks  Avg Ticks  Avg Match  Avg No Match
---  ----  ---  ---  ------  -----  ------  -------  ---------  ---------  ---------  ------------
  1     3    1    1  443120  33.03     165      158     102108    2685.58    2731.72       1644.00
  2    11    1    1  310420  23.14     259        2       2860    1198.53    2478.00       1188.58
  3     4    1    1  302944  22.58     269      269      15376    1126.19    1126.19          0.00
  4    10    1    1  257896  19.22     259        3      16484     995.74    7446.67        920.14
  5     9    1    1   27088   2.02      10        3       7448    2708.80    5080.00       1692.57

Events:

[**] [1:11:1] Possible Tofsee server Packet 4(475 Bytes) [**] {TCP} 216.246.8.230:443 -> 192.168.1.10:1031
[**] [1:11:1] Possible Tofsee server Packet 4(475 Bytes) [**] {TCP} 84.16.252.136:22050 -> 192.168.1.10:1032

My Tofsee rules fire on all 5 pcaps I looked at initially (and lots more pcaps I tested after that), so hopefully it will fire on all current Tofsee traffic.

I also replied to an e-mail on the snort-users list on the 3rd of November, making the same feature request as I did for Suricata. No one followed up :/ The e-mail should probably be directed to the snort-devel list sometime in the future…

I hope this post has been useful, and hopefully we can get some more flowint rules out there, and maybe even get native packet and byte counting in Snort and Suricata one day :)

Dec
30

Security thoughts for 2012+

Quoting Richard Bejtlich: “Prevention will eventually fail!”

And I have always agreed. Accidents do happen; the world is not perfect. So when companies that really spend time and money on security get breached (RSA, Lockheed, Google, [place your company here?]), you can work from the theory that you will eventually get breached too.

When you realize and accept that, you may need to redefine the way you think about IT security. You should prepare for the worst, so identifying what "the worst" would be for you (your company), and then identifying your most critical assets, should be at the top of your list, and you should start focusing your effort on securing those the most.

Limit the users that have access to the most critical assets (and who work on sensitive projects etc.). These users also need special attention when it comes to awareness training and follow-up. They should also have good communication with the security staff, making it easy to report anything that seems suspicious and to get positive feedback no matter what. They are an important part of picking up security issues where your technology fails! So you need them.

The most critical assets need to be monitored as close to real-time as it gets. The time from incident detection to response should be kept to a minimum, even outside working hours and on weekends.

Then the users who have access to these critical systems should also get special attention/hardening on their OSes etc. Use a modern operating system, enable the security functionality already there, and make sure that executables can't be executed from temporary directories etc. When you have the basic security features in place (including anti-virus), you should start looking at centralized logging and alerting on suspicious activity in the logs.
You should also look into implementing different ways of monitoring anomalies in user behaviour. When do they normally log on? From where do they normally log on? Are they fetching lots of documents from the file servers? And did they access the fake "secret document" that is there just for catching suspicious activity? (You need to define your own anomalies.)

When the inner core (most valued assets + their users) is "secured", you should strive to maintain an acceptable level of security on the rest of the corporate office network and, just as importantly, on the public-facing part. Compromises here can be used to escalate into the "inner core" or to damage your reputation and business affairs, so keeping an acceptable level of security here "as always" is good.

As "Prevention will eventually fail!", you need to have sufficient logging up and running. So when you do have an incident, the analyst has enough data to work with; this will also keep the cost down, as the time it takes to handle an incident will be lower. I'm mostly into Network Security Monitoring, so for me, NetFlow-type data, IDS events, full packet capture, proxy logs and DNS query logs are some of the key network logs that will help me. On the host side of logging, the more logging the better: web, email, proxy, spam, anti-virus, file access, local client logs, syslogs/eventlogs, and so on…

And remember: if you can't spot any badness, you are not looking hard enough :)
I always work on the theory that something in my networks is p0wned. That keeps me on my toes and keeps me actively finding new ways to spot badness.

With that – I wish you all a hacky new year!

Dec
08

It has been a while since I had time to code on my C projects. But last week I got some time, and used it to get PassiveDNS into a state where I'm more relaxed about it. The previous version (v0.1.1) used to spit out all the DNS data it saw. The latest version caches DNS data internally in memory and only prints out a DNS record when it sees it for the first time, or, if it is an active domain, it prints it out again after 24 hours and so on (once a day). The previous version would give me gigabytes of DNS data daily in my test setup, while this version gives me about 2 megabytes. This version also just gives you A, AAAA, PTR and CNAME records at the moment. I'm open to suggestions for more (use cases would be great too!).

In my tests, and in feedback from people who have tried it, PassiveDNS is very resource friendly when it comes to CPU usage (more or less idling). In the current version (v0.2.4) there is no limit on memory usage implemented, so if your network sees a lot of DNS traffic, you might end up using some hundreds of megabytes of RAM for the internal cache. The most I've seen is around 100 MB at the moment. My plan is to implement some sort of "soft limit" on memory usage, so that you can specify the maximum amount of memory PassiveDNS should use. The "downside" of this, though, is that PassiveDNS would have to expire domains from its cache faster. That might result in bigger log files with duplicate entries. When I say "downside", it is not a real downside as I see it. From my tests with the example scripts pdns2db.pl and search-pdns.pl, it is not much of a problem keeping up with insertions to the DB (MySQL), and your last-seen timestamp will be a bit more accurate. I guess this kind of data, though, is better suited for a NoSQL solution if you are collecting lots of it.
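For the curious, the output-suppression idea described above can be sketched in a few lines of Python. The class and field names are illustrative, not the actual PassiveDNS C internals:

```python
import time

# Rough sketch of PassiveDNS-style output suppression: log a record the
# first time it is seen, then at most once per 24 hours while it stays
# active. The cache layout and names here are illustrative only.

REPRINT_INTERVAL = 24 * 60 * 60  # seconds

class DnsCache:
    def __init__(self, now=time.time):
        self.now = now            # injectable clock, handy for testing
        self.last_printed = {}    # (query, type, answer) -> last log time

    def observe(self, query, rtype, answer):
        """Return True if this record should be logged now."""
        key = (query, rtype, answer)
        ts = self.now()
        last = self.last_printed.get(key)
        if last is None or ts - last >= REPRINT_INTERVAL:
            self.last_printed[key] = ts
            return True
        return False
```

A soft memory limit would then just mean evicting the oldest entries from the cache when it grows past a configured size, at the cost of occasional duplicate log lines, which is exactly the trade-off discussed above.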

If you have read this, and you are into Network Security Monitoring, and you don't use passive DNS in your work, I recommend you Google it and read a bit about it.

Nov
20

Thanks to Ian Firns, who has implemented custom output formatting (sancp-like), pcap indexing and pcap capturing (daemonlogger-style)…!

Starting from commit 6b32fb24db, cxtracker can now, in addition to writing flow data, also do packet capturing and output index data about where in the pcap(s) a flow starts and ends. This should potentially bring down the time needed to carve a session out of a big pcap. Right now, all this is just in beta, but the functionality is there, and there is also an example perl script to carve out a session based on the index data.

Output fields of interest:
%spf : pcap file containing the start packet of the session
%spo : pcap file offset of the start packet of the session
%epf : pcap file containing the last packet of the session
%epo : pcap file offset of the last packet of the session

Example of indexed pcap output, using "%spf|%spo|%epf|%epo":
"/tmp/test1.pcap.1321821603|10115|/tmp/test1.pcap.1321821809|62704"

So, basically, if you have a 1 GB pcap file, normally you would use tcpdump with a BPF filter to carve out the session you were looking for, reading and searching through the whole 1 GB pcap file.

With this addition to cxtracker, you are now able to seek right to the start byte of the session and carve from there until the end byte of the session. So if the session starts, say, 450 MB into the pcap and ends at 550 MB into the pcap, you basically only have to read and carve through 100 MB of pcap data. In the example perl script (cxt2pcap.pl), the file handle is opened, it seeks to the right place in the pcap (_not_ reading 450 MB of data from your disk), reads 100 MB of data from disk while carving+filtering, and then closes the file handle.

We would love to get some feedback here, and to have people test it. Again, it is still beta, so be aware :)

Note: Indexing pcap files is nothing new; the sancp project added similar features, but they were never properly released.

Apr
29

For those of you not familiar with the concept of Passive DNS, there are lots of stuff on it on the intertubes…

Just some of the links:
Some use cases: http://conferences.npl.co.uk/satin/presentations/satin2011slides-Rasmussen.pdf
A public passive dns db: http://www.bfk.de/bfk_dnslogger.html?query=sans.org#result
Or just click here: http://lmgtfy.com/?q=passivedns

I have not found any good tools yet that let you build your own passive DNS DB, so I have started to walk down that path…
First off, I have coded a DNS sniffer (passivedns), and I have ported the same functionality over into PRADS. All code is in beta at the moment.

I am announcing this release, so if anyone is interested, I will take input on the output format :)
My first tests show that even the passive DNS data collected on a small network is too much… My plan is to implement an in-memory "state" so that it doesn't print out the same record more than X times over a time interval (say, if a record stays the same, just print it once a day, but if it changes, print it immediately). When that is done, I'll write a parser to feed it into a DB, and a query tool to fetch passive DNS records on request.

Feedback is always welcome!

Mar
02

I did this several years ago, but when I switched to full packet capture I no longer had the need for catching pcaps of traffic firing a rule.

You can do this with the tag option in Snort. If you want to know more, please read README.tag.

I will present you with a signature that will log the first 1000 bytes or 100 seconds (whatever comes first!) after the packet that triggered the event. I'm looking for a SYN flag in a TCP session and I start my logging from there ("0,packets" means that there is no limit on the number of packets).

alert tcp 85.19.221.54 any <> $HOME_NET any (msg:"GL Log Packet Evil-IP 85.19.221.54 (gamelinux.org)"; flags:S; tag:session,1000,bytes,100,seconds,0,packets; classtype:trojan-activity; sid:201102011; rev:1;)

I use unified2 as the output plugin for Snort (something Sourcefire 3D also does, IIRC), so I need to fetch the pcap from the unified log. Snort 2.9.0 and newer ships with a new tool that will help you here: u2boat. This will carve the pcaps out of the unified log:

# u2boat /var/log/snort/<unified.log.timestamp> /tmp/snort.pcap

From there, you can read /tmp/snort.pcap with tcpdump or wireshark etc., or just fetch the evil-IP packets:

# tcpdump -r /tmp/snort.pcap -w /tmp/Evil-85.19.221.54-traffic.pcap 'host 85.19.221.54'

If you love it in console, you can read the pcap with tcpflow etc:

# tcpflow -c -r /tmp/Evil-85.19.221.54-traffic.pcap

I could not seem to verify that "0,packets" actually works. I also added the following line to my snort.conf:

config tagged_packet_limit: 0

But again, I am not sure if it works.

I wanted to do some more testing before releasing this blog post, but it has been sitting around for a while, so if I play more with it and have something new, I'll write a new post :)

BTW, turning your Sourcefire 3D into a packet capture device is easy :) Adding a rule like the one above, you can just click the "Download Packet(s)" button in the Event Information/Packet Information view :) Use such a rule with care though…

Jan
28

Update on tuning snort…

I rearranged the "menu" on my blog, and put most items under "Pages".

I also added my blog post on how to make Snort go faster under Linux, and made it a page…
I updated it and added, among other things, some words on DAQ and PF_RING.
If you have any tips or tricks on how to tune Snort, please mail me/add a comment, and I'll update the page.
Read the "article" here.

Jan
25

In January 2011, gamelinux.org has its 10th birthday…

Did you know that gamelinux.org started out as the website for GamelinuX, a Linux distribution for gaming?
I never got a working release that I wanted to present to the public, and after 2 years of working on the GamelinuX distro, the project came to a halt, as my Master's degree and personal life took too much time away from hacking on the distro. The GamelinuX project was officially declared dead in September 2001 :/ And thinking of it now… do I have copies of the alpha CDs somewhere??? I should have, but I don't know where… :/

My first security-related post was in July 2003, when Free-X released an exploit for the Xbox that would let you install Linux on it…

In March 2007, the blog entered its current form, leaving phpnuke/drupal (and clones) for wordpress.

Gamelinux.org has always been about Open Source and hacking ('as in finding a way to make things work'). Since I started to play with Linux in 1998, it has been my OS of choice. My reason for continuing to blog about security-related topics on this domain was that "Game Linux" was, for me, also associated with "gaming linux", meaning "hunting linux": finding ways to break it/exploit it.

I went online for the first time with my Linux machine in 1998, and went on IRC/EFnet to the channel #Oslo. I asked if anyone was into hacking/cracking, and asked for pointers on where/how to best start reading and learning more about it. Not long after, some guy told me to look in my /root/ directory, and there was a dir with a dozen exploits… I realized that I had been hacked, and decided then not to go back online before I knew more about how to protect myself. The sploit used, IIRC, was a buffer overflow in the wu-ftpd that shipped with the Red Hat release at the time, and wu-ftpd was enabled by default :)

I stayed offline with my Linux machine for about 2 months, using the university machines to read more about hardening Linux, firewalling, IDS, HIDS and such… For as long as I can remember, I have been interested in hacking/cracking and defending against it. So Linux+security has been an active interest for ~13 years now, with my first related job experience ~10 years ago, working for a Managed Security Service Provider (MSSP).

Thinking back the last 15 years, it has been some good years. I love what I’m doing and I have no plans on quitting!

Jan
09

When I installed a new home router/firewall some months back, I set it up with an IDS (Sguil) just to have something to play with at home. I never got comfy with oinkmaster or pulledpork, as I had to dig into config files too much…

Based on my idea for sidrule on how to manage rules, and also bearing in mind cerdo, I quickly made a sidrule-like tool in perl. I talked to some people about it and they liked my approach to rule management. I got some very positive feedback, so I decided to rewrite it and publish the code (get polman 0.3.1 here).

To cope with not having a configuration file, you have to start polman with the --configure option to add a RuleDB (or more) and a Sensor (or more). A RuleDB is a "database" that holds rules. The idea is that you can load Sourcefire VRT rules for Snort version X.X into one RuleDB, and you can also load Emerging Threats rules for Snort X.X into the same RuleDB, and have a nice set of rules to play with. As I'm currently testing Suricata, I can, with the same tool and without any extra config file or too much hassle, make a second RuleDB, this time with the VRT rules and the ET Suricata rules in it. I have one Sensor attached to the Snort RuleDB, and the other attached to the Suricata RuleDB. One tool to rule them all… (LOTR -> Lord of the Rules?).

A bit back to the --configure option. This will enter an ASCII menu where you can add and edit RuleDBs and Sensors. For a RuleDB, you specify where to load the rules from (currently just the filesystem, but http/https is scheduled for the next release), name, comment, etc. For a Sensor, you specify name, comment, which RuleDB to use, where to write rules to (a file, for writing all rules to one file, or a dir, for writing the rules out into their original filenames in that dir), where to write the sid-msg.map file, etc.

Once a RuleDB is set up, you can load rules into it either from the menu or from the command line. Once a Sensor is set up, and the specified RuleDB has rules in it, you can start to play with the Sensor rules. If you choose to write out rules from the menu straight away, it will turn on all rules that are turned on by default by the vendor (VRT/ET/ETPRO etc.). Back at the command line, you can start turning rules on/off. One way I start is to disable categories that I normally would not enable in my setup, for example:

polman.pl -i SensorA -m "(dos|games|icmp_info|pop3|rpc|scada|scan|snmp|sql|voip)"

This will search through all the rules based on the filename that the rules were loaded from. So just from the "sql" entry in my regexp above, it would typically show you all rules that were in the files emerging-sql.rules, mysql.rules and sql.rules.
You will then be faced with some questions… Disable them all? Or enable them all? (With this particular search, I disable all the rules.) There is also a third option, and that is to go through the rules, rule by rule, to make a conscious decision after you read the raw rule. If you choose to enable all rules, or go rule by rule, you can change the behavior of the rules too. Most rules are set to action "alert", but if you choose to enable rules, you will have the option to change the action (to alert, log, pass, drop, reject, sdrop, default or current). The options "default" and "current" may need some explanation… "default" will set the rule to the default action that was set by the vendor. "current" will leave the rule(s) as they are defined for the sensor (say you have previously set an alert rule to drop; the rule will keep its current action, which is drop).
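The "default" vs "current" resolution can be summed up in a tiny sketch. The function and argument names here are hypothetical, not polman's actual perl internals:

```python
# Sketch of how the "default" and "current" action choices resolve, as
# described above. Names are hypothetical, for illustration only.

def resolve_action(chosen, vendor_default, sensor_current):
    """Resolve the action to write out for a rule.

    chosen         -- what the user picked: alert/log/pass/drop/reject/
                      sdrop, or the special values "default"/"current"
    vendor_default -- the action the vendor shipped the rule with
    sensor_current -- the action currently set for this sensor's copy
    """
    if chosen == "default":
        return vendor_default   # revert to the vendor's action
    if chosen == "current":
        return sensor_current   # keep whatever the sensor has now
    return chosen               # explicit new action

# A rule shipped as "alert" but previously set to "drop" on the sensor:
print(resolve_action("current", "alert", "drop"))  # drop
print(resolve_action("default", "alert", "drop"))  # alert
```

So "current" is a no-op that preserves earlier per-sensor tuning, while "default" throws that tuning away in favor of the vendor's choice.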

There are some powerful searches built into polman. You can search the classtype, metadata and msg fields at the same time. You can also search the "fields" 'default enabled/disabled' (the vendor state of the rule) and category (which by default is the filename the rule was loaded from).
Example:
Say you want to find all rules that VRT classifies in their most secure config:
polman.pl -i SensorA -p "policy security-ips drop"
(You can then enable them all, and set the action to drop.)
Say you want to limit your search, and you only want to search for Zeus/Zbot traffic…
polman.pl -i SensorA -p "policy security-ips drop" -s "(Zeus|Zbot)"
Now you can enable them all, and set the action to drop ;)

If you have a sid, or a list of sids, you can enable (-e) or disable (-d) them rather easily…
polman.pl -i SensorA -e sid1,sid2,sid3,....,sidN

To load rules into a RuleDB from command line:
polman.pl -r RuleDB1 -u

To write out rules for a sensor to a file (or files if a dir is specified):
polman.pl -i SensorA -w
This also writes out the sid-msg.map file…

There is nothing wrong with reusing the same Sensor rules across multiple sensors. Indeed, that is one of the reasons I chose the name policy manager: what you define doesn't need to be looked upon only as Sensors, but also as Policies (I have thought a lot about the naming of Sensors/Policies). In the future I hope to finish the implementation of thresholding and suppression too, so that you can edit them quickly from the command line.

Thoughts and feedback are welcome on this blog or here!
Project on github here.
An example on how to use polman here.