It amazes me that there are a few simple firewall rules anyone can implement to aid in the defense of their internal network, yet they are rarely deployed. These rules limit *outbound* traffic. Unfortunately, many network administrators neglect to limit traffic from their internal network to less-trusted networks (e.g., VPN, DMZ, and Internet). Too often this is because the admins are too busy keeping upper management happy by ensuring that public services (web and e-mail) are accessible to customers and potential customers with five-nines uptime. This is a sad state of affairs.
How many customers are you really going to lose if your website is down for 5 minutes? If a customer finds your website inaccessible for a short time, they will likely first suspect their own PC or their ISP before they blame your organization. Even if they do eventually blame you before the problem is resolved, who is really going to be that mad about it? If Google goes down for 15 minutes (as recently happened), I just chalk it up to bad luck. I don’t fault Google. So what, I wasn’t able to hit GMail for 15 minutes? My life is not over. Computers suck. Stuff happens. Services become inaccessible. Big deal.
Now, think about how many customers you are going to lose if your organization is in disarray and can’t close sales deals because malware is spreading internally. How about your reputation when all your customer information is stolen and posted on the Internet for your competitors (and customers) to see? What if you lose personal data like SSNs or bank account numbers? The list of damaging items that can be lost from inside your network is long and scary. A reasonable person (like myself) would much rather your organization’s Internet services be down for a few minutes (or, heck, even a few hours) than have your organization lose its confidential data. Even if you are providing me a service (VoIP or spam filtering, for example), I can stand a few minutes of unexpected downtime (albeit a very few minutes…like 5). That’s just life.
So enough of the rant. Here are two simple rules to aid you in detecting malware spreading inside your network. Of course, you’ll have to be paying some attention to your firewall logs to notice. You are paying attention, aren’t you?
- Block outbound SMTP that does not originate from your internal e-mail server(s).
- Block outbound DNS requests that do not originate from your internal DNS server(s).
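On a Linux firewall, the two rules above might be sketched with iptables along these lines. The interface name and server addresses below are hypothetical placeholders, so adjust them for your network; the point of logging before dropping is that the violations actually show up in your firewall logs:

```shell
# Hypothetical values -- substitute your own mail relay, resolver, and WAN interface.
MAIL_SERVER=192.0.2.25   # internal mail relay (example address)
DNS_SERVER=192.0.2.53    # internal DNS resolver (example address)
WAN_IF=eth0              # Internet-facing interface

# Rule 1: only the mail relay may speak SMTP to the Internet.
iptables -A FORWARD -o "$WAN_IF" -p tcp --dport 25 -s "$MAIL_SERVER" -j ACCEPT
iptables -A FORWARD -o "$WAN_IF" -p tcp --dport 25 \
    -m limit --limit 5/min -j LOG --log-prefix "SMTP-VIOLATION: "
iptables -A FORWARD -o "$WAN_IF" -p tcp --dport 25 -j DROP

# Rule 2: only the internal resolver may query external DNS servers.
iptables -A FORWARD -o "$WAN_IF" -p udp --dport 53 -s "$DNS_SERVER" -j ACCEPT
iptables -A FORWARD -o "$WAN_IF" -p udp --dport 53 \
    -m limit --limit 5/min -j LOG --log-prefix "DNS-VIOLATION: "
iptables -A FORWARD -o "$WAN_IF" -p udp --dport 53 -j DROP
```

The `limit` match keeps a chatty spambot from flooding your logs; the distinct log prefixes make the two kinds of violations easy to search for later.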
Simple. Quick. Powerful. But why are these rules helpful?
The first rule above will catch spambots. Spambots are malware that sit on a PC and spew tons of spam. If you have an internal machine spewing e-mail to the Internet, and it’s not your internal mail relay, then that machine is h0sed and you need to examine it. It’s likely to have more than just one piece of malicious software on it.
The second rule will catch malware that exploits the fact that most organizations don’t block outbound DNS. This malware uses hardcoded public DNS servers to resolve hostnames, all the while avoiding being logged by your legitimate internal DNS server(s). The hostnames the malware resolves are often used to help an attacker maintain command and control.
If you can identify infected internal machines through your firewall logs, you can clean the malware and identify further holes in your internal security posture (like foolish users who installed “Whack-a-mole 2009” from hAcme Games, Inc., on their corporate PC).
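Pulling the offenders out of those logs is a one-liner. The sample log lines and the `SMTP-VIOLATION`/`DNS-VIOLATION` prefixes below are hypothetical; in practice you would read your real kernel log (e.g. /var/log/kern.log) and match whatever prefix your firewall rules use:

```shell
# Hypothetical iptables LOG entries, standing in for the real kernel log.
cat > /tmp/fw-sample.log <<'EOF'
Mar  1 10:02:11 fw kernel: SMTP-VIOLATION: IN=eth1 OUT=eth0 SRC=192.0.2.77 DST=198.51.100.9 PROTO=TCP DPT=25
Mar  1 10:02:14 fw kernel: DNS-VIOLATION: IN=eth1 OUT=eth0 SRC=192.0.2.77 DST=8.8.8.8 PROTO=UDP DPT=53
Mar  1 10:03:02 fw kernel: DNS-VIOLATION: IN=eth1 OUT=eth0 SRC=192.0.2.90 DST=8.8.4.4 PROTO=UDP DPT=53
EOF

# Count violations per internal source address: the busiest hosts are your
# most likely infection candidates.
grep -hoE 'SRC=[0-9.]+' /tmp/fw-sample.log | sort | uniq -c | sort -rn
```

Here 192.0.2.77 tripped both rules, so that PC goes to the top of the cleanup list.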
Check out my next post on outbound firewall rules.