[Sysadmins] Firewalls.

Alan Doherty alan at alandoherty.net
Sun Mar 14 14:03:03 GMT 2010

At 12:00 14/03/2010  Sunday, you wrote:
>Date: Sat, 13 Mar 2010 15:09:40 +0000
>From: "Owen O' Shaughnessy" <owen.oshaughnessy at gmail.com>
>Hi Guys,
>We run boundary firewalls running FreeBSD and PF, and we have a good
>process built around administering the system and keeping our site
>safe from external attacks.
>Given that somewhere in the region of 60 to 70% of attacks are
>executed from inside organisations, I want to review our desktop
>firewall protection.
>I'm pretty comfortable with what to do with the linux and bsd machines
>that are on our lan, but I'm looking for advice from people on what
>works well on a windows desktop machine.
>Do any of you know of a windows desktop firewall which can support
>centralised logging and centralised administration? and which you'd be
>happy to recommend to a friend? :-)

err, short answer: no, as far as the central admin and logging go {most good ones need to track filename+crc+destination/source rules per application, and in an enterprise many different versions of an app tend to be in use across desks, especially during patch Tues-Thurs}

but here's some ideas {btw I'd say nowadays it's closer to 90% of threatening network activity that is in>out, and of the 10% left, little had a hope of passing so much as a home router; it's just probes to see if you're an unshielded host running PPPoE direct}

personally, here's some of the stuff I add after the host-based firewalls and host-based AV that I find works well to rapidly detect malware {as lord knows AV is pretty hopeless against the good {good == well-written} viruses, and as infection cannot be avoided, all we can hope for is rapid detection and removal}

first off, 99.99% of legit traffic to the Internet tends to be web browsers,
so step A is to separate their port 80 and 53 traffic from the malware if possible.
Most enterprises already have some internal DNS, so just drop+alarm all non-DNS-servers attempting DNS outbound.
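a pf.conf sketch of that drop+alarm rule {the interface name and resolver addresses here are made-up placeholders, and the log entries still need something watching pflog to raise the actual alarm}:

```pf
# only the internal resolvers may speak DNS to the outside world
internal_dns = "{ 192.0.2.10, 192.0.2.11 }"   # hypothetical resolver IPs
ext_if = "em0"                                # hypothetical external interface

pass out quick on $ext_if proto { tcp, udp } from $internal_dns to any port 53
block drop out log quick on $ext_if proto { tcp, udp } from any to any port 53
```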
I use squid internally, directly configured in the users' browsers {transparent proxying I find counterproductive, as you helpfully proxy malware too, and can't detect/distinguish between configured clients and unconfigured clients {potential malware}}

thus the proxy>internet traffic rules are the only place you have to add high ports when a user 'needs' to view http://obscure-site.on-a-weird-port.com:6845/
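in squid that usually just means widening the stock Safe_ports ACL {port 6845 below is only the example port from the URL above}:

```
# squid.conf sketch: allow the one odd port alongside the usual web ports
acl Safe_ports port 80 443 21
acl Safe_ports port 6845        # the obscure-site.on-a-weird-port.com case
http_access deny !Safe_ports
```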

next, the rules applied to outgoing traffic from desktops:
cat1: rules allowing outgoing to specific services
{I find few enterprises have a real need for desktops to connect directly to many places/services; some ftp servers in client/partner sites, sometimes an ssh to somewhere, but that's largely about it}
cat2: rules for known blocked outgoing caused by busted software {non-malware}, rejected+logged; stuff like ident if you have an old mailserver, but always site-specific
cat3: rules for known malware signs, dropped+alarmed {DNS, direct port 80, port 25, netbios {bunches of 'em}, irc, etc etc}
cat4: everything else, dropped+alarmed {this is a rule no traffic should hit, as everything should eventually be identified and re-configured into either cat1 {allowed}, cat2 {unthreatening, blocked} or cat3 {threat sign, blocked}}
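a pf.conf sketch of those four categories {the $desktops, $partner_ftp and $admin_hosts macros, and the exact port lists, are invented for illustration; a real ruleset is always site-specific}:

```pf
# cat1: explicit allows for the few known direct-connect needs
pass out quick proto tcp from $desktops to $partner_ftp port 21
pass out quick proto tcp from $desktops to $admin_hosts port 22

# cat2: harmless breakage; reject so the client fails fast, and log it
block return out log quick proto tcp from $desktops to any port 113  # ident

# cat3: malware signs; silently drop, and alarm off the log
block drop out log quick proto { tcp, udp } from $desktops to any port { 25, 53, 80 }
block drop out log quick proto { tcp, udp } from $desktops to any port { 135, 137, 138, 139, 445 }  # netbios
block drop out log quick proto tcp from $desktops to any port 6667   # irc

# cat4: catch-all; nothing should ever hit this once tuning is done
block drop out log quick from $desktops to any
```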

next, on internal DNS, ensure you attempt to thwart malware:
A: automate* use of lists such as http://www.malwaredomains.com/ and http://www.malware.com.br/ to return bogus information within these zones
*automate, as stale lists are no use to anyone
B: in the bogus information returned, do not use the IPs given in their examples; use a dedicated internal IP
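with BIND, that can be as simple as generating one local master zone per listed domain, all pointing at your sinkhole IP {the domain, server names and 10.10.10.10 below are placeholders}:

```
// named.conf fragment: one generated stanza per domain from the lists
zone "bad-domain.example" { type master; file "/etc/namedb/blackhole.zone"; };
```

```
; blackhole.zone: every name resolves to the internal sinkhole IP
$TTL 300
@  IN SOA ns1.internal.example. hostmaster.internal.example. ( 1 3600 600 86400 300 )
@  IN NS  ns1.internal.example.
@  IN A   10.10.10.10
*  IN A   10.10.10.10
```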

often this IP is either a dedicated alias IP on the firewall {so all traffic to it results in a drop+alarm}, or, if possible, a dedicated box using firewalling to drop+alarm all but http traffic, and running apache+scripts to alarm on client connects but also parse the X-Forwarded-For IP provided by your squid, to identify the internal client.
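a python sketch of that parsing step {it assumes apache's combined log format with %{X-Forwarded-For}i appended via a custom LogFormat; adjust the pattern to whatever your httpd actually writes}:

```python
import re

# apache 'combined' line with %{X-Forwarded-For}i appended (assumed format)
LINE_RE = re.compile(
    r'^(?P<src>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "[^"]*" "(?P<xff>[^"]*)"$'
)

def infected_client(line):
    """Return the internal client IP behind a sinkhole hit, or None.

    Hits that came via squid carry the real desktop in X-Forwarded-For;
    anything talking straight to the sinkhole is identified by the
    connecting address itself.
    """
    m = LINE_RE.match(line)
    if not m:
        return None
    xff = m.group("xff")
    return xff if xff not in ("", "-") else m.group("src")
```

tail the sinkhole's access log through this and alarm on every non-None result.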

these tactics combined result in much earlier detection/removal rates for malware, and prevent many from auto-updating or connecting to their command/control server, but they do require rapid response

most malware will try connecting to name:port only once {they may try multiple name:ports} before falling back to http://name,
but nearly all will fall back to ip:port or http://ip eventually, so at that stage they will get an outbound connection if no-one has reacted to the alarms triggered by the earlier attempts

I can easily offer more detail {as opposed to my bad summary above} on achieving any/all of the above, plus some lockdown methods for smtp/submission and other common areas where security can be tightened from the default-allow.

nowadays, if it isn't designed with default-deny, it's broken

More information about the Sysadmins mailing list