
Saturday, March 27, 2010

Changing The Default SSH Port Without "Really" Changing It


UT99 players can sit this one out.

For years, on security forums and mailing lists, if you ever dared to suggest changing SSH's default port (TCP 22) the "security by obscurity" crowd would come out of the woodwork and nail your ass to the Cross of Righteousness for having the unmitigated gall to even dare utter such heretical nonsense.

Unfortunately for these dogmatic True Believers, changing the ssh daemon's default listening port is such an incredibly effective method for avoiding ssh scans and brute force password attacks that it's starting to show up in HOWTO security articles as a method for hardening your system.

For example, see this article at Linux Magazine.

But the Port 22 Crowd will not leave well enough alone. Although they haven't abandoned the "security by obscurity" mantra completely, they're now using the following argument with increasing frequency:
NEVER CHANGE YOUR SSH PORT! If an exploit comes out that can crash SSH locally, a local unprivileged user on your system could crash SSH and start their own daemon on the SSH port > 1024 and capture your usernames and passwords. If you want SSH on a different port, do this with firewall rules.
Note that ALL CAPS is required when raising this alarm.

Also note that if you require users to connect with SSH in the first place, it's not going to do them a helluva lot of good to crash SSH. If you have users who actually sit down at the keyboard of the physical system, that's another problem entirely. Why bother with crashing SSH when they can slip a bootable CD into the tray and bounce the box?

And of course if you choose a port other than 22 but less than 1024 you can avoid this issue completely.

However, "changing the port with firewall rules" struck me as a novel idea (maybe I'm just stupid but it never occurred to me before) and set me to wondering how you would do such a thing, since I've always taken the easy way out by changing or adding ports in sshd_config.

So I sat down with iptables and experimented a bit. I came up with the following method. If everything you have is behind NAT, the problem can be reduced to simple port forwarding. If not, there are a few hoops you need to jump through. Be advised the iptables rules presented below assume you have a blank set of rules. Just copying and running them against an existing set of rules probably won't work.
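
Just so we're on the same page, the NAT case looks roughly like this. It's only a sketch: the outside interface and the inside machine's address (eth0 and 192.168.1.10) are made up for illustration. On the NAT box you DNAT your chosen outside ports to port 22 on the inside machine and let the FORWARD chain pass the traffic:

# iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 44,88,8188 -j DNAT --to-destination 192.168.1.10:22
# iptables -t filter -A FORWARD -p tcp -d 192.168.1.10 --dport 22 -j ACCEPT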

First, set SSHD back to the default port 22. Next, figure out what port or ports you want to do SSH over. We're going to use 44, 88, and 8188 here.
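
In other words, put sshd_config back the way it came and bounce the daemon. Assuming the usual Debian layout, that means this line in /etc/ssh/sshd_config:

Port 22

followed by:

# /etc/init.d/ssh restart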

Now we take care of the Hypothetical Evil Unprivileged User by not accepting anything over those ports in the first place. This is only meaningful for port 8188 (since 44 and 88 are privileged ports) but we'll do all three for the sake of completeness:

# iptables -t filter -A INPUT -p tcp -m multiport --dports 44,88,8188 -j REJECT --reject-with tcp-reset

Then, pick a number between 1 and 4294967295. This will be the value of the iptables connection mark (CONNMARK) we use for ssh. I'll use 0x2200 (8704), just because it's ssh-ish, but any positive integer in that range will do. We're going to tell iptables to reject anything coming into port 22 without this mark.

# iptables -t filter -A INPUT -p tcp -m tcp --dport 22 -m connmark ! --mark 0x2200 -j REJECT --reject-with tcp-reset

I would prefer to DROP these packets rather than REJECT them, but more on that later.

Now we'll tell iptables what ports we will accept for ssh.

# iptables -t filter -A FORWARD -p tcp -m multiport --dports 44,88,8188 -j ACCEPT

In the "mangle" table we slap our mark on these packets.

# iptables -t mangle -A PREROUTING -p tcp -m multiport --dports 44,88,8188 -j CONNMARK --set-mark 0x2200

Finally, in the "nat" table we tell iptables to send the marked packets back to port 22.

# iptables -t nat -A PREROUTING -p tcp -m multiport --dports 44,88,8188 -j REDIRECT --to-ports 22

The redirected packets then hit the INPUT chain with destination port 22 and, since they carry the right mark, are handed to the SSHD process listening there.

We have done exactly what was recommended, i.e. we have indeed changed the default ssh port with firewall rules alone (and without NAT). And yet, ssh still listens on port 22!
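
Don't take my word for it. Check that the daemon is still bound to port 22, then connect over one of the new ports (the hostname is made up, obviously):

# netstat -tlnp | grep sshd
# ssh -p 8188 user@myserver.example.com

The first command should show sshd sitting on port 22 and nothing else, the second should land you at a login prompt, and a plain ssh to port 22 from outside should bounce off with a connection reset.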

I must admit this appeals to me on a number of levels, not the least of which is that it has all the hallmarks of a slick little hack. It also takes a load off the ssh daemon, which is now listening on one port instead of three (in reality I have ssh listening on seven ports).

However, this is done at the expense of complexity, which has yet another group of Annoying True Believers who are fond of chanting "COMPLEXITY IS THE ENEMY OF SECURITY" at the slightest provocation. This is another religion I have never bought into (it may be the same group since increasing complexity generally tends to increase obscurity).

Well, fuck them, you can't please everyone.

Besides, I use the most bizarre, complex combination of port forwarding and routing you'd ever want to see. Most days I have a hard time understanding it myself. It keeps me sharp.

I am almost 100% certain this wouldn't please a "real" firewall administrator. They're mostly overpaid Certified Cisco Clowns anyway. It would never even occur to them to actually use iptables.

But what we have done in essence is put all our trust in iptables working "just right". And we're betting the next time there's an update to the netfilter core code (or the kernel) everything will still work.

I've been around iptables/netfilter far too long to ever bet on that. Sorry, fellas, "once bitten, twice shy" and all that.

Then there's the TCP reset thing. Can it be bypassed? Web filtering appliances that use span ports and two-way TCP resets work fine when everyone "plays by the rules of TCP/IP" (actual statement from Web filter appliance vendor Sophos), but the Bad Guys don't usually play by the rules.

The alternative, DROPping the packets, tells anyone scanning our system with tools like nmap that we're hiding something on port 22, since the port will show up as "filtered" instead of "closed". This makes TCP resets the lesser of two evils, but it's still evil.
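
You can watch this play out with a quick scan (hostname made up again). With the REJECT rules in place, port 22 comes back "closed"; swap the target for DROP and it comes back "filtered":

# nmap -p 22 myserver.example.com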

In the final analysis, isn't sending a reset from an "open" port just another instance of "security by obscurity"?

Not that I care, that was a purely rhetorical question.

And what about attacks against iptables connection marking (CONNMARK) itself? Do they even exist? Am I opening myself up to an unknown exploit vector? Do I have to take additional measures to avoid spoofing or brute forcing or some other method (fragmented/crafted packets, maybe) to get around my firewall rules?

Even though we followed this fellow's advice, there are too many open questions, and just changing the port in sshd_config remains a simple and effective countermeasure. It's worked for me for over a decade: I've escaped every SSHD 0day in that time, and no one ever tries to brute force passwords on my box. It doesn't happen because port 22 isn't open.
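
For reference, the sshd_config route is a one-line change plus a restart, and a client-side entry spares you from ever typing the port again. The port number and host alias here are just examples. On the server, in /etc/ssh/sshd_config:

Port 8188

# /etc/init.d/ssh restart

And on the client, in ~/.ssh/config:

Host myserver
    HostName myserver.example.com
    Port 8188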

Simple. Effective.

duh!

So I feel compelled to shout out my own advice:

ALWAYS CHANGE YOUR SSH PORT!!!

Saturday, December 12, 2009

What A Pain In The Ass


Linux kernel 2.6.32 + ipset v4.1 + iptables 1.4.6 are now current on BOT House and EXP IV.

Just like last time, the upgrade went without a hitch on EXP IV. Everything is smooth as silk. No issues at all. Period. Well, maybe one. Once again, the NVidia driver had to be replaced because of the kernel upgrade, but that too compiled on the first shot and installed without a problem.

BOT House, on the other hand, would not cooperate.

Sure, the kernel compiled on the first attempt, but then the fun began. The NFS (Network File System) server choked on reboot. This was not good, because EXP IV gets all its scripts from an NFS share on BOT House. The culprit turned out to be the Debian NFS start-up script. Fixed. Rebooted. Then iptables wouldn't come up. It suddenly didn't like the order it was started in, so I changed that. Bingo.

BOT House took about four and a half hours to upgrade, two and a half for the kernel build and another two hacking around with the side issues. Now if it stays up without a problem for 24 hours I will be a happy fella. The upgrade to 2.6.31 in September was fine until the next day, when the whole system ground to a halt. I think I did something silly during the configure back then, so this time around I used the stock Debian config. Only time will tell.
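
For the curious, "using the stock Debian config" just means seeding the build with the distribution's own config before compiling. Something like this, assuming kernel-package is installed and the source is unpacked in the usual spot:

# cd /usr/src/linux-2.6.32
# cp /boot/config-$(uname -r) .config
# make oldconfig
# make-kpkg --initrd kernel_image
# dpkg -i ../linux-image-2.6.32*.deb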

Anyway, I'll be playing on and off the whole weekend. See you there.

Friday, December 11, 2009

iptables-1.4.6


Here we go again.

EXP IV now has the latest, allegedly greatest, version of both ipset and iptables. And of course kernel 2.6.31, which is now (naturally) one release behind the times. Or eight, depending on how you look at it (2.6.31 went through seven point releases during its lifetime, the last being 2.6.31.7).

BOT House had some serious issues with 2.6.31 and I have been threatening to try to upgrade it again for some time now. This looks like a good weekend to do so. It's been running a stock Debian kernel (2.6.28.5) without ipsets for a long time now and it bugs the HELL out of me.

None of this should impact playing, but there's always the possibility of the IP address of the server changing.

Keep that in mind if you can't connect.

Saturday, November 28, 2009

ipset v4.1


On November 10th, ipset v4.0 was released.

Or maybe it escaped, because less than 24 hours later ipset v4.1 came out.

Two weeks on, ipset v4.2 is nowhere to be seen, so I finally bit the bullet and installed ipset v4.1 on EXP IV.

You may recall the upgrade to kernel 2.6.31 on EXP IV was declared a huge success at the time. Well, that was back in September. As the days and weeks went by I started noticing some performance issues. It was something like LAG, but different. Things would freeze - but you could still move - then the world would snap back into place and you'd be somewhere else.

Usually dead.

Annoying, but - for the most part - playable.

The BOT House upgrade was a disaster and resulted in a complete rollback, so I knew somewhere, something wasn't right.

I suspected iptables, but EXP IV uses a very small subset of iptables, namely, ipset. And the issues that plagued BOT House never showed up on EXP IV, so I let it roll.

Until now.

I've only played a few games but it seems OK. I also blew out the Ban List once again, for testing purposes (only on EXP IV - the BOT House Ban List was untouched).
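
For anyone playing along at home, "blowing out the Ban List" just means flushing the set. With the v4.x tools it looks something like this: the set, the iptables rule that hooks it in (note the old-style --set match, which later versions renamed to --match-set), and the flush that empties it. The set name is mine:

# ipset -N banlist iphash
# iptables -A INPUT -m set --set banlist src -j DROP
# ipset -F banlist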

If this upgrade pans out, I will probably revisit the 2.6.31 upgrade on BOT House next weekend.

I'm sure ipset v4.2 will be out by then.