Application Intrusion Detection
Published on August 19, 2009 by Alexander Goodwin
I recently worked with a UNIX security expert setting up a small pile of servers. We hired him to handle the total system security of the servers, as those servers would be charged with storing highly sensitive customer data. In fact, the vendor for this data had very strict requirements as to how we were allowed to store it. The requirements (something similar to PCI Level I) were dictated in a 40-page document where one of the rules literally required a monitored camera to be pointed directly at the primary database server at all times.
My job was rather easy, as I just had to hire someone to get the servers secure – not actually do the work. I happily found a crack security guy who had done large installations at maximum-security facilities. And he didn’t disappoint. I know my way around UNIX systems pretty well, but he worked magic I’d never seen before.
One of the last things he did was to install an intrusion detection system – that is, a system that monitors all interesting system activity and sends out (a lot of) email about everything it notices. Change a UNIX config file? Get an email. Add a user? Get an email. Issue a command as root? Get an email. Needless to say, I started getting a lot of email.
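He never told me which tool he used, but the core idea behind that flood of email – file-integrity monitoring – is simple enough to sketch. Here's a minimal, hypothetical Python version (real systems like Tripwire do far more: permissions, ownership, scheduled baselines, alerting):

```python
import hashlib


def fingerprint(paths):
    """Map each file path to a SHA-256 digest of its contents."""
    snapshot = {}
    for path in paths:
        with open(path, "rb") as f:
            snapshot[path] = hashlib.sha256(f.read()).hexdigest()
    return snapshot


def diff(baseline, current):
    """Report files that were modified, added, or removed since the baseline."""
    modified = [p for p in baseline if p in current and baseline[p] != current[p]]
    added = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    return modified, added, removed
```

A cron job would periodically re-run `fingerprint` over something like `/etc` and mail out any non-empty `diff` – change a config file, get an email.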
At one point during a discussion we had about the system I asked him why we had the system at all. I mean, if this guy was so good (and to this day, I still believe he is) – an intrusion detection system is sort of like a prenuptial agreement. You get one in the event of something bad happening – but at the same time you can’t really get one without sort of admitting you *expect* something bad to happen.
Wasn’t his system secure? Did he really personally think he left a hole somewhere that would allow a hacker to get into the system?
He responded that he was 100% confident that, after his work, the system was as secure as possible. He had left no holes and patched all known vulnerabilities. He followed that declaration with: regardless of all that, of course the system could still be broken into. His response was utterly matter-of-fact. Of *course* a break-in was still possible – even likely.
I suppose this wasn’t a huge surprise. Despite locking the doors to our houses and cars, we still install alarm systems that detect entry. No one is foolish enough to think a car or house door lock keeps out a real thief. A reasonable defense after that is to get immediate notification when an intrusion occurs. Another reasonable defense that many people follow is to hide valuables. You typically don’t leave your stash of cash on your dresser – you hide it somewhere, in your sock drawer or under your mattress. Even though you have a front door lock, and maybe an alarm system, it’s clearly prudent to add time and difficulty to a prospective thief’s job.
After some more discussion my security guy invited me to attend Defcon, a yearly security/hacker conference held in Las Vegas. I was intrigued and agreed. When I went to pre-register online, I found out that there wasn’t any such thing: no pre-registration, on-site registration only. And no credit cards – cash only, $120.
I arrived early at the convention and gave a nice lady my $120. In return she gave me an anonymous badge. She didn’t want my name or my address or my email. Everyone at the convention was to remain as anonymous as they wanted to be. The convention itself was jam-packed: wall-to-wall humans, mostly in black t-shirts with typically funny hacker slogans on them.
In one room they had situated 5 teams with networked computers in a competition where they simultaneously defended their own servers and attacked the other teams’ servers. Another room had a large projected screen showing passwords of people at the convention who sent them unencrypted over the wireless (I shut off my laptop and phone for the rest of the weekend). They showed the iPhone hack that could own a remote iPhone with a single sent SMS message. I saw talks on hacking websites with request forwarding, and one by an MIT student who beat stock spammers at their own game.
Needless to say – the folks at this conference weren’t messing around. To this day I get into discussions with developers about the appropriateness of code obfuscation. Obfuscation technology has come a very long way past simple identifier renaming – these days it’s more about things like control-flow obfuscation and opaque predicates. The developers’ argument usually circles around to some variation of “If an expert really wants your code – they’re going to get it”.
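For what it’s worth, an opaque predicate is just a branch condition the obfuscator knows is constant but a reverse engineer (or a decompiler) has to do real work to prove. A hand-written toy sketch in Python – real obfuscators generate these automatically, in volume, and far less legibly:

```python
def compute_discount(price: int) -> int:
    # n*n + n = n*(n+1) is a product of two consecutive integers,
    # so it is always even: this condition is always True.
    # A tool that can't prove that must treat the dead branch as live,
    # which bloats and muddies the recovered control flow.
    if (price * price + price) % 2 == 0:
        return price * 90 // 100   # the real logic: 10% off
    return price + 999999          # dead code, never reached
```

To a human the `if` looks like a data-dependent business rule; to the program it is a constant.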
I couldn’t agree more. In fact, after attending Defcon I believe it even more. The only place I start to disagree is when they contradict themselves with solutions that they think will actually work. If your code is reachable, in any form, by users – it’s vulnerable. Software-as-a-service is often cited as safe. If that were true, then I wouldn’t have needed to hire a security expert. And he wouldn’t have told me point blank that no matter what you do, an expert can hack your server.
Protecting your software is not about 100%-sure solutions – because there are none. It’s about throwing as many obstacles in the way of attackers as you can. Like I said – your front door lock hasn’t a prayer of stopping a thief, but for some reason you still always lock your door. Alarm systems don’t stop thieves; they just let them know that they’ve only got a few minutes to get their job done. Hidden valuables will be found – but the act of hiding them makes the job harder.
The name of the game is risk and reward. Simple security measures by you that cause major headaches for attackers are what you’re looking for. Obfuscation makes deciphering your IP harder. Software tampering detection lets you know someone is tinkering with your application. By nature, attack surfaces start out pretty large. The best you can do is reduce that area as much as possible.
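Tampering detection can likewise be sketched in a few lines. Here’s a hypothetical Python self-check that fingerprints a function’s compiled code at startup and re-verifies it later – commercial products for native code hash machine-code sections the same way, and of course a determined attacker can patch the check itself; it’s one more obstacle, not a guarantee:

```python
import hashlib
import marshal


def licensed_feature(x):
    return x * 2


def _digest(func):
    """Fingerprint a function's compiled code (bytecode plus constants)."""
    return hashlib.sha256(marshal.dumps(func.__code__)).hexdigest()


# Baseline recorded at startup (a real product would bake this in at build time).
_EXPECTED = _digest(licensed_feature)


def integrity_ok():
    """True if licensed_feature's code still matches the recorded baseline."""
    return _digest(licensed_feature) == _EXPECTED
```

If someone patches the function at runtime – say, swapping in new bytecode via `licensed_feature.__code__` – `integrity_ok()` starts returning `False`, and the application can alert, phone home, or refuse to run.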