Predictability Can Be Fatal

Hey, Baby, where'd you get such a great set of settings?

When I was still in my formative years, I (like most other post-pubescent males) was always hunting for the magical key to youthful bliss. There just had to be a common vulnerability -- the right line, the right look, the right attitude, whatever -- that the vast majority of the opposite sex would instantly fall for.

Although some claim to have found it, I never did.

For some reason, every young lady that I came across had something unique about her. Whatever ploy, tactic, or stroke of dumb luck I found momentary success with once, failed miserably on the next go around. Were they secretly plotting against my teenage happiness? Was there a conspiracy involving all young womanhood?

No, there was just one little thing that kept it challenging -- kept the playing field pretty level for all of us mere mortal boys. Everyone is their own individual person, with their own impressions, likes, dislikes, values, and moral structure.

To put it in geek terminology: There is no default configuration amongst women.

Monoculturalism revisited

When Dan Geer, Charles Pfleeger, et al. released the incendiary report ''CyberInsecurity: The Cost of Monopoly'' last fall, quite a few folks perked up, including Dr. Geer's employer.

In the report, this team, made up of some of the world's foremost authorities on security, presents a powerful argument against what has become the status quo in much of the corporate and government IT realm.

To make a long story short, the report claims that Microsoft's dominance has created a global target environment that leaves little guesswork for the bad guys (girls, things, dogs, whatever) while the good guys find themselves in ever-shortening supply, trying to defend increasingly complex, yet predictable, systems.

That predictability can be fatal.

Referring to Nimda and Slammer, they wrote, ''These worms did not have to guess much about the target computers because nearly all computers have the same vulnerabilities.''

Let me say it plainly: the problem that's going to have you pulling your hair out when the next virus/worm/rootkit hits the streets is predictability.

Defaults. That's the ticket in. That's what the worm authors are banking on. Nimda relied upon them, and so did Blaster, SQL-Snake, Code Red, and nearly every other self-propagating beastie since the Morris Worm hit the wild 16 years ago.

Here's a look at some of the default conditions that a few mass attacks took advantage of:

  • 1988: Morris Worm -- Sendmail DEBUG enabled, fingered running, C compiler present (Unix);
  • 1999: Melissa -- Outlook mail reader uses MS Word, macros enabled in Word;
  • 2000: I Love You -- Windows Scripting Host enabled, Active Scripting enabled (Windows/IE);
  • 2001: Code Red -- .ida/.idq ISAPI mappings in IIS;
  • 2001: Nimda -- Sharing enabled, scripts/_vti_bin/msadc dirs allow execution;
  • 2003: Blaster -- RPC/DCOM interface enabled;
  • 2003: Slammer -- MSDE 2000 installed and network enabled by many products, and
  • 2004: W32/Bobax -- UPnP enabled (used for target identification).

Across the board, no firewall was enabled... by default.

Now, we're not going to delude ourselves into believing that automata are the only things that ail our information systems. There are plenty of other information warfare tactics that are equally, if not more, destructive and costly. But worms sure take a bite.

Off the top of your head, how many hours have you or your staff spent on cleaning up the past year's worth of pseudo-randomly targeted attacks?

Being different, and even being obscure, can be your biggest weapon against attacks.

The value of obscurity

A few years ago, I wanted to make some code available to others via a web server on my home DSL-connected network. The provider, however, being more security-minded than most, blocked incoming port 80 SYN packets to residential net space. No problem. I merely configured my web server to listen on port 81, and I made sure that the URL I posted, and linked to, and emailed about, looked like: http://www.mydomain.dom:81/.

Funny, I never saw a single web attack come my way. Security through obscurity? Nope. Fewer annoyances through obscurity? Yup.
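
The trick requires nothing exotic. Here's a minimal sketch (Python here, purely illustrative rather than my actual setup) that serves the current directory on port 81 instead of port 80:

    # Purely illustrative: serve the current directory on a non-standard port.
    # Port 81 matches the anecdote above; anything off the beaten path works.
    # Note: ports below 1024 need root privileges on most Unix systems.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    PORT = 81  # not the port 80 that automated scans and worms expect

    httpd = HTTPServer(("", PORT), SimpleHTTPRequestHandler)
    print(f"Listening on port {PORT}...")
    httpd.serve_forever()

Anyone who has the :81 URL gets the content just fine; the automated scanners hammering port 80 never even knock.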

Plenty of sites are doing it. Why not you? IMAP mail on port 10143, secure shell on port 2222, etc. Of course, you don't want to make a ton of extra work for yourself administering all this non-standardization. RFCs and the IANA exist specifically to promote standards of interoperability. Flying in the face of them will cost you when you need to interoperate with users, systems, and processes that don't know about your choices.
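
If you do shuffle services onto odd ports, at least verify that they answer where you put them. A quick check along these lines (the hostnames and port numbers below are placeholders, not recommendations) does the job:

    # Illustrative check that relocated services answer on their new ports.
    import socket

    SERVICES = {
        "imap": ("mail.mydomain.dom", 10143),
        "ssh":  ("shell.mydomain.dom", 2222),
    }

    for name, (host, port) in SERVICES.items():
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{name} answering on {host}:{port}")
        except OSError as err:
            print(f"{name} NOT answering on {host}:{port} ({err})")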

Every one of you is free to build your own net the way you see fit. Just don't expect anyone else to have a clue about how to communicate with you -- not even the bad guys.

Back in the glory days of Novell IPX-based LANs, I was a big fan of keeping TCP/IP off of client systems and using IPX/IP gateways to connect to the rest of the world. There wasn't a single IP-based worm that would stand a chance of propagating inside the net. That came at a cost, of course, in efficiency and convenience.

Efficiency, convenience, interoperability... I sound like an ad-man from Redmond! Truth be told, no one ever noticed any productivity increase after migrating to an all-IP network.

Sound system and network administration often lends a measure of obscurity by modifying, updating or removing conditions expected by attackers.

Few worms are monolithic. Instead, they take advantage of the DLLs, commands, directories and other components that the author knows are going to be present on your box. Modern versions of MS Windows often don't allow you to remove ''critical system files'', but you often can remove the filename extension mappings that load them.

Bingo! We've just defeated Code Red without patching.

Whenever the Morris Worm encountered a system that had its C compiler removed, it couldn't build its tools, rendering it unable to propagate. Period.

Recently, I was looking into why some User Mode Linux-based honeypots weren't vulnerable to most buffer overflow (BOF) attacks. It turns out that extra environment variables were being passed along by the kernel to the running services. This made the execution stack addresses of those services different from those expected by the attack authors.

As an experiment, I took a stock, unpatched Red Hat Linux 6.2 system and booted it with an extra, unnecessary parameter. This did what I expected: it offset the stack addresses a bit, and three out of four attacks I threw at it failed. Not bad, considering I didn't have to do anything complicated.
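
If you want to see the effect for yourself, here is a rough sketch -- Linux-only, and it assumes address-space randomization is disabled (e.g. by running it under setarch --addr-no-randomize), since modern kernels shuffle the stack on their own -- that launches a child process twice, once with a padded environment, and reports where its initial stack pointer landed:

    # Rough demonstration (Linux only): inflating a child's environment shifts
    # its initial stack pointer, the same effect that broke the hard-coded
    # buffer overflow addresses on the honeypots described above.
    import subprocess
    import time

    def initial_stack_pointer(pad_bytes: int) -> int:
        env = {"PATH": "/usr/bin:/bin", "PAD": "A" * pad_bytes}
        child = subprocess.Popen(["/bin/sleep", "5"], env=env)
        time.sleep(0.2)  # give exec a moment to complete
        with open(f"/proc/{child.pid}/stat") as stat:
            fields = stat.read().rsplit(")", 1)[1].split()
        child.kill()
        child.wait()
        return int(fields[25])  # field 28 of /proc/<pid>/stat: "startstack"

    plain = initial_stack_pointer(0)
    padded = initial_stack_pointer(4096)
    print(f"no padding : {plain:#x}")
    print(f"4 KiB extra: {padded:#x}  (shifted {plain - padded} bytes down)")

The shift is only a few kilobytes, but a canned exploit with a hard-coded return address doesn't need to miss by much.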

More elaborate stack randomization techniques are used by OpenBSD, grsecurity and others, all serving to obfuscate reality.

Obfuscation techniques are only worthwhile as long as your adversary is unaware of them. Once a skilled, determined attacker gains knowledge of your ''customized'' environment, the benefit is mostly lost. However, by clouding the field, you are less likely to be dealing with the latest worm-du-jour, and can dedicate resources to handling more serious incidents.

Know what you've got

Sure, there are quite a few operating systems and products that do it the right way. OpenBSD, Novell NetWare, Gentoo Linux -- they all share a common thread: they don't turn all kinds of things on by default. Even Windows Server 2003 installs with several of the old standards (like IIS) turned off. Heck, some of them even come with industrial-strength firewalls. Cool!

Unfortunately, none of them are right for my mother. Even if I could convince Mom and Dad to drop a few gigabucks on a modern commercial OS, or to eat up the better part of their retirement years on a secure-out-of-the-box open source installation, they would screw it up by downloading the latest painting package, racing game, MP3 player or whatever that renders the box wide open.

We've been demanding convenience for so long that software and OS vendors have geared up to provide it by the bucket-load. In time, home users will demand security and stability. Won't we?

It is essential to know what is on ''by default''. Part of any sound configuration management scheme is baselining and validation. And that should include sandbox scanning and monitoring.
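
As a toy illustration of what I mean by baselining (nothing more than a sketch; the file name and port range below are arbitrary, and it's no substitute for the real tools mentioned shortly), you could record which local TCP ports answer on a freshly built box and diff every later run against that record:

    # Toy baseline: record which well-known TCP ports answer on this host,
    # then compare later runs against the saved baseline to spot services
    # that showed up "by default".
    import json
    import socket
    import sys

    BASELINE_FILE = "port-baseline.json"    # arbitrary file name
    PORTS = range(1, 1025)                  # the traditional well-known range

    def open_ports(host: str = "127.0.0.1") -> list:
        found = []
        for port in PORTS:
            try:
                with socket.create_connection((host, port), timeout=0.2):
                    found.append(port)
            except OSError:
                pass
        return found

    if __name__ == "__main__":
        current = open_ports()
        if "--save" in sys.argv:
            with open(BASELINE_FILE, "w") as fh:
                json.dump(current, fh)
            print("baseline saved:", current)
        else:
            with open(BASELINE_FILE) as fh:
                baseline = set(json.load(fh))
            unexpected = sorted(set(current) - baseline)
            print("unexpected listeners:", unexpected or "none")

Run it once with --save on a known-good build, then periodically afterward; anything on the ''unexpected'' list is a default, or an intruder, that you never signed off on.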

Can you justify the expense of responding to a root compromise because you trusted the vendor's defaults? Try taking them to court to recoup your costs.

There are plenty of tools to help you baseline your systems and remove unnecessary defaults. A great starting point is the Center for Internet Security's site: http://www.cisecurity.org. There are CIS benchmark tools available for most operating systems, including Cisco IOS.

Linux, Mac OS X and HP-UX users have the Bastille hardening scripts available at http://www.bastille-linux.org, which walk you through the process of locating and remedying many default security pitfalls. Windows shops should already be using the Baseline Security Analyzer and the MMC Security Configuration and Analysis snap-in.

And just maybe someone will develop a tool to analyze the configs of the fairer gender by the time my son hits high school. Nahhh, let him suffer like the rest of us.

George Bakos is a Senior Security Expert with the Institute for Security Technology Studies at Dartmouth College. His research includes worm detection and intrusion analysis. Bakos formerly was a security engineer for Electronic Warfare Associates.
