We've thrown millions of dollars at the corporate network in the name of security. We've implemented firewalls and gateways and access control. We've installed IDS, IPS, tokens and biometrics. But despite our best efforts, we're still under attack. Then again, it's not really surprising: we've overlooked the obvious.

It's somewhat like 'machines gone wild'.

What we've designed to make our lives easier has somehow turned against us. Though most would expect it to be, application security is definitely not built in. When people think of application vulnerabilities, they often think of Microsoft. But the truth is that application vulnerabilities have been present in all software since the beginning. We only notice them in Microsoft's applications because its software is everywhere.

We're buying insecure software, installing it on our networks, selling it to our customers, and patching it time and again. With each new patch or release, we run the risk of introducing a new vulnerability to the network.

Someone has to take the blame for what we've done to ourselves.

Is it their fault or ours?

''I call it a haystack-full-of-needles problem. It's hard to understand the magnitude of the problem until you understand the millions of lines of code we have as a nation and as a world in all of our collective applications,'' says Jeff Williams, CEO of Aspect Security, a security company based in Columbia, Md. ''You can't tell a good piece of software from a bad one very easily. There's a limited number of people in the world who can pull apart source code and say, 'Yeah, this is secure,' or not.''

So can we blame the companies that build it?

If we do, then we need to look at the developers of the applications themselves. The problem with that is most developers of the end product aren't even part of the companies we're trying to blame. Most are anonymous coders in far-flung locations with a set of specs and a deadline. If they're building what we've asked them to, how are they at fault?

''You can't really blame the developers,'' says Bill Leavy, vice president of marketing at Parasoft, a Monrovia, Calif.-based company that specializes in tools to prevent software errors. ''It's a lack of definition. If you haven't defined what your expectations are from your outsourcer or your internal development team, then it's really the corporation's fault for not having an established security policy.

''Bringing in material from an outside vendor requires you to have standards that the vendor needs to meet,'' he adds. ''Security standards are criteria for accepting the application.''

Then are we back to pointing fingers at the vendors?

''I do think the folks ultimately responsible are the folks producing the vulnerable product, and that would be the software community,'' says Mike Armistead, co-founder of Fortify, a software security company based in Palo Alto, Calif. ''I believe they're the owners because a worm or a virus can't attack a flaw that's not in the software. If you outsource your software, you should have criteria about its security when you do the acceptance test. Today, that doesn't happen.''

An economic promise, whether true or not, is that first to market wins the market share. In our highly competitive landscape, the lure of being first on the shelf overpowers the desire to create something that works: not merely functioning, but functioning securely with our best interests in mind.

''Application security is more of an ongoing lifecycle,'' says Vikram Desais, CEO of Kavado, an application security company based in Stamford, Conn. ''There are thousands of applications written every day and applications are rewritten, improved and expanded. Every time a human being touches one of these applications, because we're not perfect ourselves, we inadvertently unleash new vulnerabilities. It might work better from a performance perspective but it might be much more hackable.''

OK, so is it the consumer's job to step up to the plate and start fixing things?

''What we need is the buyers of software -- mass consumers, people who buy from outsourcers, anyone who is buying it -- to take responsibility and say, 'Here's what I need the software to have in it from a security perspective,' '' says Aspect Security's Williams. ''At the broadest level, the buyers and the sellers need to have a conversation and maybe the buyers need to say, 'I don't want any buffer overflows,' and the sellers need to say, 'Well, that's going to cost you,' or whatever it is they say. But that's how we're going to fix the market. That conversation needs to start working.''

Yet it's even more basic than that.

Application security isn't a mystery. There's a deep knowledge base of vulnerabilities and a history of how they're making it into our networks.

''We have around 40 different categories of vulnerabilities identified like the rock stars of our era -- SQL injection, buffer overflows, etc.,'' says Armistead. ''If you just took care of the first two, that's a great baseline to start from. We really do have a list to work from on things that we can fix. Yet hackers will figure that out too, just like when we shut off their ability to telnet onto machines. If we shut down buffer overflows, they're going to move on. We have to be ready for that. But, absolutely, there's enough there that everyone can start fixing things today.''
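Armistead's first ''rock star,'' SQL injection, is concrete enough to sketch in a few lines. The snippet below is illustrative only: it uses Python's built-in sqlite3 module and a made-up users table to show how a query built by string concatenation falls to a classic injection payload, while the parameterized version of the same query does not.

```python
import sqlite3

# Hypothetical in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized: the driver treats the inputs as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The classic payload bypasses the unsafe check...
assert login_unsafe("alice", "' OR '1'='1")
# ...but fails against the parameterized version.
assert not login_safe("alice", "' OR '1'='1")
assert login_safe("alice", "s3cret")
```

The fix costs nothing at runtime, which is part of Armistead's point: for the best-known vulnerability classes, the remedies are already on the list.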

So it's actually a communal problem. We need to ask why, with a known set of vulnerabilities widely available for anyone to examine, companies aren't requiring these holes to be fixed.

As mass consumers, we need to set security criteria as a requirement. And as a community, we need to refuse to buy software riddled with holes that, at the very least, expose our very livelihoods.