OK, so you’re a model of good corporate security practices. You require users to create long user names and passwords that include uppercase letters, numerals, and at least two special characters. You have CAPTCHAs on all your Web forms. You install every security update within an hour of its release, and your servers are locked in a steel-encased vault, a vault to which only three people have the key.

And none of it matters, because your security is only as strong as the weakest link.

The wires have been hopping lately with news of corporate data security breaches that happened not because hackers broke in from the outside, but because employees abused their power from within.

Last month a Google engineer was fired for breaching his employer’s privacy policies after he broke into the accounts of four users without their consent, spying on their Google Voice call logs in what appeared to be a flashy show of power. Compounding matters, the four users in question were teenagers, but the message sent to Google users around the world was clear: It doesn’t matter how tough you think your password is; there are probably thousands of engineers at Google who can read your email, pore over your Google Docs, and listen to your Google Voice messages to their hearts’ content.

This incident is hardly unique.

Also last month, in Baltimore, a former employee was indicted for using a keylogger program to steal the passwords of his former coworkers and then inserting pornographic photographs into a live PowerPoint presentation, among other tech-related crimes.

Last year the city of San Francisco was famously held for ransom when a network administrator simply changed a critical master password, ensuring that no one else could access a wealth of city information, from official email to payroll files and bookings at the local jail. It took a visit from the city’s mayor to pry the password out of the perpetrator, even after he’d been arrested and held on $5 million bail.

It’s a common failing for us all to ignore the obvious: The human link in any tech equation is almost always the weakest one. Hackers have known this for years: Social engineering has proven to be, by far, the easiest avenue into any network, with a crook simply claiming to be someone he’s not in order to pry passwords or other secrets out of his unwitting victims. Those on the inside need to learn the same lesson: “Trusted” employees can on occasion turn to the dark side and engage in nefarious activities of their own. The flesh is weak.

So what does a modern-day IT shop do about all of this?

Solutions are easy in the abstract, but considerably messier in actual implementation.

The most obvious step to consider is building redundancy into the security hierarchy and adding increased oversight of the staff members who control the codes. It’s similar to the reason you give a copy of your house key to a trusted neighbor or relative: If a key is lost or stolen, a backup is available with little effort. Is it possible to create a system where a master password can’t be changed unless two or more individuals approve it? Yes, but few organizations would think of taking such steps until it was too late.
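To make that concrete, here’s a minimal sketch in Python of what such a two-person rule might look like. Every name here is hypothetical, the class and approver IDs included; a real deployment would sit on top of a proper secrets manager and a tamper-evident audit log, not a toy class.

```python
# Minimal sketch of dual-control ("two-person rule") password rotation.
# All names are hypothetical illustrations, not any product's API.

class DualControlVault:
    """A master secret that can only be rotated by two distinct approvers."""

    REQUIRED_APPROVALS = 2

    def __init__(self, master_password: str):
        self._master_password = master_password
        self._pending: dict[str, set[str]] = {}  # proposed password -> approver IDs

    def propose_rotation(self, approver_id: str, new_password: str) -> bool:
        """Record one approval; apply the change only when enough people agree."""
        approvers = self._pending.setdefault(new_password, set())
        approvers.add(approver_id)  # a repeat vote by the same person doesn't count twice
        if len(approvers) >= self.REQUIRED_APPROVALS:
            self._master_password = new_password
            self._pending.clear()
            return True   # rotation applied
        return False      # still waiting on a second approver


vault = DualControlVault("old-secret")
print(vault.propose_rotation("admin-alice", "new-secret"))  # False: one vote so far
print(vault.propose_rotation("admin-bob", "new-secret"))    # True: rotated
```

The design point is simple: no single administrator, however trusted, can change the master secret alone, which is exactly the failure mode San Francisco ran into.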

The idea of formalizing a security hierarchy is analogous to why police departments have internal affairs divisions: to keep checks and balances on those in power. Who’s keeping tabs on what your help desk or network admins are doing day to day? Surely the CIO is not spot-checking the logs himself. Whose job should it be? And is there a mechanism for non-IT staff to report any issues or problems along these lines?
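There’s no single right answer, but one low-cost starting point is automated spot-checking: pull a random sample of privileged actions out of the logs and route it to a reviewer who doesn’t report to the admin team. The sketch below assumes a simple list-of-dicts log format with invented action names; it illustrates the idea rather than any particular log-management product’s API.

```python
# Hypothetical sketch: randomly sample privileged actions from the logs and
# flag them for review by someone outside the admin team.
import random

PRIVILEGED_ACTIONS = {"password_reset", "acl_change", "account_access"}

def sample_for_review(log_entries, sample_rate=0.05, seed=None):
    """Return a random subset of privileged log entries for human spot-checking."""
    rng = random.Random(seed)
    privileged = [e for e in log_entries if e.get("action") in PRIVILEGED_ACTIONS]
    sample_size = max(1, int(len(privileged) * sample_rate)) if privileged else 0
    return rng.sample(privileged, sample_size)

logs = [
    {"user": "netadmin1", "action": "password_reset", "target": "ceo@example.com"},
    {"user": "helpdesk2", "action": "login",          "target": "own workstation"},
    {"user": "netadmin1", "action": "acl_change",     "target": "payroll share"},
]
for entry in sample_for_review(logs, sample_rate=0.5, seed=7):
    print("Flag for review:", entry)
```

Even a small sample rate changes the calculus for a would-be rogue admin: he can no longer be sure that nobody is watching.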

The next solution to consider is varying levels of access for your IT staff. It’s commonplace for the entire IT department to have “run of network” access to every system under the sun, whether they need that access or not. This policy (codified or not) does have its advantages, for the very reason outlined above: It creates a certain level of redundancy in the network. But universal access to every system in the company opens up a lot of dangerous doors and invites abuse.
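In practice, this usually means role-based access: deny by default, and grant each role only the systems it needs. Here’s a bare-bones sketch; the roles and system names are made up for illustration.

```python
# Bare-bones sketch of role-based access: deny by default, grant each role
# only the systems it needs. Roles and system names are invented examples.

ROLE_ACCESS = {
    "helpdesk":      {"ticketing", "password_reset"},
    "netadmin":      {"routers", "firewalls", "ticketing"},
    "dba":           {"databases", "ticketing"},
    "security_lead": {"audit_logs", "firewalls", "databases"},
}

def can_access(role: str, system: str) -> bool:
    """Deny by default; grant only what the role explicitly includes."""
    return system in ROLE_ACCESS.get(role, set())

assert can_access("helpdesk", "ticketing")
assert not can_access("helpdesk", "databases")  # no more blanket access
```

Note that the redundancy argument survives: more than one role can cover a critical system, but nobody has the run of the entire network.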

Fixing this isn’t easy. It’s hard to tell someone on the IT team that they “don’t need” access to certain parts of the computer infrastructure; I’ve seen that firsthand. There’s an unwritten expectation within the IT environment that everything on the network is fair game. In fact, most corporate HR employee handbooks codify this: Once you type something on your computer at the office, it may be read, analyzed, and archived forever by the IT department.

Of course, external users and clients have different expectations, and internal policy may not actually jibe with published privacy policies and terms of use. Restricting access to certain parts of the network is a tough step to take, but it’s certainly easier than sending out a press release explaining that one of your staff ran off with half a million of your customers’ credit card numbers.

Christopher Null writes about technology extensively for Wired, PC World, and Maximum PC. He was the founder and Editor-in-Chief of Mobile PC magazine and spent four years blogging about tech daily for Yahoo! You can find his running commentary at chrisnull.com.


For more on insider fraud and social engineering, read Fighting Insider Fraud and Companies Fail DefCon Social Engineering Security Test.