I spend quite a bit of time talking in front of groups of people, and one of the things I'm often asked is whether I use open source software (OSS) because I think it's more secure than proprietary, or "closed source", software. I'll bet this same topic has come up once or twice (this week) in your office, too.

Now, rest assured that the debate over this issue is fraught with zealotry on both sides, and it isn't likely to end with this column. I do think, though, that my answer to the question is worth exploring, because it gets to the heart of some key issues in software security practice.

Before I go on, I should say that, yes, I am an avid user of "the penguin" -- Linux. I run my business almost entirely on systems running Debian Linux (Sarge). And, yes, I like to think of myself as pretty security conscious when it comes to my company's security, as well as the security of my family's personal information.

But that's not why I choose to run Linux. Note, too, that I have a road-warrior laptop running XP Professional, mainly for full compatibility with the (largely) Microsoft Office world of my customers. So with that out of the way, let's explore the issues a bit...

Like so many engineering questions, the answer, of course, is "it depends". For starters, the OSS believers feel their software is more secure because the source code is available for public scrutiny and security fixes can be incorporated by anyone in a large community of users. An end user could even modify the source code of an operating system or application himself to make security-related changes.

All of this is true.

On the other hand, proprietary software believers feel their software is more secure because the professional developers who design and implement the code follow rigorous internal standards. These standards generally include comprehensive testing of the software prior to delivery to customers. What's more, the source code is not available to the general public, so it's unlikely that curious people with way too much free time on their hands will find bugs and flaws in it.

All of this is true as well.

In my view, both of these positions are fundamentally flawed in numerous ways. And this is where it starts to get interesting.

Sure enough, OSS source code is available for all the world to scrutinize. The problem, though, is that all the world doesn't do that. Take, for example, the ill-fated Sardonix project, a DARPA-funded effort to provide a public forum for vetting OSS code and publishing the results. But "build it and they will come" wasn't quite what happened. The project languished for lack of interest and was eventually scrapped.

Making source code available to the world, in and of itself, does little if anything to advance the security of the software.

Then, on the proprietary side of things, there's the notion that professional developers rigorously follow security standards. Well, that is simply not consistent with my experience in the commercial software community. All too often, developers are laser-focused on coding to functional specifications, and the security of the software is left as an afterthought. On top of that, I've found that the vast majority of software developers I've interacted with do not understand security issues very well.

Now, I'm obviously generalizing here, and it's quite likely that there are many exceptions. But when I've polled my audiences on attacks against software, I've found that the IT security folks understand the attacks but not the software, and the software engineers understand the software but not the attacks.

And let's not discount attackers' ability to reverse engineer machine code to find bugs and flaws. Indeed, there is compelling anecdotal evidence to support the claim that attackers use vendor patches -- distributed solely in binary form -- to deduce the problems they address and develop attack tools.
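
To make the patch-diffing point concrete, here's a contrived C sketch; the function names and the bug are invented for illustration and aren't drawn from any real vendor patch. The only difference between the two functions is the vendor's added length check, and that added check is exactly what a diff of the patched and unpatched builds would highlight, pointing an attacker straight at the overflow still present on unpatched systems.

    #include <stdio.h>
    #include <string.h>

    /* Pre-patch version (hypothetical): copies user input into a
     * fixed-size buffer with no length check -- a classic stack
     * buffer overflow. */
    void greet_unpatched(const char *name) {
        char buf[32];
        strcpy(buf, name);                 /* overflows if name is 32+ bytes */
        printf("Hello, %s\n", buf);
    }

    /* Post-patch version (hypothetical): the vendor's fix adds a
     * bounds check. Diffing the two compiled binaries surfaces this
     * one new comparison, effectively documenting the flaw. */
    void greet_patched(const char *name) {
        char buf[32];
        if (strlen(name) >= sizeof buf) {  /* the telltale added check */
            fprintf(stderr, "input too long\n");
            return;
        }
        strcpy(buf, name);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_patched("world");
        return 0;
    }

Attackers don't need this comparison spelled out in source form; tools for diffing binaries are well known in that community, which is why shipping fixes only in binary form buys far less secrecy than vendors might hope.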

Keeping source code closed to external scrutiny, in and of itself, does little if anything to affect the security of the software.

The basic tenets of both sides of the open vs. closed debate don't have much of anything to do with security. One can build secure or weak software in either form. Building secure software requires careful attention to security issues throughout every phase of development -- from design through implementation, testing, and even deployment. Whether the source code (and the design documentation, for that matter) ends up open or closed is utterly irrelevant to the security of the software.

Oh, and the real answer to why I choose Linux is quite simply that I'm comfortable with it as a user. Having been a UNIX desktop user since 1985 or so, I'm just more at home there. The fact that so much of today's malware targets Microsoft's operating systems and applications never factored into my decision (much).