Eric Raymond: Why open source will rule

By Matthew Broersma
ZDNet (UK)
March 29, 2002, 6:35 AM PT


Eric Raymond believes Linux is on a roll.

Raymond is best known as the co-founder, with Bruce Perens, of the Open Source Initiative (OSI), created to promote Linux and other "free" software to businesses in a language they could understand. The OSI has largely succeeded in its aims, Raymond says, with backers like IBM heavily promoting Linux and big companies adopting the operating system for both back-end systems and desktops. Raymond is also the author of The Cathedral and the Bazaar and other open-source texts.

For evidence that the open-source movement now has the mainstream credibility it lacked in the late 1990s, Raymond points to Microsoft's failed attempts last year to discredit Linux and the GNU General Public License (GPL) under which it is distributed. Now Linux and the open-source development model are well positioned to succeed in the increasingly complex world of software development.

Raymond spoke to ZDNet UK during a recent European speaking tour under the auspices of the UK and Danish Unix Users Groups.

Q: You co-founded the Open Source Initiative in late 1998 with the goal of making free software credible outside of the hacker community. What progress has been made since then?

A: I think to a large extent we succeeded in our initial program, which was positioning the open-source label as something the typical corporate manager could deal with without being frightened. And I think one of the effects of that has been to empower advocates of open source inside corporations. Now they have a set of resources to point to that aren't, as it were, ideologically tainted... there are elements of the Free Software Foundation that are not real happy with that.

Microsoft has been keen to undo some of that work with its anti-Linux, anti-GPL campaign.

Thankfully they've failed. It looked a little dicey there early last year, but they've failed.

What happened was they floated a few trial balloons, and a group of open-source leaders put out a public statement that basically wrapped a tyre iron around their heads, and the trade press bought our story and didn't buy Microsoft's, and they've kept fairly quiet since.

Is there a danger of new laws coming into effect that could protect proprietary software companies at the expense of open source?

Of course there is existing law on the books that is kind of problematic, most notably the DMCA (Digital Millennium Copyright Act) and UCITA (the Uniform Computer Information Transactions Act, a software licensing act) in the US. Those are still problems we're coping with in various ways, through lobbying and legal challenges, but I don't think Microsoft succeeded in creating any new ones.

It's hard to say "at the expense of open source," because open source doesn't threaten anybody's intellectual property; you don't have to join that game unless you want to. The threats come more from legislation that's intended to accomplish IP protection in areas outside of software but has spillover effects on software development, the DMCA of course being the best-known example. There have been some encouraging developments recently. The EFF has a brief that makes a fairly strong argument that section 1201 of the DMCA is unconstitutionally vague. Under US constitutional law they have a pretty strong challenge going.

What do you think of Sun Microsystems' decision to start charging for StarOffice?

In that case StarOffice just died. They just shot StarOffice through the head. It doesn't matter whether I'm in favor of it or not. But if OpenOffice still exists, and it's GPLed, and they're going to start charging for StarOffice, then they just shot StarOffice through the head.

Some open-source companies are starting to add proprietary software to their open-source offerings as a way of increasing revenues. Does that threaten the open-source movement?

In fact the company I'm associated with, VA Software, recently announced they're going to be selling proprietary extensions to SourceForge.

I don't see it happening to an extent that really jeopardizes any of the core open-source software or applications; otherwise I'd be much more concerned about it. And it's natural that people are going to be more conservative and careful when the economy--not just in the U.S. but all through the developed world, where most of the software development is going on--is fairly recessionary right now. That makes managers more cautious and conservative, and amplifies any tendency toward over-protectiveness that they have.

How much does that recessionary atmosphere improve the attractiveness of Linux and other open-source software?

It's a two-edged sword. We're seeing an increasing number of stories about increasing Linux take-up, even on the desktop in large corporations, and it's very clear that what's going on here is that IT managers are under tremendous pressure to cut their budgets and cut expenses. Disruptive technologies really thrive under conditions where people are looking to cut budgets.

You get a disruptive technology... one that initially has much lower price and reliability, but is cost-effective in niche markets where low cost is really important. So the technology will get a little bit of a foothold in there, and the manufacturers of the disruptive technology will use the revenue stream they get from that to gradually improve it. And you may get to the point where the disruptive technology improves enough to match actual demand, as opposed to the demand that the sustaining technologies think is there, because they're always looking at their highest-margin customers. And when that happens, the sustaining technologies will just fall off a cliff. That market will just collapse.

What's happening in software on the broadest scale is that open source in general, and Linux in particular, is a disruptive-technology attack on the traditional proprietary business model and its titans, including outfits like Microsoft and SAP and so forth. And the thing is that under conditions where everybody is under pressure to cut budgets, disruptive technologies get more and more attractive, because they cost a lot less.

It's a tough row to hoe if you're a sustaining technologist, because there's no point at which it looks rationally appropriate to cannibalize your own business and adopt the new technology: the margins are lower, and your shareholders will kill you. Right up until the moment of collapse, it always looks like you'll be able to keep playing the sustaining-technology game and collect fatter margins; then the world changes, the bottom falls out of your market, and you're gone.

It has been argued that some critical aspects of desktop software, like maintaining a consistent user interface in which different applications work seamlessly together, just work better when there's a big controlling company in charge. What do you make of that argument?

I don't think that's a real issue, but there's a closely related issue that is real. I don't think it's necessary to have a single player dominating user interfaces if you have a development community that is alive to the necessity of having a uniform interface, and prepared to make that a priority.

In fact, the Linux desktops have already successfully done this. You may note that drag and drop works correctly between GNOME and KDE applications. That's not an accident; it happened because the GNOME and KDE people reached out to each other, said, "We've got to have a common drag-and-drop protocol," wrote a standard, and now the applications on both sides conform to it. That happened in spite of the fact that there was no single player controlling both GNOME and KDE that could compel that interface uniformity. I think we've demonstrated in open source that it is possible to bridge those gaps and create a uniform interface.

There's a closely related issue, however, that I don't know how to solve yet without a big player with a lot of money, which is doing systematic end-user testing of user interfaces. We're not very good at that yet; we need to find a way to be good at it.

It's the actual mechanics of setting up large-scale focus-group testing with end users. The problem is that open-source projects aren't getting feedback from large-scale end-user testing, and that's allowing a certain spikiness in the interfaces to persist that could otherwise be smoothed out.

Another argument says that historically speaking, monopolies like Microsoft or the railway companies are favored when infrastructure is being built, but once the infrastructure phase is over they shrivel up.

I think that's absolutely backwards. It's more important when you're standardizing infrastructure that the infrastructure not be under the control of a particular party; otherwise you get critical patents, trade-secret protection and other forms of proprietary control locking everybody else out of that infrastructure forever.

I don't think I buy it. Traditionally, when you see horizontal infrastructure being built by monopolies, it happened because some player was able to co-opt the government early and create barriers to entry. That's obviously not the case in the software business, thank goodness.

Microsoft has exercised a lot of control over the software market, though.

Well, they have, but they haven't been able to, for example, prevent new entrants into the software field. They haven't been able to do things like require government certification of applications with standards that favour big players, which is the classic ploy you see in monopolisation efforts. That's the classic way of co-opting the government when you can't buy it outright, which is frequently what's done.

Bruce Perens has argued that big companies like IBM that use open-source software should give some of their own patented intellectual property back to the open-source world. Does that seem like a reasonable argument?

Explicitly jawboning people on issues like that is, I think, often counterproductive. The approach I'd prefer is to point out fairly quietly that there are times when it makes economic sense to give up proprietary control, lay out all the arguments, and let people draw their own conclusions. You can't push the heads of Fortune 500 companies around; they've got lots of money and they've got big egos. If you try to jawbone them, they're going to be counter-suggestible; they're going to dig their heels in. So I'd prefer to take a quieter approach.

Red Hat's Bob Young argues that Linux will never take over the desktop, but that it will make the desktop largely irrelevant by controlling the Internet back-end. What are your views on the desktop debate?

I think Linux will take over the desktop, and I think the reason it will doesn't have much to do with whether we clean up and polish our interfaces or not. Linux will take over the desktop because, as the price of desktop machines drops, the Microsoft tax represents a larger and larger piece of OEM margin. There's going to come a point at which that's not sustainable, and at which OEMs have to bail out of the Microsoft camp in order to continue making any money at all. At that point, Linux wins even if the UI sucks.

And frankly, the UI doesn't suck. It's not perfect, it's got a few sharp edges and a few spikes on it, but so does Windows.

We broke through the $1,000 (£700) floor some years back. But my threshold figure for when Microsoft isn't viable anymore is when the average desktop configuration drops below $350. I got that figure by looking at the position of Microsoft in the market for PDAs and handhelds. Above $350, Windows CE has some presence, largely because Microsoft is heavily subsidizing it, but below $350, Microsoft is nowhere. And the reason is very clear: if your unit price is that low, you can't pay the Microsoft tax and make any money.
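As a rough sketch of the margin arithmetic behind that threshold (the licence fee and margin figures here are hypothetical, chosen only for illustration; Raymond doesn't cite specific numbers): suppose an OEM pays a fixed per-unit Windows licence fee of about \(f = \$50\) and earns a gross margin of roughly 10 percent of the system price \(p\). The fee's share of revenue is then

\[
\frac{f}{p} = \frac{50}{1000} = 5\% \quad\text{at \$1,000}, \qquad \frac{f}{p} = \frac{50}{350} \approx 14\% \quad\text{at \$350},
\]

comfortably inside the margin in the first case, but larger than the entire margin in the second, at which point the OEM loses money on every unit shipped.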

We're heading toward the point where consumer desktops are available at that price. Some of the low-end PC integrators are already there, outfits like eMachines and so forth.

Microsoft has tried to co-opt interest in open source with its "shared source" initiative. Is that going to work?

I don't see any signs that it's changing anybody's mind. I don't see anybody in the press saying, "That's wonderful! In fact, it's so wonderful we'll swallow XP's license restrictions, we'll swallow .Net and Passport..." It isn't happening.

Is it just a PR move?

It goes deeper than that. Everything in Microsoft's strategic behavior for the last two years, as far as I'm concerned, can only be accounted for by the hypothesis that they know their packaged software business is doomed.

They're moving from a product business, where selling Windows CDs is their major revenue stream, to what they're telling everybody they want to be: the world's biggest ASP (application service provider). Now, people haven't really thought about this, but being an ASP is harder than being in a product business: the staffing requirements are more demanding and the margins are lower. Why would Microsoft go from being in an easy business to being in a hard business? I think the right answer is that they know the easy business is doomed.

Bill Gates said as much in his famous 1995 e-mail saying the Internet was the future.

They have a strategic problem, which is that somehow they have to make the transition to a Passport and .Net business model before Wall Street figures out that their current business model is screwed. If the investors figure that out before they've changed horses, then they're going to discount the future value of the stock, and the whole financial pyramid that Microsoft is built on will just collapse.

I wouldn't be sleeping too well if I were a Microsoft strategist right now, because that's a really hard job, especially considering that they don't even have the technology in place for the new business model yet. Even if they had the technology in place, they would have a very hard time persuading corporate managers to buy into it, simply because of the control issue. If I have all my business processes farmed out to an ASP, I don't control them any more. And it's not just a matter of being dependent on somebody else's downtime as well as my own: how do I know that my core business secrets are still protected?

Speaking of security, the Internet Engineering Task Force (IETF) recently released a draft protocol for reporting security flaws in software, which some people criticized as too slanted in favor of the software industry.

That was very good; it was very well done. I skimmed it, and I didn't feel that way. I remember reading it and thinking that they had chosen the time-outs for reporting requirements just about right. They chose just about the same time-outs I would.

Is there a danger of software companies exercising too much control over how and when software bugs are reported?

There's the obvious threat from the DMCA, if that kind of control is written into the license, but under current software licenses they can't control that kind of disclosure. And in fact if they tried, they'd probably run into serious legal problems. So I don't see that as a major issue.

I'm not worried about that, for two reasons. One is that there are very articulate and capable people with press exposure and credibility in the security community who are prepared to go out there and say, "Full disclosure is the only way you can get decent security" -- I'm thinking, for example, of Bruce Schneier at Counterpane Internet Security. He's done an excellent job of educating the trade press on this, and there are other people who are almost as capable in that way. So I think they'll keep that issue alive.

Also, one of the reasons I'm happy about that RFC (Request for Comments) you just mentioned is that anyone who comes under corporate pressure not to report bugs can point at it and say, hey, this is Internet best practice here, so get off my back.

Would the IETF proposal make any difference?

In that political sense, yes. I don't think that draft RFC does anything more than slightly formalize the unwritten guidelines that already exist, as witnessed by the fact that they chose the same time-outs that I would have (laughs).

Managers have a superstitious respect for documentation and procedures, so being able to point at a document does help.

How is the open-source movement different today than, say, 1999?

I think we're more sober now than we used to be. There was a period during the dot-com boom in '99 when a lot of people were in some danger of getting distracted by the prospect of lots of easy money. Of course, that prospect has gone away now, which is all right if it has the effect of re-concentrating us on the work.

I think we also have a lot more credibility with the Global 1,000 and the business press than we had in '99. We've got more success stories under our belt, and more people who've considered the pro-open-source argument carefully and decided they agree with it. That was demonstrated by what happened last year, when there was some danger that Microsoft was going to mount a full-bore propaganda campaign against us.

If they had done that in mid-1998, just after the Mozilla source release, they might have buried us. I was seriously worried that that was a possibility -- that they would turn on the hype machine before we had enough success stories and enough corporate backing to counter it. What happened in early 2001 demonstrated that we had already achieved enough mainstream credibility, and recruited enough backers inside the establishment, as it were, that when Microsoft tried it, it just bounced. And that's a significant difference from '99.

Mainstream credibility is important to you and the OSI, isn't it?

The thing that I've always kept in mind, and the reason I founded the OSI in the first place is this: if you want to change the world, you have to co-opt the people who write the checks.

Maybe it sounds pretentious to say this, but most of the people who do this care mostly about art, not about money. If that weren't the case they'd be off doing something else. Mind you, I'm not saying it's necessarily better to care about art than about money; I'm just making an observation about the motivations of the people who do this.

What's the future for the "bazaar" open-source model?

I see that continuing to succeed, in a way that's separate from the debate about business models. The reason I'm very sure of that is the scaling problems software development is having as machines grow more capable and software grows more complex.

The fundamental problem here is that machines roughly double in capability every eighteen months, and as you know, the size of the average software project in lines of code tends to double right along with that. That's a real problem, because bugs generally arise from unanticipated interactions between different pieces of code in a project, and that means the number of bugs in a project tends to rise with the square of the number of lines of code. So as projects get larger and their bug density increases, the verification problem gets worse, and it doesn't get worse linearly; it gets worse quadratically.
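As a rough sketch of the arithmetic behind that claim (assuming, as Raymond does, that bugs come from unanticipated pairwise interactions between pieces of code; the fraction \(\epsilon\) below is an illustrative assumption, not a figure from the interview): a project with \(n\) lines of code has

\[
\binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
\]

potential pairwise interactions, so if some small, roughly constant fraction \(\epsilon\) of them turns out to be buggy, the expected bug count grows as \(\epsilon n^2/2\). Doubling \(n\) therefore roughly quadruples both the bug count and the verification workload.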

The reason I'm confident that the bazaar model, the open-source model, will continue to thrive and claim new territory is that all of the other verification models have run out of steam. It's not that open sourcing is perfect, or that the many-eyeballs effect is in some theoretical sense necessarily the best possible way to do things; the problem is that we don't know anything that works as well. And the problems with other methods of QA (quality assurance) are actually increasing in severity as the size of projects goes up. Open-source development, on the other hand--open-source verification, the many-eyeballs effect--seems to scale pretty well. In fact it works better as your development community gets larger.

If you want to go to a really fundamental analysis, what we're perpetually rediscovering as complexity increases is that centralization doesn't work. Centralization doesn't scale, and when you push any human endeavor past a certain threshold of complexity you rediscover that.

That recalls the argument a few weeks ago about whether Linus Torvalds should get an assistant.

That's another illustration of the problem. Centralization doesn't scale even when the center is Linus.