And if you thought today’s Java compilers are buggy and unpredictable, just wait until you have your first prima donna programmer problem. Managing teams of humans makes C++ templates look positively trivial.


In a relational database every row in a table is exactly the same length in bytes, and every field is always a fixed offset from the beginning of the row. Basically, it’s one CPU instruction.
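A hedged sketch of that point (the layout here is entirely made up for illustration): with fixed-length rows, finding a field is just arithmetic on the row number, no scanning or parsing required.

```python
# Hypothetical fixed layout (assumed, not from the text): 16-byte rows,
# with an 8-byte name field starting 4 bytes into each row.
ROW_SIZE = 16
NAME_OFFSET, NAME_LEN = 4, 8

def field_bytes(table: bytes, row: int) -> bytes:
    # base + row * ROW_SIZE + NAME_OFFSET: pure arithmetic, which is why
    # the lookup boils down to a single indexed load.
    start = row * ROW_SIZE + NAME_OFFSET
    return table[start:start + NAME_LEN]

# Two fixed-length rows packed back to back:
table = (b"\x00\x00\x00\x01Alice" + b"\x00" * 7
         + b"\x00\x00\x00\x02Bob" + b"\x00" * 9)
```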

By contrast, we need to parse the XML into a tree in memory so that we can operate on it reasonably quickly. As every compiler writer knows, lexing and parsing are the slowest part of compiling. Suffice it to say that it involves a lot of string stuff, which we discovered is slow, and a lot of memory allocation stuff, which we discovered is slow, as we lex, parse, and build an AST in memory.
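A toy illustration of why lexing is allocation-heavy (this is a minimal sketch, not the real parser): even the simplest XML lexer allocates a fresh string for every tag and every text run it sees.

```python
# Minimal lexer sketch: each token is a newly allocated string, and a real
# parser would then allocate a tree node per element on top of this.
def lex(xml: str):
    tokens, i = [], 0
    while i < len(xml):
        if xml[i] == "<":
            j = xml.index(">", i)
            tokens.append(xml[i:j + 1])   # new string per tag
            i = j + 1
        else:
            j = xml.find("<", i)
            j = len(xml) if j == -1 else j
            tokens.append(xml[i:j])       # new string per text run
            i = j
    return tokens
```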


To correct the problem, Microsoft universally adopted something called a zero defects methodology. Many of the programmers in the company giggled, since it sounded like management thought they could reduce the bug count by executive fiat. Actually, zero defects meant that at any given time, the highest priority is to eliminate bugs before writing any new code.


If you find a bug in code that has already shipped, you’re going to incur incredible expense getting it fixed.


Programmers are notoriously crabby about making schedule. “It will be done when it’s done!” they scream at the business people.

Unfortunately, that just doesn’t cut it. There are too many planning decisions that the business needs to make well in advance of shipping the code: demos, trade shows, advertising, etc. And the only way to do this is to have a schedule, and to keep it up to date.

The other crucial thing about having a schedule is that it forces you to decide what features you are going to do, and then it forces you to pick the least important features and cut them rather than slipping into featuritis (aka scope creep).


We all know that knowledge workers work best by getting into “flow,” also known as being “in the zone,” where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done.


You’re wasting money by having $100/hour programmers do work that can be done by $30/hour testers.


Why won’t people write specs? People claim that it’s because they’re saving time by skipping the spec-writing phase. They act as if spec writing was a luxury for NASA space shuttle engineers, or people who work for giant, established insurance companies.

First of all, failing to write a spec is the single biggest unnecessary risk you take in a software project. It’s as stupid as setting off to cross the Mojave desert with just the clothes on your back, hoping to “wing it.” Programmers and software engineers who dive into code without writing a spec tend to think they’re cool gunslingers, shooting from the hip. They’re not. They are terribly unproductive. They write bad code and produce shoddy software, and they threaten their projects by taking giant risks which are completely uncalled for.


The most important function of a spec is to design the program. Even if you are working on code all by yourself, and you write the spec solely for your own benefit, the act of writing the spec — describing how the program works in minute detail — will force you to actually design the program.


The moral of the story is that when you design your product in a human language, it takes only a few minutes to try thinking about several possibilities, revising, and improving your design. Nobody feels bad just deleting a paragraph in a word processor. A programmer who’s just spent two weeks writing some code is going to be quite attached to that code, no matter how wrong it is.


In too many programming organizations, every time there’s a design debate, nobody ever manages to make a decision, usually for political reasons. So the programmers only work on uncontroversial stuff. As time goes on, all the hard decisions are pushed to the end. These projects are the most likely to fail. If you are starting a new company around a new technology and you notice that your company is constitutionally incapable of making decisions, you might as well close down now and return the money to the investors, because you ain’t never gonna ship nothing.


A functional specification describes how a product will work entirely from the user’s perspective. It doesn’t care how the thing is implemented. It talks about features. It specifies screens, menus, dialogs, and so on.

A technical specification describes the internal implementation of the program. It talks about data structures, relational database models, choice of programming languages and tools, algorithms, etc.

When you design a product, inside and out, the most important thing is to nail down the user experience. What are the screens, how do they work, what do they do.


Details are the most important thing in a functional spec. You’ll notice in the sample spec how I go into outrageous detail talking about all the error cases for the login page. All of these cases correspond to real code that’s going to be written, but, more importantly, these cases correspond to decisions that somebody is going to have to make. Somebody has to decide what the policy is going to be for a forgotten password. If you don’t decide, you can’t write the code. The spec needs to document the decision.
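A hypothetical sketch of the point (the function and return values are made up): each branch below is a decision the spec has to record before the code can exist.

```python
# Each branch corresponds to a spec decision, noted in the comments.
def handle_login(users: dict, email: str, password: str) -> str:
    if email not in users:
        return "no-such-account"   # decision: do we reveal this, or be vague?
    if users[email] != password:
        return "wrong-password"    # decision: retry limit? lockout? hint?
    return "ok"                    # decision: where do we redirect on success?
```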


This approach is why specs have such a bad reputation. A lot of people have said to me, “specs are useless, because nobody follows them, they’re always out of date, and they never reflect the product.”

Excuse me. Maybe your specs are out of date and don’t reflect the product. My specs are updated frequently. The updating continues as the product is developed and new decisions are made. The spec always reflects our best collective understanding of how the product is going to work. The spec is only frozen when the product is code complete.


If you show a chessboard, in the middle of a real game of chess, to an experienced chess player for even a second or two, they will instantly be able to memorize the position of every piece. But if you move around a couple of pieces in nonsensical ways that couldn’t happen in normal play, it becomes much, much harder for them to memorize the board. This is different from the way computers think. A computer program that could memorize a chess board could memorize both possible and impossible layouts with equal ease. The way the human brain works is not random access; pathways tend to be strengthened in our brains and some things are just easier to understand than other things because they are more common.

So, when you’re writing a spec, try to imagine the person you are addressing it to, and try to imagine what you’re asking them to understand at every step.


As I write this, Netscape 5.0 is almost two years late. Partially, this is because they made the suicidal mistake of throwing out all their code and starting over: the same mistake that doomed Ashton-Tate, Lotus, and Apple’s MacOS to the recycle bins of software history.


It’s rare in the open source world to have a face-to-face conversation around a whiteboard while drawing boxes and arrows, so the kind of design decisions that benefit from drawing boxes and arrows are usually decided poorly on such projects. As a result, geographically dispersed teams have done far better at cloning existing software where little or no design is required. But most open source software still needs to run in the wild and is therefore shrinkwrap.


The bigger issue with the development of games is that there’s only one version. Once people have played to the end of Duke Nukem 3D and killed the big boss, they are not going to upgrade to Duke Nukem 3.1D just to get some bug fixes and new weapons. It’s too boring.


When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and they look at the problem of people sending each other spreadsheets, and they realize that there’s a general pattern: sending files. That’s one level of abstraction already. Then they go up one more level: People send files, but web browsers also “send” requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It’s the same thing again! Those are all sending operations, so our clever thinker invents a new, higher abstraction called messaging, but now it’s getting really vague and nobody really knows what they’re talking about any more.

When you go too far up, abstraction-wise, you run out of oxygen. Sometimes, smart thinkers just don’t know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don’t actually mean anything at all.


Another common thing Architecture Astronauts like to do is invent some new architecture and claim it solves something. Java, XML, SOAP, XML-RPC, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!


Maybe this is the key to productivity: just getting started. Maybe when pair programming works, it works because when you schedule a pair programming session with your buddy, you force each other to get started.


But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft. People get worried about .NET and decide to rewrite their whole architecture for .NET because they think they have to. Microsoft is shooting at you, and it’s just cover fire so that they can move forward and you can’t, because this is how the game is played.

Fire and Motion, for small companies like mine, means two things. You have to have time on your side, and you have to move forward every day. Sooner or later you will win. Every day, our software is better and better, and we have more and more customers and that’s all that matters. Until we’re a company the size of Oracle, we don’t have to think about grand strategies. We just have to come in every morning and somehow, launch the editor.


It comes down to an attribute of software that most people think of as craftsmanship. When software is built by a true craftsman, all the screws line up. When you do something rare, the application behaves intelligently. More effort went into getting rare cases exactly right than went into getting the main code working. Even if it took an extra 500 percent effort to handle 1 percent of the cases.

Craftsmanship is, of course, incredibly expensive. The only way you can afford it is when you are developing software for a mass audience.


When UNIX was created and when it formed its cultural values, there were no end users. Computers were expensive, CPU time was expensive, and learning about computers meant learning how to program. It’s no wonder that the culture that emerged valued things that are useful to other programmers. By contrast, Windows was created with one goal only: to sell as many copies as possible at a profit. Programmers, as an audience, were an extreme afterthought.


I have heard economists claim that Silicon Valley could never be re-created in, say, France, because the French culture puts such a high penalty on failure that entrepreneurs are not willing to risk it. Maybe the same thing is true of Linux: It may never be a desktop OS because the culture values things which prevent it.


They even renamed core directories — heretical! — to use common English words like “applications” and “library” instead of “bin” and “lib.”


An important aspect of automatic crash collection is that the same crash probably will happen many times to many people, and you don’t really want a new bug in your database for every duplicate of the crash. We handle this by constructing a unique string that contains key elements of the crash data.
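One way that could look (the fields and scheme below are assumptions for illustration, not the authors’ actual implementation): hash the parts of the crash that stay stable across duplicates — say module, function, and offset — into a bucket key, so repeats land on the same bug.

```python
import hashlib

def crash_signature(module: str, function: str, offset: int) -> str:
    # Build a stable key from the crash site; identical crashes hash alike.
    key = f"{module}!{function}+0x{offset:x}"
    return hashlib.sha1(key.encode()).hexdigest()[:12]

# Three incoming reports, two of which are the same crash:
buckets = {}
for report in [("app.exe", "CopyFile", 0x12),
               ("app.exe", "CopyFile", 0x12),
               ("app.exe", "Paint", 0x40)]:
    sig = crash_signature(*report)
    buckets[sig] = buckets.get(sig, 0) + 1
```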


Everybody gives lip service to the idea that people are the most important part of a software project, but nobody is quite sure what you can do about it. The very first thing you have to do right if you want to have good programmers is to hire the right programmers, and that means you have to be able to figure out who the right programmers are, and this is usually done in the interview process.


You’re going to see three types of people in your interviews. At one end of the scale, there are the unwashed masses, lacking even the most basic skills for this job. They are easy to ferret out and eliminate, often just by asking two or three questions. At the other extreme you’ve got your brilliant superstars who write Lisp compilers for fun in a weekend. And in the middle, you have a large number of “maybes” who seem like they might just be able to contribute something. The trick is to tell the difference between the superstars and the maybes, because the secret is that you don’t want to hire any of the maybes. Ever.

At the end of the interview, you must be prepared to make a sharp decision about the candidate. There are only two possible outcomes to this decision: Hire or No Hire. There is no other possible answer. Never say, “Hire, but not for my team.” This is rude and implies that the candidate is not smart enough to work with you, but maybe he’s smart enough for those losers over in that other team.

Never say “Maybe, I can’t tell.” If you can’t tell, that means No Hire.

Why am I so hardnosed about this? It’s because it’s much, much better to reject a good candidate than to accept a bad candidate. A bad candidate will cost a lot of money and effort and waste other people’s time fixing all their bugs. Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it. In some situations it may be completely impossible to fire anyone. Bad employees demoralize the good employees. And they might be bad programmers but really nice people or maybe they really need this job, so you can’t bear to fire them, or you can’t fire them without pissing everybody off, or whatever. It’s just a bad scene.

On the other hand, if you reject a good candidate, I mean, I guess in some existential sense an injustice has been done, but, hey, if they’re so smart, don’t worry, they’ll get lots of good offers.


People who are Smart but don’t Get Things Done often have PhDs and work in big companies where nobody listens to them because they are completely impractical. They would rather mull over something academic about a problem than ship on time. These kinds of people can be identified because they love to point out the theoretical similarity between two widely divergent concepts.


How do you detect smart in an interview? The first good sign is that you don’t have to explain things over and over again. The conversation just flows. Often, the candidate says something that shows real insight, or brains, or mental acuity. So an important part of the interview is creating a situation where someone can show you how smart they are. The worst kind of interviewer is the blowhard. That’s the kind who blabs the whole time and barely leaves the candidate time to say, “yes, that’s so true, I couldn’t agree with you more.” Blowhards hire everyone; they think that the candidate must be smart because “he thinks so much like me!”


I’m very, very careful to avoid anything that might give me some preconceived notions about the candidate. If you think someone is smart before they even walk into the room, just because they have a PhD from MIT, then nothing they can say in one hour is going to overcome your initial prejudice. If you think they are a bozo because they went to community college, nothing they can say will overcome that initial impression. An interview is a very, very delicate scale — it’s very hard to judge someone based on a one-hour interview, and it may seem like a very close call. But if you know a little bit about the candidate beforehand, it’s like a big weight on one side of the scale, and the interview is useless.


Look for passion. Smart people are passionate about the projects they work on. They get very excited talking about the subject. They talk quickly and get animated. Being passionately negative can be just as good a sign. There are far too many people around who can work on something and not really care one way or the other. It’s hard to get people like this motivated about something.

Bad candidates just don’t care and will not get enthusiastic at all during the interview. A really good sign that a candidate is passionate about something is that when they are talking about it, they will forget for a moment that they are in an interview.


Good candidates are careful to explain things well, at whatever level.


If it was a team project, look for signs of a leadership role.


They are having a good ol’ time learning Pascal in college, until one day their professor introduces pointers, and suddenly, they don’t get it. They just don’t understand anything anymore. 99 percent of the class goes off and becomes Political Science majors, then they tell their friends that there weren’t enough good-looking members of the appropriate sex in the CompSci classes, that’s why they switched.


Some interviewers try to judge whether the candidate asks “intelligent” questions. Personally, I don’t care what questions they ask; by this point I’ve already made my decision. The trouble is, candidates have to see about five or six people in one day, and it’s hard for them to ask five or six people different, brilliant questions, so if they don’t have any questions, fine.

I always leave about five minutes at the end of the interview to sell the candidate on the company and the job. This is actually important even if your decision is No Hire. If you’ve been lucky enough to find a really good candidate, you want to do everything you can at this point to make sure that they want to come work for you. But even if they are a bad candidate, you want them to like your company and go away with a positive impression.


Negative reviews, obviously, have a devastating effect on morale. In fact, giving somebody a review that is positive, but not as positive as that person expected, also has a negative effect on morale.

The effect of reviews on morale is lopsided: While negative reviews hurt morale a lot, positive reviews have no effect on morale or productivity. The people who get them are already working productively. For them, a positive review makes them feel like they are doing good work in order to get the positive review — as if they were Pavlovian dogs working for a treat, instead of professionals who actually care about the quality of the work they do.

And herein lies the rub. Most people think that they do pretty good work (even if they don’t). It’s just a little trick our minds play on us to keep life bearable. So if everybody thinks they do good work, and the reviews are merely correct (which is not very easy to achieve), then most people will be disappointed by their reviews. The cost of this in morale is hard to overstate. On teams where performance reviews are done honestly, they tend to result in a week or so of depressed morale, moping, and some resignations. They tend to drive wedges between team members, often because the poorly rated are jealous of the highly rated, in a process that DeMarco and Lister call teamicide, the inadvertent destruction of jelled teams.


…at least two dozen studies over the last three decades have conclusively shown that people who expect to receive a reward for completing a task or for doing that task successfully simply do not perform as well as those who expect no reward at all.

Incentives (or bribes) simply can’t work in the workplace. Any scheme of rewards and punishments, even the old-fashioned trick of catching people doing something right and rewarding them, all do more harm than good. Giving somebody positive reinforcement implies that they only did it for the Lucite plaque; it implies that they are not independent enough to work unless they are going to get a cookie; and it’s insulting and demeaning.


Microsoft almost made the same mistake, trying to rewrite Word for Windows from scratch in a doomed project called Pyramid, which was shut down, thrown away, and swept under the rug. Lucky for Microsoft, they had never stopped working on the old code base, so they had something to ship — making it merely a financial disaster, not a strategic one.


All new source code! As if source code rusted.

The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?


Each of these bugs required weeks of real-world usage before being found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those characters.

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.


The code may be doggone ugly. One project I worked on actually had a data type called a FuckedString. Another project had started out using the convention of starting member variables with an underscore, but later switched to the more standard “m_”. So half the functions started with “_” and half with “m_”, which looked ugly. Frankly, this is the kind of thing you solve in five minutes with a macro in Emacs, not by starting from scratch.
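The five-minute fix alluded to here, sketched as a regex pass instead of an Emacs macro (a hypothetical equivalent, not what that team actually ran): rewrite members that start with a lone underscore to the m_ convention.

```python
import re

def normalize_members(source: str) -> str:
    # \b_ matches an identifier-leading underscore; "m_total" is untouched
    # because there is no word boundary between "m" and "_".
    return re.sub(r"\b_([a-zA-Z]\w*)", r"m_\1", source)
```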

It’s important to remember that when you start from scratch, there is absolutely no reason to believe that you will do a better job than you did the first time. First of all, you probably don’t even have the same programming team that worked on version 1.0, so you don’t actually have “more experience.” You will just make most of the old mistakes again, and introduce some new problems that weren’t in the original version.


A second reason programmers think that their code is a mess is that it is inefficient. The rendering code in Netscape was rumored to be slow. But this only affects a small part of the project, which you can optimize or even rewrite. You don’t have to rewrite the whole thing. When optimizing for speed, 1 percent of the work gets you 99 percent of the bang.


Since I started working in the software industry, almost all the software I’ve worked on has been what might be called “speculative” software. That is, the software is not being built for a particular customer; it’s being built in hopes that zillions of people will buy it. But many software developers don’t have that luxury. They may be consultants developing a project for a single client, or they may be in-house programmers working on a complicated whatsit for Accounting (or whatever it is your in-house programmers do; it’s rather mysterious to me).


Customers don’t know what they want. Stop expecting customers to know what they want.

It’s just never going to happen. Get over it.

Instead, assume that you’re going to have to build something anyway, and the customer is going to have to like it, but they’re going to be a little bit surprised. You have to do the research. You have to figure out a design that solves the customer’s problem in a pleasing way.


You know how an iceberg is 90 percent underwater? Well, most software is like that too — there’s a pretty user interface that takes about 10 percent of the work, and then 90 percent of the programming work is under the covers. And if you take into account the fact that about half of your time is spent fixing bugs, the UI takes only 5 percent of the work. And if you limit yourself to the visual part of the UI, the pixels, what you would see in PowerPoint, now we’re talking less than 1 percent.


When you’re showing off, the only thing that matters is the screenshot. Make it 100 percent beautiful.

Don’t, for a minute, think that you can get away with asking anybody to imagine how cool this would be. Don’t think that they’re looking at the functionality. They’re not. They want to see pretty pixels.


That is, approximately, the magic of TCP. It is what computer scientists like to call an abstraction: a simplification of something much more complicated that is going on under the covers. As it turns out, a lot of computer programming consists of building abstractions. What is a string library? It’s a way to pretend that computers can manipulate strings just as easily as they can manipulate numbers. What is a file system? It’s a way to pretend that a hard drive isn’t really a bunch of spinning magnetic platters that can store bits at certain locations, but rather a hierarchical system of folders-within-folders containing individual files that in turn consist of one or more strings of bytes.
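A toy example of the idea, with everything made up: a “file system” interface that lets callers pretend bytes live in named files, while hiding where they actually are (here, just a dict in memory rather than spinning platters).

```python
# The abstraction: callers see paths and bytes, never the storage beneath.
class InMemoryFS:
    def __init__(self):
        self._blobs = {}          # the hidden "platters"

    def write(self, path: str, data: bytes) -> None:
        self._blobs[path] = data

    def read(self, path: str) -> bytes:
        return self._blobs[path]
```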


A famous example of this is that some SQL servers are dramatically faster if you specify “where a=b and b=c and c=a” than if you only specify “where a=b and b=c” even though the result set is the same.


The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code-generation tools that pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So, the abstractions save us time working, but they don’t save us time learning.

And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.


Leaky abstractions mean that we live with a hockey stick learning curve: You can learn 90 percent of what you use day by day with a week of learning. But the other 10 percent might take you a couple of years catching up. That’s where the really experienced programmers will shine over the people who say “whatever you want me to do, I can just pick up the book and learn how to do it.” If you’re building a team, it’s OK to have a lot of less-experienced programmers cranking out big blocks of code using the abstract tools, but the team is not going to work if you don’t have some really experienced members to do the really hard stuff.


People who know only one world get really smarmy, and every time they hear about the complications in the other world, it makes them think that their world doesn’t have complications. But they do. You’ve just moved beyond them because you are proficient in them. These worlds are just too big and complicated to compare any more. The software worlds are so huge and complicated and multifaceted that when I see otherwise intelligent people writing blog entries saying something vacuous like “Microsoft is bad at operating systems,” frankly, they just look dumb. Imagine trying to summarize millions of lines of code with hundreds of major feature areas created by thousands of programmers over a decade or two, where no one person can begin to understand even a large portion of it. I’m not even defending Microsoft; I’m just saying that making big handwavy generalizations from a position of deep ignorance is one of the biggest wastes of time on the Net today.


Java attempted this, but Sun didn’t grok GUIs well enough to deliver really slick native-feeling applications. Like the space alien in Star Trek watching Earth through a telescope; he knew exactly what human food was supposed to look like, but he didn’t realize it was supposed to taste like something.


At a previous job, we had to live with some pretty bad architecture because the first programmers used the project to teach themselves C++ and Windows programming at the same time. Some of the oldest code was written without any comprehension of event-driven programming. The core string class (of course, we had our own string class) was a textbook example of all the mistakes you could make in designing a C++ class. Eventually, we cleaned up and refactored a lot of that old code, but it haunted us for a while.


Every single company except Microsoft has disappeared from the top 10. Also notice that Microsoft is so much larger than the next largest player, it’s not even funny.


According to Rick Chapman, the answer is simpler: Microsoft was the only company on the list that never made a fatal, stupid mistake. Whether it was by dint of superior brainpower or just dumb luck, the biggest mistake Microsoft made was the dancing paperclip.


Sometimes you download software and you just can’t believe how bad it is, or how hard it is to accomplish the very simple tasks that the software tries to accomplish. Chances are, it’s because the developers of the software don’t use it.


And in fact even if you’re a manager, you’ve probably discovered that managing developers is a lot like herding cats, only not as fun. Merely saying “make it so” doesn’t make it so.


None of these strategies works if you’re not really an excellent contributor. If you don’t write good code, and lots of it, you’re just going to be resented for messing around with bug databases when you “should be” writing code. There’s nothing more deadly to your career than having a reputation of being so concerned with process that you don’t accomplish anything.


You can make things better, even when you’re not in charge, but you have to be Caesar’s wife: above suspicion. Otherwise, you’ll make enemies as you go along.


The secret of Big Macs is that they’re not very good, but every one is not very good in exactly the same way. If you’re willing to live with not-very-goodness, you can have a Big Mac with absolutely no chance of being surprised in the slightest.


McDonald’s real secret sauce is its huge operations manual, describing in stunning detail the exact procedure that every franchisee must follow in creating a Big Mac.


The rules have been carefully designed by reasonably intelligent people so that dumdums can follow them just as well as smart people.


Now a problem starts to develop — what we in the technical fields call the scalability problem. When you try to clone a restaurant, you must decide between hiring another great chef of your caliber (in which case, that chef will probably want and expect to keep most of the extra profits that he created, so why bother), or else you’ll hire a cheaper, younger chef who’s not quite as good, but pretty soon your patrons will figure that out and they won’t go to the clone restaurant.

The common way of dealing with the scalability problem is to hire cheap chefs who don’t know anything, and give them such precise rules about how to create every dish that they “can’t” screw it up. Just follow these rules, and you’ll make great gourmet food!

Problem: It doesn’t work exactly right. There are a million things that a good chef does that have to do with improvisation. Without real talent and skill, you will not be able to improvise.


It’s hard to scale talent.


Much better. Three minutes of design work saved me hours of coding.

If you’ve spent more than 20 minutes of your life writing code, you’ve probably discovered a good rule of thumb by now: nothing is as simple as it seems.

Something as simple as copying a file is full of perils.
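Copying a file, for instance, can fail in more ways than a one-liner suggests: the write can stop halfway when the disk fills up, a crash can leave a half-finished copy masquerading as the real thing, and the bytes arrive without the original’s permissions or timestamps. As a rough sketch of the point (mine, not from the text), here is what guarding against just a few of those perils looks like in Python:

```python
import os
import shutil
import tempfile

def careful_copy(src: str, dst: str) -> None:
    """Copy src to dst while coping with a few of the classic perils."""
    if os.path.abspath(src) == os.path.abspath(dst):
        raise ValueError("source and destination are the same file")
    # Write to a temporary file in the destination directory first, so a
    # half-finished copy never masquerades as the finished article.
    dst_dir = os.path.dirname(os.path.abspath(dst)) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dst_dir)
    try:
        with os.fdopen(fd, "wb") as out, open(src, "rb") as inp:
            shutil.copyfileobj(inp, out)  # may raise OSError: disk full, I/O error
        shutil.copystat(src, tmp_path)    # carry over mode and timestamps too
        os.replace(tmp_path, dst)         # atomic rename: all-or-nothing
    except BaseException:
        os.unlink(tmp_path)               # don't leave debris behind on failure
        raise
```

And this still ignores plenty: sparse files, extended attributes, symlinks, files that change while being read. Nothing is as simple as it seems.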


Find the dependencies — and eliminate them.

If you’ve ever had to outsource a critical business function, you realize that outsourcing is hell. Without direct control over customer service, you’re going to get nightmarishly bad customer service — the kind people write about in their weblogs when they tried to get someone, anyone, from some phone company to do even the most basic thing.

If it’s a core business function — do it yourself, no matter what.


With the Ben & Jerry’s model, if you’re even reasonably smart, you’re going to succeed. It may be a bit of a struggle, there may be good years and bad years, but unless we have another Great Depression, you’re certainly not going to lose too much money, because you didn’t put in too much to begin with.

The trouble with the Amazon model is that all anybody thinks about is Amazon. And there’s only one Amazon. At least, if you follow the Ben & Jerry’s model, you’ll know that nobody wants your product long before you spend more than one MasterCard’s worth of credit limit on it.


Amazon companies absolutely must substitute cash for time whenever they can. You may think you’re smart and frugal by insisting on finding programmers who will work at market rates. But you’re not so smart, because that’s going to take you six months, not two months, and those four months might mean you miss the Christmas shopping season, so now it costs you a year, and probably makes your whole business plan unviable.


The idea of advertising is to lie without getting caught. Most companies, when they run an advertising campaign, simply take the most unfortunate truth about their company, turn it upside down (“lie”), and drill that lie home. Let’s call it “proof by repeated assertion.” For example, plane travel is cramped and uncomfortable and airline employees are rude and unpleasant. So almost all airline ads are going to be about how comfortable and pleasant it is to fly and how pampered you will be every step of the way.


Because they are platforms, they are, by definition, not very interesting in and of themselves without juicy software to run on them. But, with very few exceptions, no software developer with the least bit of common sense would intentionally write software for a platform with 100,000 users on a good day, like BeOS, when they could do the same amount of work and create software for a platform with 100,000,000 users, like Windows. The fact that anybody writes software for those oddball systems at all proves that the profit motive isn’t everything; religious fervor is still alive and well.


Therein lies the other problem: Only a small handful of merchants will bill you over this system. So for all your other bills, you’ll have to go elsewhere.

End result? It’s not worth it.


Now, here’s a little-known fact: Even DOS 1.0 was designed with a CP/M backward-compatibility mode built in. It could almost run CP/M software.

DOS was popular because it had software from day one.


Windows 3.0 was the first version that could run multiple DOS programs respectably. Windows 3.0 was the first version that could actually do a reasonable job running all your old software.


You should be starting to get some ideas about how to break the chicken-and-egg problem: Provide a backward-compatibility mode that either delivers a truckload of chickens, or a truckload of eggs.


That’s a childish approach to strategy. It reminds me of independent booksellers, who said, “Why should I make it comfortable for people to read books in my store? I want them to buy the books!” And then one day B&N put couches and cafes in their stores and practically begged people to read books in their store without buying them.

The mature approach to strategy is not to try to force things on potential customers. If somebody isn’t even your customer yet, trying to lock them in just isn’t a good idea. When you have 100 percent market share, come talk to me about lock-in. Until then, if you try to lock them in now, it’s too early, and if any customer catches you in the act, you’ll just wind up locking them out. Nobody wants to switch to a product that is going to eliminate their freedom in the future.


Jamie Zawinski says it best, discussing the original version of Netscape that changed the world. “Convenient though it would be if it were true, Mozilla [Netscape 1.0] is not big because it’s full of useless crap. Mozilla is big because your needs are big. Your needs are big because the Internet is big. There are lots of small, lean web browsers out there that, incidentally, do almost nothing useful. But being a shining jewel of perfection was not a goal when we wrote Mozilla.”


Reality: They’re doing this because IBM is becoming an IT consulting company. IT consulting is a complement of enterprise software. Thus IBM needs to commoditize enterprise software, and the best way to do this is by supporting open source. Lo and behold, their consulting division is winning big with this strategy.


An important thing you notice from all these examples is that it’s easy for software to commoditize hardware, but it’s incredibly hard for hardware to commoditize software. Software is not interchangeable, as the StarOffice marketing team is learning. Even when the price is zero, the cost of switching from Microsoft Office is non-zero. Even the smallest differences can make two software packages a pain to switch between.


Why? Because Apple and Sun computers don’t run Windows programs, or if they do, it’s in some kind of expensive emulation mode that doesn’t work so great. Remember, people buy computers for the applications that they run, and there’s so much more great desktop software available for Windows than Mac that it’s very hard to be a Mac user.


This was not an unusual case. The Windows testing team is huge, and one of its most important responsibilities is guaranteeing that everyone can safely upgrade their operating system, no matter what applications they have installed, and that those applications will continue to run, even if those applications do bad things or use undocumented functions or rely on buggy behavior that happens to be buggy in Windows n but is no longer buggy in Windows n+1.


The Raymond Chen Camp believes in making things easy for developers by making it easy to write once and run anywhere. The MSDN Magazine Camp believes in making things easy for developers by giving them really powerful chunks of code that they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve. The Raymond Chen Camp is all about consolidation. Please, don’t make things any worse, let’s just keep making what we already have still work. The MSDN Magazine Camp needs to keep churning out new gigantic pieces of technology that nobody can keep up with.


And personally I still haven’t had time to learn .NET very deeply, and we haven’t ported Fog Creek’s two applications from classic ASP and VB 6.0 to .NET because there’s no return on investment for us. None. It’s just Fire and Motion as far as I’m concerned.

No developer with a day job has time to keep up with all the new development tools coming out of Redmond, if only because there are too many dang employees at Microsoft making development tools!


The reason it takes $130,000 to hire someone with COM experience is that nobody bothered learning COM programming in the last eight years or so, so you have to find somebody really senior, usually already in management, and convince them to take a job as a grunt programmer.


The truth is, Microsoft noticed way back in 1991 that an increasing amount of their revenue came from upgrades, and that it’s hard to get everybody to upgrade, and they’ve been trying to get their customers to agree to a subscription model for buying software for almost a decade. But it hasn’t worked because the customers don’t want it. Microsoft sees .NET as a way to finally enforce the subscription model which suits their bottom line.

It almost seems as if Microsoft .NET doesn’t fill a single customer’s need; it only fills Microsoft’s need to find something for 10,000 programmers to do for the next ten years. We all know it’s been a long time since they’ve thought of a new word processing feature that anybody needs, so what else are all those programmers going to do?


“But Joel, eventually enough people will have the runtime and this problem will go away.”

I thought that too, then I realized that every six or twelve months, Microsoft ships a new version of the runtime, and the gradually increasing number of people that have it deflates again to zero.


Listen to your customers, not your competitors.

There are dozens of competitors for our bug tracking software, and I have no idea what they do or why they’re better or worse than ours. I couldn’t care less. All I care about is what my customers tell me, which gives me plenty of work to keep busy.