Last week, the Gartner Group and Arthur Andersen released very different studies regarding the total cost of ownership (TCO) associated with Windows 2000 (Win2K). Supposedly, the Gartner Group and Arthur Andersen looked at the same OS and the same issues, but you could have fooled me. For the specifics, see C. Thi Nguyen's "Analysts Dispute the True Cost of Windows 2000" at http://www.winntmag.com/Articles/Index.cfm?ArticleID=7189.

It's tempting to pick apart the road each study took to its conclusions and how each group of experts examined and determined costs--but that's buying into the theory, and that's the fatal flaw. The theory, simply put, is that to determine the TCO or return on investment (ROI), you have to follow a path paved with a logical set of standards. If there were any truth to that statement, I don't think we'd be looking at two diametrically opposed sets of conclusions. Historically, Gartner has been rather pessimistic about migrating to new versions of Microsoft Windows (while touting the wonders of thin clients). I could also point out that Microsoft commissioned the Andersen study, but you've probably already realized that.

The real problem is TCO itself, and we should be questioning TCO as a theory--not because it's theoretically bad, but because no standards exist for what to include. When I buy motor oil for my car, I can read the label to determine whether the grade I'm looking at is good for my car if I live in an area where winter temperatures drop to 10 degrees below zero. The American Society for Testing and Materials (ASTM) tests motor oil against defensible, empirical standards I can rely on. If the oil that provides the best protection is extremely expensive, I can weigh its price against the cost of replacing a destroyed engine or replacing my car. In other words, I know the cost per quart. I also know it takes one person to put the oil in the car, even if I buy 5 quarts. I can even make an educated guess about how long it will take to pour each quart into the engine. All this information is easy to understand for budgeting purposes.

I'm still searching for the IT equivalent: How many IT professionals does it take to deploy an OS across 10,000 computers, and how long will it take to install the OS on one computer? The absence of such a measurement makes TCO studies less than scientific (the sketch below shows just how far the answer can swing). The only verifiable cost in a TCO report is the price of the hardware. After that, it's all nebulous, full of guesswork, and easily influenced by predetermined opinions.

And even if TCO were more scientifically sound, I still have serious questions about whether it should be an important factor in making decisions. Computers are tools, and the computer system is not the product. Based on my entrepreneurial experience, I look at products when estimating profit and loss. Although I'd consider the cost of a tool as part of the Cost of Goods Sold, I'm not hampered by the need to build in tool replacement after a given number of years. Tools aren't consumables (unless you change your widget and make your dies obsolete), but the TCO approach treats them as consumables. If you upgrade an OS because its new features make it easier to run and administer the system that helps create your company's product, which in turn increases your profit on that product, that's the measurement to heed.
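As promised above, here's a minimal back-of-the-envelope sketch (in Python) of how freely chosen inputs swing the labor line of that 10,000-computer rollout. Every figure in it--the per-machine install time, the head count, the hourly rate--is a hypothetical assumption of mine, not a number from either study:

# A minimal sketch: how hypothetical assumptions swing the labor cost
# of rolling an OS out to 10,000 desktops. None of these figures come
# from the Gartner or Andersen studies; they're invented for illustration.

def deployment_cost(machines, minutes_per_install, techs, hourly_rate):
    """Labor cost and elapsed time for a rollout under one set of assumptions."""
    total_hours = machines * minutes_per_install / 60
    labor = total_hours * hourly_rate
    # Elapsed calendar time if the technicians work in parallel, 8-hour days.
    days = total_hours / (techs * 8)
    return labor, days

# One analyst's optimistic inputs vs. another's pessimistic ones.
for label, minutes, rate in [("optimistic", 30, 40), ("pessimistic", 120, 75)]:
    labor, days = deployment_cost(10_000, minutes, techs=10, hourly_rate=rate)
    print(f"{label}: ${labor:,.0f} in labor, roughly {days:.0f} working days")

Same rollout, same arithmetic, yet the totals come out $200,000 versus $1.5 million--a 7.5-fold spread--and neither set of inputs is more defensible than the other, because no ASTM-style standard says which to use.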
If your business will run and expand (and increase profits) without upgrading the OS (perhaps with more aggressive salespeople?), that's another easy decision.

We're all letting the jargon-speaking research folks set our priorities (we being IT personnel and the executives who make the financial decisions). We let them invent the terminology, and after a while we even start using their jargon and their paradigms. Maybe it's time to put the TCO fad to rest. I'm willing to bet that IT professionals and representatives from the appropriate departments in their companies could reach relevant, valid decisions without it.