As Windows NT Magazine's Senior Technical Editor Michael Otey wrote in a recent column in SQL Server Magazine, there are "lies, damn lies, and benchmarks." OK, so he paraphrased Mark Twain—but if you're going to steal material, steal from the best. Ambiguous attribution doesn't make Mike's statement any less true.
I'm not saying that benchmarks are useless—just that they reflect only a small portion of any product's capabilities. When a vendor publishes benchmark results for a product, you can be sure that the benchmark shows the product in the best possible light. The vendor has either configured the benchmark to achieve a desired result or has configured the product (usually unrealistically) to achieve the desired result with a standardized benchmark, such as those from the Transaction Processing Performance Council (TPC).
This situation leaves the Lab in a bit of a quandary. On the one hand, we need to use benchmarking tools to provide a context in which we can compare products that perform the same functions. On the other hand, our goal is to highlight neither the benchmark nor the product, but rather to provide useful information to help readers select products. Keeping this need and this goal in mind, we attempt to design benchmarks that duplicate the behavior of the systems we test under actual corporate conditions.
To determine what "actual corporate conditions" means, we have to look within. The Lab staff has a fair amount of corporate experience, and we can always step back and look at our own business environment. But what we really need is feedback from you, our readers. We're also limited in the tools we can use to benchmark products. We don't have a thousand workstations to pound a server with, although we can easily virtualize large numbers of clients and saturate a server or network. However, if that kind of heavy load isn't what your servers actually see, our testing won't be as valuable to you as it could be.
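To make the idea of virtualizing clients concrete, here is a minimal sketch of client-load generation: a pool of threads stands in for workstations, each issuing a batch of requests against a server while a shared counter tallies completed responses. The throwaway local HTTP server, the thread and request counts, and the counter are all hypothetical stand-ins for illustration, not the Lab's actual test harness.

```python
# Minimal load-generation sketch: N client threads each issue R requests
# against a local server and tally successful responses. All names and
# counts here are illustrative assumptions, not a real benchmark harness.
import threading
import http.server
import urllib.request

class QuietHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with a tiny 200 OK body.
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        # Silence per-request console logging during the run.
        pass

# Stand-in "server under test", bound to an ephemeral local port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

completed = 0
lock = threading.Lock()

def client(requests_per_client: int) -> None:
    """Simulate one workstation issuing a batch of requests."""
    global completed
    for _ in range(requests_per_client):
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            if resp.status == 200:
                with lock:
                    completed += 1

# 10 simulated clients x 20 requests each = 200 requests total.
threads = [threading.Thread(target=client, args=(20,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()
print(completed)
```

A real harness would also record latency per request and ramp the client count up until the server saturates; the point of the sketch is simply that a single machine can impersonate many clients.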
We're aware of how readers make decisions about whether to read a product review. If we review something that interests you, there's a good chance you'll read the review. If we review a product that addresses a problem you're trying to solve, you're almost certain to read the review. Otherwise, the best we can hope for is that the review catches your attention. So we'd like to ask for your help. Drop us a note at email@example.com and tell us a few things:
- What products and technologies are you evaluating now?
- What products and technologies do you expect to be looking at over the next 6 months? 12 months?
- Most important, how would you like to see us evaluate those products?
One thing I've discovered over the years, after talking to thousands of readers, is that almost no problem is unique to one environment. If you're wrestling with a specific problem, you can bet that many of your peers are too. When you let us in on the process, we can act as point men, sifting through the products and solutions that dozens of vendors offer and providing you with a focused starting point for your product evaluations. Our review process will thus become more valuable to you and to us.