The best part of writing this column is the reader mail I get. It's a source of endless fascination to me to find out about the details of your Exchange environments, and often the questions you ask are things that never would have occurred to me. For example, a reader wrote to ask me about my recent columns on 64-bit Exchange and the performance benefits it should offer. He wrote:

"In your last e-letter you mentioned the added performance boost putting Exchange on a 64-bit box. For those of us that connect our Exchange servers to an iSCSI SAN, would we not run into bottlenecks at the NIC (1Gb backbone, assuming we were not using a \[TCP/IP offload engines\] TOE card or maybe even if we do), before a 32-bit setup cut into performance?"

This is a great question, so I'll trot out my all-purpose answer: "It depends." First, let's assume that you have a Gigabit Ethernet connection to the iSCSI SAN, with a host bus adapter (HBA) that has a native x64 driver, so no thunking is required. That's just a clarifying assumption, and in the end it doesn't really matter. Why? Assuming that you have "enough" RAM (where the precise value of "enough" varies with the user workload on your server), the Exchange 12 implementation of the JET database engine will be able to cache a significantly larger portion of the EDB file than it can today. Therefore, the amount of bandwidth between your server and the iSCSI cabinet becomes much less relevant from a performance standpoint.

We already see a similar effect: When SAN vendors are hunting for business, they often make their solutions seem more attractive by adding a very large cache to the controller. This speeds up performance a great deal. Of course, it only works until the disks hit 70 percent or so of capacity, at which point the cache loses its advantage and performance drops like a rock. (That's why the Jetstress tool fills up the disks before starting the actual performance testing.)

That behavior is a problem only because the SAN controller has no idea what the application is doing; it isn't a problem for Exchange, because the Extensible Storage Engine (ESE) is in charge of the cache. Given enough RAM, the amount of bandwidth you use for a given set of user behaviors should decrease, because you'll be making fewer requests to the actual disk.
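If you like to see the arithmetic, here's a rough back-of-envelope sketch (in Python, with made-up numbers rather than measurements) of how a bigger cache translates into fewer requests hitting the SAN:

# Back-of-envelope sketch: how a larger ESE cache cuts the read I/O
# that actually has to cross the wire to the iSCSI SAN.
# The workload and hit ratios below are assumptions, not benchmark data.

def disk_iops(logical_iops, cache_hit_ratio):
    """Read requests that miss the cache and must be fetched from the SAN."""
    return logical_iops * (1.0 - cache_hit_ratio)

logical_iops = 2000  # assumed read workload generated by the users
for hit_ratio in (0.50, 0.75, 0.90):
    misses = disk_iops(logical_iops, hit_ratio)
    print(f"cache hit rate {hit_ratio:.0%}: about {misses:.0f} IOPS reach the SAN")

Same users, same behavior; the only thing that changes is how much of the EDB file fits in RAM.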

What about page size? My gut feeling is that the page size change will be a wash; caching will reduce the total number of I/O operations per second (IOPS) that have to go over the wire, but those pages that do go will be 8KB instead of 4KB. I'm looking forward to seeing hard data to confirm or disprove that theory, though.
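To make the "wash" argument concrete, here's the same kind of sketch, again with assumed figures rather than test results: if caching cuts the I/O rate roughly in half while the page size doubles, the bytes on the wire land about where they started.

# Assumed figures only: fewer I/Os per second, but each page is twice as big.
page_32bit_kb, page_64bit_kb = 4, 8  # ESE page sizes, old vs. Exchange 12
iops_32bit = 1000                    # assumed I/O rate with the smaller cache
iops_64bit = 500                     # assumed rate once caching absorbs more reads

bandwidth_32bit = iops_32bit * page_32bit_kb  # KB/s over the wire
bandwidth_64bit = iops_64bit * page_64bit_kb

print(f"32-bit: {bandwidth_32bit} KB/s on the wire; 64-bit: {bandwidth_64bit} KB/s")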

Why did I say "it depends" if the performance news is so rosy? Because one of the key reasons people will be deploying Exchange 12 is to consolidate servers. Obviously, if you take four or five Exchange Server 2003 servers and stuff their mailboxes onto a single Exchange 12 server, the new server will require a significant amount of SAN bandwidth, and I suspect it'll be easy to build configurations that saturate a Gigabit Ethernet HBA. So don't do that, and you should be good to go!
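For the curious, here's roughly how that consolidation math could play out. The per-mailbox I/O rate and the usable Gigabit Ethernet figure below are planning assumptions of mine, not measured values:

# Sketch of why heavy consolidation can saturate a single Gigabit Ethernet HBA.
GIGE_USABLE_MB_S = 110  # ~1 Gb/s minus protocol overhead (rough assumption)
PAGE_KB = 8             # Exchange 12 ESE page size

def san_traffic_mb_s(mailboxes, iops_per_mailbox):
    """Estimated SAN bandwidth for a consolidated mailbox server."""
    return mailboxes * iops_per_mailbox * PAGE_KB / 1024

for mailboxes in (4000, 8000, 16000):
    traffic = san_traffic_mb_s(mailboxes, iops_per_mailbox=1.0)  # assumed 1 IOPS per mailbox
    verdict = "saturates" if traffic > GIGE_USABLE_MB_S else "fits on"
    print(f"{mailboxes} mailboxes: ~{traffic:.0f} MB/s, {verdict} one GigE link")

Stack enough consolidated mailboxes on one server and the Gigabit pipe, not the 32-bit versus 64-bit question, becomes the thing to watch.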