In the past few weeks, Microsoft has recommended against using its own network load balancing solution for its own product (i.e., Exchange Server). Instead, Microsoft suggests using hardware load balancers. It seems that Microsoft's "eat your own dog food" motto is now a distant memory.

This seems like a good time to question the necessity of load balancers in the Exchange world. I think the use of load balancers in front of Client Access servers is completely unnecessary.

When you install the Client Access server role on an Exchange server, the role registers its information in Active Directory as a Service Connection Point (SCP). Starting with Outlook 2007, the Outlook client queries these SCP records, and Active Directory returns information about all Client Access servers along with their site scope. The Outlook client sorts these records by site and tries to reach a Client Access server in its own site first. If the Outlook client fails to connect to the first Client Access server, it tries the remaining Client Access servers in order.

If the Client Access server that the Outlook client chose and connected to later fails, the client tries the other servers in turn. This may require the Outlook client to disconnect and reconnect.
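The selection and failover behavior described above can be sketched roughly as follows. This is not Outlook's actual implementation; the record fields, server names, and the `try_connect` helper are hypothetical stand-ins used only to illustrate the site-first ordering and sequential failover.

```python
# Rough simulation of the client-side selection logic: sort SCP
# records so servers in the client's own AD site come first, then
# try each server in order until one accepts the connection.

def sort_scp_records(records, client_site):
    """Put Client Access servers in the client's own site first,
    preserving the original order within each group."""
    in_site = [r for r in records if r["site"] == client_site]
    out_of_site = [r for r in records if r["site"] != client_site]
    return in_site + out_of_site

def pick_client_access_server(records, client_site, try_connect):
    """Return the first server that accepts a connection; raise if
    every attempt fails. Note: no load information is consulted."""
    for record in sort_scp_records(records, client_site):
        if try_connect(record["server"]):
            return record["server"]
    raise ConnectionError("No Client Access server reachable")

# Example: the first in-site server is down, so the client falls
# back to the second in-site server before trying a remote site.
scp = [
    {"server": "cas1.contoso.com", "site": "London"},
    {"server": "cas2.contoso.com", "site": "London"},
    {"server": "cas3.contoso.com", "site": "Paris"},
]
is_up = lambda server: server != "cas1.contoso.com"
print(pick_client_access_server(scp, "London", is_up))
```

Note that nothing in this loop looks at server load, which is exactly the limitation discussed next.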

This method seems dumb to me, because it performs no actual balancing: the client connects to the first server on the list regardless of that server's current load. In contrast, hardware load balancers continuously monitor the load on each server and try to distribute connections evenly.

However, hardware load balancers are very expensive and complex. In addition, servers are becoming more reliable and less expensive every day.

So, for a manageable and cost-effective environment, I recommend that you forget the load balancers and leave the Client Access servers as they are.