Sometimes we don't realize the value of the data logged by an application. Exchange logs a lot of information, but it's only useful when someone takes the time to make sense of it. ActiveSync and the Information Store are pretty important parts of the product, so it's a good idea to know what they're up to. Two recent blog posts focus on just this aspect. It's all good stuff.
Two recent posts on Microsoft’s EHLO blog pleased me very much because they focus on an area that Exchange administrators often overlook: using the data logged by Exchange for analysis and planning purposes.
The first post covers how to understand the information logged by ActiveSync (EAS) as it copes with connections from the wide range of mobile devices used to interact with Exchange. As we all know, client applications sometimes don’t behave so well, load EAS with an unnatural transaction volume, and cause problems for servers. That problem has lessened in the recent past as Microsoft has worked with client developers to improve their code (well, Microsoft offers suggestions, the developers then contemplate their navels and decide what to do). More proactively, Microsoft has also improved the ability of EAS to detect and manage bad client behavior. You won’t see the effect of this work in Exchange 2007 or Exchange 2010 because it’s built into the latest version of the product, Exchange 2013, and into Exchange Online.
Nevertheless, as was obvious from the frantic note-taking by attendees during the excellent session on “Taming ActiveSync” by MVPs Steve Goodman and Michael van Horenbeeck at the recent Exchange Connections event, a knowledge deficit exists around how to find out what’s happening when EAS deals with clients. Michael and Steve focused on how to use Log Parser Studio to interrogate the EAS logs and Excel to make sense of what’s found there. It all made sense to me.
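The kind of summary Log Parser Studio produces with a GROUP BY query over the IIS logs can be sketched in a few lines of Python. This is a minimal illustration, not the MVPs’ actual queries: the sample log lines and the field positions (URI stem in column 3, username in column 5) are assumptions, since real W3C log layouts depend on which fields the server is configured to record.

```python
from collections import Counter

# Illustrative W3C-style IIS log lines (made up for this sketch; the
# column layout -- date, time, cs-uri-stem, cs-uri-query, cs-username --
# is an assumption and varies with server configuration).
sample_lines = [
    "2013-11-01 09:00:01 /Microsoft-Server-ActiveSync Cmd=Sync&DeviceId=ABC123 contoso\\alice",
    "2013-11-01 09:00:05 /Microsoft-Server-ActiveSync Cmd=Ping&DeviceId=ABC123 contoso\\alice",
    "2013-11-01 09:00:09 /owa - contoso\\bob",
    "2013-11-01 09:01:22 /Microsoft-Server-ActiveSync Cmd=Sync&DeviceId=XYZ789 contoso\\bob",
]

def eas_hits_per_user(lines):
    """Tally ActiveSync requests per user -- the sort of per-user
    summary you would feed into Excel for charting."""
    hits = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue  # skip malformed or truncated lines
        uri_stem, username = fields[2], fields[4]
        if "Microsoft-Server-ActiveSync" in uri_stem:
            hits[username] += 1
    return hits

print(eas_hits_per_user(sample_lines))
```

A user or device that dominates such a tally is the first place to look when EAS is being hammered by a misbehaving client.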
The EHLO blog adds context by explaining the XML traffic that flows from client to EAS as the client connects, is provisioned with a policy, performs an initial synchronization, and then sets up the long-lived HTTP transaction that provides the foundation for clients to keep a watchful eye out for the arrival of new mail, contacts, tasks, or calendar entries in a mailbox and then download that data when it is available. All in all, a pretty good read that complements the well-known Slideshare presentation on “Troubleshooting ActiveSync,” which is also packed with good information for an administrator to have. At the time of writing, that deck has clocked up 7,444 views, so some people know its secrets!
The other post covers reporting on transaction log volumes, or rather “Analyzing transaction log growth”. It’s also a good read, covering a simple idea that offers real value to administrators: run a script to analyze how many transaction logs each database creates over a period. Once again proving the power of PowerShell (no pun intended), the script ignores the obvious method of counting the transaction logs in each database’s log directory and instead plunges into Performance Monitor to capture the current transaction log generation from a counter maintained for each database.
You’ll recall that transaction logs form a stream of information about transactions captured by the Store, which are committed to databases when they are complete and valid. Each 1MB transaction log is a separate generation within the log stream, so log 10 comes after logs 9, 8, and so on. The generation number allows the Store to know the relative position within the log stream of the transactions contained in a log, and it is also used by the Replication service to ship transaction logs around to keep passive database copies up-to-date within a Database Availability Group (DAG).
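Because each generation is a fixed 1MB file, the growth over any period falls straight out of two samples of a database’s log-generation counter. A minimal sketch of that arithmetic (the generation numbers below are made up for illustration):

```python
LOG_FILE_SIZE_MB = 1  # each Exchange transaction log generation is a 1MB file

def log_growth_mb(gen_start, gen_end):
    """MB of transaction logs generated between two samples of a
    database's log-generation counter."""
    if gen_end < gen_start:
        raise ValueError("generation number should not decrease")
    return (gen_end - gen_start) * LOG_FILE_SIZE_MB

# e.g. a database whose generation counter moved from 120,400 to 134,800
# over 24 hours generated 14,400 MB (about 14 GB) of logs that day.
print(log_growth_mb(120_400, 134_800))  # 14400
```

Sample the counter at the same time each day and the differences give you a daily log-generation profile per database, which is exactly the trend data the planning exercise needs.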
Like all good ideas, reading the PerfMon counter instead of counting the logs themselves is a simple but elegant insight. The script is functional and useful and should be part of your armory, if only so that you have a solid idea of transaction log growth over time and can feed that information into the planning cycle for future hardware upgrades. For example, Exchange 2013 takes advantage of the swelling size of modern disks to support the co-location of multiple databases on a single volume. Before you can use that feature, you have to be able to size volumes accurately, taking into account database size, logs, and content indexes. You now have an excellent tool to help!
In closing, let me note that episode #28 of the UC Architects podcast is now available for download from iTunes, the Zune store, or RSS. This is the episode recorded in front of a live audience at Exchange Connections. I enjoyed participating in the podcast very much and hope that you like it too!
Follow Tony @12Knocksinna