The upside and downside of Microsoft's focus on the cloud

The upside and downside flowing from Microsoft’s growing investment in cloud-based services were illustrated by two recent blog posts. On the downside, Microsoft announced that it is cancelling a range of on-premises security products, including Threat Management Gateway (TMG) and Forefront Protection for Exchange (FPE). Microsoft’s new focus is on cloud-based security, a move that makes absolute business sense because it allows the company to exit what has become an area of low profitability when measured against engineering investment. In addition, Microsoft already has to provide anti-virus and anti-spam protection for Office 365 and can offset those costs against the monthly subscriptions that are now flooding into “the service”. All in all, it’s a good deal for Microsoft that will cause some pain for customers who need to get their heads around the new situation, especially those who have recently decided to deploy TMG (I met quite a few of these at MEC).

It's important to remember that if you have TMG deployed today, it won't disappear in a puff of smoke when the product disappears from Microsoft's price book in December 2012. Indeed, many members of the Exchange team at MEC were at pains to point out that TMG will be fully supported until 2015 and that it can handle some of the problematic issues that customers will encounter during early Exchange 2013 deployments, including the potential for double authentication screens being displayed to OWA users when Exchange 2013 co-exists with Exchange 2007 (an issue that "sucks", according to comments expressed at the Exchange 2007 co-existence session at MEC). The advice, which I think is good, is to keep TMG around until you have a suitable replacement.

Overall, even acknowledging the importance of TMG to many Exchange customers, I don’t treat the situation as a problem, because Microsoft previously took up a lot of the available oxygen in this space and its departure opens room for innovation that will hopefully flow from other companies. Although the traditional on-premises anti-malware products will continue to handle situations such as regular scans of mailbox databases, I think that hardware-based appliances (perhaps virtualized) might be the right way to process the ever-increasing volume of inbound email. Time and investment will tell here.

The upside of Microsoft’s focus on the cloud platform can be seen in the new monitoring and reporting functionality built into Exchange 2013. Multiple MEC speakers used this as an example of how the experience of running tens of millions of mailboxes in Office 365 is flowing back into the on-premises version of Exchange to benefit customers.

I think the assertion is valid, if only because it’s obviously true that the Exchange developers can’t look forward to getting out of their warm beds every time an Office 365 datacenter reports a problem with Exchange Online. They find themselves in this situation because the Exchange development group is responsible for supporting Exchange Online. It therefore makes perfect sense for the developers to expend a great deal of energy building automated probes into the various components of the service that can detect when things are going wrong and then take whatever action is appropriate and necessary to rectify matters, right up to the point where a bugcheck is forced on a server to take it offline.

Of course, there’s a lot more sophistication built into the system than simply taking components offline or forcing servers to reboot. Many years of measurement and analysis of service incidents have identified the most common problem sources in an Exchange infrastructure and the steps that should be taken to resolve issues, or indeed to escalate to a more stringent level of resolution should the first attempt fail. Ross Smith’s blog on “Managed Availability” explains a lot about how Microsoft approached the development of suitable probes to measure service availability, how issues are detected from the data gathered by the probes, and how actions are taken to bring services back to full health. It’s a good read.
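To make the probe/monitor/responder idea concrete, here is a minimal sketch of the escalation pattern described above. This is not Microsoft's implementation; the probe, the responder names, and the escalation ladder are all hypothetical, invented purely to illustrate how a monitor might run a health probe and work through progressively more drastic recovery actions until the probe reports healthy again.

```python
# Hypothetical sketch of the probe -> monitor -> responder pattern:
# a probe gathers a health signal, a monitor evaluates it, and responders
# escalate from gentle fixes up to drastic ones (e.g. forcing a bugcheck).
# All names here are illustrative, not real Managed Availability components.

ESCALATION = ["restart_service", "failover_database", "bugcheck_server"]

def probe_owa(responses):
    """Simulated probe: returns True if a synthetic OWA login succeeded.
    A real probe would perform an actual protocol transaction."""
    return responses.pop(0)

def monitor(responses):
    """Run the probe; on each failure, fire the next responder in the
    escalation ladder, stopping as soon as the probe reports healthy."""
    actions = []
    for level in range(len(ESCALATION)):
        if probe_owa(responses):
            return actions          # healthy again, stop escalating
        actions.append(ESCALATION[level])  # take the next recovery action
    return actions                  # ladder exhausted; humans get paged

# Example: two probe failures trigger two recovery actions, then the
# third probe succeeds and escalation stops short of a bugcheck.
print(monitor([False, False, True]))
```

The design point is that each rung of the ladder is cheap to try automatically, and a human is only woken up when the whole ladder is exhausted.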

SCOM is the obvious partner for the Exchange team to work with on such a problem, and it comes as no surprise that you’ll be able to exploit Exchange 2013’s new capabilities with SCOM. I hope that other third-party monitoring platforms such as HP OpenView are upgraded to use the new capabilities too, so that customer choice is preserved. Ideally, they’d have this done by the time Exchange 2013 ships, so it’s time to get cracking if they haven’t been listening to what Microsoft has been saying.

Human beings are awfully innovative when a situation causes them personal pain. Figuring out how to stay safely tucked up in bed without being disturbed by automated phone calls reporting that “users can’t connect to OWA” or “we’re seeing a lot of event log entries on Server-XYZ” seems like a great incentive to me. I wish that Microsoft had discovered this method of encouraging engineer involvement for previous versions of Exchange. Although it sounds like something that Dilbert’s pointy-haired boss might embrace, it certainly seems to have done the trick in terms of increasing reliability in Exchange 2013.

Of course, Exchange 2013 isn’t released yet, but isn’t it good to know that it holds the promise that on-premises administrators might also get some more undisturbed sleep?

Follow Tony @12Knocksinna


What's Tony Redmond's Exchange Unwashed Blog?

On-premises and cloud-based Microsoft Exchange Server and all the associated technology that runs alongside Microsoft's enterprise messaging server.

Contributors

Tony Redmond

Tony Redmond is a senior contributing editor for Windows IT Pro and the author of Microsoft Exchange Server 2010 Inside Out (Microsoft Press) and Microsoft Exchange Server 2013 Inside Out: Mailbox...
