Part of the challenge of running a typical virtualized infrastructure is that you can have many varying application workloads. Each of these workloads can have different network requirements and demands. Real-time applications such as voice or streaming media are sensitive to latency, whereas file transfers just need the ability to shove as much data as possible over the wire, as quickly as possible.

Admins who run a real-time communications service such as Microsoft Lync Server understand full well how network latency or congestion can degrade the user experience. Lync and other Unified Communications (UC) applications are especially sensitive to network quality issues. Think about the kinds of traffic these applications generate, such as video or audio chats or whiteboard sharing: always in real time, originating from any number of devices, spread across any number of networks—from your corporate campus to remote offices or even home offices over variable connections. Unlike other IP-based traffic, which is bursty and rarely long-lived, UC traffic can last for minutes or hours.

These UC workloads can benefit substantially from a network that can accommodate quality of service. QoS, as built into the OS on which the workloads run, examines packets as they pass through the network stack and can decide to prioritize them based on the type of data being carried, or even on who is consuming that data.

Imagine a scenario in which any traffic that comes from your CEO gets the highest priority on the network. That scenario is not farfetched with the features built into the QoS scheduler in Windows Server 2012 and Windows Server 2012 R2. QoS has been present in Windows Server since Windows Server 2008 R2, and so has the kind of policy-based QoS that the CEO scenario describes. But Microsoft made some key improvements to QoS in Windows Server 2012 to make it more powerful, especially in virtualized environments, where different workloads may transit a single physical NIC in a host server.

In earlier versions, you could set a maximum bandwidth value for an application, specifying that it could never use more than a fixed amount of bandwidth. Now, you can also specify a minimum amount of bandwidth, so that when the link is congested, each application is still guaranteed its assigned share of the network and can continue operating responsively; when the link is idle, any application remains free to use more. You can imagine how handy this is if your users perform real-time tasks such as video streaming or Lync video conferencing.
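As a sketch of how minimum bandwidth might be assigned on a Hyper-V virtual switch using the Hyper-V PowerShell cmdlets (the switch name, NIC name, VM names, and weight values here are illustrative, not from the article):

```powershell
# Create a virtual switch whose bandwidth reservations are expressed
# as relative weights rather than absolute bits per second.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1" `
    -MinimumBandwidthMode Weight

# Under congestion, the Lync front-end VM is guaranteed the larger
# share of the physical NIC; when the link is idle, either VM can
# still burst to full speed.
Set-VMNetworkAdapter -VMName "LyncFE01" -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -VMName "FileServer01" -MinimumBandwidthWeight 10
```

Because the weights are relative rather than absolute, the guarantee scales with the link: unused capacity is never stranded, which is what makes minimum bandwidth a better fit than hard caps for mixed workloads on one NIC.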

In addition, features such as traffic classification and tagging let you identify types of network workload; after all, you can’t prioritize or guarantee bandwidth for a packet unless you know which application the packet belongs to. Finally, with a policy framework for QoS delivered through management tools such as PowerShell or Group Policy, you can implement the kinds of scenarios mentioned earlier—such as ensuring that the CEO’s Lync video conferences always get a certain amount of bandwidth and are always prioritized over less important traffic.
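A hedged sketch of such a policy, using the NetQos PowerShell cmdlets introduced in Windows Server 2012 (the policy names, executable path, DSCP value, and throttle rate are illustrative assumptions, not prescriptions from the article):

```powershell
# Classify traffic generated by the Lync client executable and tag it
# with DSCP 46 (Expedited Forwarding) so that network equipment along
# the path can prioritize it.
New-NetQosPolicy -Name "LyncAudioVideo" `
    -AppPathNameMatchCondition "lync.exe" `
    -DSCPAction 46

# Meanwhile, cap bulk SMB transfers so file copies cannot starve
# real-time traffic on the same link.
New-NetQosPolicy -Name "BulkFileCopy" `
    -SMB `
    -ThrottleRateActionBitsPerSecond 100MB
```

Policies like these can also be deployed centrally via Group Policy (Policy-based QoS under Computer Configuration), which is how a rule scoped to a specific user or machine—such as the CEO scenario above—would typically be rolled out.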

Applications such as Lync are valuable to the business only if you can deliver a consistent end-to-end experience for the customer. And technologies such as QoS in Windows Server 2012 and Windows Server 2012 R2 make that possible by reducing the variability of traffic flows on IP-based networks of all kinds. Learn more about QoS at http://technet.microsoft.com/en-us/library/hh831679.aspx and http://technet.microsoft.com/en-us/library/jj735302.aspx. Learn more about networking at http://www.microsoft.com/en-us/server-cloud/solutions/software-defined-networking.aspx.