Today's content-heavy networks are transmitting richer, more critical data - and as a result, more and more enterprises are demanding quality-of-service (QoS) agreements for a higher level of network service, one that assures reliable delivery of content along with consistent availability of bandwidth.
But even when an enterprise strikes a QoS agreement with a network provider, there is no guarantee that the promised bandwidth will actually be used for the traffic that matters most to the enterprise.
If a company is truly interested in ensuring the rapid delivery of critical data, it must first determine which network traffic is most important and then develop a QoS policy (priorities, guarantees and limits) that ensures the best performance and reliable delivery for specific types of network traffic. The end result, ideally, is that critical data always has a guaranteed clear line through which to pass, quickly and reliably.
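A QoS policy of this kind - priorities, guarantees and limits per traffic class - can be thought of as a simple lookup table. The sketch below is purely illustrative: the class names, priorities and bandwidth figures are invented for the example, not drawn from any real deployment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosRule:
    """One entry in a QoS policy: a priority plus an optional
    bandwidth guarantee and absolute limit (both in kbit/s)."""
    priority: int                     # lower number = higher priority
    guarantee_kbps: int = 0           # bandwidth reserved for this class
    limit_kbps: Optional[int] = None  # absolute cap; None = no cap

# Hypothetical policy: names and numbers are illustrative only.
policy = {
    "voip": QosRule(priority=1, guarantee_kbps=512),
    "erp":  QosRule(priority=2, guarantee_kbps=256),
    "web":  QosRule(priority=3),
    "p2p":  QosRule(priority=4, limit_kbps=128),
}

def lookup(traffic_class: str) -> QosRule:
    # Unclassified traffic falls back to best-effort web treatment.
    return policy.get(traffic_class, policy["web"])
```

The point of the structure is that every packet classification maps to exactly one rule, so enforcement at the gateway is a single lookup.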
As newer, richer content, such as voice over IP (VoIP), streaming media and peer-to-peer (P2P) applications, depends more heavily on fast delivery for successful transmission, QoS is becoming extremely important. What's more, in the interest of keeping budgets down without sacrificing critical applications, it is becoming necessary for enterprises to integrate QoS into their network architectures so that bandwidth allocation is most efficient. Not surprisingly, the edge point location of most corporate firewalls and virtual private networks (VPNs) is proving to be an ideal site for QoS devices. It's also proving to be the only location that enables both the QoS device and the VPN to function optimally. Here's why:
- Firewalls and VPNs are the 'gateways' to the network; they are the only devices through which all traffic must pass. As such, integrating a firewall/VPN solution with a QoS device not only ensures the easiest point of enforcement for the QoS policy, but it also enables the most convenient, secure and efficient management of the QoS policy for the entire network.
- The QoS device is able to fully leverage the advantages of the VPN to promote strengthened security practices and maximize the ability of the QoS tool to identify and classify traffic on the network, regardless of encryption.
What happens when you don't integrate QoS with your VPN? Quite simply, you introduce security vulnerabilities and inaccuracies.
- When placed on the WAN side, the QoS device faces potential denial-of-service (DoS) attacks, and can affect the performance of other critical tools and protocols. With regard to network address translation (NAT), the QoS device is blinded by its WAN-side location and cannot manage or prioritize traffic by user, group or network. The QoS device is also unable to classify or shape traffic by application - preventing the VPN from doing its own job effectively.
- If placed on the LAN side, the QoS link can flood inadvertently because encryption software frequently increases packet size. In addition, any traffic that might otherwise be rejected as low priority can mistakenly be taken into consideration by the QoS device. This not only reduces the efficiency of the QoS device, but it also allows potentially risky traffic into the network without sufficient scrutiny.
QoS Maximizes Bandwidth Potential
Before selecting a QoS device, most enterprises should determine the desired benefits of the implementation. If properly integrated, a QoS solution can layer firewall, VPN and QoS functionality so that users can easily extend a firewall or VPN deployment to include QoS, without the complexities of managing multiple discrete-function devices. Not only does this type of implementation provide integrated security throughout all network devices, but it also ensures a QoS policy in keeping with a company's security policy. Additionally, an effective QoS device will provide two main advantages when deployed as part of a perimeter security solution.
First and foremost, the device will optimize network performance by assigning priority to business-critical traffic. By aligning network resources with business goals, a QoS policy makes it possible to realize the true potential of IP networks.
Next, a QoS device will smooth connections and drastically reduce retransmit counts, thereby substantially improving the efficiency of the enterprise's existing lines. The bandwidth that then becomes available to important applications comes at the expense of less important (or unimportant) applications. As a result, the purchase of additional bandwidth can now be an intelligent business decision.
Effective Prioritizing and Intelligent Queuing
In order to provide these benefits, a QoS device should employ specific features that work together to ensure comprehensive, reliable network performance. One such feature, called stateful inspection (patented by Check Point Software), works to initially classify communications by accessing and analyzing data derived from all communication layers. Also known as dynamic packet filtering, it is critical because it enables a QoS device to perform its primary function of prioritizing network traffic and allocating bandwidth according to the QoS policy.
How? Instead of simply examining packet headers, the QoS solution can leverage additional functionality to inspect packets down to the application level, and then parse URLs and set priority levels based on file types. (For example, it could identify HTTP file downloads with *.EXE or *.ZIP extensions and then allocate bandwidth accordingly.) The data accumulated on packet state and context is then stored and updated dynamically, enabling communications to be consistently classified, as well as providing virtual session information for tracking both connection-oriented and connectionless protocols (for example, UDP-based applications).
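The file-type classification step described above can be sketched in a few lines. This is a simplified illustration, not Check Point's actual inspection engine: the extension-to-priority map and the priority labels are assumptions made for the example.

```python
from urllib.parse import urlparse

# Illustrative mapping of file extensions to priority levels;
# a real policy would be driven by the enterprise's QoS rules.
EXTENSION_PRIORITY = {
    ".exe": "low",     # bulk executable downloads
    ".zip": "low",     # bulk archive downloads
    ".html": "normal",
    ".htm": "normal",
}

def classify_http_request(url: str) -> str:
    """Assign a priority level to an HTTP request based on the
    file type named in the URL path."""
    path = urlparse(url).path.lower()
    for ext, prio in EXTENSION_PRIORITY.items():
        if path.endswith(ext):
            return prio
    return "normal"  # anything unrecognized gets best-effort treatment
```

In practice this lookup would run after deeper stateful inspection has already reassembled the request, since the URL is not visible in individual packet headers.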
A second critical element for an optimal QoS device is intelligent queuing, featuring an enhanced weighted fair queuing (WFQ) algorithm, to manage bandwidth allocation. WFQ is a method for packet scheduling that works to either guarantee bandwidth to particular applications, or to establish absolute limits. With the right queuing engine and the right WFQ algorithm in play, a packet scheduler will be installed on each network interface to move packets through a dynamically changing scheduling tree. The packets move at different rates according to the QoS policy; naturally, high-priority packets move through the scheduling tree and emerge on the other side of the interface more quickly than low-priority packets. As a result, critical traffic reaches its destination faster, and bandwidth is kept free for additional traffic of high importance.
The concept of prioritizing network traffic is an interesting one; it means that some information will be rated relevant, while other data is considered unnecessary. In terms of QoS, this leads to the requirement for some type of functionality that enables the device to appropriately direct traffic - even that of the 'less important' variety. Clearly, high-priority traffic will be funneled through the network as quickly as possible, and will take precedence over low-priority traffic. But what happens to this low-priority traffic in the meantime? Does it hang around and clog the network? Is it simply dropped?
An effective QoS device takes into consideration the fact that low-importance traffic must be handled as efficiently as high-importance traffic in order to keep bandwidth free of congestion. To accomplish this, the QoS solution's queuing engine should employ a weighted flow random early discard (WFRED) intelligent drop policy, which prevents network clogs before they happen. WFRED monitors the flow of network traffic as it queues up and, when the queue length reaches a traffic threshold specified by the QoS policy, it intelligently discards packets classified as low priority, signaling TCP senders to slow down. In other words, most dropped packets are not forgotten - TCP simply retransmits them later, more slowly but still reliably. On the off chance that any packets represent a potential DoS attack, the QoS device integrated with the VPN solution can identify the risk and, if specified by both the QoS and security policies, drop the packets altogether.
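The drop decision behind this kind of policy follows the classic random early discard (RED) curve, weighted per priority class. The sketch below is a generic RED illustration under assumed thresholds, not the WFRED implementation described in the article; the per-class threshold values are invented for the example.

```python
import random

def red_drop_probability(queue_len, min_th, max_th, max_p):
    """RED-style drop curve: no drops below min_th, linearly
    increasing drop probability between the thresholds, and a
    forced drop once the queue exceeds max_th."""
    if queue_len < min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)

# Weighted variant (illustrative thresholds): low-priority traffic
# sees lower thresholds, so it is discarded first under load.
THRESHOLDS = {
    "high": (40, 60, 0.02),  # (min_th, max_th, max_p)
    "low":  (10, 30, 0.10),
}

def should_drop(priority, queue_len, rng=random.random):
    min_th, max_th, max_p = THRESHOLDS[priority]
    return rng() < red_drop_probability(queue_len, min_th, max_th, max_p)
```

Because the drops are probabilistic and start early, TCP senders back off gradually instead of all at once, which is what prevents the congestion collapse a simple tail-drop queue can cause.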
In addition, the QoS device should ideally enable the administrator to direct bandwidth to applications in the 'right' ratio, as established in the enterprise's QoS policy, with the greatest of ease. To make the most efficient use of the investment in network capacity, a retransmission detection early drop (RDED) mechanism ensures that retransmitted packets don't seize valuable resources required for other business needs, preventing redundant TCP retransmits and further maximizing available bandwidth.
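One way to picture the RDED idea: if a retransmitted copy of a packet arrives while the original is still waiting in the queue, sending both wastes the link. The sketch below drops such duplicates by remembering which (flow, sequence number) pairs are queued; the details are assumptions made for illustration, not the vendor's actual mechanism.

```python
class RetransmitFilter:
    """Illustrative retransmission-detection early drop: a TCP
    retransmit that duplicates a packet still sitting in the queue
    is dropped, so the link never carries the same data twice."""

    def __init__(self):
        self.queued = set()   # (flow_id, tcp_seq) currently in queue

    def admit(self, flow_id, tcp_seq):
        key = (flow_id, tcp_seq)
        if key in self.queued:
            return False      # duplicate of a packet still waiting: drop it
        self.queued.add(key)
        return True

    def on_transmit(self, flow_id, tcp_seq):
        # Once the packet has left the queue, a later retransmit is
        # legitimate (the first copy may have been lost downstream).
        self.queued.discard((flow_id, tcp_seq))
```

The effect is that congestion-induced retransmits stop compounding the congestion that caused them.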
QoS Today - Better Networks Tomorrow
When combined, these features provide the necessary foundation for a valuable QoS solution - something that is becoming more and more important when building and operating efficient networks today. Especially when deployed at the firewall/VPN level, a QoS device can optimize network performance to save an enterprise time and money in implementation costs as well as routine management efforts. Furthermore, with an effective QoS solution in place, companies can maximize bandwidth potential while learning from relevant bandwidth trends - so purchasing decisions for future bandwidth needs can be made with greater intelligence and efficiency. In today's world of ever-increasing availability requirements and ever-richer content, QoS is more than just a service guarantee. It's a necessary investment toward ensuring the scalability and performance of critical networks.
Neil Gehani is senior product manager, QoS, Check Point Software Technologies (www.checkpoint.com).