Money Alone Won't Solve Network Security Issues
The combination of terrorism fears and high-profile virus, worm and denial-of-service attacks, along with ongoing break-in revelations, has given new urgency to network and application security issues.
But as companies scramble to shore up their infrastructures with elaborate and expensive security solutions, they may be overlooking the greatest source of network exploits - misconfigured networks. According to Gartner, Inc. vice president John Pescatore, misconfigured networks account for nearly two-thirds of all network security problems.
With the vast majority of security spending focused on the smallest part of the problem, IT managers need to look more closely at a variety of common network security mistakes - from misconfigured servers, routers, firewalls or other devices, to the existence of rogue segments and improper application usage - before making further investments. Without a correctly configured corporate network, increased security spending alone can't keep the intruders out.
Three common examples help illustrate the costly and time-consuming issues IT managers face both in discovering the network security lapses that may exist and in rooting them out. First, unexpected and unauthorized network traffic may be operating on the network, yet go completely unnoticed. Next, undiscovered rogue subnets may wreak havoc on IT departments. Finally, connections to third-party networks outside their control may create additional security burdens for a growing number of companies. As these examples show, the dynamic nature of today's networks requires regular network traffic discovery - ongoing monitoring, analyzing, and comparing to established security policies and business requirements - to ensure the network is actually secure.
Preventing unauthorized traffic
It is common practice to close unused firewall ports and employ appropriate protocol proxies to restrict traffic. Despite this, it is surprising how often IT administrators, usually for lack of security experience, will simply open port 80, for example, to all traffic without restricting it to web traffic or to specific hosts. All too often, far more ports are left open than are actually necessary, or open ports allow unrestricted communication between the corporate intranet and the Internet.
These networks are particularly susceptible to unexpected or unauthorized traffic that may result in both network security and performance issues. A misconfigured file-sharing application can expose sensitive information to the Internet. An unauthorized web server behind the corporate firewall can create all sorts of security problems. The network congestion caused by bandwidth-hungry streaming applications can result in the loss of critical real-time data. From file-sharing applications like Gnutella and Morpheus to instant messaging clients and bandwidth-gobbling streaming media, IT administrators must constantly balance business requirements (which are usually known but not documented) against security policies, allowing certain types of network traffic while restricting unwanted traffic that may attempt to use the same ports.
The most effective way to prevent these abuses is to validate the traffic actually moving through these ports. Known protocols can be monitored regularly and analyzed to determine if anything unexpected is occurring, such as finding binary data where web pages are expected. But in some cases, such as the ports used for SSH or SSL, the traffic is encrypted, making it much more difficult to monitor, and so the traffic should be analyzed for proper encryption strength and protocol usage. Beyond these measures, it is imperative to restrict the number of open ports to only those necessary for business requirements, and restrict them to specific hosts or subnets wherever possible. Internally, firewalls could be used to separate large corporate network segments, such as between the corporate network and the production network, and port-based filtering on routers could be used to isolate certain kinds of traffic. However, constant monitoring of network traffic may provide the most effective way of maintaining a high security level.
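The port-80 validation described above can be sketched in a few lines: flag any captured payload that does not begin like an HTTP request or response (for instance, binary data tunneled over the web port). The function names here are assumptions for the sketch; a real monitor would obtain the payload bytes from a packet-capture library rather than a Python list.

```python
# Sketch: classify payloads seen on port 80 as HTTP-like or suspect.
# Hypothetical helper names; payloads are assumed to arrive as byte strings.

HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

def looks_like_http(payload: bytes) -> bool:
    """Return True if the first bytes resemble an HTTP request or response."""
    return payload.startswith(HTTP_METHODS) or payload.startswith(b"HTTP/")

def flag_suspect_port80(payloads):
    """Yield payloads that do not look like web traffic (e.g. tunneled binary data)."""
    for payload in payloads:
        if not looks_like_http(payload):
            yield payload
```

A heuristic like this will not catch everything, but it illustrates the principle: the question is not whether port 80 is open, but whether what moves through it is actually web traffic.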
Rooting out rogue subnets
Failure to configure routers to reject and block IP addresses not specifically allowed is another common configuration error, one that opens the door to the use of unauthorized, or rogue, subnets. At the same time, IT departments know they cannot rely on users not to change the IP address on a computer, or not to attach a machine from outside the company (such as a laptop) to the network and assign it an IP address themselves. These rogue networks are "invisible" to other network users, and to the IT department as well.
Although they are likely to contain proprietary company information, computers on these rogue networks are unlikely to be backed up by the IT department, increasing the likelihood that this information will be lost or compromised. And because the IT department is unaware that these IP addresses are already in use, it may try to assign them again. When the new assignments don't work, administrators may be unable to trace the problem back to the rogue network, and may waste considerable resources trying to diagnose it, if a diagnosis is even possible.
A cost-effective way to detect a rogue network is to monitor network activity and look for it specifically. Another way to deal with rogue subnets is to configure routers to reject traffic from all network addresses that have not been approved. While this won't stop small rogue networks from operating within a limited range, it will mitigate the damage a rogue subnet can cause by isolating its traffic and preventing it from affecting the greater enterprise LAN.
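The address-screening idea above can be sketched with Python's standard ipaddress module: compare each source address observed on the wire against the list of approved subnets and flag the rest. The subnet ranges and function name below are illustrative assumptions, not values from any particular network.

```python
# Sketch: flag observed source addresses outside the approved subnets.
import ipaddress

# Illustrative example ranges; a real deployment would load these
# from the organization's address plan.
APPROVED_SUBNETS = [
    ipaddress.ip_network("10.1.0.0/16"),   # corporate LAN (assumed)
    ipaddress.ip_network("10.2.0.0/16"),   # production network (assumed)
]

def find_rogue_hosts(observed_ips):
    """Return observed addresses that belong to no approved subnet."""
    rogues = []
    for ip_str in observed_ips:
        addr = ipaddress.ip_address(ip_str)
        if not any(addr in net for net in APPROVED_SUBNETS):
            rogues.append(ip_str)
    return rogues
```

The same membership test expressed in router access-control lists is what actually blocks the traffic; the monitoring script merely makes the rogue addresses visible to the IT department.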
Connecting to outside networks
Misconfigurations at partner sites can also affect a company's network security, yet companies have no direct control over these systems. A partner providing credit card verification, for example, may have misconfigured its SSL encryption strength, may be serving an expired digital certificate, or may suffer other quality-of-service problems, any of which could impact the validity of the transaction.
It is incumbent upon businesses to know as much as possible about what their customers are experiencing in transactions that cross between their own and their partners' networks. Analyzing this traffic can reveal the strength of the encryption in use, validate digital certificates, and verify that the remote server is performing adequately, information that is also useful for enforcing service-level agreements.
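One piece of that analysis, checking whether a partner's certificate has expired or is about to, can be sketched with Python's standard ssl module. The function name is an assumption, and the certificate dictionary in the test is fabricated for illustration; in practice it would come from ssl.SSLSocket.getpeercert() on a live connection to the partner's server.

```python
# Sketch: days remaining on a peer certificate, given the dictionary
# form returned by ssl.SSLSocket.getpeercert().
import ssl
import time

def cert_days_remaining(cert, now=None):
    """Days until the certificate's notAfter date (negative if expired)."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    if now is None:
        now = time.time()
    return (expires - now) / 86400.0
```

A monitoring process might run a check like this against each partner endpoint daily and alert well before expiry, rather than discovering the problem when customer transactions start failing.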
Before deploying a new enforcement architecture, experienced IT managers will gain visibility into how their existing security infrastructure is working. Proper border device configuration and verification at regular intervals through monitoring is the most cost-effective way to ensure that network traffic satisfies security requirements and maintains business requirements as well.
While constant monitoring and analysis are time consuming and costly endeavors for IT departments, new solutions are appearing which automate these processes and make them practical as a way of ensuring the ongoing security health of enterprise networks. This new generation of security applications enables enterprises to easily and cost-effectively ensure their security policies are in effect across the network, their security implementations are correct and their business practices are being effectively executed.
Taher Elgamal is co-chairman and chief technology officer, Securify, Inc. (www.securify.com).