Fundamentally, the transition to cloud is an infrastructure play – moving various types of services, servers and computing resources into an environment with ubiquitous hosting and access. It is no surprise, then, that when an enterprise IT manager considers migrating to a cloud architecture, one of the primary concerns is the potential for security threats created by this increased accessibility to the network.
However, the shift to cloud architecture can improve on current security practices by moving computing power and application intelligence to a centralized complex of servers, accessible via light clients. Computer terminals of the 1970s, 1980s and early 1990s were the ultimate light client because they housed no software. Before desktop computers, these terminals accessed information on mainframes and provided significant security benefits: there was no need for end-user software patching, and no end-user platform for targeted malware. As we move IT infrastructure to the cloud, the familiar arrangement of terminals and mainframes provides a good example of how cloud architecture – and security – could be set up.
One of the key advantages of mainframes involved the amortization and centralization of administration by those who did it best – as is the case with cloud computing. Massive distribution of security responsibility requires vigilance on the part of every user, and this is not a reasonable expectation.
Cloud computing also provides considerable security benefits by removing a monoculture of identical devices from the environment. This requires, however, that the infrastructure supporting the cloud applications be properly secured. If that infrastructure is not protected, we are simply moving the monoculture vulnerability from the devices to the servers.
Another critical element of security in the cloud is identity management, which becomes the front line of defense for differentiating between someone who should have access to the enterprise network and a malicious source trying to gain access. Securing the underlying infrastructure in combination with improved identity management means a given bot, virus or worm sent from a malicious source will have to find its way through at least two layers of filtering to reach its intended target. The degree of diversity in these layers will also have a direct impact on their effectiveness.
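The two-layer idea can be sketched in code. This is a minimal conceptual illustration, not any vendor's implementation; the signature list, user list and function names are all hypothetical.

```python
# Hypothetical sketch of two-layer filtering: an infrastructure-level
# filter followed by an identity check. All names are illustrative.

BLOCKED_SIGNATURES = {"known-worm-payload", "botnet-beacon"}
AUTHORIZED_USERS = {"alice", "bob"}

def infrastructure_filter(packet):
    """Layer 1: drop traffic matching known-bad signatures."""
    return packet["signature"] not in BLOCKED_SIGNATURES

def identity_check(packet):
    """Layer 2: admit only requests from authenticated identities."""
    return packet.get("user") in AUTHORIZED_USERS

def admit(packet):
    # A request must clear both layers to reach its intended target.
    return infrastructure_filter(packet) and identity_check(packet)
```

If the two layers use diverse detection techniques, as the text suggests, an attack that slips past one is less likely to share the blind spot of the other.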
Cloud computing provides fundamental advantages for those who use online assets as part of their business. We need to do a far better job – as an industry – of demonstrating that the infrastructure and services we are putting into the cloud are as good as or, in my opinion, superior to what we have today.
»Building from within
At AT&T, says Ed Amoroso, the approach is to build security strategies into the underlying infrastructure for the company’s cloud systems – within the AT&T global network.
»Detour for “bad” traffic
AT&T's underlying platform has built-in DDoS protection, which can divert suspicious traffic in real time, filter it and then return it to the MPLS network.
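The divert-scrub-reinject pattern described here can be sketched conceptually. This is a toy model, not AT&T's system: the suspicion heuristic, flow fields and thresholds are invented for illustration.

```python
# Hypothetical sketch of divert-scrub-reinject: suspicious flows are
# pulled aside, filtered, and the survivors rejoin the clean path
# (standing in for re-entry into the MPLS network).

def looks_suspicious(flow):
    # Toy heuristic: flag flows with an abnormally high packet rate.
    return flow["packets_per_sec"] > 10_000

def scrub(flows):
    """Drop flows that fail deeper inspection; pass the rest."""
    return [f for f in flows if f["payload"] != "attack"]

def route(flows):
    clean, diverted = [], []
    for f in flows:
        (diverted if looks_suspicious(f) else clean).append(f)
    # Diverted traffic is scrubbed, then rejoins the clean path.
    return clean + scrub(diverted)
```

The key property is that normal traffic never visits the scrubbing stage, so filtering cost is paid only on the suspicious fraction.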
»Using route reflectors
Amoroso’s company is also managing cloud performance by using route reflectors to optimize the manner in which data, traffic and services flow in and out of its infrastructure.
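Route reflection itself is a standard BGP technique (RFC 4456): rather than every router peering with every other, clients peer only with a reflector, which re-advertises each client's routes to the others. As a rough conceptual sketch only (the class and router names are hypothetical, and this says nothing about AT&T's actual deployment):

```python
# Hypothetical sketch of BGP route reflection (RFC 4456): clients peer
# only with the reflector, which passes each client's routes to the
# others, avoiding a full mesh of sessions.

class RouteReflector:
    def __init__(self):
        self.clients = {}  # client name -> set of advertised prefixes

    def advertise(self, client, prefix):
        self.clients.setdefault(client, set()).add(prefix)

    def routes_for(self, client):
        """Routes the reflector re-advertises to `client`: everyone else's."""
        return {p for c, ps in self.clients.items() if c != client for p in ps}
```

With n clients, the reflector reduces the number of sessions from on the order of n² (full mesh) to n, which is one way such routing machinery can optimize how traffic information propagates through a large infrastructure.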
»Taking the lead
The approaches presented here may still be unique, says Amoroso, but he believes the rest of the industry will eventually adopt advanced routing systems as well.