Is bigger better when it comes to DDoS attacks?
- 07 December, 2012 15:51
DDoS attacks remained front-page news in 2012 for a simple reason: they continue to strike the largest and most secure networks in the world, from government web properties to Wall Street. Is this simply a function of attacks growing large enough to overwhelm these websites? Yes and no.
Arbor’s ATLAS internet monitoring system shows that without question, DDoS attacks are getting bigger, much bigger.
The average attack in September 2012 was 1.67Gbps, 72 per cent growth from September 2011. The number of mid-range attacks (2 to 10Gbps) is also up 14.35 per cent so far this year.
Furthermore, very large attacks (10Gbps and over) are up by 90 per cent this year over 2011 and the largest attack in 2012 was 100.84Gbps.
This increase in attack size has significant implications not only for service providers, but also for enterprises that continue to rely on firewalls and IPS devices to protect them from DDoS attacks.
Because these devices have to keep state information on every session, they can easily be overwhelmed with botnet-based attacks. This often makes them among the first points of failure during DDoS attacks. The larger the attacks get, the more likely these devices are to fail.
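The failure mode described above can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation: it assumes a stateful device with a fixed-size session table, and shows how a flood of bot-originated flows fills that table until legitimate connections are refused.

```python
# Illustrative sketch: why stateful firewalls/IPS fail under connection
# floods. The session-table size and flow model are assumptions made
# for this example, not measurements of any real device.

class StatefulDevice:
    def __init__(self, max_sessions):
        self.max_sessions = max_sessions
        self.sessions = set()   # one entry of state per tracked flow
        self.dropped = 0

    def new_connection(self, flow_id):
        """Admit a flow if the state table has room, otherwise drop it."""
        if len(self.sessions) < self.max_sessions:
            self.sessions.add(flow_id)
            return True
        self.dropped += 1       # attack and legitimate flows alike are refused
        return False

# A botnet opens far more flows than the table can hold...
device = StatefulDevice(max_sessions=10_000)
for bot_flow in range(50_000):
    device.new_connection(("bot", bot_flow))

# ...so by the time a real user connects, the device turns them away.
accepted = device.new_connection(("legit", 1))
print(accepted, device.dropped)
```

The key point the sketch captures is that the device fails closed for everyone: once state is exhausted, it cannot distinguish a bot from a customer, which is why larger attacks make these devices the first point of failure.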
All of that said, when it comes to DDoS, size isn't everything. That is why a layered defence strategy is a best practice for all enterprises.
The most robust defence is achieved by combining a cloud-based DDoS managed service that protects the network from larger attacks, together with an on-premise DDoS solution.
This keeps services available and protects existing security infrastructure, such as the firewall and IPS, by detecting and mitigating application-layer attacks at the perimeter of the network.
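The division of labour in that layered model can be sketched as a simple decision rule. The threshold value and function names below are illustrative assumptions, not a real product API: large volumetric floods must be scrubbed upstream in the cloud before they saturate the access link, while low-bandwidth application-layer attacks are handled on-premise.

```python
# Hypothetical sketch of the layered-defence idea: route mitigation to
# the layer suited to the attack. The 2 Gbps trigger is an assumed
# figure for illustration only.

VOLUMETRIC_THRESHOLD_GBPS = 2.0

def choose_mitigation(traffic_gbps, looks_like_app_layer_attack):
    """Pick a mitigation layer for observed attack traffic."""
    if traffic_gbps >= VOLUMETRIC_THRESHOLD_GBPS:
        # Floods this large would saturate the enterprise's link, so
        # they must be scrubbed upstream, in the cloud.
        return "cloud-scrubbing"
    if looks_like_app_layer_attack:
        # Low-bandwidth HTTP/HTTPS/DNS attacks are best detected and
        # dropped on-premise, in front of the firewall and IPS.
        return "on-premise-mitigation"
    return "allow"

print(choose_mitigation(25.0, False))   # volumetric flood -> cloud
print(choose_mitigation(0.1, True))     # slow app-layer attack -> on-premise
```

The design point is that neither layer alone suffices: the cloud layer cannot easily see a slow application-layer attack, and the on-premise layer cannot absorb a multi-gigabit flood.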
Recent attacks prove it’s not all about size
Recent attacks on banks in the United States show that DDoS is not all about size. These attacks are becoming increasingly complex, often combining multiple techniques and targets.
These attacks used a combination of attack tools with vectors mixing application layer attacks on HTTP, HTTPS and DNS with volumetric attack traffic on a variety of protocols including TCP, UDP, ICMP and others.
Another notable and uncommon aspect of this series of attacks was the simultaneous targeting, at high bandwidth, of multiple companies in the same vertical.
Many of the compromised hosts used in these attacks were servers with significant upstream bandwidth at their disposal. The majority of these bots were created by exploiting PHP web applications.
Many WordPress sites, often running the out-of-date TimThumb plugin, were compromised around the same time. Joomla and other PHP-based applications were also exploited.
Often these were unmaintained servers to which attackers uploaded PHP web shells, which they then used to deploy further attack tools.
Attackers connected to the tools either directly or through intermediate servers, proxies or scripts, so the usual concept of command and control did not apply.
Without question, DDoS attacks are growing larger. More significantly, they are becoming increasingly complex, blending multiple attack tools, techniques and targets. One reason DDoS remains such an effective weapon is that too many enterprises still rely on solutions designed for other problems to combat it.
These complex, rapidly evolving attacks require purpose-built tools on-premise as well as cloud-based protection, providing comprehensive coverage against both large attacks and those that target the application layer.
Until we see pervasive deployment of best-practice defences, we can expect to see DDoS in the headlines for many years to come.
Gary Sockrider is a network solutions architect at Arbor Networks.