By the end of 2023, the majority of internet traffic may no longer be generated by human users. Bots, software programs that run automated tasks over the internet, are on track to overtake human traffic.
Last year, bots were responsible for 47.4% of all internet traffic — up from 42.3% the year before. The proportion of human traffic (52.6%) decreased to its lowest level in eight years. It’s a worrying trend, and one that shows no signs of stopping.
Bots aren’t necessarily bad — in fact, there are plenty of good bots, responsible for indexing websites for search engines, monitoring website performance, and other helpful functions. But a substantial proportion of automation falls into the category of “bad bots.” They scrape data from websites without permission, perpetrate aggressive and disruptive distributed denial of service (DDoS) attacks, and even engage in online fraud and theft. In 2022, bad bots accounted for 30.2% of all web traffic — nearly twice as much as good bots — and that number has increased annually.
Today, bots are a significant problem across every industry, but it wasn't always that way. The sophistication of bot technology has evolved over the past decade, putting dangerous new capabilities in the hands of operators. Organizations that fail to recognize the threat posed by today’s bad bots risk leaving themselves vulnerable to attack.
How bad bots evolved
In 2013, the Pushdo botnet was the most widespread bad bot, infecting more than 4.2 million IPs, including those belonging to private companies, government agencies, and military networks. But while Pushdo highlighted the broad scope of bad bot activity, it was used primarily to distribute spam and spread malicious trojans — much like the bots of the early 2000s. The most notable advancement came in 2014 when security researchers first observed bots exploiting mobile browser settings to scrape data. This was an important indicator that attackers were adapting to the increased prevalence of mobile web and application environments.
The sophistication of bad bots accelerated from there. By 2015, operators had shifted their tactics to emphasize quality over quantity. Rather than using one IP address to make 1,000 requests, one bot might cycle through 1,000 different IP addresses and make one request per address. This let operators better disguise their identities, and the growing sophistication of bots made them harder to distinguish from other web traffic. Beyond the malicious activity itself, this had the side effect of skewing web traffic numbers and marketing analytics, creating downstream effects for businesses.
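A minimal sketch illustrates why this rotation tactic defeats simple per-IP rate limiting. The threshold and IP addresses below are invented for illustration; real detection systems are far more elaborate.

```python
from collections import Counter

# Illustrative threshold: flag any single IP making more than 100 requests.
THRESHOLD = 100

# Old tactic: one IP address makes 1,000 requests.
naive_bot = ["203.0.113.7"] * 1000

# Newer tactic: 1,000 distinct IP addresses make one request each.
# (Addresses are fabricated for this example.)
rotating_bot = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]

def flagged_ips(requests, threshold=THRESHOLD):
    """Return the IPs whose request count exceeds the per-IP threshold."""
    counts = Counter(requests)
    return [ip for ip, n in counts.items() if n > threshold]

print(flagged_ips(naive_bot))    # the single noisy IP is caught
print(flagged_ips(rotating_bot)) # empty list: the distributed bot slips through
```

The same 1,000 requests arrive either way; only the second pattern blends into ordinary traffic, which is why defenders had to move beyond per-IP counting.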
In 2016, mobile web browsing first overtook desktop browsing, leading to a 42.8% year-over-year increase in bad bots claiming to be mobile browsers. Awareness of the issue also increased during this period, thanks in large part to the 2016 U.S. presidential election. At that time, bad bots made headlines for engaging in social media-based disinformation campaigns designed to influence the outcome of the election. This was a milestone in the evolution of bot technology: the first time bots entered mainstream discourse, and a signal of how sophisticated the technology had become.
Bad bots are more advanced and dangerous than ever
Over the past five years, the evolution of bad bots has accelerated at a concerning rate. In 2018, bots demonstrated the ability to mimic human behavior like mouse movements and page scrolling, letting automation evade detection by appearing human. Security professionals began to recognize that DDoS protections and web application firewall (WAF) solutions were no longer sufficient to stop sophisticated automation. There was greater recognition within the industry that bad bots were a serious business threat, and that bot management solutions were needed.
While bots have been used in online fraud campaigns in the past, there’s been a rise of “mega credential stuffing” attacks in the past few years. During one observed attack, operators spent 60 hours making more than 44 million login attempts. The availability of breached credentials has spurred an increase in these large-scale attacks, which can cause significant strain on infrastructure. The high volume of bot traffic associated with a large-scale credential stuffing attack can cause slowdowns or downtime on par with a DDoS attack.
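A back-of-the-envelope calculation, using only the figures above, shows the sustained load such an attack places on a login endpoint:

```python
attempts = 44_000_000            # "more than 44 million login attempts"
hours = 60                       # sustained over 60 hours
per_second = attempts / (hours * 3600)
print(f"~{per_second:.0f} login attempts per second")  # ~204 per second
```

Roughly 200 extra login requests every second, around the clock for two and a half days, is well beyond what many authentication systems are provisioned for, which is why the effect can resemble a DDoS attack.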
The COVID-19 pandemic further thrust bad bots into the mainstream. During the pandemic, bots were used to snatch up next-generation gaming systems, vaccine appointments, and essential goods. They were also used for large-scale fraud, perpetrating scams designed to acquire COVID relief funds meant for those in need. These tactics have become commonplace among bot operators — and today’s bots are using more advanced evasion techniques than ever. The most sophisticated bots can now defeat CAPTCHA challenges. Increasingly, bots target vulnerable APIs to scrape valuable and sensitive data from unsuspecting organizations. With billions of dollars potentially at stake, stopping these bots must be a priority.
In the past 12 months alone, the sophistication of bots has roughly doubled, and the advent of tools like generative AI will only accelerate their rate of advancement. The more sophisticated these bots become, the more difficult they are to stop. Organizations must act quickly to ensure that they have effective protections in place. As bot activity closes in on 50% of all internet traffic, security teams must make mitigating the potential impact of those bots a high priority. Those who fail to act are putting themselves, their customers, and their reputations at risk.
Karl Triebes, senior vice president and general manager, application security, Imperva