Application security

How bots abuse APIs and tips to protect against it


The rapid evolution of application architectures, driven by accelerated digital transformation during the pandemic, has transformed traditional monolithic applications into agile, distributed microservices connected via application programming interfaces (APIs). These APIs, now the bedrock of modern business operations, facilitate seamless communication between diverse applications and services.

As a result of this shift, API usage has surged in recent years, with API calls now accounting for the majority (71%) of web traffic. Enterprises, in particular, rely heavily on APIs to create engaging online customer experiences. In 2023, the typical enterprise site saw an average of 1.5 billion API calls. This increased reliance on APIs has introduced new cybersecurity risks as cybercriminals seek to exploit APIs, which act as direct pathways to sensitive data.

The growing threat of automated API abuse

Today, the biggest security risk impacting APIs is automated abuse by bad bots, one of the most pervasive and growing threats facing every industry. Bots are software applications that leverage APIs and run automated tasks across the internet every day. Good bots, such as those that improve search results or monitor website performance, serve useful functions. By contrast, bad bots abuse the same APIs meant for good bots to carry out a range of tasks, from scalping to content scraping—resulting in higher infrastructure and support costs, increased customer churn, and damaged brand reputations.

Account takeover (ATO) attacks, in which cybercriminals use bad bots to facilitate credential stuffing and brute force attacks to break into online accounts, are a persistent risk. In fact, 44% of all ATO attacks in 2023 targeted APIs. These APIs handle crucial identity verification processes, making them an ideal target to gain unauthorized access to user accounts.
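Credential stuffing leaves a recognizable signature: many failed logins from the same client in a short window. As an illustrative sketch only (not a technique described in the article), the sliding-window detector below flags client IPs that exceed a failure threshold; the class name, window, and threshold are all hypothetical choices.

```python
from collections import defaultdict, deque

# Hypothetical detector: flag any client IP with too many failed
# logins inside a sliding time window (values are illustrative).
WINDOW_SECONDS = 60
MAX_FAILURES = 5

class FailedLoginTracker:
    def __init__(self, window=WINDOW_SECONDS, max_failures=MAX_FAILURES):
        self.window = window
        self.max_failures = max_failures
        self.failures = defaultdict(deque)  # ip -> timestamps of failed logins

    def record_failure(self, ip, timestamp):
        q = self.failures[ip]
        q.append(timestamp)
        # Evict failures that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()

    def is_suspicious(self, ip):
        # A burst of failures suggests credential stuffing, not a typo.
        return len(self.failures[ip]) >= self.max_failures
```

In practice such a signal would feed a step-up response (MFA challenge, temporary lockout) rather than a hard block, since a shared NAT address can also produce bursts of failures.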

Challenges in protecting APIs

Protecting APIs from automated abuse by bots is challenging because API traffic is machine-to-machine by design, so legitimate automated clients and malicious bots look alike to the API. Conventional bot protection mechanisms also do not work for APIs: implementing a CAPTCHA challenge on an API request, for example, breaks the calling application. This makes malicious bot traffic difficult to detect and block, allowing attackers to use automation without raising alarms.

More broadly, organizations lack visibility over all their APIs, where they’re located, and the associated risks—making it difficult to identify and protect all public, private, and shadow APIs within their ecosystem against automated abuse.

Shadow APIs, which are undocumented and not maintained by normal IT management and security processes, account for about 4.7% of an organization's active APIs. Often introduced for specific purposes such as supporting legacy clients, diagnostics, or testing, these APIs are not properly cataloged or managed. These endpoints typically have access to sensitive information and pose significant risks since their existence and connections are unknown. A single shadow API can be exploited by bots to gain unauthorized access, steal sensitive data, or abuse legitimate functionality, resulting in compliance violations, regulatory fines, or a security incident.
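One practical way to surface shadow APIs is to diff the endpoints actually serving traffic (from access logs) against the documented inventory, such as the paths in an OpenAPI spec. The sketch below is a minimal, hypothetical illustration of that idea; the simplified log format and function names are assumptions, not a real product's interface.

```python
import re

# Simplified access-log line: "<client> <METHOD> <path> <status>"
# (a hypothetical format for illustration only).
LOG_LINE = re.compile(r'^\S+ (?P<method>[A-Z]+) (?P<path>\S+) \d{3}$')

def observed_endpoints(log_lines):
    """Extract the set of (method, path) pairs seen in traffic."""
    seen = set()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m:
            seen.add((m.group("method"), m.group("path")))
    return seen

def find_shadow_endpoints(log_lines, documented):
    """Endpoints serving real traffic that no documentation covers."""
    return sorted(observed_endpoints(log_lines) - documented)

documented = {("GET", "/api/v1/users"), ("POST", "/api/v1/orders")}
logs = [
    "10.0.0.1 GET /api/v1/users 200",
    "10.0.0.2 POST /api/v1/orders 201",
    "10.0.0.3 GET /api/v1/legacy/export 200",  # undocumented: shadow API
]
print(find_shadow_endpoints(logs, documented))
```

Real discovery tooling works from richer telemetry (gateways, traffic mirrors, code scanning), but the core operation is this same set difference between observed and documented endpoints.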

Proactive measures to mitigate automated API abuse

To effectively mitigate API abuse by bots, organizations should:

  1. Follow Best Authentication Practices: Implement robust authentication mechanisms such as OAuth, OpenID Connect, and multi-factor authentication (MFA) to secure access to APIs. Ensure that credentials and tokens are securely managed, periodically rotated, and adhere to the principle of least privilege.
  2. Avoid Storing Keys: Do not store API keys or other sensitive credentials in plaintext. Display a key only once, at generation time, and retain at most a hashed copy for verification. If a key is lost or exposed, revoke it and generate a new one rather than attempting to recover the old one.
  3. Maintain an Up-to-Date Inventory: Discover, classify, and inventory all APIs, endpoints, parameters, and payloads. Use continuous discovery to keep the API inventory current and to flag endpoints that expose sensitive data.
  4. Proactively Address Potential Vulnerabilities: Security and development teams should work together to carry out threat modeling exercises to identify, understand, and address potential vulnerabilities before they can be exploited. Certain website features, such as login pages or checkout forms, are particularly susceptible to malicious bot activities. To mitigate such risks, it is essential to apply enhanced security measures and enforce stricter rules on these pages.
  5. Establish a Baseline for Expected Behavior: By understanding the baseline behavior of an API—including expected usage rates, geographic patterns, and client types—organizations can more readily detect abnormal activity indicative of bot-driven attacks. Monitoring for unusual spikes in traffic, unexpected upticks in API calls, or an increase in requests from a new client can signal potential security threats requiring immediate attention.
  6. Implement Comprehensive Access Controls: Using stringent access controls, such as token-based authentication and rate limiting, can fortify APIs against malicious bot activity. Enforcing rate limits on requests per minute/session and implementing IP-based restrictions can mitigate the risk of bot-driven attacks attempting to circumvent authentication mechanisms or overwhelm API endpoints.
  7. Maintain an Audit Trail: Establishing a comprehensive audit trail enables organizations to monitor user activity across APIs effectively. By logging and analyzing traffic logs, security teams can identify and respond promptly to potentially malicious bot activity, ensuring the integrity and security of API endpoints.
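The rate limiting recommended in step 6 is commonly implemented with a token bucket: each client earns request "tokens" at a steady rate up to a cap, and a request is rejected when no token is available. The sketch below is a minimal single-client illustration under those assumptions; production systems track a bucket per client key or IP and persist state in a shared store.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch).

    rate: tokens added per second; capacity: maximum burst size.
    """
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket
        self.last = 0.0         # timestamp of the last refill

    def allow(self, now):
        # Refill tokens for the time elapsed since the last call.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True  # request admitted
        return False     # request rejected (rate limit exceeded)
```

A burst of `capacity` requests is admitted immediately, after which requests are admitted at the sustained `rate`; pairing this with IP-based restrictions covers both slow-and-steady and bursty bot traffic.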

While these steps are a great start, organizations will struggle to mitigate automated attacks against their API estates until they use bot management and API security in tandem. This combined approach identifies vulnerable APIs, continuously monitors for automated attacks, and delivers actionable insights to promptly detect and mitigate potential threats.
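The continuous monitoring described above often reduces to a simple statistical test: compare current traffic against the baseline established in step 5 and flag large deviations. As a hedged sketch (the z-score threshold and per-minute granularity are illustrative assumptions, not a prescribed method):

```python
import statistics

def is_traffic_spike(history, current, threshold=3.0):
    """Flag a request count that deviates sharply from baseline.

    history: recent per-minute request counts forming the baseline.
    current: the count for the minute under test.
    threshold: how many standard deviations count as anomalous.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Flat baseline: any increase at all is anomalous.
        return current > mean
    return (current - mean) / stdev > threshold
```

Real deployments layer in seasonality (time of day, day of week) and per-endpoint baselines, but even this crude check catches the sudden bursts typical of credential stuffing and scraping runs.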

As the backbone of modern digital ecosystems, APIs are essential for seamless communication between applications and services. However, this critical role also makes them a prime target for automated abuse by bots. The rise in API traffic and the growing volume and sophistication of bot attacks present a significant business risk. Protecting APIs from automated threats requires a multi-faceted approach. By staying vigilant and taking proactive steps, organizations can protect their APIs against automated abuse by bots, ensuring the integrity, performance, and reliability of their digital services.

About the authors

Lebin Cheng is a technologist and serial entrepreneur with more than 20 years of experience in cybersecurity. Cheng co-founded Netskope and later co-founded CloudVector, acquired by Imperva. He was awarded 15 patents in areas such as network security, application infrastructure, and API inspection. He holds an MBA from the Haas School of Business at the University of California, Berkeley and an MS in Computer Science from Purdue University.

Lynn Marks is a skilled product manager with more than 10 years of experience in R&D and B2B product management. Previously, she was a product manager at Model N and Distil Networks (acquired by Imperva), where she oversaw the product roadmap and innovation. At Imperva she manages Imperva Advanced Bot Protection and Imperva Client-Side Protection, and works closely with customers to solve complex business challenges. She holds a Bachelor's Degree in Economics from UC Santa Barbara.
