There’s always some company in the news that has accidentally exposed its cloud database full of confidential data. Whether it’s companies targeted by bad actors, like Robinhood, The Telegraph, and Capital One, or the many organizations affected by issues like the ChaosDB vulnerability, most companies have been exposed at some point. A 2020 IDC study found that nearly 80% of organizations had experienced a cloud data breach within the preceding 18 months, an astonishing figure.
None of us want to be the next big headline. There are best practices organizations can adopt to protect their cloud databases and data. It boils down to this: know the company’s cloud and databases, know the data, know who has access and who has accessed the data, and automate everything possible. Here are seven cloud database security best practices:
Know the company’s cloud database. Map the cloud environment and workloads to discover all databases and data storage resources. Use an automated approach that will do this continuously and will show databases in context with the company’s cloud architecture, networked applications, and APIs.
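As an illustrative sketch of the discovery step, mapping can be as simple as filtering an inventory export for datastore resource types and grouping them by network. The inventory shape and the type names below are hypothetical examples, not any vendor's API:

```python
from collections import defaultdict

# Hypothetical datastore type names; a real inventory uses vendor-specific types.
DATASTORE_TYPES = {"rds", "dynamodb", "s3-bucket", "managed-postgres"}

def map_datastores(inventory):
    """Group database/storage resources by the network (VPC) they live in,
    so each datastore is seen in the context of the surrounding architecture."""
    by_network = defaultdict(list)
    for resource in inventory:
        if resource["type"] in DATASTORE_TYPES:
            by_network[resource.get("vpc", "unknown")].append(resource["name"])
    return dict(by_network)
```

Running this continuously against a fresh inventory is what keeps the map current as new databases spin up.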
Detect and classify sensitive cloud data. Use data loss prevention (DLP) to scan and identify where the company has confidential intellectual property and privacy-regulated data like personally identifiable information, healthcare records, and payment card information. Know exactly where the company keeps this data in cloud databases and storage.
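A minimal sketch of the classification step, assuming simple regex detectors; production DLP engines use far richer pattern libraries plus validation to keep false positives down:

```python
import re

# Toy detectors for illustration only; real DLP validates matches
# (e.g. Luhn checks for card numbers) rather than trusting a regex alone.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of sensitive-data labels detected in a text sample."""
    return {label for label, rx in PII_PATTERNS.items() if rx.search(text)}
```

Applied across database samples and storage objects, labels like these are what let the team say exactly where regulated data lives.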
Monitor and control application access to the cloud database. It’s important to audit and control access permissions. Continuously audit what applications have permissions to access the database and are actively accessing it, as well as the east-west data flow between the database and those applications.
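One way to frame that audit is set arithmetic over three app lists: who *can* reach the database, who *does*, and who *should*. The lists here are hypothetical; in practice they would come from IAM grants and connection logs:

```python
def audit_db_access(granted, active, approved):
    """Compare granted, observed, and policy-approved database access."""
    granted, active, approved = set(granted), set(active), set(approved)
    return {
        "unapproved": granted - approved,  # grants that policy never sanctioned
        "dormant": granted - active,       # unused grants: candidates to revoke
        "ungoverned": active - granted,    # activity with no recorded grant
    }
```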
Monitor and control user access to the cloud database and the apps that access it. The security team should continuously monitor user behavior with user and entity behavior analytics (UEBA) capabilities and use automated policies to challenge or cut off access in response to abnormal or high-risk user actions.
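The behavioral-baseline idea behind UEBA can be sketched as a simple statistical check; real products use far more sophisticated models, but the principle of flagging deviation from a user's own baseline is the same:

```python
from statistics import mean, stdev

def is_anomalous(daily_query_counts, today, threshold=3.0):
    """Flag today's activity if it sits more than `threshold` standard
    deviations from the user's historical baseline of daily query counts."""
    mu = mean(daily_query_counts)
    sigma = stdev(daily_query_counts)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold
```

An automated policy would then challenge the session or revoke access when this returns True, rather than waiting on a human reviewer.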
Continuously assess and correct risky database configurations. It’s easy to accidentally use default configurations, but these are often highly risky. With new services spinning up all the time in a dynamic cloud environment, the security team needs tooling that continuously watches for high-risk permissions and configurations.
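A toy version of such a configuration check, assuming a flat settings dict; the key names are illustrative and not tied to any specific cloud provider, and real posture checks follow published benchmarks such as the CIS Foundations guides:

```python
# Settings that commonly ship as defaults but are risky in production.
RISKY_DEFAULTS = {
    "publicly_accessible": True,
    "storage_encrypted": False,
    "deletion_protection": False,
}

def find_risky_settings(db_config):
    """Return the settings in a database config that match a risky default."""
    return sorted(key for key, risky in RISKY_DEFAULTS.items()
                  if db_config.get(key) == risky)
```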
Correct all database-related vulnerabilities. Security researchers find new vulnerabilities all the time, and it’s easy for an old vulnerability to slip into the environment with a simple software update. Cloud security tooling can continuously detect existing and new vulnerabilities in the cloud, and those associated with databases are important to fix fast.
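The version-matching core of such a check can be sketched as follows; the advisory table is made up for illustration, whereas real scanners consume live CVE feeds:

```python
# Illustrative advisory table: engine -> list of fixed-in versions.
# Any running version older than a fixed-in version is flagged.
ADVISORIES = {
    "postgres": [(13, 3)],
    "mysql": [(8, 0, 27)],
}

def is_vulnerable(engine, version):
    """True if `version` (a tuple) predates any known fixed-in version."""
    return any(version < fixed for fixed in ADVISORIES.get(engine, []))
```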
Monitor and control east-west traffic that touches the database and the apps that access it. Detect which kinds of sensitive data travel east-west and where they flow. If sensitive data flows to an unauthorized workload, external API, or unusual application, or the company gets hit with a DDoS attack, make sure the team is immediately informed so it can cut that traffic flow; better still, automate the cutoff through network microsegmentation policies.
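The detect-then-cut-off logic can be sketched as an allowlist check over flow records. The field names below are hypothetical, and in practice the blocking step is enforced through microsegmentation policy rather than application code:

```python
def flows_to_block(flows, allowed_destinations):
    """Return sensitive east-west flows headed to a destination that is
    not on the allowlist; these are candidates for alerting or cutoff."""
    return [flow for flow in flows
            if flow["sensitive"] and flow["dst"] not in allowed_destinations]
```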
In general, it’s always good practice to stay up to date on the most current patches from the company’s cloud and database vendors. But security threats and risks are constantly changing, and cloud environments move fast. No one has unlimited time to manually check everything, so automate security as much as possible. The good news: with cloud-native security that operates at runtime and includes DLP, organizations can automate a lot of this today.
Mitthan Meena, chief executive officer, Microsec.ai