Security Architecture, Endpoint/Device Security

How AI Can Prevent Dangerous Email Mistakes

By Marcos Colon

You've likely heard the term artificial intelligence (AI) while walking the expo floor at one of the many security conferences taking place throughout the country. But are you aware of its current impact on the enterprise? Chances are you have an idea, but when it comes to leveraging this technology, how does it fit squarely into what you're trying to achieve at your organization? When it comes to getting a grip on end-user behavior, it could offer quite the advantage. InfoSec Insider caught up with Neil Larkins, CTO at Egress Software, who gave us a breakdown of the technology, how it's being used in the enterprise today, and, most importantly, how security leaders can take advantage of it to measurably reduce risk within the business.

InfoSec Insider: What is AI primarily associated with in the enterprise today?


Neil Larkins: Business applications for AI are still very new but they are gaining traction in areas that generate large volumes of data. This can include providing insights about customers – from suggesting the coupons a shopper might like based on their previous purchases, to detecting a client account that might be at risk of churn due to certain behaviors. While there’s often a lot of fear that AI might replace human jobs, on the whole, it’s just going to make employees work more effectively. It won’t necessarily replace the customer service representative on the end of the phone, but it will enhance the interactions they have with clients.

What’s the perception of AI as it relates to data security in the business today? 


NL: There’s a lot of noise about AI in the security industry – so one of the challenges is to cut through this with technology that can actually add value for end-users. To do this, we need to look at users’ pain points, for example, usability or disrupted workflows. Then we can use smart technology to help ease these problems – for example, preventing over-encryption of emails (where a user encrypts everything, including information that isn’t sensitive), which can cause recipients to push back. By doing this, we can make security technology something that is embraced by the user, and ultimately protect their organizations from data breaches.
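
To make the over-encryption point concrete, here is a minimal Python sketch of how a mail plug-in might decide whether a message actually warrants encryption. This is not Egress's implementation; the keyword patterns are illustrative assumptions, and a production system would rely on a trained classifier over far richer signals.

    import re

    # Illustrative "sensitive content" patterns; these are assumptions,
    # not a real policy. A production system would use a trained model.
    SENSITIVE_PATTERNS = [
        r"\b\d{3}-\d{2}-\d{4}\b",                       # US Social Security number format
        r"\b(?:\d[ -]?){13,16}\b",                      # candidate payment-card number
        r"(?i)\b(confidential|salary|medical record|password)\b",
    ]

    def looks_sensitive(body: str) -> bool:
        """Return True if the message body matches any sensitive pattern."""
        return any(re.search(p, body) for p in SENSITIVE_PATTERNS)

    def should_encrypt(body: str) -> bool:
        # Suggest encryption only when the content appears sensitive, so users
        # aren't nudged into encrypting every routine message.
        return looks_sensitive(body)

    print(should_encrypt("Lunch is moved to 1pm."))                  # False
    print(should_encrypt("Attached is the confidential report."))    # True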

Accidental data breaches are pretty common when you consider employee behaviors. What are some of the consequences that businesses can face as a result, and how can AI factor into solving the problem?


NL: Whatever the cause, data breaches can have a whole host of consequences. Immediate impacts can be felt in the loss of business reputation, which is reinforced by the media coverage these incidents now receive. With consumers increasingly aware of their rights as data subjects, this can translate into lower acquisition of new customers and churn of existing customers – all of which will ultimately hit the bottom line. Managing the company’s response to a data breach can also come with a significant price tag – from forensic and investigative costs, to crisis team management. Where there’s a risk of fraud, costs can also include identity protection services. On top of this, organizations can face punitive fines for non-compliance with data privacy regulations.

Human error plays a significant role in the likelihood of a data breach occurring, so one of the biggest applications we’re seeing is where AI can help users make smarter security decisions. This can range from preventing sensitive information from being disclosed via email to the wrong people, to helping employees avoid phishing and spear phishing attacks.
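
As a rough illustration of the misdirected-email scenario, the Python sketch below checks a new message's recipients against the addresses a sender has historically emailed and flags anything unfamiliar for confirmation. The addresses and history here are hypothetical; a real system would learn this profile from the organization's mail archive rather than hard-code it.

    from collections import defaultdict

    # Hypothetical send history: sender -> addresses they normally email.
    history = defaultdict(set)
    history["alice@corp.example"].update(
        {"bob@corp.example", "finance@corp.example", "carol@partner.example"}
    )

    def unusual_recipients(sender: str, recipients: list[str]) -> list[str]:
        """Return recipients this sender has never emailed before."""
        known = history[sender]
        return [r for r in recipients if r not in known]

    # Prompt the user to confirm before sending if any recipient looks unfamiliar.
    flagged = unusual_recipients(
        "alice@corp.example",
        ["bob@corp.example", "bob@partner-typo.example"],
    )
    print(flagged)  # ['bob@partner-typo.example']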

What about email threat trends? Have there been any developments on that front, or are the tried and true phishing attacks still being utilized the most by malicious actors?


NL: Organizations globally are increasingly concerned by spear phishing attacks, also known as business email compromise (BEC), because anyone can fall for a well-orchestrated attack, including C-level execs! This is where the application of technologies like machine learning, deep learning and NLP has made it increasingly possible to mitigate this risk. By analyzing various attributes – from the sender’s authenticity to the recipient’s ‘normal’ email behavior – we can start to highlight anomalies and truly begin to tackle this threat.
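
As a simple, hedged example of the kind of attribute analysis described above (not the specific techniques Egress uses), the Python sketch below checks two basic impersonation signals: a display name that matches a known executive but arrives from an unexpected address, and a Reply-To that differs from the From address. The executive mapping is a made-up assumption.

    from email.utils import parseaddr

    # Hypothetical mapping of known executives to their real addresses.
    KNOWN_EXECS = {"Jane Doe": "jane.doe@corp.example"}

    def spoofing_signals(from_header: str, reply_to_header: str | None = None) -> list[str]:
        """Collect simple signals that a message may be impersonating someone."""
        signals = []
        display_name, address = parseaddr(from_header)
        expected = KNOWN_EXECS.get(display_name)
        if expected and address.lower() != expected:
            signals.append("display name matches a known exec, but the address does not")
        if reply_to_header:
            _, reply_addr = parseaddr(reply_to_header)
            if reply_addr and reply_addr.lower() != address.lower():
                signals.append("Reply-To differs from the From address")
        return signals

    print(spoofing_signals('"Jane Doe" <jane.doe@freemail.example>',
                           "attacker@freemail.example"))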

Tried and true malware/ransomware attacks sent via phishing emails have always remained a problem area for businesses, as it’s difficult for static filters on the network to detect evolving threats. AI and machine learning can help to tackle this by looking at the authenticity of any URLs contained within emails – ultimately preventing these types of attacks.
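
A minimal sketch of the URL-checking idea, assuming a small allowlist of trusted domains (the list here is illustrative): it flags links whose domain is unknown but closely resembles a trusted one, the classic lookalike trick in phishing lures. Real products would combine this with reputation feeds and trained models rather than a single string-similarity check.

    import re
    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    # Illustrative allowlist; a real deployment would also use reputation services.
    TRUSTED_DOMAINS = {"corp.example", "microsoft.com"}

    def suspicious_urls(body: str) -> list[str]:
        """Flag URLs whose domain is unknown but closely resembles a trusted one."""
        flagged = []
        for url in re.findall(r"https?://\S+", body):
            domain = urlparse(url).netloc.lower().split(":")[0]
            if domain in TRUSTED_DOMAINS:
                continue
            # A near-match to a trusted domain (e.g. one character swapped)
            # is a common lookalike pattern in phishing lures.
            if any(SequenceMatcher(None, domain, t).ratio() > 0.85 for t in TRUSTED_DOMAINS):
                flagged.append(url)
        return flagged

    print(suspicious_urls("Reset your password at https://micros0ft.com/login"))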

What should security practitioners be more worried about: accidental breaches or deliberate attacks by malicious actors?


NL: Both! A robust security policy looks at both external and internal threats, building defenses against as many threats as possible. Traditionally, however, we’ve seen information security and IT teams prioritize external attacks – probably because it was easier to put spam filters on the network boundary than to get employees to use encryption solutions or stop them from clicking on links. But times have changed. AI and machine learning are increasing the value that end-users get from their security tools by helping them to work more productively and securely. Combined with a growing awareness about the need to protect personal data, users now see the benefits of tools that can, for example, encrypt the emails they send, ensure they don’t go to the wrong person or prevent them from clicking on a malicious link.

Although AI can provide an added layer of security, it’s not a silver bullet. What additional steps should security pros be taking? 


NL: At Egress, we prioritize the user. If you look at how rapidly tech applications have changed in the last five years, and think about how they might change in the next five, the only constant that remains is the user. So I would advise security pros to examine the users in their business and introduce technology that can help them do their jobs more securely – whether that includes AI or not. We also need to promote a more pervasive culture of openness, as users frequently hesitate to report a breach because they fear they might get into trouble. While actions have consequences, we can also all learn from others’ experiences – for example, sharing examples of phishing attacks received by other users, so that employees are aware of the threats they face.

Let’s say you had a crystal ball…look into it. Where do you see email security in the next 5 years?


NL: Email security will be as important as ever – because email isn’t going to disappear any time soon. We’ll probably see email usage decline and then plateau over the next few years as collaboration and IM applications grow in popularity. However, email is one of the only communication mechanisms that almost everyone within an organization has access to, so email security is here to stay. AI and machine learning will change the face of email security, though, making it much more ubiquitous across organizations and evolving to meet any advancements in the threats we experience.

Interested in learning more about tools and techniques? Mark your calendars for April 1 for the InfoSec World Conference & Expo.

