
What happens when an insider threat vendor has an insider threat?

The Salesforce logo is seen at its headquarters on Dec. 1, 2020, in San Francisco. (Stephen Lam/Getty Images)

Insider threats affect everybody. If there had ever been any doubt, look no further than Code42, a vendor specializing in insider threat protection. The company will soon release a blog post describing how a Code42 feature launched only about a week ago caught a departing employee who had improperly downloaded Salesforce data.

SC Media spoke with Mark Wojtasiak, Code42's VP of portfolio strategy, product and customer marketing, who managed the employee in question, about what Code42 did, what the company learned, and what it's like to go from advising other firms about insider threats in the abstract to enacting those contingencies themselves.

One of the most notable things about the incident is how open you're being about it — reaching out to reporters, writing a blog, when being quieter might have been an option. Why not be quieter?

It's funny because I was just talking to people here at the office and they were asking the same thing: Why'd you put your thoughts down on paper? It's because we're big believers in transparency.

As a leader, I had a member of our team departing, someone who had put in a resignation. It's standard process for us to check whether a departing employee moved any data to locations that we don't trust, and we discovered that they did. In that process, an investigation was kicked off, and it was eye-opening. I've been talking about this for years as a marketer and researcher, and still, when it happens to you personally, when you get thrown into the process, your eyes are opened wide. You see this is exactly how it's supposed to work, in terms of getting the phone call from the CSO, having a meeting within hours to discuss the event: what was moved, where did it go? Going through that process was really eye-opening.

You started to go into it, but what actually happened?

We had an employee remove data two weeks before he was supposed to leave. And it was shocking, quite frankly.

Was it someone you would have expected this from?

Not at all. This person was, full transparency, totally trustworthy. And I presume positive intent. I'm just not going to jump to malicious intent. I'll just deal with the facts and let security, HR and legal go from there. But was there any suspicion? No, not at all. That's what made it so shocking.

One of the things that traditionally makes insider threats difficult to handle is that discovering them means tracking employees. A lot of managers feel like they are violating employees' trust, or that it would never be their employee.

This is all in how you position and message it. When you think about even the term "insider threat," it's got that malicious kind of tone to it. So we talk about "insider risk." We are all human beings, we're all employees, we're all working super fast. We're inherently putting data at risk, all the time, right? So we tell employees, "Hey, we understand you're trying to get work done, to work fast. But we are going to pay attention to where data goes so that you don't inadvertently put the company at risk, yourself at risk, or your co-workers at risk."

We're not saying we're going to monitor you in case you become a threat. We're going to monitor data movement, because in the era in which we live, with collaboration technology, employees coming and going, working from home, working remotely, on and off the network, et cetera, we just have to have a better grasp of where that risk exposure is, from the inside out, so that we can manage it in the most effective way.
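In engineering terms, "monitor data movement, not people" can be as simple as flagging any file event whose destination falls outside a trusted set, regardless of who moved it or why. The sketch below is a hypothetical illustration only; the names (FileEvent, TRUSTED_DESTINATIONS, flag_untrusted_movement) are invented for this example and are not Code42's actual product or API.

```python
# Hypothetical sketch of destination-based data movement monitoring.
# All names here are invented for illustration.
from dataclasses import dataclass

# Destinations the organization has decided to trust.
TRUSTED_DESTINATIONS = {"corp-sharepoint", "corp-google-drive"}

@dataclass
class FileEvent:
    user: str
    source: str        # e.g. "salesforce"
    destination: str   # e.g. "personal-laptop", "icloud"
    filename: str

def flag_untrusted_movement(events):
    """Flag events where data left a trusted boundary, regardless of intent."""
    return [e for e in events if e.destination not in TRUSTED_DESTINATIONS]

if __name__ == "__main__":
    events = [
        FileEvent("alice", "salesforce", "corp-sharepoint", "pipeline.csv"),
        FileEvent("bob", "salesforce", "personal-laptop", "accounts.csv"),
    ]
    for e in flag_untrusted_movement(events):
        print(f"Review needed: {e.user} moved {e.filename} "
              f"from {e.source} to {e.destination}")
```

Note that the rule keys on where the data went, not on who the employee is, which matches the positioning Wojtasiak describes: the same check applies to everyone, all the time.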

How sensitive was the data that was removed?

I can tell you it was a download of data from Salesforce to a personal device, and our technology recognized that. It was interesting because it was pre-release technology. We run our tech in-house for weeks, if not a couple of months, to test it out. And this was technology that we actually launched last week.

What surprised you about the experience of going through the process yourself?

We're big on presuming positive intent. So the fact that it felt personal was surprising. I actually called one of the leaders and told them, "I just want to let you know, I'm on this emotional roller coaster right now." I first find out and I'm angry. Then I'm shocked: How can this happen? And then shock quickly turned to, not excitement, but relief that we caught this in time. After relief, it turned into, "This is kind of embarrassing, right, as a leader, that one of my team members would pose this risk to the organization, even inadvertently?"

Then I started to get over that and think about what the opportunities for improvement were. Are we all on the same page around risk tolerance? Are we all on the same page on risk awareness? Let's take this as an opportunity to build upon. What did we learn, and what would be transferable to others?

What did you learn that would be transferable to others?

I learned that our security team practices what they preach. They were completely transparent with me, with legal, with HR, and vice versa. I was transparent with the other key stakeholders who would be impacted by this. We were very transparent with the organization that this person was going to: "So, heads up: We have a departing employee. This employee is heading to your organization. I just want to let you know that he or she has access to sensitive information."

On the flip side, that's also happened with new employees who come in. When we see data being added to our org, being brought into our organization, we're transparent with that previous employer: "Hey, FYI, we recognize this. It may not have been mal-intent; it was just pure negligence," like syncing to an iCloud account, and all of a sudden there's corporate data from some other company in someone's iCloud coming into our environment.

How important is it for the process to treat accidents as posing the same risk as a malicious actor?

Very. If an organization in one industry deems PII its most valuable, most sensitive data, and an employee decides to pull everything down to a thumb drive and sell it on the open market with mal-intent, is that any different from an employee who accidentally has their system syncing to an untrusted cloud service, like Google Drive, OneDrive or iCloud? The PII is still exposed, right? The data risk is still severe, regardless of whether that employee was stealing it, leaking it, or just plain old losing it.

You can determine intent toward the end of the process, after detection and investigation. Intent should drive response, not necessarily the level of risk.
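Taken as a decision rule, that separation is easy to express. Below is a hypothetical sketch, not anything Code42 has published, in which the risk score depends only on the data's sensitivity, while intent, established at the end of the investigation, selects the response. All names and categories are invented for illustration.

```python
# Hypothetical sketch: risk is scored from data sensitivity alone, while
# intent (determined after investigation) only selects the response.
# All names and categories here are invented for this example.

def risk_level(data_sensitivity: str) -> str:
    """Exposure is the same whether data was stolen, leaked, or lost."""
    return {"pii": "severe", "internal": "moderate", "public": "low"}.get(
        data_sensitivity, "moderate"
    )

def response(intent: str) -> str:
    """Intent drives the response, not the risk score."""
    return {
        "malicious": "revoke access, involve legal",
        "negligent": "fix the misconfiguration, retrain the employee",
        "unknown": "contain the exposure, keep investigating",
    }[intent]

if __name__ == "__main__":
    # Same risk level either way; only the response differs.
    print(risk_level("pii"), "->", response("malicious"))
    print(risk_level("pii"), "->", response("negligent"))
```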

Joe Uchill

Joe is a senior reporter at SC Media, focused on policy issues. He previously covered cybersecurity for Axios, The Hill and the Christian Science Monitor's short-lived Passcode website.
