08/27/2021: This story has been updated with new information about the results of re-tests from the study, which included 14 Endpoint Detection and Response systems and five Endpoint Protection Platforms.
The researchers behind a study claiming to test and compare detection capabilities for 11 different commercial endpoint detection and response (EDR) systems said they are re-running the experiment for five of the vendors after a “misclassification error” resulted in them testing the wrong software.
On July 9, two researchers at the University of Piraeus in Greece published a study, titled “An Empirical Assessment of Endpoint Detection and Response Systems against Advanced Persistent Threats Attack Vectors,” that purported to test and compare EDR products from 11 different commercial vendors.
Since then, SC Media has confirmed with at least two of the vendors whose products were named in the experiment – ESET and F-Secure – that they have contacted the researchers with suspicions that the researchers were running tests of their endpoint protection platform (EPP) product, not their EDR. Sandra Proske, vice president of corporate communications and brand at F-Secure, told SC Media that the researchers “indeed tested our EPP though they stated in their original report they would have tested our EDR.”
“As the researchers aimed to test EDR products their methodology was mostly simulating attack techniques without a malicious payload and hence our EPP product - which by design checks for a malicious payload that performs system changes or a malicious process execution - didn’t detect any of the attack scenarios,” Proske added.
While definitions and capabilities can vary from vendor to vendor, endpoint protection platforms generally perform more basic malware scanning and can lack the more sophisticated detection and remediation capabilities that vendors promise for their EDR products.
Both F-Secure and ESET said they have been in contact with the researchers to discuss the matter and provide a license for their EDR product in order to allow for re-tests. A third firm, McAfee, told SC Media they have also reached out to the authors to inquire about the version of their product used.
In an interview, the authors of the study confirmed to SC Media that for five of the 11 companies referenced (ESET, F-Secure, McAfee, Kaspersky and Symantec) they mistakenly tested their endpoint protection platforms, not their endpoint detection and response systems. They characterized the oversight as a “misclassification error” that stemmed from both a desire to test a wide range of vendors, as well as confusion over how some EPP systems were described on vendor websites.
One of the researchers, Constantinos Patsakis, told SC Media that they initially started out evaluating five EDR systems, but feedback during the review process left them with a desire to search for more systems in order to capture “as much of the market as possible.” They said it was not always clear from the vendors’ websites or marketing documents that there were separate EPP and EDR products.
“In some cases I can say that it was an obvious clear and honest mistake, we downloaded the wrong version,” said Patsakis. “In other cases, if you try to find the appropriate solution from the web page, you might end up with the wrong one like we did.”
The researchers said they have spent the past week engaging with three of the vendors to obtain a license for the software and re-do the tests. An updated version of the study released Aug. 23 shows results for the re-tests, which encompassed 14 different EDR and five EPP products. According to the new aggregated results, the companies that had their full EDRs tested include BitDefender, Carbon Black, Check Point, Cisco, Comodo, CrowdStrike, Elastic, F-Secure, Forti, Microsoft, Panda Security, Sentinel One, Sophos and Trend Micro. Four of the companies named in the original study (ESET, Kaspersky, McAfee and Symantec) were not included in the EDR portion of the re-test results, though performance data for their EPP systems were included in a separate box.
In an email update on Aug. 25, Patsakis told SC Media that while complaints about the underlying misclassification were valid, he believes the re-tests validate their original conclusions that most EDR systems are unable to identify or detect signs of the four different types of attacks outlined in the previous version of the study.
"The fact that this pattern is seen in so many cases is very alarming," said Patsakis. "Both me and George are from academia and we do not have something to gain out of this study other than raising awareness and make people understand that EDRs and EPPs are no silver bullets that will magically solve the security issues of organisations."
They stressed that despite the error, they still believe the underlying methodology used for the research is sound. Moreover, they reiterated that they are actively seeking to address the issue and are open to working with other vendors named in the study to answer questions and re-run the experiment where necessary.
“We’re not their enemies, we’re here to help them. We’ve never refused help to anyone, we replied to all the emails [we received] and we’re open to re-testing if they’re willing to provide us with their EDRs,” said George Karantzas, the study’s other author.