According to a report from security firm Carbon Black, which surveyed 410 cybersecurity researchers, AI-driven security solutions remain flawed. Seventy percent of respondents said attackers can bypass machine-learning-based defenses. They did not write off AI or machine learning as unhelpful; rather, they said the techniques simply aren’t mature enough yet to be relied on alone for major security decisions. AI and machine learning should be used “primarily to assist and augment human decision making,” the report said. Eighty-seven percent of those surveyed said it will be more than three years before they feel comfortable trusting AI to carry out any significant cybersecurity decisions.
AI and machine learning have become more prominent in cybersecurity research and commercial products as a way to keep pace with an ever-evolving threat landscape. Among the newer threats are non-malware, or fileless, attacks. As the names suggest, these attacks do not rely on a malicious file or program; instead, they abuse legitimate software already present on a system, making them largely invisible to traditional antivirus programs that work by flagging suspicious-looking files. Sixty-four percent of Carbon Black’s respondents said they had seen an increase in such tactics since early 2016.
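To make the evasion concrete: because a fileless attack leaves no new binary to scan, defenders instead look at how trusted programs are invoked. The sketch below (not from the Carbon Black report) is a minimal, hypothetical behavioral check over process command lines; the indicator patterns are simplified illustrations of “living off the land” abuse, not a real detection ruleset.

```python
import re

# Hypothetical "living off the land" indicators: legitimate tools
# invoked with arguments commonly associated with fileless attacks.
# A real product would use far richer behavioral telemetry.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s.*-enc", re.I),        # encoded in-memory payload
    re.compile(r"powershell(\.exe)?\s.*downloadstring", re.I),  # download-and-run in memory
    re.compile(r"mshta(\.exe)?\s+https?://", re.I),         # remote script host
    re.compile(r"regsvr32(\.exe)?\s.*/i:https?://", re.I),  # script execution via regsvr32
]

def is_suspicious(cmdline: str) -> bool:
    """Flag a process command line that matches a known abuse pattern.

    Note that the binary itself (e.g. powershell.exe) is legitimate;
    only the invocation is examined, which is why file scanning alone
    misses these attacks.
    """
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

# Same trusted binary, benign vs. suspicious invocation:
print(is_suspicious("powershell.exe -File backup.ps1"))              # False
print(is_suspicious("powershell.exe -W Hidden -EncodedCommand SQA=")) # True
```

The point of the example is that both invocations run the same signed, legitimate executable, so a file-reputation scanner sees nothing new to inspect; only the arguments differ.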
Non-malware attacks will be the scourge of organizations over the next year, the report predicts, and countering them will continue to require a human approach. Learn more at https://www.carbonblack.com