SAN FRANCISCO — Artificial intelligence and machine learning tools are seen by some vendors as something of a panacea for improving cybersecurity.
While IBM is optimistic about AI, it is also warning that machine learning systems can be tricked and manipulated by attackers. IBM released new tools and research at RSA Conference 2018 designed to help researchers understand how certain types of malicious inputs can confuse AI systems and lead to inaccurate outcomes.
In a video interview with eSecurity Planet, IBM machine learning researcher Maria-Irina Nicolae and Sridhar Muppidi, CTO of IBM Security, explained how the new IBM tools work and what risks organizations need to know.
"In the toolkit, what we have are attack and defense methods, as well as some metrics for measuring robustness," Nicolae told eSecurity Planet.
She added that developers can use the IBM tools to attack their own AI models, testing for resilience and helping to determine which defensive measures should be enabled. The approach to tricking AI models is similar in nature to the security testing technique known as fuzzing. In fuzz testing, random or malformed inputs are fed to a system to see how it behaves.
"With fuzzing you try a lot of different inputs, while here we're talking about something that is very specific and has been engineered," Nicolae said.
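To illustrate the distinction Nicolae draws, the sketch below shows one well-known way such inputs are engineered: a fast-gradient-sign (FGSM) perturbation against a toy logistic-regression classifier. This is a minimal illustration in plain numpy, not the IBM toolkit's own API; the model weights and function names here are invented for the example. Rather than trying random inputs, the attack uses the gradient of the model's loss to pick the one small change most likely to flip the prediction.

```python
import numpy as np

# Toy logistic-regression "model": fixed weights and bias (illustrative values).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Model's predicted probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(x, y_true, eps=0.25):
    """Fast-gradient-sign step: shift x in the direction that increases
    the logistic loss for the true label y_true.

    For logistic regression, the loss gradient w.r.t. the input is
    (p - y_true) * w, so the attack only needs its sign.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])       # benign input the model classifies as class 1
x_adv = fgsm_perturb(x, y_true=1.0)  # engineered adversarial variant

# The model's confidence in the correct class drops after the attack.
print(predict(x), predict(x_adv))
```

A fuzzer would have to stumble on such a point by chance; the gradient hands the attacker the worst-case direction directly, which is why these engineered inputs are far more efficient at exposing a model's blind spots.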
Research into how to break AI models isn't an entirely new area, but what is new, according to Muppidi, is that IBM is operationalizing the research so that organizations can test its impact on their own models. The open-source Adversarial Robustness Toolbox is freely available at the project's GitHub repository.
Watch the full video interview below:
Sean Michael Kerner is a senior editor at eSecurityPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.