
Machine Learning for Detecting (and Generating!) Maliciousness in Information Security

Colloquium presented by Hyrum Anderson, Technical Director of Data Science at Endgame
Thursday, October 26, 2017 at 11:00 A.M.
Location: 1170 TMCB

In recent years, computer security solutions have been shifting their reliance from signatures to statistics. The reason is that machine learning provides an effective framework for detecting never-before-seen threats, such as new malware families, and at this it has been quite successful. However, machine learning is also especially susceptible to evasion attacks mounted, ironically, by other machine learning methods and models. In this talk, I'll outline the latest trends in machine learning for detecting malware. Then, I'll outline how to create machine learning models explicitly trained to evade these detection models. Why create maliciousness? By proactively probing defensive machine learning models for weaknesses, defenders can patch vulnerabilities before sophisticated and motivated adversaries discover them.
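The evasion idea the abstract describes can be illustrated with a toy sketch. This is not the speaker's actual system: the detector below is a hypothetical linear model over made-up file features, and the "attack" is a simple greedy loop that perturbs only features an attacker could plausibly change without breaking the program (padding, imports), stepping in whichever direction lowers the maliciousness score fastest.

```python
# Toy illustration of ML-vs-ML evasion (all features, weights, and
# thresholds here are hypothetical, chosen only for demonstration).

def score(features, weights, bias):
    """Linear detector: higher score = more likely malicious."""
    return bias + sum(weights[k] * v for k, v in features.items())

weights = {"entropy": 2.0, "num_imports": -0.5, "overlay_size": 1.5}
bias = -1.0
threshold = 0.0  # score > threshold => flagged as malware

sample = {"entropy": 1.2, "num_imports": 1.0, "overlay_size": 0.8}
assert score(sample, weights, bias) > threshold  # initially detected

# Attacker-controllable knobs and a feasible step size for each:
# padding with low-entropy bytes lowers entropy; importing extra
# benign libraries raises num_imports. Functionality is preserved.
mutable = {"entropy": -0.1, "num_imports": +0.5}

evaded = dict(sample)
for _ in range(50):
    if score(evaded, weights, bias) <= threshold:
        break  # detector no longer flags the sample
    # Greedily apply the feasible step that reduces the score most.
    best = min(mutable, key=lambda k: weights[k] * mutable[k])
    evaded[best] += mutable[best]

print("evaded:", score(evaded, weights, bias) <= threshold)
```

Against a real detector the attacker has no access to the weights, which is why the talk's approach trains a separate model (rather than reading off gradients) to discover evasive modifications; the greedy loop above stands in for that search purely for intuition.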


Dr. Hyrum Anderson is the Technical Director of Data Science at Endgame, where he leads research on detecting adversaries and their tools using machine learning. Prior to joining Endgame, he conducted information security and situational awareness research at FireEye, Mandiant, Sandia National Laboratories, and MIT Lincoln Laboratory. He received his PhD in Electrical Engineering (signal and image processing + machine learning) from the University of Washington and BS/MS degrees from Brigham Young University. His research interests include adversarial machine learning, deep learning, and large-scale malware classification.