Protecting Users: When Security and Privacy Collide (pdf, video)

Aleatha Parker-Wood

Abstract: Machine learning for security is data hungry, and the scope of the data used is expanding over time, especially as more attacks shift to exploiting human vulnerabilities. Where will that data come from, and what are the consequences of collecting it? This talk will cover the risks and benefits of data collection for security ML, as well as recent advances in private learning that change the risk landscape, including differential privacy and federated learning. The talk will discuss lessons learned from using private learning in practice, and give an overview of recent research.

Bio: Dr. Aleatha Parker-Wood is the Machine Learning and Algorithmic Privacy lead at Humu, a company dedicated to making work better for everyone everywhere. Prior to Humu, she was a Sr. Principal Research Engineer and manager in the Center for Advanced Machine Learning at Symantec, where her team did original research and contributed machine learning to numerous Symantec products including SEP 14, Email Security.cloud, Norton Core, phishing page detection, and more. She holds multiple security-related patents, and serves on the steering committee for ScAINet, the SeCurity AI Networking conference. She received her Ph.D. in Computer Science from the University of California, Santa Cruz.

On Evaluating Adversarial Robustness (video)

Nicholas Carlini

Abstract:

Several hundred papers have been written over the last few years proposing defenses to adversarial examples (test-time evasion attacks on machine learning classifiers). In this setting, a defense is a model that is not easily fooled by such adversarial examples. Unfortunately, most proposed defenses to adversarial examples are quickly broken.

This talk examines the ways in which defenses to adversarial examples have been broken in the past, and what lessons we can learn from these breaks. Beginning with a discussion of common evaluation pitfalls that arise during the initial analysis, it then turns to recommendations for how we can perform more thorough defense evaluations.

Bio: Nicholas Carlini is a research scientist at Google Brain. He analyzes the security and privacy of machine learning, for which he has received best paper awards at IEEE S&P and ICML. He graduated with his PhD from the University of California, Berkeley in 2018.