Academic Seminar: Robustness of Deep Learning Systems Against Deception

Published: 2019-06-03


Speaker: Prof. Ling Liu




This talk provides a comprehensive analysis and characterization of state-of-the-art attacks and defenses. As more mission-critical systems incorporate machine learning and AI as essential components of our social, cyber, and physical infrastructure, such as the Internet of Things, self-driving cars, the smart planet, and smart manufacturing, understanding and ensuring the verifiable robustness of deep learning becomes a pressing challenge. This includes (1) the development of formal metrics to quantitatively evaluate and measure the robustness of a DNN prediction with respect to intentional and unintentional artifacts and deceptions, (2) a comprehensive understanding of the blind spots and invariants in trained DNN models and in the DNN training process, and (3) the statistical measurement of the trust and distrust we can place in a deep learning algorithm to perform reliably and truthfully. In this talk, I will use our cross-layer strategic teaming defense framework and techniques to illustrate the feasibility of ensuring robust deep learning through scenario-based empirical analysis.
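To make the notion of "intentional deception" concrete, the sketch below implements the fast gradient sign method (FGSM), a canonical attack in this literature, against a toy linear classifier. The model, its weights, and the perturbation budget are illustrative assumptions for exposition only, not the speaker's framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classifier (assumed for illustration): p(y=1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.1
x = np.array([0.4, -0.3])   # a clean input
y = 1.0                     # its true label

p = sigmoid(w @ x + b)      # clean prediction, confidently class 1

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM: take one step of size eps along the sign of the loss gradient,
# i.e. the worst-case direction under an L-infinity budget.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)   # prediction on the perturbed input flips
```

A robustness metric of the kind the abstract describes would, for example, report the smallest `eps` at which such a flip occurs; larger values indicate a more robust prediction.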


Prof. Ling Liu is a Professor in the School of Computer Science at Georgia Institute of Technology. She directs the research programs of the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale data-intensive systems. Prof. Liu is an internationally recognized expert in the areas of Big Data Systems and Analytics, Distributed Systems, Database and Storage Systems, Internet Computing, Privacy, Security and Trust. She has published over 300 international journal and conference articles and is a recipient of best paper awards from a number of top venues, including ICDCS 2003, WWW 2004, and the 2005 Pat Goldberg Memorial Best Paper Award, among others. Prof. Liu's research is primarily sponsored by NSF, IBM, and Intel.