MIT Quest for Intelligence

Robust, Interpretable Deep Learning Systems

November 20, 2018

Quest Symposium on Robust, Interpretable Deep Learning Systems

Posters

Poster format: paper, up to 48'' × 72''.

Set up posters at 2:00pm before the event.

Number | Poster Title | Name
A01 | On Algorithms for Adversarial Dynamics | Abdullah Al-Dujaili
A02 | Defense Against Adversarial Attacks using Web-Scale Nearest Neighbors Search | Abhimanyu Dubey
A03 | Clean-Label Backdoor Attacks | Alexander Turner
A04 | Generally Exciting Inputs and How to Get Rid Of Them: A Little Network Introspection | Aspen
A05 | Combining Machine Learning with Deductive Reasoning for Improved Explainability | Ben Z Yuan
A06 | Safe Reinforcement Learning with Model Uncertainty Estimates for Dynamic Collision Avoidance | Björn Lütjens
A07 | Evaluating 'Graphical Perception' with CNNs | Daniel Haehn
A08 | Individual Neurons in Neural Machine Translation | Yonatan Belinkov
A09 | Robustness may be at odds with accuracy | Dimitris Tsipras
A10 | Analyzing Gradients to Detect Backdoors in Deep Neural Networks | Ebube Chuba
A11 | Towards Functional Transparency: a Game-Theoretic Approach | Guang-He Lee
A12 | All you need to train deep residual networks is a good initialization | Hongyi Zhang
A13 | ResNet with one-neuron hidden layers is a Universal Approximator | Hongzhou Lin
A14 | Quantum optical neural networks | Jacques Carolan
A15 | Comparing deep neural network and human representations via sound synthesis | Jenelle Feather
A16 | Defensive Quantization: When Efficiency Meets Robustness | Ji Lin
A17 | The Lottery Ticket Hypothesis | Jonathan Frankle
A18 | Symbolic Relation Networks for Reinforcement Learning | Josh Joseph
A19 | Automating Stylistic Bias Detection in Sentiment Analysis | Judy Hanwen Shen
A20 | Visual Inspection of Saliency Maps Can Provide a False Sense of Security | Julius Adebayo
B21 | Towards Robust, Locally Linear Deep Networks | Guang-He Lee
B22 | Learning Symbolic Rules Through Explanation | Leilani H. Gilpin
B23 | Efficient Neural Network Robustness Certification with General Activation Functions | Lily Weng
B24 | Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability | Mahi Shafiullah
B25 | Inverse Graphics with Probabilistic Programming and Deep Learning for Inference | Marco Cusumano-Towner
B26 | Examining Learned Class Relationships in Deep Neural Networks | Mathew Monfort
B27 | Neural Networks Trained to Estimate F0 from Natural Sounds Replicate Properties of Human Pitch Perception | Ray Gonzalez
B28 | Targeted Syntactic Evaluation of LSTMs and Recurrent Neural Network Grammars | Roger Levy
B29 | Understanding Phase-Coded Neural Networks and their Scalability | Rumen Dangovski
B30 | Verification of Dynamical Systems with Piecewise Affine Machine Learning Elements | Sadra Sadraddini
B31 | Minimal Images in Deep Neural Networks: DNN Failures on Natural Images | Sanjana Srivastava
B32 | Verification of Recurrent Neural Networks via Systems and Control Theory | Shen Shen
B33 | Utility Interpretation in Deep Neural Network | Shenhao Wang
B34 | Providing Rationales and Pragmatically Effective Modifications | Sudhanshu Mishra
B35 | Detecting Egregious Responses in Neural Sequence-to-Sequence Models | Tianxing He
B36 | Redundancy Emerges in Overparametrized Deep Neural Networks | Xavier Boix
B37 | Emergence of topographical correspondences between deep neural network and human brain visual cortex | Yalda Mohsenzadeh
B38 | Comparing multi-task and single-task network interpretability | Kandan Ramakrishnan
B39 | Visualizing and Understanding Generative Adversarial Networks | David Bau