
Deep Learning Blindspots

Tools for Fooling the "Black Box"

If you suspend your transcription on amara.org, please add a timestamp below to indicate how far you got. This will help others resume your work!

Please do not press “publish” on amara.org to save your progress; use “save draft” instead. Only press “publish” when you are done with quality control.

Video duration
00:53:48
Language
English
Abstract
In the past decade, machine learning researchers and theorists have created deep learning architectures that seem to learn complex topics with little intervention. Newer research in adversarial learning questions just how much “learning” these networks are actually doing. Several theories have arisen regarding neural network “blind spots” that can be exploited to fool the network. For example, by making pixel-level changes imperceptible to the human eye, you can render an image recognition model useless. This talk will review the current state of adversarial learning research and showcase some open-source tools to trick the “black box.”

This talk aims to:

- present recent research on adversarial networks
- showcase open-source libraries for fooling a neural network with adversarial learning
- recommend possible applications of adversarial networks for social good

This talk will include several open-source libraries and research papers on adversarial learning including:

Intriguing Properties of Neural Networks (Szegedy et al., 2013): https://arxiv.org/abs/1312.6199
Explaining and Harnessing Adversarial Examples (Goodfellow et al., 2014): https://arxiv.org/abs/1412.6572
DeepFool: https://github.com/LTS4/DeepFool
Deep-pwning: https://github.com/cchio/deep-pwning
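The Goodfellow et al. paper listed above introduces the Fast Gradient Sign Method (FGSM): perturb each input feature by a small step in the direction that increases the model's loss, i.e. the sign of the loss gradient with respect to the input. A minimal, self-contained sketch of that idea, using a tiny logistic-regression "model" whose weights and epsilon are invented purely for illustration (not taken from the talk or the libraries above):

```python
import numpy as np

# Hypothetical model: logistic regression with random weights,
# standing in for a real classifier so the sketch runs with numpy alone.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # model weights (invented for illustration)
b = 0.1                  # model bias
x = rng.normal(size=8)   # a "clean" input example
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Model's probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def input_gradient(x, y):
    """Gradient of the cross-entropy loss w.r.t. the *input* x.
    For logistic regression this works out to (p - y) * w."""
    p = predict(x)
    return (p - y) * w

# FGSM: step every feature by epsilon in the sign of the input gradient,
# nudging the example toward higher loss (a worse prediction).
epsilon = 0.25
x_adv = x + epsilon * np.sign(input_gradient(x, y))

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```

The perturbation is bounded per-feature by epsilon, which is why, on image models, the change can stay imperceptible while still flipping the predicted class.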

Talk ID
8860
Event:
34c3
Day
2
Room
Saal Adams
Start
2 p.m.
Duration
01:00:00
Track
Resilience
Type
lecture
Speaker
Katharine Jarmul
Talk Slug & media link
34c3-8860-deep_learning_blindspots

Talk & Speaker speed statistics

Very rough underestimation:
Whole talk: 161.7 wpm / 876.6 spm
Katharine Jarmul: 163.7 wpm / 886.5 spm
Checking done: 100.0%
Syncing done: 0.0%
Transcribing done: 0.0%
Nothing done yet: 0.0%

Work on this video on Amara!

Talk & Speaker speed statistics with word clouds

Whole talk:
161.7 wpm
876.6 spm
Most frequent words: data, learning, network, fool, adversarial, people, model, machine, ways, talk, networks, essentially, training, neural, time, finally, bit, find, computer, learn, systems, things, input, ai, method, deep, type, works, train, start, sign, companies, today, face, good, work, gradient, researchers, facebook, poisoning, share, gdpr, layers, area, library, cat, image, spam, target, layer
Katharine Jarmul:
163.7 wpm
886.5 spm
Most frequent words: learning, network, data, fool, adversarial, people, model, machine, essentially, neural, networks, ways, finally, talk, bit, training, computer, find, systems, deep, ai, sign, method, today, works, work, face, type, things, learn, gradient, time, facebook, start, input, researchers, area, layer, layers, train, spam, image, target, library, cat, computers, call, popular, send, ideas