"AI Reliability Against Adversarial Inputs," a Presentation from Intel

Gokcen Cilingir, AI Software Architect, and Li Chen, Data Scientist and Research Scientist, both at Intel, present the "AI Reliability Against Adversarial Inputs" tutorial at the May 2019 Embedded Vision Summit.

As artificial intelligence solutions become ubiquitous, the security and reliability of AI algorithms are becoming an important consideration and a key differentiator for both solution providers and end users. AI solutions, especially those based on deep learning, are vulnerable to adversarial inputs, which can cause inconsistent and faulty system responses. Because adversarial inputs are intentionally crafted to cause an AI solution to make mistakes, they constitute a security threat.
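
To make the threat concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the simplest and best-known ways to craft an adversarial input. It is an illustrative example rather than material from the presentation; the model, inputs, and epsilon value are assumptions, and PyTorch is used purely for illustration.

    # Illustrative sketch (not from the presentation): FGSM adversarial input.
    # Assumes `model` is a differentiable classifier, `x` a batch of images
    # with pixel values in [0, 1], and `label` the true class labels.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        # Track gradients on a copy of the input, not on the model weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Nudge each pixel in the direction that increases the loss.
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed input a valid image (pixel values in [0, 1]).
        return x_adv.clamp(0.0, 1.0).detach()

Even with a small epsilon, such perturbations are often imperceptible to a human yet sufficient to flip the model's prediction, which is what makes adversarial inputs an effective attack.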

Although security-critical functions such as face, voice, or fingerprint login are the most obvious solutions requiring robustness against adversarial threats, many other AI solutions also benefit from such robustness, since it improves reliability and, in turn, user experience and trust. In this presentation, Cilingir and Chen explore selected adversarial machine learning techniques and principles from the point of view of enhancing the reliability of AI-based solutions.
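
The presentation's specific techniques are not reproduced here, but adversarial training, in which a model is trained on the very adversarial examples it would otherwise get wrong, is one widely used way to harden a model. A minimal sketch of a single training step, reusing the hypothetical fgsm_attack function above:

    # Illustrative sketch (not from the presentation): one adversarial-
    # training step. Assumes `optimizer` wraps the model's parameters and
    # `x`, `y` are a batch of inputs and labels.
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        # Craft adversarial versions of the batch, then train on them so
        # the model learns the correct answer despite the perturbation.
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, training batches typically mix clean and adversarial examples so that robustness is gained without sacrificing accuracy on unperturbed inputs.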