
“AI At the Edge: Ultra-efficient AI on Low-power Compute Platforms,” a Presentation from Xnor.ai

Mohammad Rastegari, CTO of Xnor.ai, presents the “AI At the Edge: Ultra-efficient AI on Low-power Compute Platforms” tutorial at the May 2018 Embedded Vision Summit.

Improvements in deep learning models have increased the demand for AI in many domains. These models require massive amounts of computation and memory, so current AI applications typically resort to cloud-based solutions. However, AI applications cannot always scale via the cloud, and sending data to the cloud is often undesirable for reasons such as privacy and bandwidth. There is therefore significant demand for running AI models directly on edge devices. These devices often have limited compute and memory capacity, which makes porting deep learning algorithms to them extremely challenging.

In this presentation, Rastegari introduces Xnor.ai’s optimized software platforms, which enable deploying AI models on a variety of low-power compute platforms with extreme resource constraints. The company’s solution is rooted in the efficient design of deep neural networks using binary operations and network compression, along with optimization algorithms for training.
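The presentation summary does not detail Xnor.ai's platform, but the binary-operation idea it builds on, popularized by Rastegari's XNOR-Net work, can be sketched briefly: a real-valued weight tensor W is approximated as α·sign(W), where α is the mean absolute value of W, so most of the multiply-accumulate work reduces to sign operations (XNOR plus popcount on suitable hardware) and a single scaling factor. The NumPy sketch below is illustrative only; the function name and the per-tensor scaling choice are assumptions, not Xnor.ai's implementation.

```python
# Minimal sketch (not Xnor.ai's implementation) of the binary-weight
# approximation used in XNOR-Net-style networks: W is replaced by
# alpha * B, where B has entries in {-1, +1} and alpha = mean(|W|).
import numpy as np

def binarize_weights(W: np.ndarray):
    """Approximate W with a single scale alpha and a {-1, +1} tensor."""
    alpha = np.mean(np.abs(W))          # per-tensor scaling factor (assumed scheme)
    B = np.where(W >= 0, 1.0, -1.0)     # binary weights in {-1, +1}
    return alpha, B

# Toy example: compare a full-precision dot product with its binary approximation.
rng = np.random.default_rng(0)
W = rng.normal(size=256)                # one filter, flattened
x = rng.normal(size=256)                # one input patch, flattened

alpha, B = binarize_weights(W)
full_precision = np.dot(W, x)
binary_approx = alpha * np.dot(B, x)    # only the signs of W are needed here
print(full_precision, binary_approx)
```

On hardware, the dot product with B can be computed with bitwise XNOR and popcount instead of floating-point multiplies, which is the source of the memory and compute savings the talk describes.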

