In the summer of 2017, I was involved in the type of project that very few get to work on during their careers: the launch of a new category of devices. While new product launches happen all the time, it’s rare to witness—let alone help launch—an entirely new type of product. The device in question is the Intel® Movidius™ Neural Compute Stick, a USB-based neural network development kit designed to help deploy more AI applications in the real world. While previous development platforms meant full-blown workstations, messy breakout boards, or cloud instances, the Neural Compute Stick arrived in a very different form factor.
Without any historical market data for similar products, we launched the Neural Compute Stick and held our collective breath…
Luckily, we didn’t have to wait too long for a collective sigh of relief. Interest in the device was huge, and we sold out of our initial stockpile almost immediately. At the Computer Vision and Pattern Recognition conference (CVPR), where the device was sold onsite, lines stretched halfway across the conference show floor. We were humbled by the response to the Neural Compute Stick and had to learn quickly how to keep up with unprecedented demand. Since that initial launch, we’re proud to say that this device has carved out its own corner of the AI market as a dedicated low-power inference kit.
One year on, and we could not be happier with how the machine learning community has adopted the device. Take one look at YouTube to discover the hundreds of projects where people have ported various neural networks onto the device. We’ve spoken with people building problem-solving devices ranging from a shark-detecting drone, to a bacteria detector for water supplies, to a low-cost skin cancer screening camera. These intrepid innovators were able to go from idea to working proof of concept at high speed and low cost, thanks to the benefits the Neural Compute Stick delivers.
As the examples above show, the small size and relatively low cost of the Neural Compute Stick have let thousands of people take AI applications into places that were previously impossible, or too great a financial risk to entertain. We’re truly amazed by the creativity and innovation driving the community forward, and we’re delighted to see that the Neural Compute Stick has become a valuable tool in extending the benefits of machine learning applications to new devices and product categories.
We continue to listen and grow with our development community, adding support for more machine learning frameworks and diverse compute platforms. We quickly realized that the Raspberry Pi community was clamoring for support, so we enabled this platform soon after launch. We also added support for TensorFlow* – one of the most widely used machine learning frameworks today. More recently, the team released version 2.0 of the Neural Compute SDK, which brings tighter software support for more model architectures and the ability to deploy multiple graph files to a single Neural Compute Stick. We’ve also added support for the new OpenVINO™ toolkit, allowing developers to target a wide array of Intel silicon solutions through one common, optimized framework. Taken together, the Neural Compute Stick ecosystem has grown into a cross-platform, cross-OS solution for deploying AI applications relying solely on affordable, off-the-shelf hardware.
Thanks to our new AI: In Production program, developers are prototyping at their workbenches with the Neural Compute Stick, then building products through modules such as the UP AI Core. For those requiring further help with their software development, we’ve enabled ISVs such as Deepomatic, which provides computer vision software that allows companies to build and manage custom video recognition applications. We’re incredibly excited to see the first products hit the market that follow the path to production we’ve developed with our manufacturing and technology partners. What began as a skunkworks project has grown into a full-blown software and hardware solution ecosystem delivering a cornucopia of diverse and innovative AI-centric products to the market. We’re proud that the Neural Compute Stick is enabling creative minds to harness the power of deep neural networks in cheaper and more flexible ways. We’re incredibly excited about what we’ll bring to the community over the next 12 months, and even more excited to see what the community itself builds with Intel’s Neural Compute Stick.
Want to sync up with the Intel Movidius team at CVPR 2018? Come visit us at BOOTH 1337!
Interested in getting started with the Intel Neural Compute Stick? We have plenty of getting-started examples on our GitHub repo: https://developer.movidius.com/examples
Want some inspiration for your own AI projects? Go beyond the hello-world or blink examples and learn how to build real-world products: https://developer.movidius.com/blog
Are you a hardcore developer? Want to customize the SDK? The source code is available on GitHub: https://developer.movidius.com/repo
Notices and Disclaimers
Intel, the Intel logo, and Movidius are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
© Intel Corporation
*Other names and brands may be claimed as the property of others.