Intel Accelerates Development of Artificial Intelligence Solutions
OREANDA-NEWS. October 11, 2017. Intel announced that it has joined the Open Neural Network Exchange (ONNX) to enable enhanced framework interoperability for developers, boosting efficiency and speeding the creation of artificial intelligence (AI) and deep learning models. AI and deep learning are transforming how people engage with the world and how businesses make smarter decisions.
The ONNX format was first announced last month by Microsoft* and Facebook* to give users more choice among AI frameworks, as every modeling project has its own set of requirements, often calling for different tools at different stages. Intel, along with others, is participating in the project to give the developer community greater flexibility: access to the most suitable tools for each unique AI project and the ability to switch easily between frameworks and tools.
Intel’s contribution to the open AI ecosystem will broaden the toolset available to developers through neon and the Intel® Nervana™ Graph, as well as deployment through the Intel® Deep Learning Deployment Toolkit. neon will be compatible with other deep learning frameworks through the Intel Nervana Graph and ONNX, giving customers more framework choices and compatibility with the hardware platform that best fits their needs.
Project Brainwave, Microsoft’s FPGA-based deep learning platform for accelerating real-time AI, will also support ONNX, helping customers accelerate models from a variety of frameworks. Project Brainwave leverages Intel® Stratix® 10 FPGAs to accelerate deep neural networks (DNNs) that replicate “thinking” in a manner conceptually similar to that of the human brain. Microsoft was the first major cloud service provider to deploy FPGAs in its public cloud infrastructure, and it is demonstrating further technology advancements today with Intel Stratix 10 FPGAs.