To further its goal of passing trained networks to embedded inference engines, the Khronos Group has added two new bidirectional converters to its existing set. Now available on the NNEF GitHub, these tools enable easy conversion of trained models, including quantized models, between the TensorFlow or Caffe2 formats and the NNEF format.
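As an illustrative sketch only, the snippet below shows how such a round trip might be driven from Python. The `nnef_tools.convert` entry point and the flag and format names are assumptions based on the NNEF-Tools repository layout, not a documented invocation; check the NNEF GitHub README for the exact scripts and options shipped with the TensorFlow and Caffe2 converters.

```python
import subprocess

# Hypothetical invocation of the NNEF-Tools converters; the module name and
# flag spellings are placeholders -- consult the NNEF GitHub for the actual
# entry points and supported format identifiers.
def convert(input_model, output_model, input_format, output_format):
    subprocess.run(
        [
            "python", "-m", "nnef_tools.convert",   # placeholder entry point
            "--input-format", input_format,          # e.g. "tensorflow-pb"
            "--output-format", output_format,        # e.g. "nnef"
            "--input-model", input_model,
            "--output-model", output_model,
        ],
        check=True,
    )

# Frozen TensorFlow graph -> NNEF, then NNEF -> TensorFlow again,
# exercising the converters in both directions.
convert("model.pb", "model.nnef", "tensorflow-pb", "nnef")
convert("model.nnef", "roundtrip.pb", "nnef", "tensorflow-pb")
```

Converting the NNEF output back to the original format is a quick sanity check that the exported graph survives the round trip.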
How do extension mechanisms enable standards to keep up with fast-changing fields?
Standards make life easier, and we depend on them for more than we might realize — from knowing exactly how to drive any car, to knowing how to get hot or cold water from a faucet. When they fail us, the outcome can be comical or disastrous: non-standard plumbing, for instance, can result in an unexpected cold shower or a nasty scald. We need standards, and the entire computing world is built on them.
NNEF and ONNX are two similar open formats for representing and interchanging neural networks among deep learning frameworks and inference engines. At their core, both formats are based on a collection of commonly used operations from which networks can be built. Because ONNX and NNEF share similar goals, we are often asked how the two differ. Although Khronos has not been involved in the detailed design of ONNX, in this post we explain the differences as we understand the two projects. We welcome discussion as the industry explores the need for neural network exchange, and we hope this post can be a constructive start to that conversation.
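To make the shared "collection of operations" idea concrete, here is a minimal sketch that walks the operation list of an exported NNEF model using the open-source `nnef` Python parser from the NNEF-Tools repository. The `load_graph`, `operations`, and `attribs` names reflect our reading of that parser's API and should be checked against its documentation.

```python
import nnef  # open-source NNEF parser from the NNEF-Tools repository

# Load an NNEF model (graph.nnef plus binary weight files); the load_graph
# call follows the NNEF-Tools Python parser API as we understand it.
graph = nnef.load_graph("model.nnef")

# A network is just a sequence of primitive operations wired through tensors,
# which is the common ground between NNEF and ONNX described above.
for op in graph.operations:
    print(op.name, dict(op.attribs))
```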
NNEF design philosophy: network structure and target use cases
Previous blog posts have stressed that the process of deploying neural networks to inference engines is becoming fragmented. An accepted standard can facilitate the industrial use of artificial intelligence by creating mutual compatibility between deep-learning frameworks and inference engines. The Neural Network Exchange Format (NNEF) is the Khronos Group’s solution to this problem.
Machine learning’s fragmentation problem — and the solution from Khronos
There is a wide range of open-source deep learning training frameworks available today, offering researchers and designers plenty of choice when setting up a project. Caffe, TensorFlow, Chainer, Theano, Caffe2: the list goes on and is getting longer all the time. This diversity is great for encouraging innovation, as the different approaches taken by the various frameworks expose a very wide range of capabilities and, of course, invite new functionality that is then given back to the community, driving a virtuous cycle of innovation.
NNEF (Neural Network Exchange Format) from Khronos will enable universal interoperability for machine learning developers and implementers
The Khronos™ Group is about to release a new standard for moving trained neural networks among frameworks, and between frameworks and inference engines. The new standard is the Neural Network Exchange Format (NNEF™); it has been in design for over a year and will be available to the public by the end of 2017.
SIGGRAPH Highlights: OpenGL’s 25th, BOF Blitz Party, and News
In early August the team was at SIGGRAPH in Los Angeles, where we celebrated OpenGL’s 25th anniversary at the BOF Blitz Party. We also announced a new website, OpenGL 4.6, a growing glTF ecosystem, and the Vulkan Portability Initiative.
After the Khronos BOF Blitz at SIGGRAPH, there’s the Khronos After-Party; Don’t Miss It!
If you are going to be at the 44th SIGGRAPH, the largest conference and exhibition in computer graphics and interactive techniques, taking place from July 30 to August 3, 2017 at the Los Angeles Convention Center, don’t miss the opportunity to eat, drink, and learn about all things Khronos!