SEATTLE, Dec. 21, 2020 (GLOBE NEWSWIRE) — OctoML, the MLOps automation company for superior model performance, portability and productivity, today announced that the 3rd Annual Apache TVM and Deep Learning Compilation Conference concluded with hundreds of attendees representing 35 countries gathering to discuss the latest advances in deep learning compiler optimization. This was the biggest TVM Conference yet, with more than 20 hours of live-streamed content presented by 65 speakers spanning more than 20 organizations.
In his opening keynote, Tianqi Chen, co-founder and CTO of OctoML and co-creator of Apache TVM, gave an update on the TVM project. In a major step forward for the growth and success of the project, he announced that TVM had officially graduated as a top-level Apache Software Foundation project. This was alongside updates on four major areas for which TVM greatly improved support in the last year:
- Improved model optimization through automatic optimization scheduling with the Ansor framework.
- Increased edge device coverage through µTVM for tinyML bare metal device support, enabling model optimization for resource-constrained embedded targets.
- A new unified intermediate language to support even more advanced models and optimizations.
- Improved support for heterogeneous accelerators, giving TVM the ability to use the full variety of resources available on target systems.
Chen then outlined a roadmap for the future of TVM, including further development of the unified intermediate language, deeper integration with standard computation libraries like Numpy, an improved on-ramp for new users and developers with a stabilized API (looking towards a full 1.0 release), new user tutorials, and expanded developer documentation.
“I couldn’t be more proud to see TVM become a Top Level Apache Project,” said Chen. “That recognition of the Apache way of open development, combined with 50% growth in the TVM contributor base and continued commitment to TVM from many of the world’s largest technology companies, sets the stage for another year of incredible success as TVM becomes the de facto industry standard for deep learning compilation.”
Leading cloud and edge AI providers, including Amazon, AMD, Arm and Sima.ai, were also featured in keynote sessions about how their engagement with TVM improved their ML pipelines. For example, Amazon showcased how TVM enabled 38% higher throughput on BERT models, resulting in 49% lower cost for running its models.
Other presenting companies included Alibaba, Bosch, Google, Huawei, Microsoft, NTT and Qualcomm. These talks covered a range of topics about how TVM is being used and extended for production workloads, highlighting the power and flexibility of the TVM framework and the value of open source in allowing users to extend the software for their needs. For example, Amazon and Bosch are actively extending the optimization framework with improved search algorithms and cost analysis through virtualization, while Xilinx, AMD and Arm are able to quickly target new and emerging hardware platforms with TVM's extensible compiler framework.
In addition to these industry highlights, the research and development community was also well represented at the conference. Lianmin Zheng of U.C. Berkeley delivered a full session on Ansor, the new automatic optimization framework, giving a comprehensive overview of this powerful new feature. Jared Roesch of OctoML and Joey Chou of Amazon delivered talks about extending TVM to support new languages and custom ML hardware.
The conference also included general talks from Kubeflow co-founder David Aronchick of Microsoft about securing ML systems, Jacques Pienaar of Google about the MLIR project, and a presentation from Dr. Joey Gonzalez of U.C. Berkeley about recent advances in ML research.
All the conference presentations are available to watch at https://tvmconf.org.
About Apache TVM
Apache TVM is an open source deep learning compiler and runtime that optimizes the performance of machine learning models across a multitude of processor types, including CPUs, GPUs, accelerators and mobile/edge chips. It uses machine learning to optimize and compile models for deep learning applications, closing the gap between productivity-focused deep learning frameworks and performance-oriented hardware backends. It is used by some of the world’s biggest companies like Amazon, AMD, ARM, Facebook, Intel, Microsoft and Qualcomm.
About the Apache TVM and Deep Learning Compilation Conference
The 3rd Annual Apache TVM and Deep Learning Compilation Conference covered the state-of-the-art of deep learning compilation and optimization and recent advances in frameworks, compilers, systems and architecture support, security, training and hardware acceleration. Speakers included technology leaders from Alibaba, Amazon, AMD, ARM, Bosch, Microsoft, NTT, OctoML, Qualcomm, Sima.ai and Xilinx, as well as researchers from Beihang University, Carnegie Mellon University, Cornell, National Tsing-Hua University (Taiwan), UCLA, University of California at Berkeley, University of Toronto and University of Washington.
About OctoML
OctoML applies cutting-edge machine learning-based automation to make it easier and faster for machine learning teams to put high-performance machine learning models into production on any hardware. OctoML, founded by the creators of the Apache TVM machine learning compiler project, offers seamless optimization and deployment of machine learning models as a managed service. For more information, visit https://octoml.ai or follow @octoml.
Media and Analyst Contact: