Landing as a great Christmas present for LLVM developers interested in heterogeneous hardware compilation, TensorFlow, and other machine learning use-cases, MLIR has been merged into the LLVM source tree.
MLIR is the Multi-Level Intermediate Representation open-sourced by Google earlier this year. MLIR aims to be a common IR/format between machine learning models and frameworks.
LLVM founder Chris Lattner has been involved with MLIR’s development at Google, and the team has spent months working on contributing MLIR to LLVM as an official sub-project.
As of a few days ago, the MLIR code was merged into the LLVM source tree. This comes a few weeks ahead of the LLVM 10.0 code branching.
As explained via the mlir.llvm.org project site, MLIR is designed for:
– The ability to represent dataflow graphs (such as in TensorFlow), including dynamic shapes, the user-extensible op ecosystem, TensorFlow variables, etc.
– Optimizations and transformations typically done on such graphs (e.g. in Grappler).
– Representation of kernels for ML operations in a form suitable for optimization.
– Ability to host high-performance-computing-style loop optimizations across kernels (fusion, loop interchange, tiling, etc.) and to transform memory layouts of data.
– Code generation “lowering” transformations such as DMA insertion, explicit cache management, memory tiling, and vectorization for 1D and 2D register architectures.
– Ability to represent target-specific operations, e.g. accelerator-specific high-level operations.
– Quantization and other graph transformations done on a Deep-Learning graph.
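To give a flavor of what this multi-level design looks like in practice, here is a minimal sketch of MLIR’s textual IR as of the merge, using standard-dialect arithmetic ops; the function name and values are purely illustrative:

```mlir
// A simple fused multiply-add expressed in MLIR's standard dialect.
// Higher-level dialects (e.g. a TensorFlow graph) can be progressively
// "lowered" through forms like this on the way down to LLVM IR.
func @multiply_add(%a: f32, %b: f32, %c: f32) -> f32 {
  %0 = mulf %a, %b : f32
  %1 = addf %0, %c : f32
  return %1 : f32
}
```

The same infrastructure lets frameworks define their own dialects with accelerator-specific or graph-level operations alongside these standard ones, which is what enables the cross-kernel and lowering transformations listed above.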
With the recently covered IREE project, Google is also experimenting with MLIR for the likes of accelerating machine learning on Vulkan.
Exciting times ahead for 2020.