One of the notable open-source compiler IR advancements of 2019 has been Google/TensorFlow's MLIR, the Multi-Level Intermediate Representation designed for machine learning models and frameworks. With Google's "IREE" project, MLIR can be accelerated by Vulkan, allowing machine learning workloads to run via this high-performance graphics/compute API.
MLIR is becoming an LLVM sub-project and enjoys growing industry support as a machine learning IR. Google's IREE is an experimental execution environment for MLIR that makes use of modern hardware acceleration APIs; in other words, it gets MLIR running on the likes of Vulkan and other hardware abstraction layers. IREE also has a CPU interpreter for running on traditional x86/ARM CPUs.
IREE also hopes to demonstrate the potential for machine learning within game engines. From the official documentation, "An observation that has driven the development of IREE is one of ML workloads not being much different than traditional game rendering workloads: math is performed on buffers with varying levels of concurrency and ordering in a pipelined fashion against accelerators designed to make such operations fast. In fact, most ML is performed on the same hardware that was designed for games! Our approach is to use the compiler to transform ML workloads to ones that look eerily (pun intended) similar to what a game performs in per-frame render workloads, optimize for low-latency and predictable execution, and integrate well into existing systems both for batched and interactive usage."
But before getting too excited, note that Google's IREE environment is being developed for research/demonstration purposes. While it may not morph into a fully-supported offering for running MLIR on different accelerators, all of the code is open-source on GitHub and we'll see where it goes in 2020.
Credit: Google News