Alphabet’s Google unit is trying out AI programs to advance its internal development of the dedicated chips that accelerate its software, according to Google’s head of AI research, Jeff Dean.
“We are using it internally for a few chip design projects,” said Dean in an interview with ZDNet Monday, following a keynote talk he gave at the International Solid-State Circuits Conference, an annual technical symposium held in San Francisco.
Google has, over the course of several years, developed a family of AI chips, the Tensor Processing Unit, or TPU, to process AI in its server computers.
Using AI to design those chips would represent a kind of virtuous cycle, where AI makes chips better, and then those improved chips boost the power of the AI algorithms, and so on.
During his keynote, Dean described to the audience how a machine learning program can be used to make some decisions about how to lay out circuits of a computer chip, with the resultant design being of equal or greater quality than one produced by a human chip designer.
In the traditional “place and route” task, chip designers use software to determine the layout in a chip of the circuits that form the chip’s operations, analogous to designing the floor plan of a building. A number of variables come into play to find an optimal layout that fulfills several objectives, including delivering chip performance, but also avoiding unnecessary complexity that can drive up the cost to manufacture the chip. That balancing act requires a lot of human heuristics about how best to pursue design. Now, AI algorithms may be able to experiment in ways that can be competitive with those heuristics.
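The balancing act described above can be made concrete with a toy cost function. The sketch below is purely illustrative, not Google's system: it scores a placement of circuit blocks on a grid by combining half-perimeter wirelength (a standard proxy for wiring cost) with a penalty for overlapping blocks. All names, weights, and the tiny example netlist are assumptions for illustration.

```python
# Toy sketch (not Google's tool): a multi-objective cost for placing
# circuit blocks on a grid, balancing wirelength against congestion.
from collections import Counter

def total_wirelength(placement, nets):
    """Half-perimeter wirelength: for each net, the half-perimeter of
    the bounding box around its connected blocks."""
    length = 0
    for net in nets:  # each net is a list of block names
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        length += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return length

def congestion_penalty(placement):
    """Penalize stacking multiple blocks on the same grid cell."""
    counts = Counter(placement.values())
    return sum(c - 1 for c in counts.values() if c > 1)

def placement_cost(placement, nets, w_wire=1.0, w_congest=10.0):
    # The weights encode the trade-off between competing objectives.
    return w_wire * total_wirelength(placement, nets) + \
           w_congest * congestion_penalty(placement)

# Example: three blocks on a small grid, one net connecting all three.
placement = {"A": (0, 0), "B": (3, 0), "C": (0, 2)}
nets = [["A", "B", "C"]]
print(placement_cost(placement, nets))  # → 5.0 (wirelength 3 + 2, no overlap)
```

Human heuristics amount to rules of thumb for searching this cost landscape; the point of the AI approach is to let an algorithm explore it directly.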
In one example, Dean told the audience that a deep learning neural network, after only twenty-four hours on the problem, found a better solution than human designers did after six to eight weeks on the problem. The design reduced the total wiring needed in the chip.
The deep learning program is akin to the AlphaZero program developed by Google’s DeepMind unit to conquer the game of Go. Like AlphaZero, the chip design program is a form of what’s called reinforcement learning. In order to achieve a goal, the program tries various steps to see which ones lead to better results. Rather than pieces on a game board, the moves are choices of how to place the right circuit layout in the total chip design.
Unlike in Go, however, the solution “space,” the number of possible circuit layouts, is vastly larger. And, as mentioned above, numerous objectives have to be accommodated, rather than Go’s single objective of winning the game.
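The sequential-decision framing described above (states, actions, a reward for the finished layout) can be sketched in a few lines. This is a stand-in, not Google's method: where a real reinforcement learning system would train a policy network to choose placements, the toy loop below simply samples episodes and remembers the best layout found. The netlist, grid size, and block names are all illustrative assumptions.

```python
# Toy sketch of RL-style chip placement: place blocks one at a time
# (the "moves"), score the finished layout (the "reward"), improve
# over episodes. A real system trains a learned policy; this stand-in
# samples random placements and keeps the best one seen.
import random

BLOCKS = ["A", "B", "C", "D"]
NETS = [["A", "B"], ["B", "C"], ["C", "D"], ["A", "D"]]  # a ring of nets
GRID = [(x, y) for x in range(4) for y in range(4)]      # 4x4 placement grid

def wirelength(placement):
    """Half-perimeter wirelength over all nets (lower is better)."""
    total = 0
    for net in NETS:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def run_episode(rng):
    """One episode: a sequence of placement actions, then a reward."""
    free, placement = list(GRID), {}
    for block in BLOCKS:            # one action per block
        cell = rng.choice(free)     # a policy network would choose here
        placement[block] = cell
        free.remove(cell)           # no two blocks share a cell
    return placement, -wirelength(placement)  # reward: shorter wiring

rng = random.Random(0)
best, best_reward = None, float("-inf")
for episode in range(500):
    placement, reward = run_episode(rng)
    if reward > best_reward:
        best, best_reward = placement, reward
print(best, best_reward)
```

Even this crude search converges toward compact layouts on a toy problem; the hard part, and the reason a learned policy is needed, is that real chips have millions of cells and many competing objectives beyond wirelength.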
Dean, talking with ZDNet, described the internal efforts as being in the early stages of understanding the utility of the technology. “We’re getting our designers to experiment with it and see how they start to make use of it in their workflows,” said Dean.
“We’re trying to understand how it’s useful, and what areas does it improve on.”
Google’s foray into AI design comes amidst a renaissance in chip production, as companies large and small design dedicated silicon to run machine learning faster. Dedicated AI hardware can lead to larger and more efficient machine learning software projects, according to some machine learning scientists.
The diversity created by AI hardware startup companies, such as Cerebras Systems and Graphcore, can be expected to continue apace, said Dean, even as Google expands its own efforts.
Dean said the variety that’s emerging is intriguing.
“I’m not sure if they’re all going to survive, but it’s pretty interesting because many of them are taking very different design points in the design space,” Dean said of the startups. “Just as one distinction, some are accelerating models that are very small, that can fit in on-chip SRAM,” he said, meaning, the size of the machine learning model is so small it doesn’t need external memory.
“And if your model fits in SRAM, those things are going to be very effective, but if your model doesn’t, that’s not the chip for you.”
Asked if the chips will converge on some standard design, Dean suggested diversity is more likely, at least for the time being.
“I do think there’s going to be more heterogeneity in the kind of approaches used, not less,” he said, “because if you look at the explosion in machine learning research, and uses of machine learning in lots of different kinds of problems, it’s going to be a large enough set of things in the world that you’re not going to want just one design, you’re going to want five or six — not a thousand, but five or six different design points.”
Added Dean, “It’ll be interesting to see which ones hold up, in terms of, are they generally useful for a lot of things, or are they very specialized and accelerate one kind of thing but don’t do well on others.”
As for Google’s own efforts beyond the TPU, Dean indicated there’s an appetite for more and more dedicated silicon at Google. Asked if the trend to AI hardware at Google “has legs,” meaning, can extend beyond its current offerings, Dean replied, “Oh, yeah.”
“Definitely there’s growing use of machine learning across Google products, both data-center-based services, but also much more of our stuff is running on device on the phone,” said Dean. The Google Translate application is an example of a sophisticated program, now supporting seventy different languages, that can run on a phone even in airplane mode, he noted, when there’s no connection back to the data center.
The family of Google silicon for AI has already broadened, he indicated. The “Edge TPU,” for example, is a designation that covers “different design points,” said Dean, including low-power applications, on the one hand, and high-performance applications at the heart of the data center. Asked if the variety could broaden still further, Dean replied, “I think it could.”
“Even within non-data-center things, you’re already seeing a distinction of higher power environments like autonomous vehicles, things that don’t have to be at the 1-watt level, they can be fifty or a hundred watts,” he said. “So you want different parts for that versus something on a phone.” At the same time, there will be ultra-low-power applications like sensors in agriculture that do some AI processing without sending any data to the cloud. Equipped with AI, such a sensor can assess whether there is any data of interest being picked up, say, via a camera, and stream those individual data points back to the cloud for analysis.