BERKELEY, Calif., Nov. 11, 2019 — A team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California, Berkeley (UC Berkeley) has demonstrated how machine learning can improve the stability of synchrotron light beam performance.
Synchrotrons, such as the Advanced Light Source (ALS) at Berkeley Lab, are a type of particle accelerator that accelerates electrons to emit light in controlled beams. They allow scientists to explore samples using light across a range of wavelengths, and many synchrotron facilities deliver different types of light for dozens of experiments happening simultaneously.
This chart shows how vertical beam-size stability greatly improves when a neural network is implemented during Advanced Light Source operations. When the so-called feed-forward correction is implemented, the fluctuations in the vertical beam size are stabilized down to the sub-percent level (see yellow-highlighted section) from levels that otherwise range to several percent. Courtesy of Lawrence Berkeley National Laboratory.
However, tweaks and adjustments at individual beamlines can feed back into the performance of the facility as a whole. Those fluctuations in performance can present problems for certain experiments.
Researchers successfully tested the machine learning algorithm at two different sites around the ALS ring earlier in 2019. They alerted ALS users conducting experiments about the testing of the new algorithm and asked them to give feedback on any unexpected performance issues.
“We had consistent tests in user operations from April to June this year,” said C. Nathan Melton, a postdoctoral fellow at the ALS who joined the machine learning team in 2018 and worked closely with Shuai Liu, a former UC Berkeley graduate student who contributed to the effort and is a co-author of the study.
Simon Leemann, deputy for accelerator operations and development at the ALS and the principal investigator in the machine learning effort, said, “We didn’t have any negative feedback to the testing. One of the monitoring beamlines the team used is a diagnostic beamline that constantly measures accelerator performance, and another was a beamline where experiments were actively running.”
Machine learning tools were able to improve the stability of the light beam’s size via adjustments that largely cancel out those fluctuations — reducing them from a level of a few percent down to 0.4%, with submicron precision.
“Machine learning fundamentally requires two things: The problem needs to be reproducible, and you need huge amounts of data,” Leemann said. “We realized we could put all of our data to use and have an algorithm recognize patterns.”
Researchers fed electron-beam data from the ALS, which included the positions of the magnetic devices used to produce light from the electron beam, into the neural network. The network recognized patterns in this data and identified how different device parameters affected the width of the electron beam. The machine learning algorithm also recommended adjustments to the magnets to optimize the electron beam.
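The feed-forward idea described above can be illustrated with a toy sketch: fit a model that maps magnetic-device settings to the resulting beam-size deviation, then subtract the predicted deviation before it shows up. The data here is fabricated, and a simple least-squares fit stands in for the team's actual neural network; all names and the linear relationship are illustrative assumptions, not the ALS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: insertion-device gap settings (inputs) and the
# resulting vertical beam-size deviation (output). In reality this would be
# archived ALS machine data; here we fabricate a linear relationship plus
# noise purely for illustration.
n_samples, n_devices = 500, 8
gaps = rng.uniform(-1.0, 1.0, size=(n_samples, n_devices))
true_coeffs = rng.normal(size=n_devices)
beam_size_dev = gaps @ true_coeffs + 0.01 * rng.normal(size=n_samples)

# Fit a model mapping device settings -> predicted beam-size deviation
# (least squares here; the ALS team used a neural network for this step).
X = np.hstack([gaps, np.ones((n_samples, 1))])  # add an intercept column
coeffs, *_ = np.linalg.lstsq(X, beam_size_dev, rcond=None)

def predict_deviation(gap_settings):
    """Predicted beam-size deviation for one set of gap settings."""
    return np.append(gap_settings, 1.0) @ coeffs

# Feed-forward correction: cancel the predicted deviation.
test_gaps = rng.uniform(-1.0, 1.0, size=n_devices)
uncorrected = test_gaps @ true_coeffs
corrected = uncorrected - predict_deviation(test_gaps)
```

The key point of feed-forward (as opposed to feedback) correction is that the model anticipates the perturbation from the device settings themselves, so the correction can be applied without waiting to observe the beam degrade.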
Because the size of the electron beam determines the size of the light beam produced by the magnets, optimizing the electron beam also optimized the light beam that is used to study material properties at the ALS.
The data revealed small blips in electron-beam performance as adjustments were made at individual beamlines, and the algorithm found a way to tune the electron beam so that it negated this impact better than conventional methods could.
The algorithm-directed system can now make corrections at a rate up to 10 times per second, though three times a second appears to be adequate for improving performance at this stage, Leemann said.
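A rate-limited correction loop like the one described can be sketched as follows. The `read_monitors` and `apply_correction` callables are hypothetical stand-ins for the accelerator control-system interface; only the fixed-rate scheduling is the point here.

```python
import time

def correction_loop(read_monitors, apply_correction, rate_hz=3.0, n_iterations=10):
    """Apply feed-forward corrections at a fixed rate.

    read_monitors and apply_correction are hypothetical callables standing
    in for the control-system interface; rate_hz=3.0 reflects the roughly
    three-corrections-per-second cadence mentioned in the article.
    """
    period = 1.0 / rate_hz
    for _ in range(n_iterations):
        start = time.monotonic()
        apply_correction(read_monitors())
        # Sleep out the remainder of the period so corrections occur at rate_hz.
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)
```

Pacing the loop by wall-clock time, rather than sleeping a fixed interval, keeps the correction rate steady even when the model evaluation itself takes a variable amount of time.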
The machine learning team received two years of funding from the U.S. Department of Energy in August 2018 to pursue this and other machine learning projects in collaboration with the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory.
“We have plans to keep developing this and we also have a couple of new machine learning ideas we’d like to try out,” Leemann said.