Updated 29 Jun 2023 · Reading time: 4 mins

Breaking the net-gain fusion barrier

Thought leadership from Andy Corbett on AI and machine learning in nuclear fusion.

How can AI take us the rest of the way to limitless clean energy?

This headline could be the most significant in our lifetime. In December 2022, US scientists announced a decades-awaited breakthrough: a nuclear fusion experiment at the Lawrence Livermore National Laboratory in California achieved net energy gain, with the fusion reaction releasing more energy than the lasers delivered to ignite it.

The amount of energy itself is not yet the headline: it was just enough to brew a few cups of tea (Darjeeling or English Breakfast, take your pick). But crossing the threshold into net gain has been a goal of nuclear engineering for over half a century. So why is this of lifetime-level importance?

 

Fusion will solve our biggest environmental problems

Hannah Fry recently told the BBC's Newscast that there is reason to be optimistic about many of the global concerns our planet faces: “If you can get nuclear fusion to work, you have totally limitless clean energy for almost free.” That would redraw our whole approach to the problems society faces.

Hannah goes on to predict, “everything that we want is doable [...] all the things we are concerned about at the moment do melt away if you have this one big technological advance”. Take clean water as an example: remove the salt from seawater and you have limitless fresh water; it just takes a lot of energy.

But why is she so optimistic? She points to the significant investment from private companies and investors that is rapidly driving innovation in the fusion industry. The recent threshold breakthrough demonstrates this momentum.

It has so far taken half a century to break even on energy production. Why should we expect the output to grow from a few kettles' worth to solving the climate crisis any time soon? What do we know now that we didn't know then?

Science and industry are looking to the advent of AI and machine learning (ML), whose impact on engineering has been dramatic across the board. ML tools are typically agnostic to the specific engineering problem at hand. Moreover, they can greatly simplify complex dynamical systems and quantify the uncertainty in their predictions. But what exactly does machine learning mean for plasma physics?

 

Another breakthrough: AI control of fusion generators

In February 2022, another long-awaited, and perhaps even more important, milestone was published in Nature. In collaboration with the Swiss Plasma Centre at EPFL, researchers at the Google-backed DeepMind successfully trained a machine learning model to control the hydrogen plasma inside a nuclear fusion reactor, a tokamak.

Traditionally, one of the key obstacles in reactor control is the nuanced understanding of the complex physical processes involved in keeping the plasma stable. Enter: deep learning. Using highly parameterised models that learn from many training examples, these nuances can be detected as patterns in data, outside the formalism of nuclear physics.

 

What sort of ML model can control a fusion generator?

DeepMind used a deep reinforcement learning algorithm, setting up the control problem as a “game” that the algorithm tries to win, much like its earlier reinforcement learning systems AlphaGo and AlphaZero, which famously surpassed world-champion level at Go and chess.

The reinforcement part of the algorithm refers to learning to make decisions via trial and error: the algorithm is rewarded, or not, based on the success of its choices. In the context of nuclear fusion reactors, the algorithm makes voltage adjustments to the control coils, up to thousands of times per second, and is rewarded when the plasma is successfully confined by the reactor's magnetic fields.
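To make this concrete, here is a minimal sketch of such a trial-and-error loop in Python. The ToyTokamakEnv environment, its one-dimensional “drift” state and its reward are hypothetical stand-ins invented for illustration; they are not DeepMind's actual simulator, which acts on a far richer description of the plasma.

```python
import random

class ToyTokamakEnv:
    """Hypothetical stand-in for a plasma-control environment."""

    def reset(self):
        # State: how far the plasma has drifted from the vessel centre.
        self.drift = random.uniform(-1.0, 1.0)
        return self.drift

    def step(self, voltage_adjustment):
        # Crude made-up dynamics: the adjustment damps the drift,
        # while the plasma adds a small random disturbance.
        self.drift = self.drift * (1.0 - 0.1 * voltage_adjustment)
        self.drift += random.gauss(0.0, 0.05)
        confined = abs(self.drift) < 1.5
        # Reward keeping the plasma near the centre; punish losing it.
        reward = 1.0 - abs(self.drift) if confined else -10.0
        return self.drift, reward, not confined

env = ToyTokamakEnv()
state = env.reset()
episode_return = 0.0
for _ in range(1000):  # a real controller acts thousands of times per second
    action = random.choice([0.0, 0.5, 1.0])  # a trained policy would choose here
    state, reward, done = env.step(action)
    episode_return += reward
    if done:
        state = env.reset()
print(f"episode return: {episode_return:.1f}")
```

A reinforcement learning algorithm would replace the random choice with a policy trained to maximise exactly this kind of accumulated reward.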

Compared with classical approaches, such as optimal control, a reinforcement learning agent doesn't need to know the dynamics of the system in advance. It functions by querying the system, or sampling it, on a probabilistic basis. This is important in the many engineering scenarios in which the dynamics of the system are difficult to assess.

The deep part is equally important. Deep networks are composed of many layers of non-linear components, each containing a vast number of trainable parameters. As such, they can analyse high-dimensional input data, as is the case in the reactor, and learn many subtle effects related to the system.
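As an illustration only, a deep policy network of this kind might look as follows in PyTorch. The layer sizes, and the choice of 92 sensor inputs and 19 coil-voltage outputs, are assumptions made for this sketch rather than the architecture reported in the Nature paper.

```python
import torch
import torch.nn as nn

# Many stacked non-linear layers mapping high-dimensional sensor
# readings to a voltage adjustment for each control coil.
policy = nn.Sequential(
    nn.Linear(92, 256),   # e.g. 92 magnetic and diagnostic measurements
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 19),   # e.g. one output voltage per control coil
)

sensors = torch.randn(1, 92)   # one snapshot of reactor diagnostics
voltages = policy(sensors)     # proposed adjustments, one per coil
print(voltages.shape)          # torch.Size([1, 19])
```

Each pass through the stack composes simple non-linear transformations, which is what lets the network represent the subtle, hard-to-formalise behaviour of the plasma.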

 

How do we work with expensive data?

A common obstacle in providing robust and trustworthy AI solutions is obtaining quality data, which may be expensive or even impossible to come by. Running a nuclear fusion reactor several thousand times is not most people's idea of cost efficiency. The scientists at EPFL built a computational model of the system to help DeepMind harvest examples of the reactor's behaviour. However, even this model, vast in its structure, is still computationally expensive to run.

So what do we in the machine learning community have up our sleeves? Given limited runs of a complex dynamical system, it is possible to build surrogate models that emulate the recorded behaviour. Importantly, these surrogates are cheap to run and may be probabilistic in nature, so that when we sample from them to train a new algorithm, we have accurately quantified the uncertainty in their predictions.
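As a concrete example, a Gaussian process is a popular choice of probabilistic surrogate. The sketch below, using scikit-learn, fits one to a handful of “expensive” runs (here a made-up one-dimensional response standing in for a real simulator) and returns a predictive mean together with an uncertainty that grows as we move away from the observed data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Pretend each simulator run is expensive, so we only have six of them.
# X: a control setting; y: the simulated response (invented for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(6, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.05, size=6)

# Fit a Gaussian process surrogate to the expensive runs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.05**2)
gp.fit(X, y)

# The surrogate is cheap to query and reports its own uncertainty:
# far from the training runs, the predictive standard deviation grows.
X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x = {x:4.1f}   prediction = {m:+.2f} ± {s:.2f}")
```

Sampling from such a surrogate, rather than from the reactor or the full simulator, is what makes it practical to generate the vast numbers of training examples a deep reinforcement learning algorithm needs.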

Of course, as was the case with DeepMind's new algorithm, the proof is in the pudding: the team demonstrated successful control of the plasma in the reactor, offering physicists new insights into the dynamical systems that govern it.

 

Where does fusion technology go from here?

 

The work from DeepMind offers a proof of concept: deep learning can provide new solutions to longstanding problems. In fusion in particular, there are various aspects of the engineering system that may be refined using an AI toolkit.

 

From another angle, Prof. Diogo Ferreira, working with the Joint European Torus (JET) in the UK, reviews how AI methods such as convolutional neural networks, recurrent neural networks and variational autoencoders may be successfully applied across the fusion pipeline: image processing, disruption prediction and anomaly detection on diagnostics data.

 

Just like Hannah, we are optimistic about the speedy arrival of nuclear fusion technology. This optimism underpins our five-year framework agreement with the UKAEA at the Culham Centre for Fusion Energy. We strongly believe the fastest route to developed net-gain technologies is through our modern understanding of dynamical systems via deep learning.

 

Author

Andy Corbett is a specialist in machine learning at digiLab with expertise in computer vision and the role of dynamical systems in deep learning. He is particularly interested in applying AI to the ecological challenges facing the planet and is currently working on building digital twins of modern cities.

 

digiLab was founded to provide top-tier data science to the engineering industries. A spinout from the University of Exeter, digiLab uses pioneering machine learning to transform the efficiency, resilience and environmental sustainability of its customers.