
Large Language Models for Quantum State Representation


Generative artificial neural networks, such as restricted Boltzmann machines and recurrent neural networks, can serve as a variational ansatz to reconstruct quantum many-body states. Such networks can be trained in two ways. The first is data-driven: using projective measurements of an experimentally prepared or numerically simulated quantum state, the generative neural network is trained to encode the probability distribution underlying the data, after which it can generate additional measurement outcomes. The second approach is based on variational Monte Carlo: the network is trained to minimize the energy expectation value of a given Hamiltonian and thereby learns to represent its ground state.
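To make the two training modes concrete, the following minimal sketch trains a small network in PyTorch, first on measurement data and then on the variational energy. It is an illustration under stated assumptions, not code from the works listed below: the four-qubit transverse-field Ising chain, the dense Hamiltonian, the network sizes, and the two-phase schedule are all our own choices; exact enumeration of the basis replaces the Monte Carlo sampling used in practice, and the "measurement data" are sampled from the exactly computed ground state to stand in for experimental outcomes.

```python
import itertools
import torch

# Tiny transverse-field Ising chain, H = -J sum_i Z_i Z_{i+1} - h sum_i X_i,
# built as a dense matrix so both losses can be evaluated exactly.
N, J, h = 4, 1.0, 1.0
dim = 2 ** N
states = torch.tensor(list(itertools.product([0, 1], repeat=N)), dtype=torch.float32)
spins = 1.0 - 2.0 * states  # map bits {0,1} to spins {+1,-1}
H = torch.zeros(dim, dim)
for a in range(dim):
    H[a, a] = -J * torch.dot(spins[a, :-1], spins[a, 1:])  # diagonal ZZ terms
    for i in range(N):                                     # X_i flips one bit
        H[a, a ^ (1 << (N - 1 - i))] = -h

# Generative model: one logit per basis state, normalized by a softmax,
# with positive amplitudes psi(s) = sqrt(p(s)).
net = torch.nn.Sequential(torch.nn.Linear(N, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def log_p():
    """Normalized log-probabilities over all 2^N basis states."""
    return torch.log_softmax(net(states).squeeze(-1), dim=0)

# "Measurement data": bitstrings sampled from the exact ground state, standing
# in for projective measurements on an experiment or a numerical simulation.
ground = torch.linalg.eigh(H).eigenvectors[:, 0]
data = torch.multinomial(ground ** 2, 500, replacement=True)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(1000):
    opt.zero_grad()
    if step < 500:                        # (1) maximum likelihood on the data
        loss = -log_p()[data].mean()
    else:                                 # (2) variational energy <psi|H|psi>
        psi = torch.exp(0.5 * log_p())
        loss = psi @ H @ psi
    loss.backward()
    opt.step()

with torch.no_grad():
    psi = torch.exp(0.5 * log_p())
    print("variational energy:     ", (psi @ H @ psi).item())
    print("exact ground-state energy:", torch.linalg.eigh(H).eigenvalues[0].item())
```

At realistic system sizes the softmax over all basis states is intractable; generative ansätze instead factorize the distribution autoregressively so that normalized probabilities and exact samples remain available, which is where the transformer architecture discussed next comes in.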

We have shown that combining the two training approaches is a promising way to advance the performance of variational Monte Carlo when only a limited amount of measurement data is available. Furthermore, we explore the power of large language models, specifically the transformer architecture, as a wave function ansatz. We have demonstrated that these models overcome limitations of previously used generative wave function ansätze and enable efficient studies of large-scale quantum many-body systems. We use this capability to explore phase diagrams and quantum phase transitions of poorly understood qubit systems.
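As a rough illustration of what a transformer wave function looks like, the sketch below implements a minimal autoregressive transformer over qubit bitstrings in PyTorch. It is not the patched architecture of the related paper listed below: it models positive amplitudes only (no sign or phase structure), and the class name, token conventions, and layer sizes are our own illustrative choices.

```python
import torch
import torch.nn as nn

class TransformerWavefunction(nn.Module):
    """Autoregressive transformer over qubit bitstrings.

    Factorizes p(s) = prod_i p(s_i | s_<i); for a positive wave function
    the amplitudes are psi(s) = sqrt(p(s)) (sign/phase structure omitted).
    """

    def __init__(self, n_qubits, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.n = n_qubits
        self.embed = nn.Embedding(3, d_model)      # tokens: 0, 1, and start=2
        self.pos = nn.Parameter(torch.zeros(n_qubits, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           dropout=0.0, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)          # logits for s_i in {0, 1}

    def _logits(self, s):
        # Shift inputs right so the prediction for qubit i sees only s_<i,
        # and mask attention to enforce the autoregressive structure.
        tok = torch.cat([torch.full_like(s[:, :1], 2), s[:, :-1]], dim=1)
        mask = torch.triu(torch.full((self.n, self.n), float("-inf")), diagonal=1)
        return self.head(self.encoder(self.embed(tok) + self.pos, mask=mask))

    def log_prob(self, s):
        """Exact normalized log p(s) for a batch of bitstrings s in {0,1}^n."""
        logp = torch.log_softmax(self._logits(s), dim=-1)
        return logp.gather(-1, s.unsqueeze(-1)).squeeze(-1).sum(-1)

    @torch.no_grad()
    def sample(self, batch):
        """Draw exact, uncorrelated configurations qubit by qubit."""
        s = torch.zeros(batch, self.n, dtype=torch.long)
        for i in range(self.n):
            p = torch.softmax(self._logits(s)[:, i], dim=-1)
            s[:, i] = torch.multinomial(p, 1).squeeze(-1)
        return s

model = TransformerWavefunction(n_qubits=16)
samples = model.sample(64)       # configurations for the energy estimator
logp = model.log_prob(samples)   # enters both training objectives
```

Because the model is explicitly normalized and autoregressive, sample() draws exact, uncorrelated configurations, avoiding the Markov-chain autocorrelation of traditional variational Monte Carlo sampling, and the same log_prob() serves both the data-driven likelihood and the sampled energy estimator described above.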

Experts


Faith Oyedemi

Related Published Works


Variational Monte Carlo with Large Patched Transformers

K. Sprague and S. Czischek

Data-enhanced variational Monte Carlo simulations for Rydberg atom arrays

S. Czischek, M.S. Moss, M. Radzihovsky, E. Merali, and R.G. Melko

Neural-network quantum state tomography in a two-qubit experiment

M. Neugebauer, L. Fischer, A. Jäger, S. Czischek, S. Joachim, M. Weidemüller, and M. Gärtner

Neural-Network Simulation of Strongly Correlated Quantum Systems

S. Czischek

Quenches near Ising quantum criticality as a challenge for artificial neural networks

S. Czischek, M. Gärttner, and T. Gasenzer