Quantum

Program

Time Speaker Title
10:20-10:50 am Leonardo Banchi Learning, Compression and Quantum Information
10:50-11:20 am Thomas Elliott Quantum memory advantages for modelling complex dynamics
11:20-11:40 am Chengran Yang An Efficient Quantum Algorithm for Discovering Rare Events of Stochastic Processes
11:40 am-12:00 pm Magdalini Zonnios Memory-minimal quantum generation of stochastic processes: spectral invariants of quantum hidden Markov models
12:00-12:30 pm Rudy Raymond Combinatorial Optimization with Few Qubits
Lunch
1:30-2:00 pm Robert Huang What cannot be learned in the quantum universe
2:00-2:30 pm Daniel Park Learning with Repeated Classical-Quantum Interactions
2:30-2:50 pm Josep Lumbreras Learning finitely correlated states
2:50-3:10 pm Spiros Kechrimparis How Quantum Agents Can Change Which Strategies Are More Complex
Intermission
3:20-3:50 pm Felix Binder Learning in Continuously-Monitored and Repeatedly-Interacting Quantum Systems
3:50-4:10 pm Graeme Berk Fundamental limits on quantum prediction
4:10-4:30 pm Seok Hyung Lie Multipartite Quantum State Over Time and Markovianity
Afternoon Tea + Free Discussion

Abstracts

Combinatorial Optimization with Few Qubits

Rudy Raymond
Abstract: The potential of quantum computing for Combinatorial Optimization (CO) is significant, but current quantum computers lack the necessary physical resources, particularly the number of qubits, to tackle problem instances on the same scale as classical computers. Various methods have been developed to scale problem instances that can be addressed with a limited number of qubits, showing promise. Here, we introduce a method for CO using few qubits, involving the application of Quantum Random Access Code (QRAC) for encoding variables into few qubits, Classical Shadow (CS) for decoding the variables efficiently, and Coordinate Descent (CD) for optimizing quantum circuits to generate desirable quantum states for CO instances. The effectiveness of the proposed method is demonstrated through experiments on parameterized quantum circuits for solving instances of NP-hard problems such as the minimum bisection.
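
As a point of reference for the encoding step, here is a minimal sketch of the standard (3,1) quantum random access code, which packs three classical bits into one qubit and recovers any chosen bit with probability 1/2 + 1/(2√3) ≈ 0.79 by measuring the corresponding Pauli operator. This illustrates the QRAC primitive only; the circuits, classical-shadow decoding, and problem encodings used in the talk are not reproduced here.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def qrac_encode(bits):
    """(3,1) QRAC: encode three bits into one qubit whose Bloch vector
    points towards ((-1)^b1, (-1)^b2, (-1)^b3)/sqrt(3)."""
    r = np.array([(-1) ** b for b in bits]) / np.sqrt(3)
    return 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

def recovery_probability(rho, i, bit):
    """Probability that measuring the i-th Pauli returns the encoded bit."""
    projector = 0.5 * (I2 + (-1) ** bit * PAULIS[i])
    return float(np.real(np.trace(rho @ projector)))

bits = (1, 0, 1)
rho = qrac_encode(bits)
for i, b in enumerate(bits):
    print(f"bit {i}: recovered with probability {recovery_probability(rho, i, b):.3f}")
# each bit is recovered with probability 1/2 + 1/(2*sqrt(3)) ≈ 0.789
```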

Quantum memory advantages for modelling complex dynamics

Thomas Elliott
Abstract: Simulating quantum dynamics on a classical computer bears a resource cost that grows exponentially with the size of the system, and even the simplest of quantum systems often exhibit seemingly complex behaviours. This apparent problem can be recast as a positive: complex classical systems can be simulated efficiently on simple quantum computers. In this talk I will discuss the application of quantum technologies to the modelling of stochastic processes, for which quantum simulators can operate with lower memory cost than any classical alternative, in both lossless and lossy compression settings. In particular, I will highlight examples of quantitative scaling divergences in modelling highly non-Markovian processes, wherein the provably memory-minimal classical simulator must store diverging amounts of information with increasing precision, while arbitrary precision can be achieved with a finite-sized quantum simulator. I will also discuss recent work on the experimental implementation of such quantum memory advantages, and the extension to modelling adaptive agents.

What cannot be learned in the quantum universe

Robert Huang
Abstract: Recent advances have significantly enhanced our understanding of what can be efficiently learned in the quantum universe. However, certain fundamental aspects remain resistant to efficient learning using known algorithms. This talk explores several fundamental properties—including time, causal structure, topological order, and noise—and demonstrates how they can be provably hard to learn. These results stem from our recent work on how to construct random unitaries (with Ma, arXiv:2410.10116) and generate them in extremely low depth (with Schuster and Haferkamp, arXiv:2407.07754). Examining these unlearnable aspects of our world sheds light on the fundamental limits of scientific inquiry in the quantum realm.

Learning, Compression and Quantum Information

Leonardo Banchi
Abstract: The relationship between learning and compression is a fundamental and increasingly studied area in machine learning and information theory, which provides insight into generalization, model selection, and the limits of learnability. Recent research suggests that effective learning algorithms inherently compress the data they process, extracting meaningful patterns while discarding irrelevant information. On the other hand, quantum information tools enable the compression of classical data into quantum memories using fewer resources. In this talk I'll explore the foundational connections between learning and compression, and then show how quantum agents can be used to define more efficient learning models, paving the way for potential quantum advantages in learning tasks.

Learning with Repeated Classical-Quantum Interactions

Daniel Park
Abstract: The interaction between classical and quantum systems lies at the heart of many emerging quantum machine learning models. These repeated and controlled interactions give rise to temporal structures that can be harnessed for learning and decision-making, with classical and quantum components complementing one another. In this talk, I will explore how such repeated classical-quantum interactions contribute to improving optimization, expressivity, generalization, and robustness to noise in quantum machine learning. Drawing inspiration from both physical processes and algorithmic design, I will share recent insights that point toward a more dynamic view of quantum learning.

Learning in Continuously-Monitored and Repeatedly-Interacting Quantum Systems

Felix Binder
Abstract: To characterise a quantum system, we must observe it. The observation record then allows us to estimate the parameters governing its behaviour. While conventional approaches to parameter estimation and tomography rely on repeated measurements under reset conditions, we ask what can be learned in a single shot when memory persists between sequential measurements. We consider two separate scenarios: a continuously monitored open quantum system and a system coupled to a finite-sized environment probed at discrete time steps. In the first case, we focus on quantum trajectories in the jump unravelling and develop analytic and computational tools to compute the Fisher Information in both renewal and non-renewal processes. Our methods account for data compression and post-selection, and are illustrated with physically relevant examples. In the second case, we introduce a learning framework where only the system is probed and the environment acts as a hidden quantum memory. We characterise the gauge freedoms arising in this scenario, define a suitable gauge-invariant distance between quantum processes, and show how the Fisher Information matrix reveals the dimensionality of the accessible model space.
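
As a toy illustration of the central quantity (and only that; the trajectory-level and hidden-memory machinery of the talk is not attempted here), the snippet below computes the classical Fisher information that a record of detector clicks carries about an unknown jump rate, under the simplifying assumption that clicks in each time bin are independent and Poisson distributed with mean γ·dt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: clicks in each small time bin are Poisson with mean gamma*dt.
gamma = 2.0      # "true" jump rate (illustrative assumption)
dt = 0.01        # bin width
T = 100.0        # total observation time
n_bins = int(T / dt)

# Analytic Fisher information of the full record about gamma:
# each bin contributes (d mu / d gamma)^2 / mu = dt/gamma, so the total is T/gamma.
fisher_analytic = T / gamma

# Monte Carlo check: Fisher information = variance of the score,
# i.e. of d/d gamma of the log-likelihood of the click record.
def score(counts):
    # log L = sum_k [counts_k * log(gamma*dt) - gamma*dt - log(counts_k!)]
    # d/d gamma log L = sum_k counts_k / gamma - n_bins * dt
    return counts.sum() / gamma - n_bins * dt

scores = [score(rng.poisson(gamma * dt, size=n_bins)) for _ in range(2000)]

print("analytic Fisher information :", fisher_analytic)
print("Monte Carlo estimate        :", np.var(scores))
```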

Bounds on memory-minimal quantum generation of stochastic processes

Magdalini Zonnios
Abstract: Stochastic processes abound in nature, and accurately modeling them is essential across the quantitative sciences. They can be described by hidden Markov models (HMMs) or by their quantum extensions (QHMMs). These models explain and give rise to process outputs in terms of an observed system interacting with an unobserved memory. Although there are infinitely many models that can generate a given process, they can vary greatly in their memory requirements. It is therefore of great fundamental and practical importance to identify memory-minimal models. This task is complicated by both the number of generating models and the lack of invariant features that determine elements of the set; in general, it is prohibitively difficult to ascertain that a given model is minimal. Addressing this challenge, we here identify spectral invariants of a process that can be calculated from any model that generates it. This allows us to determine strict bounds on the quantum generative complexity of the process -- its minimal memory requirement. We then show that the bound is raised quadratically when we restrict to classical operations. This is an entirely quantum-coherent effect, which we make precise using the resource theory of coherence. Finally, we demonstrate that the classical bound can be violated by quantum models.
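
To make the objects concrete (the specific invariants of the talk are not reproduced here, and the parameters below are illustrative assumptions), a hidden Markov model can be specified by labelled transition matrices T_x, one per output symbol, whose sum is the ordinary state-transition matrix; the sketch below computes that matrix's eigenvalues, whose subleading magnitudes control how correlations in the emitted process decay.

```python
import numpy as np

# A small hidden Markov model over two hidden states, emitting symbols {0, 1}.
# T0 and T1 are the labelled (substochastic) transition matrices for emitting
# symbol 0 or 1; their sum T is the ordinary state-transition matrix.
p, q = 0.3, 0.8                           # illustrative parameters only
T0 = np.array([[1 - p, 0.0],
               [0.0,   1 - q]])           # emit 0, stay in the current state
T1 = np.array([[0.0, p],
               [q,   0.0]])               # emit 1, switch states
T = T0 + T1                               # row-stochastic transition matrix

eigvals = np.linalg.eigvals(T)
print("eigenvalues of the transition matrix:", np.sort_complex(eigvals))
# The leading eigenvalue is 1 (stationarity); the magnitudes of the remaining
# eigenvalues set the decay rate of correlations in the emitted process.
```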

How Quantum Agents Can Change Which Strategies Are More Complex

Spiros Kechrimparis
Abstract: Whether winning blackjack or navigating busy streets, achieving desired outcomes requires agents to execute adaptive strategies—strategies where actions depend contextually on past events. In complexity science, this motivates an operational quantifier of complexity: given two strategies, the more complex one demands the agent to track more about the past. Here, we show that conclusions about complexity fundamentally depend on whether agents can process and store quantum information. Thus, while classical agents might find Strategy A more complex to execute than Strategy B, quantum agents can reach the opposite conclusion. We derive sufficient conditions for such contradictory conclusions to occur and illustrate the phenomenon across multiple scenarios. As a byproduct, our results yield an information-theoretic lower bound on the minimal memory required by any agent - classical or quantum - to execute a given strategy.

Multipartite Quantum State Over Time and Markovianity

Seok Hyung Lie
Abstract: The quantum state over time formalism provides an extension of the density operator formalism into the time domain, so that quantum correlations across both space and time may be treated with a common mathematical formalism. While bipartite quantum states over time have been uniquely characterized from various perspectives, it is not immediately clear how to extend the uniqueness result to multipartite temporal scenarios, such as those considered in the context of Leggett-Garg inequalities. In this talk, we show that two simple assumptions uniquely single out a multipartite extension of bipartite quantum states over time, namely, linearity in the initial state and a quantum analog of conditionability for multipartite probability distributions. As a direct consequence of our uniqueness result we arrive at a canonical multipartite extension of Kirkwood-Dirac type quasi-probability distributions, and we conclude by showing how our result yields a new characterization of quantum Markovianity.

Fundamental limits on quantum prediction

Graeme Berk
Abstract: For classical stochastic processes, the fundamental memory requirements for predicting the future based on observations of the past have been unlocked via the concept of causal equivalence: if two histories lead to the same future predictions, one does not need to memorise those histories separately. Thus, memory-minimal classical models of classical processes have a one-to-one correspondence between distinct futures, and distinct memory states. In this work we present a prediction formalism for quantum processes, generalising classical causal equivalence of conditional future probability distributions to a notion of quantum linear dependence of conditional future process tensor Choi states. This notion of linear independence fundamentally pertains to the observable statistics when interacting with the quantum process — which uses a vector space of Hermitian matrices. Critically however, the memory dimension of a quantum model is instead the dimension of a vector space of state vectors. The inequivalence of these vector spaces creates a uniquely quantum disconnect between observable statistics of a quantum process, and the minimal memory to model it. Specifically, we prove that the minimal dimension d of a quantum predictive model is quadratically lower bounded by the number of linearly independent futures D that can be exhibited by the process it models. We provide and demonstrate an algorithm to approximate D — and hence our lower bound — to a specified Markov order. Furthermore, we prove that if the memory states of a quantum model all yield statistically distinct futures, then the model saturates our bound. We also prove that models for processes that only require classical memory do not possess this disconnect between statistics and memory, suggesting a potential quadratic advantage for quantum memory in our framework. Overall, these results reveal a unique feature of quantum prediction that may be exploited to enhance memory efficiency in models.
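
A back-of-the-envelope way to see where a quadratic relation between memory dimension and the number of independent futures can come from (a dimension-counting sketch under the assumption that the futures are Hermitian operators supported on a d-dimensional memory, not the paper's proof):

```latex
% Hermitian operators on a d-dimensional space form a real vector space of
% dimension d^2, so at most d^2 conditional futures can be linearly independent:
\[
  D \;\le\; \dim_{\mathbb{R}}\!\bigl(\mathrm{Herm}(\mathbb{C}^{d})\bigr) \;=\; d^{2}
  \qquad\Longrightarrow\qquad
  d \;\ge\; \sqrt{D}.
\]
```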

An Efficient Quantum Algorithm for Discovering Rare Events of Stochastic Processes

Chengran Yang
Abstract: Rare events, though infrequent, can significantly impact various fields, including engineering and economics. Identifying these events within stochastic processes is crucial, yet conventional Monte Carlo methods often prove inefficient due to extensive sampling requirements. We introduce a quantum algorithm that constructs quantum sample states focusing solely on rare events, facilitating their detection. This method builds upon previous advancements in quantum sequence models to generate a superposition of all possible output sequences. The algorithm consists of three main steps: (1) constructing an amplitude block encoding from the quantum sample state using quantum sequence models; (2) filtering rare events through quantum singular value transformation applied to a polynomial approximation of a threshold function; and (3) measuring the ancillary system and post-selecting outcomes. Our algorithm offers flexibility in preparing quantum sample states for various distributions of rare events and achieves a quadratic speedup over classical methods in two scenarios: uniform and proportional sampling of rare events. We validate its effectiveness through numerical simulations on a 2-state coin process, successfully identifying and amplifying rare events while suppressing non-rare occurrences. This work highlights the potential of quantum sample states for exploring advantages in the study of rare-event properties.
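
To put the claimed quadratic speedup in context, the sketch below works through the sampling cost of naive Monte Carlo on a toy biased-coin process: a pattern that occurs with probability p takes on the order of 1/p classical samples to observe, whereas a quadratic speedup brings the cost to order 1/√p. The bias and pattern length are placeholders, not the parameters of the paper's 2-state coin simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# A biased coin: heads with probability q per step.
q = 0.1
L = 5                      # length of the rare pattern (all heads)
p_rare = q ** L            # probability of the rare event per trial
print(f"rare-event probability p       = {p_rare:.2e}")
print(f"classical Monte Carlo cost     ~ 1/p       = {1/p_rare:.2e} samples")
print(f"quadratic-speedup cost         ~ 1/sqrt(p) = {1/np.sqrt(p_rare):.2e} queries")

# Empirical check of the classical cost: count trials until the pattern first appears.
trials = 0
while True:
    trials += 1
    if np.all(rng.random(L) < q):   # all L tosses come up heads
        break
print(f"first occurrence after {trials} classical trials (one random run)")
```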

Learning finitely correlated states

Josep Lumbreras
Abstract: Matrix product operators allow efficient descriptions (or realizations) of states on a 1D lattice. We consider the task of learning a realization of minimal dimension from copies of an unknown state, such that the resulting operator is close to the density matrix in trace norm. For finitely correlated translation-invariant states on an infinite chain, a realization of minimal dimension can be exactly reconstructed via linear algebra operations from the marginals of a size depending on the representation dimension. We establish a bound on the trace norm error for an algorithm that estimates a candidate realization from estimates of these marginals and outputs a matrix product operator, estimating the state of a chain of arbitrary length $t$. This bound allows us to establish an $O(t^2)$ upper bound on the sample complexity of the learning task, with an explicit dependence on the site dimension, realization dimension and spectral properties of a certain map constructed from the state. A refined error bound can be established for $C^*$-finitely correlated states, a natural quantum generalization of hidden Markov models, which can be generated through consecutive applications of a physical map—a quantum channel—on a hidden quantum memory system. We can also obtain an analogous error bound for a class of matrix product density operators on a finite chain reconstructible by local marginals. In this case, a linear number of marginals must be estimated, obtaining a sample complexity of $\tilde{O}(t^3)$. The learning algorithm also works for states that are sufficiently close to a finitely correlated state, with the potential of providing competitive algorithms for other interesting families of states.

Advantages of multicopy nonlocality distillation and its application to minimizing communication complexity

Giorgos Eftaxias
Abstract:
--This submission concerns our journal publications [1,2]--
Nonlocality has been proven to be a resource for various information-processing tasks, a fact that has naturally motivated efforts to distil it [3,4]. Here we introduce several nonlocality distillation schemes: some are sequential algorithms that repeatedly discover optimal two-copy protocols, while others are genuine three-copy protocols. The impact of our schemes is twofold. On the one hand, they unlock the distillability of quantum correlations not previously known to be distillable, thereby offering practical distillation of observed correlations by simple means. On the other hand, they uncover more non-signalling correlations that trivialize communication complexity [5], and others that violate information causality [6]. This brings us closer to an understanding of the sets of nonlocal correlations that can be recovered from information-theoretic postulates, and thereby of what is special about quantum theory.
1. Eftaxias, G., Weilenmann, M. and Colbeck, R., 2023. Advantages of multicopy nonlocality distillation and its application to minimizing communication complexity. Physical Review Letters, 130(10), p.100201.
2. Eftaxias, G., Weilenmann, M. and Colbeck, R., 2023. Multisystem measurements in generalized probabilistic theories and their role in information processing. Physical Review A, 108(6), p.062212.
3. Forster, M., Winkler, S. and Wolf, S., 2009. Distilling nonlocality. Physical Review Letters, 102(12), p.120401.
4. Brunner, N. and Skrzypczyk, P., 2009. Nonlocality distillation and postquantum theories with trivial communication complexity. Physical Review Letters, 102(16), p.160403.
5. Brassard, G., Buhrman, H., Linden, N., Méthot, A.A., Tapp, A. and Unger, F., 2006. Limit on nonlocality in any world in which communication complexity is not trivial. Physical Review Letters, 96(25), p.250401.
6. Pawłowski, M., Paterek, T., Kaszlikowski, D., Scarani, V., Winter, A. and Żukowski, M., 2009. Information causality as a physical principle. Nature, 461(7267), pp.1101-1104.

Sponsors