A17.00005. Stochastic Gradient Descent for Hybrid Quantum-Classical Optimization

Presented by: Frederik Wilde


Abstract

Gradient-based methods for hybrid quantum-classical optimization typically rely on the evaluation of expectation values with respect to the outcomes of parameterized quantum circuits. In this work, we investigate how the estimation of these quantities on quantum hardware leads to a form of stochastic gradient descent. In many relevant cases, estimating expectation values with k measurements results in optimization algorithms whose convergence properties can be rigorously understood, for any value of k ≥ 1. Moreover, in many settings the required gradients can be expressed as linear combinations of expectation values, and we show that in these cases k-shot expectation value estimation can be combined with sampling over the terms of the linear combination to obtain doubly stochastic gradient descent. For all algorithms we prove convergence guarantees. Additionally, we explore these methods numerically on benchmark VQE, QAOA, and quantum-enhanced machine learning tasks, and show that treating the stochastic settings as hyper-parameters allows for significantly fewer circuit executions.
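
The following is a minimal illustrative sketch of the doubly stochastic idea, not the authors' implementation: it uses a hypothetical single-qubit toy cost f(θ) = c_Z⟨Z⟩ + c_X⟨X⟩ for the state RY(θ)|0⟩, and the names (COEFFS, k_shot_expval, doubly_stochastic_grad) are invented for illustration. One random term of the linear combination is sampled per step, and its parameter-shift gradient is estimated from only k shots per shifted circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy cost: f(theta) = c_Z * <Z> + c_X * <X> for |psi(theta)> = RY(theta)|0>,
# so <Z> = cos(theta) and <X> = sin(theta).
COEFFS = {"Z": 0.75, "X": -0.4}

def exact_expval(term, theta):
    """Exact expectation value of the Pauli term for RY(theta)|0>."""
    return np.cos(theta) if term == "Z" else np.sin(theta)

def k_shot_expval(term, theta, k):
    """Unbiased k-shot estimator: mean of k single-shot (+1/-1) outcomes."""
    p_plus = (1.0 + exact_expval(term, theta)) / 2.0
    shots = rng.choice([1.0, -1.0], size=k, p=[p_plus, 1.0 - p_plus])
    return shots.mean()

def doubly_stochastic_grad(theta, k):
    """Sample ONE term j with probability |c_j| / sum_i |c_i|, then estimate its
    parameter-shift gradient from k shots per shifted circuit. Reweighting by the
    total coefficient mass and the sign keeps the estimator unbiased."""
    terms = list(COEFFS)
    weights = np.array([abs(COEFFS[t]) for t in terms])
    total = weights.sum()
    j = rng.choice(len(terms), p=weights / total)
    term, sign = terms[j], np.sign(COEFFS[terms[j]])
    # Parameter-shift rule: d<P>/dtheta = (<P>(theta + pi/2) - <P>(theta - pi/2)) / 2
    plus = k_shot_expval(term, theta + np.pi / 2, k)
    minus = k_shot_expval(term, theta - np.pi / 2, k)
    return sign * total * (plus - minus) / 2.0

# Stochastic gradient descent with the shot number k treated as a hyper-parameter;
# even k = 1 gives a convergent (if noisy) optimizer, here with a decaying step size.
theta, eta, k = 0.1, 0.2, 1
for step in range(500):
    theta -= (eta / np.sqrt(step + 1)) * doubly_stochastic_grad(theta, k)

f = sum(c * exact_expval(t, theta) for t, c in COEFFS.items())
print(f"theta = {theta:.3f}, cost = {f:.3f}")  # should approach the minimum -0.85
```

Each optimization step here uses only 2k circuit executions regardless of how many terms the cost contains, which is what makes treating k (and the term-sampling strategy) as hyper-parameters attractive on hardware.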

Authors

  • Ryan Sweke
  • Frederik Wilde
  • Johannes Jakob Meyer
  • Maria Schuld
  • Paul K. Fährmann
  • Barthélémy Meynard-Piganeau
  • Jens Eisert

