QHack2023
MatriQ – Qumpula Quantum
Project Name:
MatriQ
Team Name:
Qumpula Quantum
Which challenges would you like to submit your project for?
- Quantum computing today! This work is heavily inspired by DeepMind's paper Discovering faster matrix multiplication algorithms with reinforcement learning, published in October 2022.
- Hybrid Quantum-Classical Computing Challenge. QAOA is a hybrid algorithm, and large QUBOs are solved with D-Wave's hybrid quantum annealer. These standard methods alone, however, would not add a new hybrid aspect to the field. The hybrid contribution of this project is that the algorithm is not a single QUBO but a sequence of QUBOs, where the solution to each QUBO shapes the next one. This iterative quantum-classical feedback loop is a non-trivial hybrid feature of this work.
- Amazon Braket Challenge. The best quantum device to run the algorithm is D-Wave's quantum annealer, which is now available in the Amazon Marketplace. The QAOA formulation of the 2x2 case could also be solved on Braket's simulator. Technically the connection to Braket works, but it did not bring many advantages in this case, and the code is not fully ready yet.
- NVIDIA Challenge. After unexpectedly getting access to an NVIDIA GPU, I was interested in testing the code on Run.ai. Unfortunately, I ran into technical issues and would have needed more time to solve them. It should have been straightforward, since PennyLane offers the exciting lightning.gpu device, which I aimed to test. I have also used JAX with a GPU in quantum natural language processing work; it should speed up the QAOA optimization, but the configuration took too long this time. So I am not sure the work actually qualifies for the NVIDIA track, but GPUs could certainly be utilized here.
Project description:
The MatriQ project explores faster matrix multiplication algorithms with quantum computing. The paper
Fawzi, A. et al. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 610 (Oct 2022). https://github.com/deepmind/alphatensor.
inspires this project. The paper uses AlphaTensor, a reinforcement learning-based approach, to discover faster matrix multiplication algorithms. This work tackles the same problem. The key idea of MatriQ is to replace the reinforcement learning part with a suitable objective function expressed as a QUBO. At each step, a quantum annealer (or, in theory, QAOA) finds the configuration of binary variables that minimizes the objective function. The solution to the QUBO yields three vectors that form one rank-one term of the tensor decomposition, and the accumulated decomposition forms the final matrix multiplication algorithm (essentially the same idea as in the paper). The longer document in the GitHub repository explains the idea in more detail.
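The iterative scheme above can be sketched classically. The following minimal NumPy example is an illustrative sketch, not the project's code: it replaces the annealer with exhaustive search over binary vectors and greedily peels rank-one terms u ⊗ v ⊗ w off the 2x2 matrix multiplication tensor over the two-element field (subtraction = XOR). Each step's solution updates the residual tensor, which defines the next search, mirroring the sequence-of-QUBOs idea.

```python
import itertools
import numpy as np

n = 2  # multiply two n x n matrices

# Build the n x n matrix multiplication tensor over F2:
# T[(i,k), (k2,j), (i2,j2)] = 1 iff k == k2, i == i2, j == j2.
T = np.zeros((n * n, n * n, n * n), dtype=np.uint8)
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + k, k * n + j, i * n + j] = 1

# All nonzero binary vectors of length n*n (the search space a QUBO
# solver would explore; here we enumerate them exhaustively).
vecs = [np.array(v, dtype=np.uint8)
        for v in itertools.product([0, 1], repeat=n * n)][1:]

# Greedy stand-in for the QUBO step: at each iteration, pick the
# rank-one tensor u (x) v (x) w that leaves the fewest nonzero
# entries in the residual. Over F2, subtracting a term is XOR.
residual = T.copy()
factors = []
while residual.any():
    best = None
    for u in vecs:
        for v in vecs:
            uv = np.einsum('x,y->xy', u, v)
            for w in vecs:
                cand = residual ^ np.einsum('xy,z->xyz', uv, w)
                weight = int(cand.sum())
                if best is None or weight < best[0]:
                    best = (weight, u, v, w)
    _, u, v, w = best
    residual ^= np.einsum('x,y,z->xyz', u, v, w)
    factors.append((u, v, w))

# Verify the discovered bilinear algorithm on random matrices over F2:
# each factor triple contributes one scalar multiplication.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, (n, n), dtype=np.uint8)
B = rng.integers(0, 2, (n, n), dtype=np.uint8)
C = np.zeros(n * n, dtype=np.uint8)
for u, v, w in factors:
    m = (u @ A.reshape(-1) % 2) * (v @ B.reshape(-1) % 2)
    C ^= (w * m) % 2
assert (C.reshape(n, n) == (A @ B) % 2).all()
print(f"decomposition with {len(factors)} rank-one terms")
```

This plain greedy search typically recovers only the standard 8-multiplication algorithm; as the video notes, reaching Strassen's 7-multiplication decomposition requires a good initial guess, which is where the QUBO objective and the annealer come in.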
Project Link:
https://github.com/valterUo/QHack23-MatriQ/tree/17dfb4aa6642c5e297c181096a5c507427e7bf9c
Video:
https://youtu.be/ux2twxZ2T4c
A short presentation and demo of how MatriQ rediscovers Strassen's algorithm if it has a good initial guess and the scalar field has two elements.