
[Performance] Boost tfq.convert_to_tensor speed

Open MichaelBroughton opened this issue 4 years ago • 10 comments

Currently tfq.convert_to_tensor runs on a single core and relies on Cirq's serialization protocols, which are quite slow for large circuits. A quick benchmark shows that more than 95% of the time spent in tfq.convert_to_tensor goes to the Cirq serialization logic and the protobuf SerializeToString function. Since it's unlikely we can speed either of those up quickly, perhaps we should look into parallelizing tfq.convert_to_tensor?
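The parallelization idea could be sketched roughly as below. This is a standalone illustration, not TFQ code: `serialize_circuit` is a hypothetical stand-in for the expensive Cirq-to-protobuf step (`SerializeToString`), and the worker count and chunk size are arbitrary.

```python
# Sketch: fan circuit serialization out across processes, since the
# benchmark shows serialization dominates convert_to_tensor's runtime.
# serialize_circuit is a stand-in for the real Cirq/protobuf step.
from concurrent.futures import ProcessPoolExecutor


def serialize_circuit(circuit_repr: str) -> bytes:
    # Placeholder for the CPU-bound Cirq -> protobuf serialization.
    return circuit_repr.encode("utf-8")


def parallel_serialize(circuits, max_workers=4):
    # executor.map preserves input order, so the resulting tensor
    # layout matches what the sequential implementation would produce.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(serialize_circuit, circuits, chunksize=16))


if __name__ == "__main__":
    circuits = [f"circuit-{i}" for i in range(64)]
    assert parallel_serialize(circuits) == [serialize_circuit(c) for c in circuits]
    print("ok")
```

Processes (rather than threads) matter here because the serialization work is CPU-bound Python, so threads would mostly contend on the GIL.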

MichaelBroughton avatar Aug 10 '20 18:08 MichaelBroughton

Hey Michael, I am new to the TFQ community and I would love to work on this issue!

MrSaral avatar Aug 27 '20 14:08 MrSaral

Welcome! Glad you’ve taken an interest. I’m optimistic we can make things a little quicker :)

MichaelBroughton avatar Aug 27 '20 18:08 MichaelBroughton

@MichaelBroughton Where can I start? I went through the code in tfq.convert_to_tensor.

MrSaral avatar Aug 29 '20 00:08 MrSaral

After you've read the code you could:

  1. Fork the code and work on a local copy.
  2. Time the original implementation and yours on some big reference circuits.
  3. Make the new implementation faster.
  4. Open a pull request here with the changes you've made, showing the numbers behind the performance boost.
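Step 2 above could be sketched with a small timing helper like the one below. This is plain Python, not part of TFQ; taking the best of several runs is one common way to reduce noise from other processes.

```python
import time


def benchmark(fn, args=(), repeats=5):
    """Return the best wall-clock time (seconds) of fn(*args) over repeats runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best


# Example: compare two implementations on the same input.
baseline_time = benchmark(sorted, (list(range(10_000, 0, -1)),))
```

Reporting both the baseline and the new implementation's numbers on the same reference circuits makes the PR's speedup claim easy to verify.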

MichaelBroughton avatar Aug 29 '20 20:08 MichaelBroughton

Ok thanks, I will give this a shot

MrSaral avatar Sep 01 '20 14:09 MrSaral

@MrSaral, are you working on this issue? If so, please request to be assigned to it.

tacho090 avatar Sep 27 '20 02:09 tacho090

Hey @tacho090 , Yes I am working on this. Please assign it to me.

MrSaral avatar Sep 27 '20 03:09 MrSaral

Hi! I'm new to the TFQ community and would love to tackle this problem. Can I be assigned to this issue? I'm more than happy to work on this.

redayzarra avatar Jul 04 '23 11:07 redayzarra

Go for it, feel free to open a PR for it

lockwo avatar Jul 04 '23 15:07 lockwo

> After you've read the code you could:
>
>   1. Fork the code and work on a local copy.
>   2. Time the original implementation and yours on some big reference circuits.
>   3. Make the new implementation faster.
>   4. Open a pull request here with the changes you've made, showing the numbers behind the performance boost.

Hi, I've been working on this issue and I had a couple of questions. I read the benchmarks/README.md file and tried to use Bazel for benchmarking, but I ran into a lot of errors.

  1. How would you like me to time my code? Should a custom benchmark file suffice or should I be using the existing benchmarking system (with Bazel)?

  2. Is there any specific reference circuit you would like me to use? I want to try multiple circuits that stress the code in various ways but I'm not sure what to look for.

Those are all the questions I have for now; I'm not even sure whether I'm supposed to be using Bazel in the first place.
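On question 2, one common approach (an assumption on my part, not project guidance) is to sweep circuit width and depth on a grid, so the serializer is stressed along both axes independently. A minimal sketch with illustrative, made-up values:

```python
import itertools

# Hypothetical stress grid: qubit counts and circuit depths to sweep
# when timing serialization. The values are illustrative only, not
# taken from the TFQ benchmark suite.
QUBITS = [5, 10, 20]
DEPTHS = [10, 100, 500]


def reference_shapes():
    # One (n_qubits, depth) benchmark case per grid point.
    return list(itertools.product(QUBITS, DEPTHS))
```

Generating one random circuit per shape (e.g. with cirq.testing.random_circuit) would then give a small reference suite covering wide-shallow, narrow-deep, and large circuits.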

redayzarra avatar Jul 08 '23 23:07 redayzarra