
[BOUNTY - $500] Pipeline Parallel Inference

Open AlexCheema opened this issue 1 year ago • 11 comments

Prerequisite: https://github.com/exo-explore/exo/issues/1

Motivation: exo should use device resources as efficiently as possible. Current implementation underutilises available resources.

What: See https://pytorch.org/docs/stable/pipeline.html

Reward: $500 Bounty paid out with USDC on Ethereum, email [email protected].
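
To make the "What" above concrete, here is a minimal, framework-free sketch of the micro-batching pattern behind pipeline-parallel inference (purely illustrative; the stage functions and queues below are stand-ins, not exo's code or the PyTorch API). The model's layers are split into stages that live on different devices, and feeding micro-batches lets stage 2 work on micro-batch i while stage 1 is already busy with micro-batch i+1:

```python
# Illustrative only: two "stages" (think: two devices holding different halves
# of a model's layers) connected by queues. Micro-batching keeps both stages
# busy instead of leaving one idle while the other works.
import queue
import threading

def stage(fn, in_q, out_q):
    """Apply fn to items from in_q until a None sentinel, forwarding results."""
    while True:
        item = in_q.get()
        if item is None:          # propagate shutdown to the next stage
            out_q.put(None)
            break
        out_q.put(fn(item))

first_half = lambda x: [v + 1 for v in x]    # stand-in for layers 0..N/2
second_half = lambda x: [v * 2 for v in x]   # stand-in for layers N/2..N

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
workers = [
    threading.Thread(target=stage, args=(first_half, q0, q1)),
    threading.Thread(target=stage, args=(second_half, q1, q2)),
]
for w in workers:
    w.start()

# Feed micro-batches instead of one big batch so the stages overlap in time.
for micro_batch in [[1, 2], [3, 4], [5, 6]]:
    q0.put(micro_batch)
q0.put(None)                      # no more work

results = []
while (out := q2.get()) is not None:
    results.append(out)
print(results)                    # [[4, 6], [8, 10], [12, 14]]

for w in workers:
    w.join()
```

With one monolithic batch, the second device would sit idle until the first finished its entire forward pass; with micro-batches both devices stay busy most of the time, which is the utilisation win this bounty is after.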

AlexCheema · Jul 15 '24 07:07

I'd like to work on this

Myestery · Jul 15 '24 07:07

> I'd like to work on this

That would be excellent! I can help here and on Discord with any questions / issues you have.

AlexCheema · Jul 18 '24 07:07

Hi there,

I was taking a look at what it would take to make this work and did some testing. I found that when you start two chat sessions and run inference at the same time, they interfere with each other and tokens from the two sessions bleed into each other. See the last two messages:

[Screenshot: two concurrent chat sessions whose outputs bleed into each other]

The one on the left hangs after a while; the one on the right finishes, but its output is also gibberish. Does this reproduce on your end? I think fixing session isolation might need to come before pipeline parallelism.

the-alex-b · Jul 18 '24 09:07

@the-alex-b Very interesting - you're totally right, we should fix session isolation first. This makes sense, since both sessions would share the same KV caches (the model is stateful). What we really need is the ability to create multiple instances of the same model that hold the weights in memory only once.
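
A rough sketch of that direction (hypothetical names, not exo's actual code): the weights are loaded once and shared by reference, while each session gets its own KV cache so concurrent chats stay isolated:

```python
# Illustrative only: one copy of the weights shared across sessions, with a
# per-session KV cache keyed by session id so concurrent chats cannot bleed
# tokens into each other.
class SharedModel:
    def __init__(self, weights):
        self.weights = weights          # loaded once, shared by every session
        self.kv_caches = {}             # session_id -> that session's KV cache

    def cache_for(self, session_id):
        # Isolated per-session state; only the weights are shared.
        return self.kv_caches.setdefault(session_id, [])

    def step(self, session_id, token):
        cache = self.cache_for(session_id)
        cache.append(token)             # stand-in for appending K/V tensors
        return f"out({token}, ctx={len(cache)})"

model = SharedModel(weights="<single copy of the weights>")
print(model.step("session-a", "hello"))  # ctx=1
print(model.step("session-b", "hola"))   # ctx=1, unaffected by session-a
print(model.step("session-a", "world"))  # ctx=2
```

Once sessions are isolated like this, a pipeline-parallel scheduler can interleave micro-batches from different sessions without them corrupting each other's state.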

This can still be part of the same bounty.

AlexCheema · Jul 18 '24 18:07

Hi @AlexCheema, can I work on session isolation?

pranav4501 · Aug 20 '24 16:08

> Hi @AlexCheema, can I work on session isolation?

Hey @pranav4501, I think @varshith15 is already working on that, so it's best to check with him about how you can contribute.

Can you also DM me on Discord so we can find a good task for you? I can update the bounties with something you'd be interested in working on, as there aren't many left now!

AlexCheema · Aug 20 '24 17:08

Hi @AlexCheema, I DM'd you on Discord. I will also take a look at the stable diffusion bounty.

pranav4501 · Aug 20 '24 17:08

Hello, can we update the GSheet to denote this is taken (if it is, which it seems to be)? cc @AlexCheema [apologies for the pings]

moosh3 · Nov 10 '24 01:11

> Prerequisite: #1
>
> Motivation: exo should use device resources as efficiently as possible. Current implementation underutilises available resources.
>
> What: See https://pytorch.org/docs/stable/pipeline.html
>
> Reward: $500 Bounty paid out with USDC on Ethereum, email [email protected].

That PyTorch page is giving me a 404. Is the idea here to be able to process multiple separate requests at once, or to have a batch API that accepts multiple requests in one call?

Hey! First off, I love this project, props for the great work! And I love your mission!

I think I did implement this for my MS thesis - you can find it here. The whole point of it was to show how maximizing GPU utilization by introducing pipeline parallelism at inference time leads to the ability to serve multiple requests efficiently.

The code I wrote is definitely not production ready (more like tomato sauce-ready considering the amount of spaghetti code), and I gotta say I didn't dive deep into the Exo codebase yet, but if someone is working on it, maybe I could give some help or support (or just another pair of eyes in debugging).

Let me know if this sounds good, I'd be super glad to be part of this!

davmacario · Feb 19 '25 20:02

> Hey! First off, I love this project, props for the great work! And I love your mission!
>
> I think I did implement this for my MS thesis - you can find it here. The whole point of it was to show how maximizing GPU utilization by introducing pipeline parallelism at inference time leads to the ability to serve multiple requests efficiently.
>
> The code I wrote is definitely not production ready (more like tomato sauce-ready considering the amount of spaghetti code), and I gotta say I didn't dive deep into the Exo codebase yet, but if someone is working on it, maybe I could give some help or support (or just another pair of eyes in debugging).
>
> Let me know if this sounds good, I'd be super glad to be part of this!

Your thesis is interesting. We're working on this issue for exo v2.

AlexCheema · Feb 19 '25 21:02