Max Ryabinin

23 comments by Max Ryabinin

Hi @Bandcompute01, sure, that would be awesome! If you are building this on top of hivemind, it would be even better to integrate this into the repository with a pull request:...

I've investigated this issue a bit, and my findings are as follows:

* Newer Protobuf versions introduce breaking changes (https://github.com/onnx/onnx/pull/4629#issuecomment-1304821038), with a downgrade being the [most popular solution](https://github.com/protocolbuffers/protobuf/issues/10051) for some. In...

Hi, thanks for reporting the issue! Can you describe the exact error you observed?

Hi @Jeduh, thanks for your contribution! We've discussed the fork in Discord, so I'm just copying my thoughts on integration here from our DMs: (@Jeduh) sure, but, could you please let me...

Hi @81549361, we've just pushed a commit that should fix the explosion in https://github.com/bigscience-workshop/petals/pull/343. Can you try running the updated SST-2 prompt tuning notebook from the main branch?

Hi @xinghua-qu! We made a fix for this in https://github.com/bigscience-workshop/petals/pull/343 that should resolve your issue. Can you try running the SST-2 prompt tuning notebook from main?

Hi, do you mean you want to use Petals with your GPU, but don't want to let the others use it? I think you can set up a private swarm...

Yes, it is possible — you just need to specify a different set of initial peers in [DistributedBloomConfig](https://github.com/bigscience-workshop/petals/blob/main/src/petals/client/remote_model.py#L33) when you're creating DistributedBloomForCausalLM from the tutorial. By default, the config (and...
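A minimal sketch of what overriding the initial peers might look like. The multiaddrs below are placeholders, and the `from_pretrained` call with an `initial_peers` keyword is my assumption about the client API (the Petals call itself is left commented out), so check it against the linked `DistributedBloomConfig` source:

```python
# Illustrative sketch only: these are placeholder libp2p multiaddrs for a
# private swarm, NOT real peer addresses.
INITIAL_PEERS = [
    "/ip4/10.0.0.2/tcp/31337/p2p/QmPlaceholderPeerID",
]

# Assumed API (unverified), shown commented out:
# from petals import DistributedBloomForCausalLM
# model = DistributedBloomForCausalLM.from_pretrained(
#     "bigscience/bloom-petals",    # model name from the tutorial
#     initial_peers=INITIAL_PEERS,  # overrides the default public swarm
# )

# Sanity-check that each entry at least looks like a libp2p multiaddr.
assert all(p.startswith(("/ip4/", "/dns/")) for p in INITIAL_PEERS)
print(len(INITIAL_PEERS))
```

The key point is that the client joins whichever swarm the initial peers belong to, so pointing it at your own bootstrap peer keeps traffic inside your private deployment.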

Have you checked out https://github.com/huggingface/peft#use-cases? I think PEFT even showcases bigscience/bloomz-7b1, and its model support matrix includes BLOOM for prompt tuning.

Yes, it is possible, but not necessary: with PEFT, you are likely to get the same result with fewer intermediate setup steps.
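For reference, prompt tuning a BLOOM model with PEFT might look roughly like the sketch below. The library calls reflect my understanding of the PEFT API and are left commented out as an assumption; only the back-of-the-envelope parameter count at the end actually runs:

```python
# Hedged sketch (assumes transformers and peft are installed; argument
# names should be checked against the PEFT docs):
# from transformers import AutoModelForCausalLM
# from peft import PromptTuningConfig, TaskType, get_peft_model
#
# model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1")
# peft_config = PromptTuningConfig(
#     task_type=TaskType.CAUSAL_LM,  # BLOOM is a causal LM
#     num_virtual_tokens=8,          # length of the learned soft prompt
# )
# model = get_peft_model(model, peft_config)
# model.print_trainable_parameters()  # only the soft prompt is trainable

# Why this is cheap: with a hidden size of 4096 (bloomz-7b1) and 8 virtual
# tokens, the trainable soft prompt is a tiny embedding matrix.
hidden_size, num_virtual_tokens = 4096, 8
trainable = hidden_size * num_virtual_tokens
print(trainable)  # 32768 trainable values vs. billions of frozen parameters
```

This is why PEFT needs fewer intermediate steps than a hand-rolled setup: the base model stays frozen, and only the small prompt embedding is optimized.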