Jeff Picard

Results: 19 comments by Jeff Picard

This would be great for our company as well!

@rtg0795 `protobuf>=3.20.3,

Thanks @alanakbik! I put up a first stab (linked above) if you're interested.

Many thanks for the thoughtful review!

> isolate the distribution logic only for the training logic

I'll look into distributing across processes _inside_ the call to `.train`/`.fine_tune` rather than before....

Thanks for looking at this @helpmefindaname!

> logging at multi-gpu is off by 1 epoch

Ahh, sorry about that. I think it's from the new call to `.set_epoch(epoch)` which...
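For context on that `.set_epoch(epoch)` call: PyTorch's `DistributedSampler` derives its shuffle from `seed + epoch`, so if `set_epoch` is never called, every epoch replays the same index order on every rank. A minimal standalone sketch (passing `num_replicas`/`rank` explicitly avoids needing an initialized process group; the dataset here is just a toy):

```python
import torch
from torch.utils.data import DistributedSampler, TensorDataset

dataset = TensorDataset(torch.arange(8))

# Rank 0 of 2 replicas; explicit args mean no init_process_group is required.
sampler = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True, seed=0)

# Without set_epoch, the shuffle seed never changes, so every "epoch"
# yields the identical index order.
repeated = [list(iter(sampler)) for _ in range(2)]
assert repeated[0] == repeated[1]

# Calling set_epoch before each epoch reseeds the shuffle, so the
# ordering can change from epoch to epoch.
orders = []
for epoch in range(2):
    sampler.set_epoch(epoch)
    orders.append(list(iter(sampler)))
```

This is also why the call site matters: `set_epoch` has to run before the epoch's iterator is created, or the old seed is still in effect.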

> Any idea what could be going on that's making it slower?

Aha, with a bigger batch size, multiple GPUs are faster again. There's a little overhead to synchronizing the...

And here's an example of running it on the latest commit:

```python
from flair.datasets import IMDB
from flair.embeddings import DocumentTFIDFEmbeddings, TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

if...
```

> What to do about forward vs forward_loss

Oh, this can be resolved without a big refactor by _patching_ forward similar to what Fabric does [here](https://github.com/Lightning-AI/pytorch-lightning/blob/8ad3e29816a63d8ce5c00ac104b14729a4176f4f/src/lightning/fabric/wrappers.py#L184).

> make TransformerEmbeddings work...
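To sketch the patching idea: `nn.Module.__call__` looks up `self.forward`, so assigning a different bound method to the *instance* reroutes what a wrapper like DDP invokes, while hooks still fire. The toy model and `patch_forward` helper below are illustrative, not flair's or Fabric's actual classes:

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Illustrative model with a separate loss-computing entry point."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x):
        return self.linear(x)

    def forward_loss(self, x, y):
        return ((self.linear(x) - y) ** 2).mean()

def patch_forward(module: nn.Module, method_name: str):
    """Route module(...) through another method, in the spirit of Fabric's
    wrapper. Returns the original forward so the caller can restore it."""
    original = module.forward
    # The instance attribute shadows the class-level forward.
    module.forward = getattr(module, method_name)
    return original

model = ToyModel()
x, y = torch.randn(2, 4), torch.randn(2, 1)

original = patch_forward(model, "forward_loss")
loss = model(x, y)        # __call__ now dispatches to forward_loss
model.forward = original  # restore the plain forward afterwards
```

The appeal for multi-GPU is that the distributed wrapper only ever calls `forward`, so redirecting it lets the loss computation run inside the wrapper's gradient-synchronization machinery without refactoring every model.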

Done! @helpmefindaname and @HallerPatrick, can you please take another look? This looks good to me. What do you think about merging?

- Plugins - I added a property so that...