
[Tensorboard] Log text prediction in evaluation

Open thomasw21 opened this issue 2 years ago • 14 comments

A very useful tool for understanding model performance beyond the loss: actually showing what the predictions are.

It'd be very useful to be able to "see" the model's output in text form during evaluation. These outputs should be logged to TensorBoard, which supports markdown, so predictions could be rendered in bold.

Maybe we only need to print the first batch, as it should give us a good number of examples.
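As a sketch of what that logging could look like (all names here are illustrative, not existing Meg-DS code): TensorBoard's `add_text` renders its string as markdown, so a small helper can wrap the prediction in bold before writing it.

```python
def format_prediction_markdown(prompt: str, prediction: str) -> str:
    """Render a prompt/prediction pair as one markdown line, prediction in bold."""
    return f"{prompt} **{prediction}**"

# During evaluation, each pair could then be written with SummaryWriter.add_text,
# which interprets the string as markdown (assumes torch with tensorboard support):
#
#   from torch.utils.tensorboard import SummaryWriter
#   writer = SummaryWriter(log_dir="runs/eval")
#   writer.add_text("eval/predictions",
#                   format_prediction_markdown(prompt_text, predicted_text),
#                   global_step=step)
```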

thomasw21 avatar Oct 27 '21 00:10 thomasw21

@TevenLeScao also suggested that we make inference work in Meg-DS, with a very simple greedy search. The motivation is that teacher forcing won't tell us much about the model (it's very similar to validation loss), whereas greedy search will show what the model actually infers.

Personally I don't agree with the statement that teacher forcing won't tell us much, but I do agree that running actual inference in Meg-DS will probably allow us to notice bugs very quickly.
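The greedy search being discussed is small enough to sketch in a few lines. This is a hypothetical, framework-agnostic version, not Meg-DS code: `logits_fn` stands in for a forward pass that returns next-token logits for the sequence so far.

```python
from typing import Callable, List

def greedy_search(logits_fn: Callable[[List[int]], List[float]],
                  prompt: List[int],
                  eos_id: int,
                  max_new_tokens: int) -> List[int]:
    """Extend `prompt` one token at a time, always taking the argmax token."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = logits_fn(tokens)
        next_id = max(range(len(logits)), key=logits.__getitem__)  # argmax
        tokens.append(next_id)
        if next_id == eos_id:  # stop once the end-of-sequence token is produced
            break
    return tokens
```

Unlike teacher forcing, where the model is always conditioned on the gold prefix, this loop feeds the model's own previous predictions back in, so decoding bugs compound and become visible quickly.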

thomasw21 avatar Oct 27 '21 22:10 thomasw21

Hey @thomasw21. Is this still needed? If so I'd love to take it on.

KMFODA avatar Aug 10 '22 07:08 KMFODA

Hey! We have finished training BLOOM, so the tensorboard integration might not be required anymore. However, I think having a generation engine in Meg-DS would be greatly appreciated, as we currently rely on our transformers-converted checkpoint to generate.

thomasw21 avatar Aug 10 '22 07:08 thomasw21

I see, I'd like to help with that then. Where would be the best place for that generation engine?

KMFODA avatar Aug 15 '22 08:08 KMFODA

@KMFODA @thomasw21, https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/328 already adds the ability to benchmark the system, an interactive CLI, and a generation server. I'm testing a few things and will try to get it merged this week.

mayank31398 avatar Aug 15 '22 09:08 mayank31398

> Already added the ability to benchmark system, interactive cli, and a generation server.

IMO this issue is different: we want an inference mechanism in Meg-DS itself, without having to convert to transformers. The context was that we were training in Meg-DS and had no way to "test" the model until we had built the transformers skeleton, converted the weights, and then leveraged transformers' inference mechanisms.

> Where would be the best place for having that generate engine?

I'm not sure what you're asking, probably in this repo?

thomasw21 avatar Aug 16 '22 08:08 thomasw21

> Where would be the best place for having that generate engine? I'm not sure what you're asking, probably in this repo?

Sorry, I'm new to this repo. I meant to ask: where in the repo itself should this generation engine live?

KMFODA avatar Aug 16 '22 16:08 KMFODA

Hmm, @thomasw21, the PR I referred to above uses both the HF accelerate and DS-inference libraries, depending on what we want to infer with. But it does require the transformers version of BLOOM.

mayank31398 avatar Aug 16 '22 16:08 mayank31398

@KMFODA currently, I am planning to create a standalone library. For now, I am adding to this repo itself.

mayank31398 avatar Aug 16 '22 16:08 mayank31398

> Sorry. I'm new to this repo. I meant to ask where in the repo itself should this generate engine live?

I mean you can probably create a megatron/inference folder.

thomasw21 avatar Aug 16 '22 16:08 thomasw21

@thomasw21, I am not sure how this differs from the PR I pointed to above. Can you explain?

mayank31398 avatar Aug 16 '22 17:08 mayank31398

If you don't have the transformers skeleton (i.e. the modeling code), how would one be able to use transformers or DS-inference?

thomasw21 avatar Aug 16 '22 21:08 thomasw21

Oh, I think I understand the issue now. Maybe something like loading from the universal checkpoints and running inference?

mayank31398 avatar Aug 16 '22 22:08 mayank31398

@mayank31398 yup! Essentially this is what this issue is about.

thomasw21 avatar Aug 17 '22 08:08 thomasw21