Thomas Viehmann

227 comments by Thomas Viehmann

The typical thing to do is padding. For this, you'd want to group items with similar length.
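A minimal sketch of that idea (the sequence lengths and batch size here are made up for illustration): sort the items by length so that each batch pads only up to its own longest member, then use PyTorch's `pad_sequence` helper.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Hypothetical batch of variable-length 1-D sequences.
seqs = [torch.randn(n) for n in (5, 12, 7, 12, 5, 9)]

# Sort by length so items of similar length land in the same batch,
# which keeps the amount of padding (and wasted compute) small.
seqs.sort(key=len)

batch_size = 2
for i in range(0, len(seqs), batch_size):
    chunk = seqs[i:i + batch_size]
    # pad_sequence right-pads every sequence to the longest in the chunk.
    batch = pad_sequence(chunk, batch_first=True)  # shape: (B, max_len)
    print(batch.shape)
```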

Hi, oh oh. This should not happen, thank you for reporting! What I would do as a first step is to look at `process_input`, specifically at the end https://github.com/deep-learning-with-pytorch/dlwpt-code/blob/323de27e517c279ae69318d9ea0a7e6f416701ba/p3ch15/request_batching_jit_server.py#L59 Do...

:sweat_smile: So now it's gone or just not as bad?

Heya, thank you for reporting this! We'll need to update the code.

Absolutely, thank you for spotting this and reporting.

Hi. You are absolutely right that there is a problem here, thank you! The issue is that there isn't a 1-1 map between the C++ storage (which would be the...
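The comment is cut off by the aggregator, but the underlying point can be illustrated: several Python tensor objects can view the same underlying storage, so a storage-to-tensor mapping cannot be one-to-one. A minimal sketch (`untyped_storage()` is the accessor in recent PyTorch versions):

```python
import torch

a = torch.arange(6)
b = a.view(2, 3)  # a view: new tensor object, same underlying storage
c = a[2:]         # a slice: also shares the storage, just with an offset

# All three Python tensors point at the same C++ storage buffer,
# so storage -> tensor is not a 1-1 map.
print(a.untyped_storage().data_ptr() == b.untyped_storage().data_ptr())  # True
print(a.untyped_storage().data_ptr() == c.untyped_storage().data_ptr())  # True
```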

Well, the key takeaway we wanted here is that Adam will automatically "equilibrate" the step size across the parameters, while SGD needs you to do this...
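A minimal sketch of what that contrast looks like in code (the model and learning rates here are made up for illustration): Adam rescales each parameter's step by running estimates of its gradient magnitude, so a single learning rate tends to work across parameters of different scale, whereas with plain SGD you would tune per-parameter step sizes yourself, e.g. via parameter groups.

```python
import torch

model = torch.nn.Linear(10, 1)  # hypothetical model

# Adam divides each step by a running estimate of the gradient's
# magnitude per parameter, "equilibrating" step sizes automatically.
opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3)

# With SGD, the equivalent equilibration is manual: give each
# parameter (group) its own learning rate.
opt_sgd = torch.optim.SGD(
    [
        {"params": [model.weight], "lr": 1e-2},
        {"params": [model.bias], "lr": 1e-1},
    ]
)
```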

Heya, yeah. Between the three of us, we've had mixed feelings about the caching, too, and the main reason to have it is to provide the perspective that such a thing...
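For readers without the book at hand, a minimal in-memory illustration of the general idea (the code under discussion caches to disk, but the trade-off is the same; `load_and_preprocess` is a hypothetical stand-in for the expensive preprocessing):

```python
import functools

@functools.lru_cache(maxsize=None)
def load_and_preprocess(sample_id):  # hypothetical helper
    # Expensive work runs once per sample_id, then is served from cache.
    print(f"expensive work for {sample_id}")
    return sample_id * 2  # stand-in for the real preprocessing result

load_and_preprocess(3)  # does the work
load_and_preprocess(3)  # returned from the cache, no recomputation
```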

And thank you for your very thorough and critical reading and for sharing your comments. It is very interesting and, indeed, helpful.

Thank you for the pointer. Note that we're at the other end of the spectrum here with embedding. If one absolutely wanted to try to get something ordered there, one...