Luca Antiga
I’m 100% for failing loudly if it’s not a beaten path (and this one looks like it’s not)
Interestingly, we already have the concept of `endpoint` with their own `path` (and `method`) behind the scenes, which we currently use to expose `/v1/chat/completion` for the OpenAI spec. The only...
@kopalja thank you for the investigation and the reproduction
This happens because automatic mixed precision in PyTorch is explicitly designed to work this way, in order to avoid issues with `nan` gradients: https://github.com/pytorch/pytorch/blob/51b7528e274d350c1d5091acc40572d6b43879b8/torch/amp/grad_scaler.py#L99. Here is the equivalent raw...
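To illustrate the behavior, here is a plain-Python emulation of `GradScaler`'s step-skipping logic (the names and the simplified scale update are illustrative, not PyTorch's internals; the real scaler only grows the scale after a `growth_interval` of consecutive successful steps):

```python
import math

def amp_step(grads, scale, growth_factor=2.0, backoff_factor=0.5):
    """Unscale gradients; if any is inf/nan, skip the optimizer step
    and back off the loss scale, mimicking GradScaler's design."""
    unscaled = [g / scale for g in grads]
    found_inf = any(math.isinf(g) or math.isnan(g) for g in unscaled)
    if found_inf:
        # optimizer.step() is skipped entirely for this iteration
        return None, scale * backoff_factor
    # step is taken; the real GradScaler grows the scale only periodically
    return unscaled, scale * growth_factor
```

So a `nan` in the gradients does not crash training; the step is silently skipped and the scale shrinks, which is exactly why the iteration count can diverge from what a naive reproduction expects.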
hey @Usama3059, great contribution! I was actually thinking about options to achieve the same effect (overlapping preprocessing and GPU compute). What I had in mind originally was to have...
So here's my proposal to clarify the API:
- we add an optional `preprocess` hook to `LitAPI` (optional in the same way `batch` is), with the same signature as `predict`
- we...
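A minimal sketch of how the proposed hook could look from the user's side. The `LitAPI` base class here is a stand-in for illustration only, not the actual LitServe implementation; `preprocess` defaults to the identity so existing APIs keep working unchanged:

```python
class LitAPI:
    """Stand-in base class sketching the proposed optional hook."""

    def setup(self, device):
        # one-time initialization (load model, move to device, ...)
        pass

    def preprocess(self, x):
        # optional, like `batch`; same signature style as `predict`.
        # Default is a no-op so existing subclasses are unaffected.
        return x

    def predict(self, x):
        raise NotImplementedError


class MyAPI(LitAPI):
    def setup(self, device):
        self.factor = 2  # placeholder for loading a real model

    def preprocess(self, x):
        # CPU-side work that could overlap with GPU compute
        return x + 1

    def predict(self, x):
        return x * self.factor
```

The point of the separate hook is that the server can run `preprocess` for the next request while `predict` for the previous one is still occupying the GPU.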
> @lantiga I think the proposed API is a great idea for initial simplicity at this stage, and I also agree with `LitAPI.setup`. Just one question, since preprocessing is a...
The problem is the following:
```
WARNING: Ignoring version 1.5.10 of pytorch-lightning since it has invalid metadata: Requested pytorch-lightning==1.5.10 from https://files.pythonhosted.org/packages/18/f1/f59b307f75db1886c96e396eec878501510677394868680b8d2b8b58c47c/pytorch_lightning-1.5.10-py3-none-any.whl has invalid metadata: .* suffix can only be used...
```
Great catch @wlsdnen, do you want to send a PR or should I do that?
Thank you @ringohoffman for the contribution!