Results: 81 comments of Harsha

I recommend using `accelerate`. For more information, please refer to https://huggingface.co/docs/accelerate/usage_guides/memory or https://towardsdatascience.com/a-batch-too-large-finding-the-batch-size-that-fits-on-gpus-aef70902a9f1. I personally prefer the former, as I find it much cleaner.

Unfortunately, `accelerate` only supports PyTorch. You will probably have to wait until TensorFlow is supported.

Unfortunately, TensorFlow doesn't have decorators/functions to auto-scale the batch size the way Lightning/Accelerate do for PyTorch. However, [here's](https://github.com/neuronets/nobrainer_training_scripts/blob/f0e5b9f41b9db1b07697487fb79d0c0d243cfb30/1.2.0/test_train.py#L213-L248) a naive example of accomplishing this. @satra let me know your thoughts about...
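The naive approach above can be sketched roughly as follows: try a batch size, and halve it whenever training runs out of memory. With TensorFlow you would catch `tf.errors.ResourceExhaustedError`; here a generic `RuntimeError` stands in so the sketch runs anywhere, and `train_one_epoch` plus the 64-sample memory limit are hypothetical placeholders, not the linked script's actual code.

```python
def train_one_epoch(batch_size):
    # Stand-in for model.fit(...); pretend anything above 64 samples OOMs.
    if batch_size > 64:
        raise RuntimeError("OOM")  # tf.errors.ResourceExhaustedError in practice
    return batch_size


def find_executable_batch_size(starting_batch_size=512):
    """Halve the batch size on OOM until training succeeds."""
    batch_size = starting_batch_size
    while batch_size >= 1:
        try:
            train_one_epoch(batch_size)
            return batch_size  # first size that fits in memory
        except RuntimeError:
            batch_size //= 2   # halve and retry
    raise RuntimeError("No batch size fits in memory")


print(find_executable_batch_size())  # → 64
```

This mirrors the idea behind `accelerate`'s `find_executable_batch_size` decorator, just written out by hand for a framework that lacks one.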

Hello! Any chance this PR will be merged to enable running the EC2 self-hosted runner as non-root?

> @machulav @hvgazula we've been using this branch for the past eight months at the Autoware Foundation without an issue, but I'd prefer it to get merged instead of using...

You can simply comment out that line (and other related lines) and continue.

Hello, did you see this https://github.com/google-research/scenic/issues/547?

@sargun-nagpal Did you notice the `*=` (in-place multiply-and-assign) next to `neg_cost_loss` as well as `pos_cost_loss`?
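A small illustration of why that `*=` matters: it multiplies the existing value in place rather than overwriting it, so the loss term retains its original magnitude scaled by the factor. The variable names mirror the discussion above; the numbers are made up.

```python
neg_cost_loss = 2.0
pos_cost_loss = 3.0
weight = 0.5

neg_cost_loss *= weight   # same as neg_cost_loss = neg_cost_loss * weight
pos_cost_loss = weight    # plain `=` discards the original loss value

print(neg_cost_loss)  # → 1.0 (original 2.0, scaled)
print(pos_cost_loss)  # → 0.5 (original 3.0 was lost)
```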

Hello! Sorry for being unclear earlier. In fact, you derived the answer yourself 😉. All you need to tell yourself is: in the equation from the article, `t` is...