Prakhar Kulshreshtha

Results: 9 comments of Prakhar Kulshreshtha

Hi @jackieylogan, the refinement process currently doesn't support running on CPU. It requires multiple forward-backward passes, which would be very slow without a GPU. So you can run with `python3 bin/predict.py refine=False...

Yes, the refinement process is time- and memory-intensive. It occupies around 24 GB of VRAM until all the iterations are completed. That's because we aren't just doing inference, but rather multiple...
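
To make the "multiple forward-backward passes" point concrete, here is a minimal sketch of the idea, assuming a generic encoder/decoder split and an L1 consistency loss (illustrative assumptions, not the exact implementation): the optimization variable is an intermediate feature tensor, not the weights, and gradients flow through the decoder on every iteration, which is what keeps so much memory resident.

```python
import torch
import torch.nn.functional as F

def refine_features(encoder, decoder, image, mask, lowres_pred,
                    n_iters=15, lr=0.002):
    # Freeze the weights -- only the intermediate features are optimized.
    for p in decoder.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        feats = encoder(torch.cat([image * (1 - mask), mask], dim=1))
    feats = feats.detach().requires_grad_(True)      # the optimization variable
    opt = torch.optim.Adam([feats], lr=lr)

    for _ in range(n_iters):
        opt.zero_grad()
        pred = decoder(feats)                        # forward pass
        down = F.interpolate(pred, size=lowres_pred.shape[-2:],
                             mode='bilinear', align_corners=False)
        loss = F.l1_loss(down, lowres_pred)          # match the trusted low-res result
        loss.backward()                              # backward pass through the decoder
        opt.step()

    with torch.no_grad():
        return decoder(feats)
```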

> Just found out that the instance segmentation is supported since [version 0.1.41](https://github.com/mmatl/pyrender/releases/tag/0.1.41).
>
> You can also find an example in the [example.py](https://github.com/mmatl/pyrender/blob/master/examples/example.py#L146-L154)
>
> TL;DR:
>
> ```python...
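
For anyone landing on this later, a self-contained version of that pattern looks roughly like the sketch below. It assumes a working offscreen backend (EGL or a display), and the scene contents and node-to-color mapping are my own illustrative choices:

```python
import numpy as np
import trimesh
import pyrender
from pyrender import RenderFlags

# Two meshes, each added as its own node so they get distinct instance colors.
scene = pyrender.Scene()
box_node = scene.add(pyrender.Mesh.from_trimesh(trimesh.creation.box()))
sphere_pose = np.eye(4)
sphere_pose[0, 3] = 2.0
sphere_node = scene.add(
    pyrender.Mesh.from_trimesh(trimesh.creation.icosphere()), pose=sphere_pose)
camera_pose = trimesh.transformations.translation_matrix([1.0, 0.0, 5.0])
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=camera_pose)

# RenderFlags.SEG paints each node in the flat color you map it to,
# yielding a per-instance segmentation image instead of a shaded render.
renderer = pyrender.OffscreenRenderer(640, 480)
seg_node_map = {box_node: (255, 0, 0), sphere_node: (0, 255, 0)}
seg, _ = renderer.render(scene, RenderFlags.SEG, seg_node_map)
renderer.delete()
```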

Sure, any instance with a total GPU VRAM >= 24 GiB (i.e. `GPU Mem (GiB)` >= 24 in the table here: https://aws.amazon.com/ec2/instance-types/) should work. Some of the ones that should work: `p3.8xlarge`, `p2.8xlarge`, `g5.xlarge`, etc.
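
If you want to verify how much total GPU memory an instance actually exposes, a quick check from Python (plain PyTorch, nothing project-specific):

```python
import torch

# Sum the memory of every visible GPU; refinement wants >= 24 GiB in total.
total_gib = sum(
    torch.cuda.get_device_properties(i).total_memory
    for i in range(torch.cuda.device_count())
) / 2**30
print(f"Total GPU memory: {total_gib:.1f} GiB")
```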

Hi @hamzanaeem1999,
1. Yes, the VRAM is GPU RAM / GPU memory.
2. The refinement step utilizes multiple GPUs to get a total memory > 24 GB (see the sketch after this list). If you can get an...
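
A rough illustration of splitting a model across two GPUs in PyTorch, so that no single card needs to hold everything (the two-stage split and names are illustrative assumptions, not the project's exact layout):

```python
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Split the network across two GPUs so each card only has to hold
    part of the weights and activations."""
    def __init__(self, stage1: nn.Module, stage2: nn.Module):
        super().__init__()
        self.stage1 = stage1.to('cuda:0')
        self.stage2 = stage2.to('cuda:1')

    def forward(self, x):
        x = self.stage1(x.to('cuda:0'))
        return self.stage2(x.to('cuda:1'))   # hand activations to the second GPU
```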

@MacM2Pro @leayz-888 Hello, looking into the problem; it should be an easy fix. @mldemox the weights for the lama-refine models are the same as the original models. The refinement step optimizes the...
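
A quick way to convince yourself of this: snapshot the weights around a refined prediction and compare. `refine_fn` below is a hypothetical stand-in for whatever refinement entry point you call:

```python
import torch

def check_weights_unchanged(model: torch.nn.Module, refine_fn, batch):
    """Run a refinement step and verify the model weights were not modified."""
    before = {k: v.clone() for k, v in model.state_dict().items()}
    result = refine_fn(model, batch)     # refinement optimizes features, not weights
    after = model.state_dict()
    assert all(torch.equal(before[k], after[k]) for k in before)
    return result
```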

Hey, thanks for raising this issue. Will be adding a fix soon!

Hey, can you post the complete stacktrace?

Hey, thanks for raising the issue. I'm looking into it and will release a fix soon!