Jhin

Results 20 comments of Jhin

> Hello,
>
> In PyTorch ONNX export, this op has been unsupported [for a long time](https://github.com/pytorch/pytorch/issues/27212). However, you can simulate it with a combination of other ops[, as shown here.](https://github.com/open-mmlab/mmcv/pull/953/files) Here is an example of how I did it:
>
> ```
> if not torch.onnx.is_in_onnx_export():
>     batch_I_r = F.grid_sample(batch_I, build_P_prime_reshape, padding_mode='border', align_corners=True)
> else:
>     # workaround for export to...
> ```
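The workaround quoted above swaps `F.grid_sample` for a hand-built sampler during export. Below is a minimal sketch of such a sampler built only from basic ops the exporter supports (gather, clamp, arithmetic), in the spirit of the mmcv `bilinear_grid_sample` linked in the quote; it assumes `padding_mode='border'`, `align_corners` handling as shown, and a 4-D input:

```python
import torch
import torch.nn.functional as F

def bilinear_grid_sample(im, grid, align_corners=True):
    """Bilinear sampling from basic ops so it can be traced to ONNX.
    im: (N, C, H, W); grid: (N, Hg, Wg, 2) with coords in [-1, 1].
    Mimics F.grid_sample with padding_mode='border'."""
    n, c, h, w = im.shape
    _, gh, gw, _ = grid.shape
    x = grid[..., 0].reshape(n, -1)
    y = grid[..., 1].reshape(n, -1)
    # Map normalized [-1, 1] coords to pixel coordinates.
    if align_corners:
        x = (x + 1) / 2 * (w - 1)
        y = (y + 1) / 2 * (h - 1)
    else:
        x = ((x + 1) * w - 1) / 2
        y = ((y + 1) * h - 1) / 2
    x0, y0 = torch.floor(x), torch.floor(y)
    x1, y1 = x0 + 1, y0 + 1
    # Bilinear interpolation weights for the four corner pixels.
    wa = (x1 - x) * (y1 - y)
    wb = (x1 - x) * (y - y0)
    wc = (x - x0) * (y1 - y)
    wd = (x - x0) * (y - y0)
    # 'border' padding: clamp corner indices into the image.
    x0c, x1c = x0.clamp(0, w - 1).long(), x1.clamp(0, w - 1).long()
    y0c, y1c = y0.clamp(0, h - 1).long(), y1.clamp(0, h - 1).long()
    im_flat = im.reshape(n, c, h * w)

    def gather(yi, xi):
        idx = (yi * w + xi).unsqueeze(1).expand(-1, c, -1)
        return torch.gather(im_flat, 2, idx)

    out = (gather(y0c, x0c) * wa.unsqueeze(1) +
           gather(y1c, x0c) * wb.unsqueeze(1) +
           gather(y0c, x1c) * wc.unsqueeze(1) +
           gather(y1c, x1c) * wd.unsqueeze(1))
    return out.reshape(n, c, gh, gw)
```

At export time the model would call this function on the `else` branch; during normal execution it keeps the native `F.grid_sample`.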

> the tool is public now: https://github.com/NVIDIA-AI-IOT/tensorrt_plugin_generator

Thanks, I'll give it a try.

> this happens when converting models to a different format. .ckpt is different from diffusers format, where the whole model is a group of folders separated into their components. I...

Thank you very much for your reply @patrickvonplaten. This is how I ran the original model:

```shell
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img ./sketch-mountains-input.jpg --strength...
```

@patrickvonplaten I have compared the output of each step of the code and found that it becomes inconsistent after `prepare_latents`. I haven't found the reason for this yet; I...
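A step-by-step comparison like the one described can be automated with a small helper: capture named intermediate outputs from both runs and report the first step at which they diverge. This is a hedged sketch; the `first_divergence` name and the (step name, flat list of floats) capture format are assumptions for illustration:

```python
def first_divergence(run_a, run_b, atol=1e-5):
    """run_a / run_b: ordered lists of (step_name, values) captured from
    two executions, where values is a flat list of floats.
    Returns the name of the first step whose outputs differ, or None."""
    for (name, a), (_, b) in zip(run_a, run_b):
        if len(a) != len(b) or any(abs(x - y) > atol for x, y in zip(a, b)):
            return name
    return None
```

Running it on checkpoints taken before and after `prepare_latents` would localize the divergence to that step.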

@patrickvonplaten Hi, is there a problem with my usage?

@agentzh OK, I have already solved the problem. But if I shorten the sampling time, could the result be inaccurate because the sampling time is too short...

@agentzh Thank you, I have resolved this question; I had installed a wrong package. But now there is another question: when I use this tool, I always get a wrong flame graph. When it show...

> Hi @dingjingzhen, thanks for supporting LLMLingua. Could you provide more details about how you are using it and your environment? > > The LLMLingua series relies on a smaller...

> Hi @xvyaward, thanks for your interest and the very detailed description. > > 1. Could you please share more information on how you use the mistral model for inference?...