lennartmoritz

8 comments by lennartmoritz

Hey @GiorgosBetsos, I'm facing the same issue. I wanted to speed up inference in my application with the new TensorRT format. But in the benchmarks the improvements between the PyTorch...

Thank you for the hint. I've created a workaround for the benchmark, but I don't have the time right now to look into dynamic batch sizes and add correct "fixed...

@glenn-jocher yes, the batched inference speed has improved notably compared to the earlier benchmarks. I'll add benchmarks with the RTX 2060 below. I agree that a dynamic TRT export should...
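To illustrate why batched inference tends to be faster than feeding samples one at a time, here is a minimal NumPy sketch (not the actual TensorRT benchmark — just a toy linear layer standing in for a model, with names of my own choosing). One matrix-matrix product replaces a loop of matrix-vector products, which is what batching buys you on real hardware too:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512)).astype(np.float32)   # toy "model"
samples = rng.standard_normal((64, 512)).astype(np.float32)    # batch of inputs

# Per-sample "inference": one matrix-vector product per input.
t0 = time.perf_counter()
per_sample = np.stack([x @ weights for x in samples])
t_single = time.perf_counter() - t0

# Batched "inference": a single matrix-matrix product for the whole batch.
t0 = time.perf_counter()
batched = samples @ weights
t_batched = time.perf_counter() - t0

# Both paths compute the same result; only the throughput differs.
assert np.allclose(per_sample, batched, atol=1e-4)
print(f"per-sample: {t_single:.5f}s, batched: {t_batched:.5f}s")
```

On a GPU with TensorRT the gap is typically much larger than in this CPU toy, since batching also amortizes kernel-launch overhead.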

Ah great, thank you so much. I think I found the culprit lines of code: https://github.com/eezstreet/SWATEliteForce/blob/8b45d483b96a143ae033b572170c8e953134aaee/System/AI.ini#L895-L901 I will tinker with these values and see if that helps.

Hey @e1four15f, thank you for your code example. In the meantime, I wrote a similar script to yours based on the inference example script from the repo. But I've...

You likely can't just use the embeddings with an arbitrarily trained LLM. The idea of LanguageBind is to create a custom set of embeddings that is aligned to...

They use x-language training pairs, where x denotes any of the supported modalities. So, for example, video-language, audio-language, depth-language, etc. pairs are all used during training.
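The alignment over such x-language pairs is typically done with a CLIP-style symmetric contrastive (InfoNCE) objective: in a batch, each modality embedding should be most similar to its own caption embedding and dissimilar to all others. A minimal NumPy sketch of that loss (function and variable names are mine, not from the LanguageBind code):

```python
import numpy as np

def info_nce(modality_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (modality, language) pairs.

    Row i of both matrices is assumed to come from the same training pair.
    """
    # L2-normalize so the dot product is a cosine similarity.
    m = modality_emb / np.linalg.norm(modality_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (m @ t.T) / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(logits))       # the matching pair sits on the diagonal

    def xent(lg):
        # Numerically stable log-softmax cross-entropy against the diagonal.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average both directions: modality->language and language->modality.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
video_emb = rng.standard_normal((8, 32))   # stand-in for video encoder output
text_emb = rng.standard_normal((8, 32))    # stand-in for language encoder output
loss = info_nce(video_emb, text_emb)
print(loss)
```

Because language appears in every pair, all the other modalities end up embedded in a shared space anchored by the language encoder.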

Execute the `inference.py` script and look in your `cache_dir`. A folder named `models--LanguageBind--LanguageBind_Image` will be created there. Is that what you are looking for?
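That folder name follows the Hugging Face Hub cache layout, which maps a repo id to `<type>s--<org>--<name>` by replacing `/` with `--`. A tiny sketch reproducing the scheme (the helper name is mine, not a library API):

```python
def hf_cache_folder(repo_id: str, repo_type: str = "model") -> str:
    """Reproduce the Hugging Face Hub cache folder naming scheme:
    '<type>s--<org>--<name>', with '/' in the repo id replaced by '--'."""
    return f"{repo_type}s--" + repo_id.replace("/", "--")

print(hf_cache_folder("LanguageBind/LanguageBind_Image"))
# -> models--LanguageBind--LanguageBind_Image
```

So any model downloaded via `from_pretrained` with that `cache_dir` lands in a predictably named subfolder.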