Jeevithan Alagurajah
I wonder whether normalizing the data has any effect? I don't see that you normalized your data.
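In case it helps frame the question, here is a minimal sketch of z-score normalization, assuming a NumPy feature matrix; the names and values are illustrative, not from the original post.

```python
# Minimal z-score normalization sketch; X is a placeholder feature matrix.
import numpy as np

def normalize(X: np.ndarray) -> np.ndarray:
    """Scale each column to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / np.where(std == 0, 1.0, std)  # guard constant columns

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
print(normalize(X))
```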
How can I train this model with my own dataset?
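Since the thread does not say which model is meant, a generic starting point might look like the sketch below: a plain PyTorch training loop over a custom Dataset. The model, data shapes, and hyperparameters are all placeholders.

```python
# A minimal custom-dataset training loop; everything here is a stand-in
# for the actual model and data from the original question.
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, inputs, targets):
        self.inputs, self.targets = inputs, targets
    def __len__(self):
        return len(self.inputs)
    def __getitem__(self, idx):
        return self.inputs[idx], self.targets[idx]

model = torch.nn.Linear(10, 2)  # placeholder for the actual model
loader = DataLoader(
    MyDataset(torch.randn(100, 10), torch.randint(0, 2, (100,))),
    batch_size=16, shuffle=True,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```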
Building the engine inside the Docker container, I ran into an insufficient-memory issue: `python ../llama/build.py --model_dir ./Mixtral-8x7B-instruct-v0.1 --use_inflight_batching --use_gpt_attention_plugin float16 --enable_context_fmha --use_gemm_plugin float16 --world_size 8 --tp_size 8 --output_dir ./trt_engines/mixtral/TP --parallel_build...`
Dynamic LoRA (Low-Rank Adaptation) switching functionality, allowing users to change LoRA adapters on the fly during inference without reloading the entire model.
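As a sketch of the requested behavior, this is roughly how adapter switching looks today with Hugging Face PEFT on a single model instance; the repo IDs and adapter names are hypothetical, and the request is presumably for the equivalent inside the inference engine itself.

```python
# On-the-fly LoRA switching via Hugging Face PEFT; "base-model-id" and
# the adapter repo IDs are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-id", torch_dtype=torch.float16)

# Load one adapter, then attach a second one to the same base weights.
model = PeftModel.from_pretrained(base, "user/lora-adapter-a", adapter_name="a")
model.load_adapter("user/lora-adapter-b", adapter_name="b")

# Switch adapters between requests without reloading the base model.
model.set_adapter("a")  # serve a request with adapter "a"
model.set_adapter("b")  # serve the next request with adapter "b"
```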
### Model description

OpenAI Whisper model, e.g. medium.en

### Open source status

- [x] The model implementation is available
- [x] The model weights are available

### Provide useful links...
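For reference, a minimal sketch of loading the requested checkpoint via the openai-whisper package; whether that package or a ported implementation is the intended target here is an assumption.

```python
# Load the requested Whisper checkpoint with the reference openai-whisper
# package (pip install openai-whisper); "audio.wav" is a placeholder path.
import whisper

model = whisper.load_model("medium.en")
result = model.transcribe("audio.wav")
print(result["text"])
```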
## Bug Description

https://github.com/pytorch/TensorRT/blob/main/examples/dynamo/mutable_torchtrt_module_example.py

I wasn't able to run the file above. I attempted to build the package from source but encountered an issue during the installation of torch version...
Please refer to this implementation: https://github.com/S-LoRA/S-LoRA
Could you please add support for multiple LoRA adapters for Whisper models?
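For context, attaching several LoRA adapters to one Whisper base model already works with PEFT in a single process, as in the sketch below (the adapter repo IDs are hypothetical); the request is presumably for S-LoRA-style serving, where many adapters are batched over one set of base weights.

```python
# Sketch: multiple LoRA adapters on one Whisper base model via PEFT.
# The adapter repo IDs below are hypothetical placeholders.
from transformers import WhisperForConditionalGeneration
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")
model = PeftModel.from_pretrained(base, "user/whisper-lora-domain-a", adapter_name="domain_a")
model.load_adapter("user/whisper-lora-domain-b", adapter_name="domain_b")

model.set_adapter("domain_a")  # route the next request through adapter A
model.set_adapter("domain_b")  # then switch to adapter B
```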
When I ran decoding on my own dataset, it threw an error:

```python
for tool in tools:
    shuf = tool.decode(docs)
```

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      1 for tool...
```
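A guard like the sketch below can narrow down which element of `tools` lacks `decode()`; the dummy classes stand in for whatever the original list actually holds.

```python
# Identify which tool raises the AttributeError by checking for decode()
# before calling it. GoodTool/BadTool are stand-ins for the real objects.
class GoodTool:
    def decode(self, docs):
        return list(reversed(docs))

class BadTool:  # simulates the object that raised AttributeError
    pass

tools = [GoodTool(), BadTool()]
docs = ["a", "b", "c"]

for tool in tools:
    if not hasattr(tool, "decode"):
        print(f"{type(tool).__name__} has no decode(); check how it was built")
        continue
    shuf = tool.decode(docs)
    print(shuf)
```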
## Bug Description

https://github.com/pytorch/TensorRT/blob/main/examples/dynamo/mutable_torchtrt_module_example.py

I replaced the diffusion model with a Hugging Face Whisper model.

## To Reproduce

```python
import numpy as np
import torch
import torch_tensorrt as torch_trt
from transformers import ...
```
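For what it's worth, a sketch of what the truncated repro might look like follows; the compile settings, input shapes, and exact call pattern are assumptions on my part, and whether this compiles cleanly is exactly what the issue is about.

```python
# Hypothetical reconstruction of the repro: wrap a Hugging Face Whisper
# model in torch_tensorrt's MutableTorchTensorRTModule. Settings and
# shapes are assumptions, not the reporter's exact values.
import torch
import torch_tensorrt as torch_trt
from transformers import WhisperForConditionalGeneration

model = (WhisperForConditionalGeneration
         .from_pretrained("openai/whisper-medium.en")
         .eval()
         .to("cuda"))

settings = {"enabled_precisions": {torch.float16}}
mutable = torch_trt.MutableTorchTensorRTModule(model, **settings)

# Whisper expects (batch, 80 mel bins, 3000 frames) log-mel features,
# plus decoder input ids seeded with the decoder start token.
feats = torch.randn(1, 80, 3000, device="cuda")
dec_ids = torch.tensor([[model.config.decoder_start_token_id]], device="cuda")
out = mutable(input_features=feats, decoder_input_ids=dec_ids)
```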