Niranjan Yadla
@gante @hollance I have added something like the code below and it is giving a segmentation fault. Could you please help me with this? "converter.representative_dataset = representative_dataset" and def representative_dataset(): for x in...
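For context, a segmentation fault during post-training int8 calibration is often caused by the representative dataset yielding tensors of the wrong dtype or shape. A minimal sketch of a well-formed generator, assuming Whisper-base log-mel inputs of shape (1, 80, 3000) (that shape is an assumption; check the actual model's input signature):

```python
import numpy as np

# Assumed input shape for Whisper-base: (batch, n_mels, frames) = (1, 80, 3000).
N_MELS, N_FRAMES = 80, 3000

def representative_dataset(num_samples=8):
    """Yield calibration inputs as a list of float32 arrays, one per model input."""
    for _ in range(num_samples):
        # In practice, replace the random data with real log-mel features
        # computed from calibration audio (e.g. via WhisperFeatureExtractor).
        features = np.random.rand(1, N_MELS, N_FRAMES).astype(np.float32)
        yield [features]

# Note: the converter expects the function itself, not its return value:
# converter.representative_dataset = representative_dataset
```

Yielding float64 arrays (numpy's default) instead of float32, or omitting the batch dimension, are common causes of crashes at this step.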
@hollance @gante I was able to convert the Hugging Face Whisper ONNX model to a TFLite (int8) model; however, I am not sure how to run inference on this model. Could you please review...
@gante Is it possible to shorten the input audio spectrograms from 30 seconds to 10 seconds in order to use them as input for a Hugging Face Whisper TensorFlow model? on...
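One hedged way to frame this: Whisper's mel spectrogram uses a 10 ms hop, so 30 s corresponds to 3000 frames and 10 s to 1000. A toy sketch that trims or zero-pads a log-mel spectrogram to a target duration (note that the pretrained encoder's positional embeddings assume 3000 frames, so they would also need slicing for a shorter input to work):

```python
import numpy as np

HOP_SECONDS = 0.01  # Whisper's mel spectrogram hop is 10 ms: 30 s -> 3000 frames

def retarget_chunk(mel, target_seconds=10):
    """Trim or zero-pad a (n_mels, frames) log-mel spectrogram to target_seconds."""
    target_frames = int(target_seconds / HOP_SECONDS)
    n_mels, frames = mel.shape
    if frames >= target_frames:
        return mel[:, :target_frames]          # trim a longer spectrogram
    padded = np.zeros((n_mels, target_frames), dtype=mel.dtype)
    padded[:, :frames] = mel                   # zero-pad a shorter one
    return padded

mel_30s = np.zeros((80, 3000), dtype=np.float32)
mel_10s = retarget_chunk(mel_30s, target_seconds=10)  # shape (80, 1000)
```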
@gante I separated the encoder and decoder TFLite models; however, while running inference on the decoder I only get a single output. Could you please review the notebook and let me know...
I was able to successfully separate the encoder and decoder Whisper TFLite models in the following notebook, and they are working correctly: https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_encoder_decoder_tflite.ipynb Posting here to help HF users...
@sanchit-gandhi how do I get a transcript from the script below?
```
import torch
from transformers import AutoFeatureExtractor, WhisperModel
from datasets import load_dataset

model = WhisperModel.from_pretrained("openai/whisper-base")
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")
ds =...
```
@gante I am attempting to divide the TFWhisperModel into an encoder and a decoder, but the code I have is producing an error. Can you assist me in resolving this...
@sanchit-gandhi is it possible to generate a transcript using TFWhisperModel instead of WhisperForConditionalGeneration?
@sanchit-gandhi Is it possible to directly map the decoder hidden states to logits without using the language modeling head? I am focusing on using only TFWhisperModel because it can be...
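On the tied-weights point behind this question: Whisper's language modeling head shares its weights with the decoder's token embedding matrix, so the final projection can be reproduced by multiplying the decoder hidden states by the transposed embedding matrix. A toy numpy sketch with made-up dimensions (whisper-base actually uses d_model = 512 and a roughly 51k-token vocabulary):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size = 8, 100  # toy dimensions for illustration only

# Stand-in for the decoder's token embedding matrix, shape (vocab_size, d_model).
embed_tokens = rng.standard_normal((vocab_size, d_model)).astype(np.float32)

def hidden_to_logits(hidden_states, embedding_matrix):
    """Project decoder hidden states onto the vocabulary via the tied embeddings."""
    return hidden_states @ embedding_matrix.T

hidden = rng.standard_normal((1, 4, d_model)).astype(np.float32)  # (batch, seq, d_model)
logits = hidden_to_logits(hidden, embed_tokens)                   # (batch, seq, vocab)
next_token_ids = logits.argmax(-1)                                # greedy pick per step
```

Because the weights are tied, this matmul over the real embedding matrix should reproduce what the language modeling head computes, up to any final layer norm applied before the projection.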