xinghua-qu
Hi, is there any example code showing how to load the checkpoint (https://github.com/google/dopamine/tree/master/docs#downloads) so it can be used independently? For instance, I use OpenAI Gym to create an environment and utilize...
Hi, I see the environment is the NoFrameskip-v4 version. However, after running the code, I found the observation size is 84*84*1. This differs from the 84*84*4 shape that usually...
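The usual 84*84*4 input comes from stacking the last four single-channel frames. A minimal sketch of that stacking step (class and names here are illustrative, not the library's actual wrapper):

```python
from collections import deque

import numpy as np


class FrameStack:
    """Illustrative frame stacker: keeps the last k single-channel
    84x84 observations and concatenates them along the channel axis,
    turning an 84x84x1 observation into the familiar 84x84x4 input."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, obs):
        # On reset, fill the buffer with copies of the first frame.
        for _ in range(self.k):
            self.frames.append(obs)
        return self._stacked()

    def step(self, obs):
        self.frames.append(obs)
        return self._stacked()

    def _stacked(self):
        # k frames of shape (84, 84, 1) -> one array of shape (84, 84, k)
        return np.concatenate(list(self.frames), axis=-1)


stack = FrameStack(k=4)
frame = np.zeros((84, 84, 1), dtype=np.uint8)
print(stack.reset(frame).shape)  # (84, 84, 4)
```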
```python
# augment inputs with noise
inputs = inputs + torch.randn_like(inputs, device='cuda') * noise_sd
```
Line 109 in `train.py`. If there is no `torch.clamp()` after line 109, it is possible...
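If the inputs are images normalized to [0, 1], Gaussian noise can push pixel values outside that range. A minimal NumPy sketch of clamping after augmentation (array names and shapes are illustrative; in torch the same fix would be a `torch.clamp(noisy, 0.0, 1.0)` call):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative batch of images normalized to [0, 1].
inputs = rng.random((2, 3, 8, 8))
noise_sd = 0.5

# Augment with Gaussian noise, then clamp back into the valid range.
noisy = inputs + rng.normal(0.0, noise_sd, size=inputs.shape)
clamped = np.clip(noisy, 0.0, 1.0)

print(clamped.min() >= 0.0 and clamped.max() <= 1.0)  # True
```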
Can you please explain how to pass an initialization (e.g., `x0`) to the optimization?
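In SciPy-style APIs the starting point is supplied via an `x0` argument; a hedged sketch, assuming the optimizer in question follows that convention (the objective here is a toy example):

```python
import numpy as np
from scipy.optimize import minimize


# Toy quadratic objective with its minimum at (1, 2).
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2


# The initialization is passed through the x0 parameter.
x0 = np.array([5.0, -3.0])
result = minimize(objective, x0=x0, method="BFGS")

print(np.round(result.x, 3))  # close to [1., 2.]
```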
In the paper, the segment size (the window length) for sliced audio reconstruction is stated as 32, yet in the code I saw the default `segment_size=4` in the definition `commons.rand_slice_segments(x, x_lengths=None, segment_size=4):`...
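A default in the signature can be overridden at the call site, so the effective window length may come from the training config rather than from `segment_size=4`. An illustrative NumPy re-sketch of what such a random-slicing helper typically does (shapes and names are assumptions, not the repository's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)


def rand_slice_segments(x, segment_size=4):
    """For each sequence in the batch, pick a random start offset and
    return a window of `segment_size` frames plus the chosen offsets.

    x: array of shape (batch, channels, time)
    """
    batch, _, t = x.shape
    starts = rng.integers(0, t - segment_size + 1, size=batch)
    segments = np.stack(
        [x[i, :, s:s + segment_size] for i, s in enumerate(starts)]
    )
    return segments, starts


x = rng.random((2, 80, 32))
segments, ids = rand_slice_segments(x, segment_size=4)
print(segments.shape)  # (2, 80, 4)
```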
For some reason, I hit the error below. `from util.audio import audiofile_to_input_vector ModuleNotFoundError: No module named 'util'` Where is the `util` module that is being imported? I could not find it.
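A `ModuleNotFoundError` for a repo-local package like `util` usually means the script was launched from outside the repository root, so the directory containing `util/` is not on the import path. A stdlib sketch of the common workaround (the path here is illustrative; point it at the actual checkout):

```python
import os
import sys

# Hypothetical location of the repository checkout that contains util/.
repo_root = os.path.abspath(".")

# Prepend it so `import util.audio`-style imports resolve even when the
# script is launched from another working directory.
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

print(repo_root in sys.path)  # True
```

Running the script from the repository root (or installing the package) achieves the same thing without touching `sys.path`.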
Hi, I found that the model used in BEiT-3, based on torchscale, does not match what the paper describes. In the Multiway Transformer, the self-attention layer should be shared across...
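For reference, a purely structural sketch of the sharing the paper describes: one self-attention module reused by every modality, with separate feed-forward experts per modality (class names are illustrative stand-ins, not torchscale's actual modules):

```python
class Attention:
    """Stand-in for a self-attention layer (weights omitted)."""


class FeedForward:
    """Stand-in for a modality-specific feed-forward expert."""


class MultiwayBlock:
    """Structural sketch of a Multiway Transformer block as the paper
    describes it: one self-attention module shared across modalities,
    with a separate feed-forward expert per modality."""

    def __init__(self):
        self.shared_attention = Attention()  # shared across modalities
        self.ffn_experts = {                 # per-modality experts
            "vision": FeedForward(),
            "language": FeedForward(),
        }

    def route(self, modality):
        # Every modality is served by the same attention instance,
        # then by its own feed-forward expert.
        return self.shared_attention, self.ffn_experts[modality]


block = MultiwayBlock()
attn_v, _ = block.route("vision")
attn_l, _ = block.route("language")
print(attn_v is attn_l)  # True: the attention layer is shared
```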
### Describe the bug
```python
import torch
from transformers import pipeline

import bentoml

pipe = pipeline(
    "automatic-speech-recognition",
)
bentoml.transformers.save_model(
    "automatic-speech-recognition-whiser-large-v2",
    pipe,
    signatures={
        "__call__": {"batchable": False}  # Enable dynamic batching for model...
```
I tried to run the code for the [prompt tuning example](https://github.com/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb), but got a NaN error after some iterations. Does anybody know the reason for this error? Thanks :) The dots...
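One common mitigation while debugging such failures is to guard the update step: if the loss for a batch is NaN/inf, skip that update instead of letting NaNs propagate into the weights. A stdlib sketch of the idea (function names are illustrative, not Petals' API):

```python
import math


def safe_step(loss_value, apply_update):
    """Illustrative NaN guard for a training loop: apply the parameter
    update only when the batch loss is finite; otherwise skip it so
    NaNs cannot propagate into the model weights."""
    if not math.isfinite(loss_value):
        return False  # update skipped
    apply_update()
    return True  # update applied


updates = []
print(safe_step(float("nan"), lambda: updates.append(1)))  # False: skipped
print(safe_step(0.25, lambda: updates.append(1)))          # True: applied
print(len(updates))  # 1
```

Lowering the learning rate or clipping gradients are other usual first things to try when a loss goes NaN mid-training.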