HuangChiEn
> @HuangChiEn
>
> The short answer is: use `wids` and `ShardListDataset`. It behaves just like other indexed datasets and works exactly like other datasets for distributed training.
>
> ...
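For anyone who lands on this later, here is a minimal sketch of that suggestion, assuming the `wids` package that ships with the webdataset project; the shard names and sample counts are hypothetical placeholders:

```python
# Minimal sketch, assuming the wids package from the webdataset project.
# Shard names and sample counts below are hypothetical placeholders.
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from wids import ShardListDataset

dataset = ShardListDataset([
    {"url": "shards/train-0000.tar", "nsamples": 10000},  # hypothetical shard
    {"url": "shards/train-0001.tar", "nsamples": 10000},  # hypothetical shard
])

sample = dataset[0]  # indexed access, like any map-style dataset

# Because the dataset is indexed, the stock DDP machinery applies unchanged
# (this assumes torch.distributed has already been initialized).
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)
```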
> I have to use transformers 4.27 because the latest version of clip-interrogator requires that specific version. After upgrading transformers from 4.26 to 4.27, I hit this issue.
>
> ...
It may be a silly answer, but why not:

```python
# ...omit!!
cfg = OmegaConf.merge(
    ServerConfig,
    {"db": DatabaseConfig},  # why not just directly set it with the dataclass?
    {"model": {"data_source": ...
```
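A self-contained sketch of that merge pattern, with hypothetical dataclasses standing in for the ones from the original thread:

```python
# Hypothetical dataclasses standing in for the ones discussed in the thread.
from dataclasses import dataclass, field
from omegaconf import OmegaConf

@dataclass
class DatabaseConfig:
    host: str = "localhost"
    port: int = 5432

@dataclass
class ServerConfig:
    name: str = "server"
    workers: int = 4
    db: DatabaseConfig = field(default_factory=DatabaseConfig)

# OmegaConf.merge accepts dataclass types directly and converts them to
# structured configs, so later plain dicts just override the typed defaults.
cfg = OmegaConf.merge(
    ServerConfig,
    {"db": {"port": 5433}},
    {"name": "prod"},
)
print(OmegaConf.to_yaml(cfg))
```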
> It looks like there's a subtle bug on this line: https://github.com/AntixK/PyTorch-Model-Compare/blob/main/torch_cka/cka.py#L156
>
> ```
> num_batches = min(len(dataloader1), len(dataloader1))
> ```
>
> `len(dataloader1)` is in there twice, I...
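The presumed intent is to compare the two loaders, i.e. something like:

```python
# Presumed intent: take the shorter of the two dataloaders,
# so iteration stops when either one is exhausted.
num_batches = min(len(dataloader1), len(dataloader2))
```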
> Hi! New to the webdataset library -- could someone explain why `with_epoch` and `unbatched` are necessary for DDP training?
>
> > @laolongboy
> > Same question. How...
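For anyone with the same question, here is a sketch of the pipeline pattern the webdataset README recommends for DDP; the shard URLs, batch size, and epoch length are hypothetical. In short, `with_epoch` fixes a nominal epoch length so every rank sees the same number of batches, and `unbatched()` followed by a reshuffle and rebatch mixes samples coming from different workers and shards:

```python
# Sketch of the pattern recommended in the webdataset README for DDP;
# shard URLs, batch size, and the epoch length are hypothetical.
import webdataset as wds

urls = "shards/train-{0000..0099}.tar"  # hypothetical shard pattern

dataset = (
    wds.WebDataset(urls, resampled=True)  # resampling sidesteps uneven shard splits
    .shuffle(1000)
    .decode("pil")
    .to_tuple("jpg", "cls")
    .batched(64)
)

loader = (
    wds.WebLoader(dataset, batch_size=None, num_workers=4)
    # Rebatching after the loader mixes samples that came from different workers.
    .unbatched()
    .shuffle(1000)
    .batched(64)
    # A fixed nominal epoch length keeps every DDP rank in lockstep; ranks that
    # disagree on the number of batches per epoch make DDP hang.
    .with_epoch(1000)
)
```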
Sorry~ I just gave up on it last year, since the backward pass just disappeared... Hope you find other source code for a Bi-LSTM with backpropagation ~~
If you don't mind that I'm not the author of the code, I can offer a reply to your question. `preprocess` is a third-party library ~~ old page: https://pypi.org/project/preprocess/ new...
Hope the following thread closes this issue ~ [Webdataset (Liaon115M) + Torchlightning (pl.DataModule) with visualizing progressbar during training](https://github.com/webdataset/webdataset/issues/346)
> @Zeno673 Hello, we evaluated serving Mamba as a bidirectional multi-modal encoder in our recent work: [video-mamba-suite](https://github.com/OpenGVLab/video-mamba-suite). We find that directly concatenating textual and visual tokens can effectively perform cross-modal...
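Schematically, the token-concatenation idea looks like the sketch below. This is not the video-mamba-suite implementation; `BiEncoder` is a hypothetical stand-in for a bidirectional sequence model such as a bidirectional Mamba:

```python
# Generic illustration of fusing modalities by token concatenation,
# NOT the actual video-mamba-suite code. BiEncoder is a hypothetical
# placeholder for a bidirectional sequence model (e.g. bidirectional Mamba).
import torch
import torch.nn as nn

class BiEncoder(nn.Module):  # hypothetical placeholder encoder
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # A real bidirectional model would scan the sequence both ways;
        # a linear layer keeps this sketch runnable.
        return self.proj(tokens)

dim = 256
text_tokens = torch.randn(2, 16, dim)    # (batch, text_len, dim)
visual_tokens = torch.randn(2, 64, dim)  # (batch, num_patches, dim)

# Concatenate along the sequence axis so the encoder processes
# both modalities in a single pass.
fused = torch.cat([text_tokens, visual_tokens], dim=1)  # (2, 80, dim)
out = BiEncoder(dim)(fused)
```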