MuriyaTensei

19 comments by MuriyaTensei

> I removed that page because its content was getting old, and I don't really feel like writing usage documentation anymore...
>
> To sort by bookmark count across all categories, just change `筛选-排序模式` (filter / sort mode) to `0s`.
>
> As for the image count issue, I'm not sure whether it's caused by a different API or something else. But how did you search for "100,000 or more"?
>
> Thanks.

As I recall, a basic way to search by bookmark count is the `XXXusers入り` tag, e.g. `100000users入り`; it seems to work by adding it as a search tag(?).

> For third-party APIs, see [#383 (comment)](https://github.com/dmMaze/BallonsTranslator/issues/383#issuecomment-1961391588)

OK, I'll take a look (the program I wrote myself seems to add `v1` in its calls, but the one just now indeed probably didn't include it). I'll go try again.

> GUI training currently only supports running locally on Windows, not on a server. For online real-time training, you can try this project: https://github.com/neromous/RWKV-Ouroboros
>
> The API URL setting only supports inference; it cannot trigger remote download, configuration, or training.
>
> To build the frontend, run `make build-web`

OK, I looked into online training and learned a fair amount. But there is still one problem: I still can't seem to run inference remotely. After filling in the API URL and clicking Generate, it still says no model is running, and trying to run a model says there is no model locally.
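One way to narrow this down might be to call the remote inference endpoint directly, outside the GUI, to confirm the server itself responds. This is only a sketch under assumptions: the host, port, and `/v1/completions` path are guesses at an OpenAI-compatible server and should be checked against the server's own docs.

```
# Hypothetical check of a remote OpenAI-compatible inference endpoint.
# The URL and request fields below are assumptions, not confirmed API details.
import requests

resp = requests.post(
    "http://my-server:8000/v1/completions",   # replace with the API URL set in the GUI
    json={"prompt": "Hello", "max_tokens": 16},
    timeout=30,
)
print(resp.status_code, resp.json())
```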

Perhaps you could just hard-code `n_gpus=1` in `main` in train?
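A minimal sketch of what that change might look like, assuming a VITS-style `train.py` whose `main()` spawns one worker process per visible GPU; the surrounding names (`run`, `hps`, the env-var setup) are assumptions, not this repo's exact code.

```
import os
import torch
import torch.multiprocessing as mp

def main():
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "8000"
    # n_gpus = torch.cuda.device_count()   # original behaviour: one process per GPU
    n_gpus = 1                             # force a single process / single GPU
    # `run` and `hps` are defined elsewhere in the training script (assumed names)
    mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps))
```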

Excuse me, should I uncomment `train_loader.batch_sampler.set_epoch(epoch)`?
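For context, the usual PyTorch pattern is to call `set_epoch(epoch)` once at the start of each epoch so every rank reshuffles differently; whether it lives on the sampler or on `batch_sampler` depends on how the `DataLoader` was built. A hedged sketch with assumed variable names:

```
for epoch in range(start_epoch, num_epochs):
    # When the DataLoader was built with sampler=train_sampler (a DistributedSampler):
    train_sampler.set_epoch(epoch)
    # Variant when a custom batch sampler wraps the distributed sampler:
    # train_loader.batch_sampler.set_epoch(epoch)
    for batch in train_loader:
        ...  # forward / backward / optimizer step
```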

> Thanks a lot. Have you changed `num_workers=8`? I got `RuntimeError: DataLoader worker (pid(s) ****) exited unexpectedly` if I use `=8`. I have 4 GPUs in my server.

Thank you for your answer. I didn't delete this when the error occurred, but I modified several places to avoid the error. I'll try it later.

I still can't get it to train any faster, and I have a lot of questions. Have you used `dist.barrier()` or `dist.destroy_process_group()`?
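For reference, a hedged sketch of typical `torch.distributed` setup and teardown (not this repo's exact code; the function and argument names are assumed): `barrier()` just makes all ranks wait for each other, and `destroy_process_group()` tears the group down at the end of training.

```
import torch
import torch.distributed as dist

def run(rank, n_gpus, hps):
    # One process per GPU; env:// picks up MASTER_ADDR / MASTER_PORT from the environment
    dist.init_process_group(backend="nccl", init_method="env://",
                            world_size=n_gpus, rank=rank)
    torch.cuda.set_device(rank)

    ...  # build the model, wrap it in DistributedDataParallel, run the epochs

    dist.barrier()                 # optional: make every rank reach this point
    dist.destroy_process_group()   # clean shutdown of the process group
```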

Some of my code: does it have to be `pin_memory=False`?

```
train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps)
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_dataset, num_replicas=n_gpus, rank=rank, shuffle=True)
train_loader = DataLoader(train_dataset, num_workers=4, pin_memory=True,
                          batch_size=hps.train.batch_size, sampler=train_sampler)
```
...

So, with multiple GPUs, is a checkpoint at the same step, e.g. `G_20000.pth`, better than on a single GPU? Or is `G_20000.pth` on a single GPU roughly equivalent to `G_5000.pth` on 4 GPUs?
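A back-of-the-envelope check of that second guess, assuming the per-GPU batch size is left unchanged (which may not hold for every setup):

```
# Rough sample-count comparison; batch_size is a hypothetical per-GPU value.
batch_size = 16
single_gpu_samples = 20000 * batch_size * 1   # samples seen by step 20000 on 1 GPU
multi_gpu_samples  = 5000 * batch_size * 4    # samples seen by step 5000 on 4 GPUs
assert single_gpu_samples == multi_gpu_samples
```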