Yu Sun
Please note that `test_dataset` is only executed when you want to test the data loading of a specific dataset. `test_dataset` would not be executed during formal usage, like training, testing, or...
Judging from the log, the gradient explosion is caused by an abnormal loss. On my side this only occurred when I was testing training while building the pretrained model; reloading an intermediate checkpoint and continuing training fixed it. The problem appears in the early stage of training, and I haven't investigated the exact cause in detail. But if you use the pretrained model and skip the basic feature-building stage, this problem does not occur.
Yes, your log also shows there is a problem. These are all finetuned from train-from-scratch checkpoints, right? With the pretrained model this problem does not occur. In fact, when I trained from scratch I also trained a 2D pose heatmap and an identity map, and this problem did not appear while the 2D pose information was being learned jointly. If this is really troublesome for you, you can try starting training from [HigherHRNet](https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation)'s HRNet-32 pretraining, for example [this one](https://drive.google.com/drive/folders/1sydvEZuJlXDVlQQD_lG2nXGlLZ_Kh1mW), which has also been trained on 2D pose. Ruling out other factors, the 2D pose features seem to be critical for feature building, so starting from the HigherHRNet pretraining should avoid this problem. Sorry about this bug: when open-sourcing, I only verified training from the pretrained model; training from scratch takes too long, and under the deadline I didn't try it. The only difference from my original training is the 2D pose pretraining. I will rerun the experiment to verify this as soon as possible!
I guess that it might be [this line](https://github.com/Arthur151/ROMP/blob/dee1b80a5244dbca78637a494133958146b02bb9/romp/lib/loss_funcs/learnable_loss.py#L42).
Yes, you are right. Maybe we can add something like this to avoid gradient collapse (note that `torch.isnan` takes a tensor, so the check should be `torch.isnan(value).item()` rather than `torch.isnan(value.item())`):
```python
loss_list = [0 if torch.isnan(value).item() else value for key, value in loss_dict.items()]
loss = ...
```
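A more complete sketch of that guard, assuming `loss_dict` maps loss names to scalar tensors (the helper name `safe_sum_losses` is mine, not ROMP's actual API):

```python
import torch

def safe_sum_losses(loss_dict):
    """Sum per-term losses, replacing any NaN term with zero so a single
    bad term cannot poison the whole backward pass. Hypothetical helper;
    in the repo the aggregation lives in learnable_loss.py."""
    loss_list = [
        torch.zeros_like(value) if torch.isnan(value).item() else value
        for value in loss_dict.values()
    ]
    return sum(loss_list)

# Example: the NaN 'shape' term is dropped, leaving only the 'pose' term.
loss_dict = {'pose': torch.tensor(1.5), 'shape': torch.tensor(float('nan'))}
total = safe_sum_losses(loss_dict)  # tensor(1.5)
```

Using `torch.zeros_like(value)` instead of a plain `0` keeps every entry a tensor on the same device, so the sum stays differentiable for the surviving terms.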
From this error message, I really can't tell much. I haven't run into this problem in my local tests. My guess is that it is related to how the new dataset is built: here I create dataset subclasses from different base classes depending on the data type. https://github.com/Arthur151/ROMP/blob/bafc86897c387caae125e7119b31dc30ee317bf0/romp/lib/dataset/h36m.py#L9
Yes, of course. You can refer to [this issue](https://github.com/Arthur151/ROMP/issues/88) to create your own dataset.
[Crowdpose](https://github.com/Arthur151/ROMP/blob/master/romp/lib/dataset/crowdpose.py) is a perfect example for your need. It just loads the 2D pose data. You need to replace [this line](https://github.com/Arthur151/ROMP/blob/992544341d9469c30de884403a3fc7b977974aa8/romp/lib/dataset/crowdpose.py#L31) with the 2D pose of your data. Please be...
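For orientation, here is a minimal sketch of a 2D-pose-only dataset. It is a plain `torch.utils.data.Dataset`, not ROMP's actual base class, and the field names (`imgpath`, `kp2d`) and the `(x, y, visibility)` keypoint layout are assumptions for illustration:

```python
import numpy as np
from torch.utils.data import Dataset

class My2DPoseDataset(Dataset):
    """Minimal 2D-pose-only dataset sketch (hypothetical; ROMP's real
    subclasses derive from its own base classes, see crowdpose.py).
    Each annotation is a pair (image_path, keypoints), where keypoints
    is an (n_joints, 3) array of (x, y, visibility) values."""

    def __init__(self, annotations):
        self.annotations = annotations

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, idx):
        image_path, kp2d = self.annotations[idx]
        return {
            'imgpath': image_path,
            'kp2d': np.asarray(kp2d, dtype=np.float32),
        }

# Dummy example with a single image and two annotated joints.
dataset = My2DPoseDataset([('img_000.jpg', [[10.0, 20.0, 1.0], [30.0, 40.0, 1.0]])])
sample = dataset[0]
```

In the real Crowdpose loader, the annotation list above would be replaced by what is parsed from your own annotation files.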
Hi, @guoees . Yes, the training process needs quite some time. Which model are you training, ROMP or BEV? For ROMP, the overall loss is supposed to converge to ~...
```python
results = np.load('./00000000.npz', allow_pickle=True)['results'][()]
results['smpl_thetas']
```
I will add a description of this.