Ace
I have sent the download link (BaiDuYun) to your email.
Please see #9.
Google Drive: https://drive.google.com/open?id=1_EvC4OjwFYvTxnYqsVHdx8XGuCqVAW5z
I just use 1e-6 as the learning rate. 1. After 100 epochs, the loss changes slowly. (You can adjust your learning rate; maybe 1e-4 is too large. For example, use [dynamic...
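If it helps, here is a minimal sketch of a decaying ("dynamic") learning-rate schedule, assuming a PyTorch training loop; the tiny model and dummy data below are only placeholders, not the repo's actual network:

```python
# Minimal sketch of a decaying learning rate (StepLR), assuming PyTorch.
# The Linear model and random data are placeholders for the real training code.
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(10, 1)                                 # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)   # lr *= 0.1 every 30 epochs

x, y = torch.randn(8, 10), torch.randn(8, 1)             # dummy batch
for epoch in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                                     # shrink the lr as the loss flattens
```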
Please check your own image and label directories: are there any non-image files in them? (I tried the training on my computer and did not get the error you met.)
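For example, a quick way to list files whose extension is not a common image format (the path below is just an example; point it at your own image and label folders):

```python
# Quick sketch: list files in a folder that do not look like images.
# The directory path is only an example.
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def find_non_images(directory):
    """Return file names whose extension is not a common image format."""
    return [
        name for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
        and os.path.splitext(name)[1].lower() not in IMAGE_EXTS
    ]

print(find_non_images("./data/images"))  # e.g. stray .txt, .DS_Store or Thumbs.db files
```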
Sorry, I didn't compare with the paper. (As far as I know, the authors have released the dataset now, so you can train it yourself.) --- I have not been working on it recently,...
I will do it in October. (I have been looking for a job recently, so please forgive me.)
@holyhao I have updated the code and the results of the pre-version; I will update the results of v2 (using learnable fusion) tomorrow.
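To give a rough idea of what I mean by learnable fusion, here is a sketch of one common form (a learnable weight per branch, normalized with softmax); the actual v2 module may differ:

```python
# Rough sketch of a learnable fusion of feature maps: one learnable weight per
# branch, normalized with softmax. The real v2 fusion module may look different.
import torch
import torch.nn as nn

class LearnableFusion(nn.Module):
    def __init__(self, num_inputs=2):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))   # one weight per branch

    def forward(self, feats):
        w = torch.softmax(self.weights, dim=0)                # positive weights that sum to 1
        return sum(wi * f for wi, f in zip(w, feats))

fuse = LearnableFusion()
a, b = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(fuse([a, b]).shape)  # torch.Size([1, 64, 32, 32])
```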
1. I think it's better to use a larger lr (I find the loss curve decreases too slowly at the beginning). --- but I did not try it (the learning in...
I do not have a machine with large GPU memory, so I have no "engineering experiments" with large batch sizes. However, there are several discussions about batch size: 1. [stack exchange](https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network) 2....
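If GPU memory is the only constraint, one common workaround (not something this repo implements) is gradient accumulation, which simulates a larger effective batch; a rough PyTorch sketch with a placeholder model and data:

```python
# Sketch of gradient accumulation: accumulate gradients over several mini-batches
# before each optimizer step, so the effective batch size is larger than what
# fits in GPU memory. The Linear model and random data are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
accum_steps = 4                                    # effective batch = 4 x mini-batch

optimizer.zero_grad()
for step in range(100):
    x, y = torch.randn(8, 10), torch.randn(8, 1)   # one small mini-batch
    loss = nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()                                # gradients add up across mini-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()                           # update with the accumulated gradients
        optimizer.zero_grad()
```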