Chenxia Han

14 comments by Chenxia Han

Hi @farmingyard, did you do any data augmentation in the training stage?

I found that the number of channels of the output blobs from the plugin reshape layer is actually the number of classes, so the off-the-shelf across-channel softmax layer can meet the requirement. @chenzhi1992

@linux-devil No. Although both the off-the-shelf softmax and my own softmax layer operate across channels, the off-the-shelf softmax still produces wrong results. I cannot figure out why it happens.
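For reference, the across-channel softmax being discussed can be sketched in NumPy; the NCHW shape and class count below are illustrative assumptions, not the actual plugin code:

```python
import numpy as np

def softmax_across_channels(x):
    """Softmax over the channel axis (axis=1) of an NCHW tensor."""
    # subtract the per-position max for numerical stability
    x = x - x.max(axis=1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

# toy blob: batch=1, channels=num_classes=3, spatial 2x2
blob = np.random.randn(1, 3, 2, 2)
probs = softmax_across_channels(blob)
# at every spatial position the class probabilities sum to 1
assert np.allclose(probs.sum(axis=1), 1.0)
```

If both implementations compute this same reduction over axis 1, differing outputs usually point to a layout or shape mismatch upstream rather than the softmax itself.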

That means we recommend making a copy of the config file and modifying it. You can try any of the configs starting with `config/trident*`.

Does that happen every time?

Our dataloader does use multi-threading to load images. According to your description, you are sharing a GPU server with others, so jobs from other users may be occupying CPU resources on that server,...
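The loading pattern in question can be sketched with a thread pool; this is a minimal illustration, not SimpleDet's actual dataloader, and `load_image` is a hypothetical stand-in for real image decoding:

```python
from concurrent.futures import ThreadPoolExecutor

def load_image(path):
    # stand-in for real decoding work (e.g. reading and resizing a JPEG);
    # here it just echoes the path so the sketch is self-contained
    return f"decoded:{path}"

paths = [f"img_{i}.jpg" for i in range(8)]

# worker threads compete for CPU with every other job on the machine,
# so on a shared server a loaded CPU directly slows image loading
with ThreadPoolExecutor(max_workers=4) as pool:
    images = list(pool.map(load_image, paths))
```

Because the workers are ordinary OS threads, they get scheduled against everyone else's processes, which is why a busy shared server can starve the dataloader even when the GPU is idle.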

You may use TensorBoard to monitor the loss while training and stop once it levels off. Here is the setup guide: [TensorBoard](https://github.com/TuSimple/simpledet/blob/master/doc/TENSORBOARD.md).
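"Leveling off" can be made concrete with a simple heuristic; this is an illustrative sketch (the window size and tolerance are arbitrary choices, not anything from SimpleDet):

```python
def loss_plateaued(losses, window=5, tol=1e-3):
    """Heuristic: the loss is 'steady' when the mean of the last
    `window` values differs from the previous window's mean by
    less than `tol`."""
    if len(losses) < 2 * window:
        return False  # not enough history to compare two windows
    prev = sum(losses[-2 * window:-window]) / window
    curr = sum(losses[-window:]) / window
    return abs(prev - curr) < tol

# a still-falling loss curve is not plateaued
falling = [1.0, 0.8, 0.6, 0.5, 0.4, 0.35, 0.3, 0.25, 0.2, 0.15]
assert not loss_plateaued(falling)

# a flat curve is plateaued
assert loss_plateaued([0.2] * 10)
```

In practice you would eyeball the same thing on the TensorBoard scalar chart; the code just spells out what "going steady" means.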

- [ ] model parallelism with pipeline

We'll fix this later with dynamic memory allocation. As a temporary workaround, you can replace `1500` with a larger number, say `1800`, in [models/retinanet/builder.py line 314](https://github.com/TuSimple/simpledet/blob/master/models/retinanet/builder.py#L314).

> It is back online now. Thanks for reporting.

Hi, it is offline again. Could you please help fix it?