HuangChiEn
I just saw this GitHub repo today, and I actually encountered the same issue.
Adding eyes to this issue. Also see the similar issue #2030.
> XGBoost is backward compatible. Could you share your model? Or is there a way to reproduce the error? > Also ensure that the local and online machines have the...
@CheungBH @akinsanyaayomide Not sure, but it may be caused by the learning rate. In the paper, it is suggested that the lr should be decayed by 10x every 30 epochs. However, the epoch_step args in...
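As a rough sketch of that schedule (a step decay by 10x every 30 epochs; `base_lr` and the function name here are illustrative, not taken from the repo's config):

```python
def step_decay_lr(base_lr, epoch, decay_factor=0.1, step_size=30):
    """Step-decay schedule: multiply the lr by decay_factor every step_size epochs."""
    return base_lr * (decay_factor ** (epoch // step_size))

# With base_lr=0.1: epochs 0-29 use 0.1, epochs 30-59 use ~0.01, and so on.
```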
> ```python
> float(-np.log(256.) * pixels)
> ```

The discretization level is a number denoting the range of an 8-bit pixel (2^8 = 256). So, the Glow paper mentions `c = -M x log a`, where...
@chaiyujin I wonder why the objective needs to be divided by `float(np.log(2.) * pixels)`? What does that mean?
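If it helps, my understanding is that dividing a negative log-likelihood in nats by `log(2)` converts it to bits, and dividing by `pixels` normalizes it per dimension, which gives the usual bits-per-dim metric. A minimal stdlib-only sketch (the function name is illustrative, not from the repo):

```python
import math

def bits_per_dim(nll_nats, pixels):
    """Convert a total negative log-likelihood (in nats) into bits per dimension.

    Dividing by log(2) changes the unit from nats to bits; dividing by the
    number of pixels (dimensions) normalizes across image sizes.
    """
    return nll_nats / (math.log(2.0) * pixels)

# An NLL of pixels * log(2) nats corresponds to exactly 1 bit per dimension.
```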
> As far as WebDataset is concerned, the length does not matter, anywhere. WebDataset just iterates through data sources until it reaches the end and then raises a StopIteration. You...
> You are not repeating your training data infinitely, so this won't work. > > If you want exactly one permutation of the training data per epoch and you want...
`DDP_equlize` sucks and is deprecated!! With `resampled=True` + `with_epoch(.)`, the behavior is hard to understand, and it doesn't support multinode (each node sees a different part of the dataset, but consumes the same dataset in each...
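For what it's worth, my mental model of the `resampled=True` + `with_epoch(n)` combination is: the dataset becomes an infinite stream that resamples with replacement, and `with_epoch(n)` simply cuts that stream after `n` samples to fake an epoch boundary. A stdlib-only sketch of that behavior (the names `infinite_resample` / `with_epoch` here are illustrative stand-ins, not the WebDataset implementation):

```python
import itertools
import random

def infinite_resample(samples, seed=0):
    """Yield samples forever by resampling with replacement (resampled=True analogue)."""
    rng = random.Random(seed)
    while True:
        yield rng.choice(samples)

def with_epoch(stream, n):
    """Cut an infinite stream after n samples, emulating one 'epoch'."""
    return itertools.islice(stream, n)

# One "epoch" of 5 samples drawn from an infinite stream over 3 shards:
epoch = list(with_epoch(infinite_resample(["shard-0", "shard-1", "shard-2"]), 5))
```

Note how nothing here guarantees each sample is seen exactly once per epoch, which is exactly why the behavior is confusing compared to a classic permutation-per-epoch loader.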