Jayanta Dey
First of all, this approach: https://github.com/KhelmholtzR/ProgLearn/blob/af84f50f4a8759104ded06891acac884b81e3821/docs/experiments/isic_proglearn_nn.ipynb is not the same as yours. They used the whole dataset and a multi-label voter; I told you about this earlier. I think...
@amyvanee did you try fitting ProgLearn for one task only? I would not try it with other people's code. Please try to train it on only one task using add_task...
Same as #43, so deleting #43
@rflperry Does this PR help your query about contrastive loss?
@mkusman1 let's talk to jovo on Friday. Until then, you can try to understand the code in this repo: https://github.com/neurodata/SPORF and try to replicate some experiments from the MORF paper.
Explore the induced bias (interpolation & extrapolation) phenomenon in different machine learning models.
@jong the true pdf was calculated here: https://github.com/jdey4/progressive-learning/blob/master/replaying/result/figs/true_pdf.pdf. I am going to replicate it in the current repo. @jovo please correct me if I am wrong. The goal is to...
@jshin13 do not subtract anything to show the posteriors. Use a divergent colormap, as done here: https://github.com/jdey4/progressive-learning/blob/master/replaying/xor_nxor_pdf.py
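A small sketch of the idea, assuming matplotlib is available: plot the raw posterior with a diverging colormap centered at 0.5 instead of subtracting class posteriors. The toy posterior, grid size, and colormap name here are illustrative choices, not values from the linked script.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# toy posterior surface on a grid, values in [0, 1]
x = np.linspace(-2, 2, 100)
xx, yy = np.meshgrid(x, x)
posterior = 1.0 / (1.0 + np.exp(-3.0 * xx * yy))

fig, ax = plt.subplots()
# a diverging colormap with vmin=0, vmax=1 puts the decision boundary
# at the neutral middle color, so no subtraction is needed
im = ax.imshow(posterior, origin="lower", extent=[-2, 2, -2, 2],
               cmap="PRGn", vmin=0.0, vmax=1.0)
fig.colorbar(im, ax=ax)
fig.savefig("posterior.png")
```

Because the colormap is symmetric about its midpoint, values above and below 0.5 get visually distinct hues while 0.5 itself stays neutral.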
@jshin13 I calculated the true distribution here. You can have a look: https://github.com/neurodata/progressive-learning/blob/master/experiments/sim_pdf/XOR_pdf.ipynb
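For reference, a sketch of how a true posterior can be computed analytically for a Gaussian XOR setup: each class is a two-component Gaussian mixture, and the posterior is the normalized class density. The means (±1, ±1) and the variance are assumptions here; check the linked notebook for the exact parameters used.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # isotropic 2-D Gaussian density at points x (shape (n, 2))
    d = x - mean
    return np.exp(-(d ** 2).sum(-1) / (2.0 * var)) / (2.0 * np.pi * var)

def xor_posterior(x, var=0.25):
    # class 0: Gaussians at (-1,-1) and (1,1); class 1: at (-1,1) and (1,-1)
    p0 = gaussian_pdf(x, np.array([-1.0, -1.0]), var) \
       + gaussian_pdf(x, np.array([1.0, 1.0]), var)
    p1 = gaussian_pdf(x, np.array([-1.0, 1.0]), var) \
       + gaussian_pdf(x, np.array([1.0, -1.0]), var)
    return p0 / (p0 + p1)  # P(y = 0 | x)

# evaluate on a grid, e.g. for plotting the true pdf
g = np.linspace(-2, 2, 50)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
post = xor_posterior(grid).reshape(50, 50)
```

Near a class-0 center such as (1, 1) the posterior should be close to 1, and along the axes it should approach 0.5, which is a quick sanity check on the computation.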
Same as #29, so closing #29.
Follow the guidelines here: https://numpy.org/doc/