For full reproducibility, always report all details of your analyses, and share the underlying data, methods, and models
Reporting these details is critical, and recording them in a way that enables them to be rerun and built upon (e.g. containers, executable notebooks, Kipoi models) is even better.
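As a purely illustrative sketch (the file name, function names, and seed are ours, not a prescribed tool; a PyTorch setup inside a git checkout is assumed), recording run details might look like:

```python
# Minimal sketch: record the details needed to rerun an experiment.
import json
import platform
import random
import subprocess
import sys

import numpy as np
import torch

SEED = 42  # illustrative fixed seed

def set_seeds(seed: int = SEED) -> None:
    """Fix the relevant random seeds for repeatability."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

def record_run_details(path: str = "run_details.json") -> None:
    """Dump environment, library versions, seed, and code revision to JSON."""
    try:
        # Git commit of the analysis code, if the code lives in a git repo.
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        commit = "unknown"
    details = {
        "python": sys.version,
        "platform": platform.platform(),
        "numpy": np.__version__,
        "torch": torch.__version__,
        "seed": SEED,
        "git_commit": commit,
    }
    with open(path, "w") as fh:
        json.dump(details, fh, indent=2)

set_seeds()
record_run_details()
```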
Covered in Tip 1 and Tip 3. However, we might want to add a specific recommendation of notebooks?
I don't see it covered in Tip 1. In Tip 3, we have:
Similar to Tip 4, try to start with a relatively small network and increase its size and complexity as needed, to avoid wasting time and resources. Beware of the seemingly trivial choices that are made implicitly by the default settings of your framework of choice, e.g. the choice of optimization algorithm (adaptive methods often lead to faster convergence during training but may lead to worse generalization performance on independent datasets).
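As an aside, here is a minimal PyTorch sketch (illustrative only, not manuscript text) of what making those implicit choices explicit could look like:

```python
import torch
import torch.nn as nn

# Start small; widen or deepen only if this underfits.
model = nn.Sequential(
    nn.Linear(100, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

# Spell out the optimizer and its hyperparameters instead of relying on
# defaults; e.g., plain SGD with momentum as an alternative to adaptive
# methods such as Adam, which may generalize worse on held-out data.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    weight_decay=1e-4,
)
```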
The quoted tip may indirectly refer to linear models, but I think we should still say that explicitly. IMHO the "baselines" section would be the more appropriate place, since this passage is more about tuning.
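To make the linear-model suggestion concrete, a hedged scikit-learn sketch (illustrative only; the data are synthetic stand-ins for the real task):

```python
# Fit a simple linear baseline first, so any deep network has a number to beat.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data; replace with the real feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

baseline = LogisticRegression(max_iter=1000)
scores = cross_val_score(baseline, X, y, cv=5, scoring="roc_auc")
print(f"Linear baseline AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```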
However, we might want to add a specific recommendation of notebooks?
You mean Jupyter notebooks? That would be much too tool-specific, I think. Also, in most environments (except perhaps when you work on a single machine with interactive use) they are not a great fit for deep learning, as it usually involves running many different hyperparameter configurations and submitting jobs remotely to GPU clusters.
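For illustration (not a tool recommendation: train.py and its flags are hypothetical, and SLURM's sbatch is assumed as the scheduler), such a sweep is usually scripted batch submission rather than interactive notebook work:

```python
# Sweep a small grid of hyperparameter configs as separate GPU batch jobs.
import itertools
import subprocess

learning_rates = [1e-4, 1e-3, 1e-2]
widths = [64, 128, 256]

for lr, width in itertools.product(learning_rates, widths):
    subprocess.run(
        [
            "sbatch",
            "--gres=gpu:1",
            "--wrap",
            f"python train.py --lr {lr} --width {width}",
        ],
        check=True,
    )
```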
This paper might be good to cite for notebooks: https://arxiv.org/abs/1810.08055