[Feature] Train new TF2.0 models
== Background
- https://github.com/VVasanth/SpleeterTF2.0_Unofficial/blob/main/Readmemd
== Requirements
- Train new models
- Test, validate quality
- Integrate this repo / process back to https://github.com/deezer/spleeter/
== Details / Implementation
- tbd
I have cloned and installed your repo in a new virtual env. (I'm using venv, which is part of Python 3, to follow deezer/spleeter's dev approach for now, rather than virtualenv or conda.) I will try following your further steps. Still waiting for the data.
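For reference, the env setup described above can be sketched as below. This is only an illustration of the venv approach, not the repo's documented steps; the `.venv` directory name and the commented-out install lines are assumptions.

```shell
# Create and activate a virtual env with python3's built-in venv module,
# mirroring deezer/spleeter's dev approach (no virtualenv / conda needed).
python3 -m venv .venv
. .venv/bin/activate

# Then install the repo's dependencies -- exact file name is an assumption:
# pip install --upgrade pip
# pip install -r requirements.txt

# Confirm the interpreter now resolves inside the virtual env.
python -c "import sys; print(sys.prefix)"
```

Deactivate with `deactivate` when done; the env is fully contained in the `.venv` directory and can simply be deleted to start over.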
Added small suggestions/requests on the README.md file. Still waiting for them to grant me access to download the data.
I am not yet too familiar with the method you picked for the implementation and will learn as we go while working with you; it would be good if you could elaborate on the alternatives and tradeoffs you considered before starting. Per your note, I believe that both this repo and Spleeter use the U-Net architecture (https://towardsdatascience.com/separate-music-tracks-with-deep-learning-be4cf4a2c83).
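To make the shared U-Net idea concrete, here is a minimal NumPy sketch of the data flow only (not the repo's actual TF2.0 code, and with pooling/upsampling stand-ins instead of learned conv blocks): an encoder repeatedly halves the spectrogram resolution, a decoder doubles it back, and each decoder level merges in the matching encoder output via a skip connection.

```python
import numpy as np

def downsample(x):
    # Crude 2x average pooling, standing in for a strided conv block.
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour 2x upsampling, standing in for a transposed conv.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_shape(spectrogram, depth=3):
    """Trace a spectrogram through a U-Net-style encoder/decoder of `depth` levels."""
    skips, x = [], spectrogram
    for _ in range(depth):            # encoder path: shrink, remember each level
        skips.append(x)
        x = downsample(x)
    for skip in reversed(skips):      # decoder path: grow, merge skip connections
        x = upsample(x)
        x = np.stack([x, skip]).mean(axis=0)  # stand-in for concat + merging conv
    return x.shape

# Output resolution matches the input, as required for spectrogram masking.
print(unet_shape(np.zeros((512, 128))))  # prints (512, 128)
```

The key property this illustrates is why U-Net suits source separation: the output mask has the same time-frequency resolution as the input spectrogram, while the skip connections let fine detail bypass the low-resolution bottleneck.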
@agur - I have created the Gitter chat below so we can correspond better; let's discuss more over there:
https://gitter.im/audioSourceSeparationOnEdge/community?utm_source=share-link&utm_medium=link&utm_campaign=share-link