python-alp
a machine learning platform for teams
We should implement a custom Celery autoscaler based on the available RAM or GPU memory; a sketch is included below. References:
- http://docs.celeryproject.org/en/latest/userguide/workers.html#autoscaling
- http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-worker_autoscaler
- http://docs.celeryproject.org/en/latest/internals/reference/celery.worker.autoscale.html
- http://docs.celeryproject.org/en/latest/_modules/celery/worker/autoscale.html#Autoscaler
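A minimal sketch of what such an autoscaler could look like, assuming the Celery 4 `Autoscaler` interface referenced above (`_maybe_scale`, `scale_down`, `processes`, `min_concurrency`) and using psutil for host RAM; a GPU variant would query the device (e.g. via pynvml) instead:

```python
# Sketch of a memory-aware autoscaler (assumes Celery 4 internals).
import psutil
from celery.worker.autoscale import Autoscaler


class MemoryAutoscaler(Autoscaler):
    # Do not grow the pool when less than this fraction of RAM is free.
    min_free_fraction = 0.2

    def _maybe_scale(self, req=None):
        mem = psutil.virtual_memory()
        free_fraction = mem.available / float(mem.total)
        if free_fraction < self.min_free_fraction:
            # Under memory pressure: shrink by one process if we can.
            if self.processes > self.min_concurrency:
                self.scale_down(1)
                return True
            return False
        # Enough headroom: fall back to the default queue-length heuristic.
        return super(MemoryAutoscaler, self)._maybe_scale(req)
```

The worker would then be pointed at it through the `worker_autoscaler` setting, e.g. `worker_autoscaler = 'alp.autoscale:MemoryAutoscaler'` (the module path here is hypothetical).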
The master is broken. From what I can see, the argument names in keras' test utils have changed (https://github.com/fchollet/keras/blob/master/keras/utils/test_utils.py), so this is a starting point. Maybe we should not rely on keras...
As the fit_on_gen part is almost finished, could we consider implementing a predict_on_gen function in all the backends? If yes, please assign it to a milestone.
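To make the request concrete, here is a rough sketch of what the backend hook could look like; the exact name `predict_on_gen`, the `steps` argument and the per-batch call to `model.predict` are assumptions mirroring the `fit_on_gen` naming, not an existing implementation:

```python
import numpy as np


def predict_on_gen(model, generator, steps):
    """Hypothetical backend hook: pull `steps` batches from `generator`
    and return the stacked predictions of `model`."""
    outputs = []
    for _ in range(steps):
        x_batch = next(generator)
        outputs.append(model.predict(x_batch))
    return np.concatenate(outputs)
```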
For now, the model_id is not assigned during serialization. As a result, it is possible to attempt a predict on a model that does not have a model_id. -...
Fitting locally returns a `str` `model_id`, whereas fitting asynchronously returns a `unicode` string. --- Example:
* ```python
  Expe = Experiment(model)
  Expe.fit([data],[data_val])
  print(Expe.mod_id)
  ```
  returns `'7f66b5fdca51fe6fb21f53149e5bba2d'`
* ```python
  Expe.fit_async([data],[data_val])
  print(Expe.mod_id)
  ```
  returns...
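One way to make the two paths consistent (a minimal sketch; it assumes Python 2, where the local path yields a byte `str` and the async path yields `unicode`, and that the id is always an ASCII hex digest):

```python
def normalize_model_id(mod_id):
    # Always return text: decode the local byte string, leave unicode as-is.
    # Hex digests are ASCII, so the decode cannot fail.
    if isinstance(mod_id, bytes):
        return mod_id.decode('ascii')
    return mod_id
```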
In the sklearn backend:
- The following code `Expe.fit([data_0],[])` (providing validation data that is an empty list) does not return an error, but does not return a result either....
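A sketch of the kind of check that could make this fail loudly (the helper name and where it would be called from are assumptions):

```python
def _check_validation_data(data_val):
    # Hypothetical guard for Experiment.fit in the sklearn backend: an empty
    # validation list should raise instead of silently returning nothing.
    if data_val is not None and len(data_val) == 0:
        raise ValueError(
            "data_val is an empty list; pass None to skip validation or "
            "provide at least one validation set.")
```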
It should be possible to launch instances and workers remotely on AWS using [boto3](https://boto3.readthedocs.io/en/latest/index.htm).
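A minimal boto3 sketch of launching a worker instance; the region, AMI id, key pair and instance type below are placeholders, not values used by alp:

```python
import boto3

# Placeholder values: adapt the region, the AMI (e.g. one with docker
# pre-installed), the key pair and the instance type.
ec2 = boto3.resource('ec2', region_name='eu-west-1')
instances = ec2.create_instances(
    ImageId='ami-xxxxxxxx',
    InstanceType='p2.xlarge',
    KeyName='alp-worker-key',
    MinCount=1,
    MaxCount=1,
)
for instance in instances:
    instance.wait_until_running()
    instance.reload()  # refresh attributes such as the public DNS name
    print(instance.id, instance.public_dns_name)
```

The worker bootstrap (installing docker, starting the alp containers) could then be passed through the `UserData` argument of `create_instances`.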
We should allow the user to reuse the same parameters and optimizers' states for sequential fits.

```python
expe.fit(...)
....
expe.fit(...)
```
Currently, running `docker run` does not inform the user that a container image will be downloaded. We should tell the user that the image is being pulled from Docker Hub.
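A sketch of how the pull could be surfaced to the user, assuming the docker Python SDK's low-level `APIClient` (`docker.Client` in older docker-py releases); the image name is a placeholder:

```python
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')
print("Pulling the image from Docker Hub, this may take a while...")
# Stream the pull so the user sees download/extract progress per layer.
for event in client.pull('some/alp-worker-image', tag='latest',
                         stream=True, decode=True):
    status = event.get('status', '')
    progress = event.get('progress', '')
    print(status, progress)
```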
**OS:** Ubuntu 16.04 LTS
**Setup:** brand new machine
**Problem:** When installing on a brand new system, we actually require a lot more than just pip and docker. Launching `alp --help`...