Leon Overweel

Results 15 comments of Leon Overweel

I've also run into this with a ~50 MB package. In my case it looks like the problem is a [32 MB hard limit on Google Cloud Run requests](https://cloud.google.com/run/quotas#cloud_run_limits).

We got in touch with GCP support about this; for our near-term needs we'll probably end up hosting our pypicloud container in our k8s cluster instead of on Cloud Run...

We ended up going with something close to option 1. We turned off `storage.prepend_hash` and now use `gsutil` to push our wheels there manually, adding the required (name, version) metadata;...
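A minimal sketch of that manual push. The bucket name is hypothetical, and the `x-goog-meta-*` custom-metadata keys are an assumption; check your index's storage-backend docs for the exact keys it expects:

```shell
# Hypothetical wheel and bucket (assumptions, not our real names).
WHEEL="mypackage-1.2.3-py3-none-any.whl"
BUCKET="gs://example-pypi-bucket"

# Derive (name, version) from the standard wheel filename layout:
# {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
NAME="${WHEEL%%-*}"        # strip everything from the first "-"
REST="${WHEEL#*-}"         # drop "{name}-"
VERSION="${REST%%-*}"      # keep up to the next "-"

echo "name=${NAME} version=${VERSION}"

# The actual upload (requires gsutil and bucket access):
# gsutil cp "${WHEEL}" "${BUCKET}/${NAME}/${WHEEL}"
# gsutil setmeta \
#   -h "x-goog-meta-name:${NAME}" \
#   -h "x-goog-meta-version:${VERSION}" \
#   "${BUCKET}/${NAME}/${WHEEL}"
```

Parsing the name and version out of the filename avoids typing them twice and keeps the object metadata consistent with the wheel itself.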

(In the form of checking whether `model.summary()` output remains the same.)
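One way to sketch that check, assuming a Keras-style model whose `summary()` accepts a `print_fn` callback (the helper names here are mine, not from the original comment):

```python
def capture_summary(model) -> str:
    """Capture a model's summary as a string via the print_fn hook.

    Assumes a Keras-style ``model.summary(print_fn=...)`` signature,
    which calls print_fn once per summary line.
    """
    lines = []
    model.summary(print_fn=lines.append)
    return "\n".join(lines)


def summaries_match(model_a, model_b) -> bool:
    """True when both models produce byte-identical summary output."""
    return capture_summary(model_a) == capture_summary(model_b)
```

Comparing the captured strings is a cheap smoke test that a refactor didn't change the model's layer structure, names, or parameter counts.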

Hey, thanks for checking out Larq! If your dataset is in TensorFlow Datasets, the easiest way is to define a Zookeeper `@component` for it in [`larq_zoo/training/datasets.py`](https://github.com/larq/zoo/blob/master/larq_zoo/training/datasets.py); you can then call...

Hi, thanks for checking out Larq Zoo. @timdebruin and I went back through some old experiments to find their top-1 accuracies for you; here are the details: | Stage | Description...

No problem. Sadly I couldn't find our experiment data for the strong baseline, but from my notes at the time I think we got 62% validation accuracy.

Models which currently have explicit layer names (i.e. no auto-generated names), based on the model summaries embedded in the docs: - [ ] `sota` - [ ] `QuickNet` - [...

Hi, thanks for raising this. The training script in Zoo is indeed not the one we used to produce the results we reported internally; that code is quite tightly coupled...