David Slater
Possibly related to #458
One way to deal with this would be to add a `drop_last` kwarg to `fit`, similar to what PyTorch dataloaders do. Here's how it is defined in https://pytorch.org/docs/stable/_modules/torch/utils/data/dataloader.html:
```
drop_last...
```
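As a rough sketch of what that kwarg could look like (the `batch_iter` helper and its signature are hypothetical, not armory or PyTorch API), dropping the last batch just means truncating iteration at the largest multiple of the batch size:

```python
def batch_iter(data, batch_size, drop_last=False):
    """Yield successive batches from `data`; if drop_last is True,
    discard a final batch smaller than batch_size (hypothetical sketch
    of how a `drop_last` kwarg to `fit` might behave)."""
    n = len(data)
    # Truncate to a full multiple of batch_size when dropping the remainder.
    end = (n // batch_size) * batch_size if drop_last else n
    for i in range(0, end, batch_size):
        yield data[i:i + batch_size]

data = list(range(10))
print(len(list(batch_iter(data, 3))))                   # 4 batches, last has 1 item
print(len(list(batch_iter(data, 3, drop_last=True))))   # 3 full batches
```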
It was because the original dataset did not contain splits. However, I think that we're fine to just call it `resisc45`. You will want to incorporate `resisc45_dataset_partition.py` into your `_generate_examples...
Those URLs result from applying the `resisc45_dataset_partition.py` script to the original dataset `NWPU-RESISC45.tar.gz` and then breaking it into separate files. I think that we probably want to just reference...
Builds fine for me.
Where did it error? Do you have `ARMORY_PRIVATE_S3_ID` and `ARMORY_PRIVATE_S3_KEY`?
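A quick way to confirm those credentials are present before rerunning the build (a minimal sketch; the variable names are taken from the comment above, the check itself is not armory code):

```python
import os

# Report any private S3 credentials missing from the environment.
required = ("ARMORY_PRIVATE_S3_ID", "ARMORY_PRIVATE_S3_KEY")
missing = [name for name in required if not os.getenv(name)]
if missing:
    print("Missing credentials:", ", ".join(missing))
else:
    print("Both credentials are set.")
```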
You can break down that operation with:
```
from armory.datasets import package, upload

package.package("resisc45")
package.update("resisc45")
package.verify("resisc45")
upload.upload("resisc45", public=True)
```
Hi Sonal, most of our dataset functionality is a bit heavy-handed for what you're doing, as it's designed for easy transfer and replication on other systems, and would require creating...
For new datasets, I was thinking that we put them into a new part of the `armory-public-data` bucket, like `armory-public-data/tfdsv4//`, so that we can keep them nicely separated from the current...
Is this necessary for us to support? Or is this something that is just useful to have for our own infrastructure?