Simon Willison
Right now the lookup tables created as a result of #17 always use string as the value type. If an explicit type has been set using --shape that type...
This should help a bunch - how to extract each individual tab of an Excel file into a separate dataframe: https://github.com/palewire/pandas-combine-workbooks-example/blob/master/pandas-combine-workbooks-example.ipynb
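The core trick from that notebook can be sketched in a few lines: passing `sheet_name=None` to `pandas.read_excel` returns a dict mapping each tab's name to its own DataFrame, rather than a single DataFrame. (The function name here is illustrative, not from the notebook.)

```python
import pandas as pd

def load_all_sheets(path):
    # sheet_name=None reads every tab of the workbook and returns
    # a {sheet_name: DataFrame} dict instead of one DataFrame.
    return pd.read_excel(path, sheet_name=None)
```

From there you can iterate over `load_all_sheets("workbook.xlsx").items()` to process or concatenate the individual tabs.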
This would allow me to add `datasette publish now --csv=foo.csv --csv=bar.csv --csv-db-name=csvs` to https://github.com/simonw/datasette
Right now, if the first JSON object is missing keys that are present in the second JSON object, the script crashes.
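One way to avoid that crash (a sketch, not the actual script) is a first pass that collects the union of keys across all objects before writing anything, so later objects can introduce new columns:

```python
import csv
import io

def objects_to_csv(objects):
    # First pass: union of keys across all objects, preserving
    # first-seen order, so a key that only appears in a later
    # object still gets a column.
    fieldnames = []
    for obj in objects:
        for key in obj:
            if key not in fieldnames:
                fieldnames.append(key)
    out = io.StringIO()
    # restval="" fills in a blank for objects missing a key,
    # instead of raising an error.
    writer = csv.DictWriter(out, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(objects)
    return out.getvalue()
```

The trade-off is that this buffers the full list of objects in memory to discover the headers, rather than streaming row by row.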
I've been hoping to find a templating equivalent of [Black](https://github.com/psf/black) for quite a while. Could automatic formatting fit this project?
I'm not an AWS expert. I would feel a lot more comfortable if some AWS experts could review this tool and make sure that what it is doing makes sense...
I already have `delete-user` - this would be a similar utility but for deleting buckets. Mainly so I don't have to remember how to do it with `awscli`.
Would have been useful here: https://github.com/simonw/s3-credentials/issues/47#issuecomment-1114321878
It's frustrating when using `s3-credentials put-object` that you have to specify the key name each time, rather than deriving that from the filename:

```
s3-credentials put-object simonwillison-cors-allowed-public \
  click_default_group-1.2.2-py3-none-any.whl \
  /tmp/click-default-group/dist/click_default_group-1.2.2-py3-none-any.whl
```
...
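A hypothetical version of that fix: default the key to the file's basename when no explicit key is supplied (the function name here is illustrative, not from s3-credentials):

```python
from pathlib import Path

def resolve_key(filepath, key=None):
    # If no explicit key was provided, derive one from the
    # filename portion of the path being uploaded.
    return key if key is not None else Path(filepath).name
```

With that default, `put-object bucket /tmp/dist/foo.whl` would upload to the key `foo.whl`, while still letting an explicit key override it.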
If you want to access S3 from a Lambda function, AWS recommend you create a dedicated role that the Lambda function can then use. But... you still need to attach...