(BigQuery) Running a query and then downloading the results via the Storage Read API is awkward
I would like an API along the lines of client.query(project_id, query).use_storage_read(true) or client.query_with_storage(project_id, query) that sends a query and then reads the results via the Storage Read API.
Currently we need to call client.job.query(...), wait for the job to finish, grab the destination table id (if it is temporary/unknown) from the response via client.job().get(...), and then issue a client.read_table(...) to pull the data, roughly as in the sketch below. This works, but I imagine this is a common enough use case to warrant a convenient top-level API.
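For reference, a rough sketch of the current flow. Only the method names come from what I described above; the client, request, response field paths, and row types are placeholder assumptions and will differ from the real crate:

// Rough sketch of the current three-step flow; types and field paths are
// illustrative placeholders, not the crate's real API.
async fn query_then_read(client: &Client, project_id: &str, sql: &str) -> Result<Vec<Row>, Error> {
    // 1. Submit the query as a job.
    let job = client.job.query(project_id, sql).await?;

    // 2. Re-fetch the job to find the (possibly temporary) destination table.
    let done = client.job().get(project_id, &job.job_id).await?;
    let destination = done.configuration.query.destination_table;

    // 3. Read the destination table through the Storage Read API.
    let mut reader = client.read_table(&destination).await?;
    let mut rows = Vec::new();
    while let Some(row) = reader.next().await? {
        rows.push(row);
    }
    Ok(rows)
}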
If you could outline which API shape you think is best, I'd be happy to have a go at making a PR.
Thanks for your suggestion.
I think client.query_with_storage(project_id, query) would be better, since it does not affect the existing interface.
Ideally, it would be best to abstract the return value behind an iterator, like the Go implementation does:
pub struct RowIterator {
    inner: ..., // enum of Storage or REST API
}

pub async fn query(project_id, query) -> Result<RowIterator, Error>
https://github.com/googleapis/google-cloud-go/blob/2b99e4f39be20fe21e8bc5c1ec1c0e758222c46e/bigquery/query.go#L405
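A minimal sketch of what that could look like in Rust; RowIteratorInner and the concrete iterator, row, and error types below are placeholders, not names from the crate:

// Sketch only: StorageRowIterator, RestRowIterator, Row, and Error are placeholders.
pub enum RowIteratorInner {
    Storage(StorageRowIterator), // rows streamed via the Storage Read API
    Rest(RestRowIterator),       // rows paged via the REST API
}

pub struct RowIterator {
    inner: RowIteratorInner,
}

impl RowIterator {
    // Callers iterate the same way regardless of which backend produced the rows.
    pub async fn next(&mut self) -> Result<Option<Row>, Error> {
        match &mut self.inner {
            RowIteratorInner::Storage(it) => it.next().await,
            RowIteratorInner::Rest(it) => it.next().await,
        }
    }
}

Caller-side, both client.query(...) and the proposed client.query_with_storage(...) could then return the same RowIterator, so switching backends would not change the consumption loop.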
However, there is a lot to modify, so any suggestions for the PR are welcome.