Weide Zhang
Thanks. I guess it also assumes that imu_linear_acceleration is mostly due to gravity rather than motion? // Change the orientation_ so that it agrees with the current // gravity_vector_....
Say I want to have a reader A whose schema contains (a, b, c) while another reader B's schema contains (a, b, c, d). Can they both be sampled together? The extra field in B...
Thanks for the confirmation. We will propose a PR later.
Also, I wonder whether the cache is shared across all trainer processes and across all epochs? Will the cache need to be reset after each training epoch?
Thanks for the info, it's helpful. We will do some investigation and contribute back to the community if possible.
Hi Selitvin, in our current scenario we have 5 different datasets, each containing about 500k images. In 2 of the datasets, we have two labels (in image format,...
@selitvin do you have any suggestions? Or can you give me some guidelines on when splitting into multiple petastorm datasets makes sense and when combining into one petastorm...
Hi @selitvin, the image resolutions in these two datasets are different, so I have to make 2 different schemas. So it looks like I have to make them two different datasets. "You...
Just wondering: if I use option 3 (each rank loads the whole petastorm dataset), might reading the whole dataset have any performance bottlenecks? Is it better to set number...
Just to confirm how option 3 is configured: in the make_reader API, simply set cur_shard=None, shard_count=None, shard_seed=None, and num_epochs=<real epoch number> instead of None. Is that fine?
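A minimal sketch of that configuration, assuming the make_reader keyword names exactly as given in the question (the dataset URL and epoch count below are placeholders, not from the original thread). Passing None for the shard arguments means no sharding, so every rank reads the full dataset:

```python
# Hypothetical "option 3" reader configuration: every rank reads the whole
# dataset, so sharding is disabled by leaving the shard arguments as None.
reader_kwargs = dict(
    cur_shard=None,    # this rank is not pinned to any shard
    shard_count=None,  # dataset is not split across ranks
    shard_seed=None,   # no shard shuffling seed needed without sharding
    num_epochs=10,     # a finite epoch count; None would loop forever
)

# Intended usage (commented out so the sketch stays self-contained):
# from petastorm import make_reader
# with make_reader('file:///path/to/dataset', **reader_kwargs) as reader:
#     for sample in reader:
#         ...
```

The point of the sketch is just that the three shard-related arguments are all None while num_epochs is a real number, matching the configuration being confirmed above.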