Kunwar Raj Singh
So float16 may already be possible, but there are problems. We could cast all the weights to float16 after loading them. But the problem with casting after loading...
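To make the precision trade-off concrete, here is a small stdlib-only sketch (not tinygrad code) that round-trips a value through IEEE 754 half precision using `struct`'s `'e'` format, showing the rounding that any float32-to-float16 weight cast incurs:

```python
import struct

def to_float16(x: float) -> float:
    # Pack the value as a half-precision float and unpack it again,
    # so the result is x rounded to the nearest representable float16.
    return struct.unpack('e', struct.pack('e', x))[0]

# 0.1 is not exactly representable in float16, so it rounds slightly.
print(to_float16(0.1))
# 65504.0 is the largest finite float16 and survives unchanged.
print(to_float16(65504.0))
```

Casting per layer (rather than all at once) also keeps the transient float32 copy small, which matters when the full float32 checkpoint barely fits in memory.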
Solves float16 inference, part of https://github.com/geohot/tinygrad/issues/829

Changes made:
1. Allow casting weights to a specific dtype in `fake_torch_load_zipped`, one layer at a time.
2. Setting default_type for a `Tensor`...
Fix for https://github.com/geohot/tinygrad/blob/master/tinygrad/tensor.py#L543. Ran through `test/test_tensor.py`; it works.
So far I've created base classes based on the reference implementation (thanks @wozeparrot), and I'm able to load the weights @geohot https://github.com/mlcommons/training/tree/master/object_detection/pytorch/maskrcnn_benchmark

TODO:
- [x] Load weights of the saved...
Currently there is no limit on how large the ops backlog can grow. This can cause tinygrad to create kernels with too many arguments, leading to compilation warnings and...
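The fix described above amounts to flushing the backlog once it crosses a cap, so no single fused kernel accumulates too many buffer arguments. A minimal, hypothetical sketch of that pattern (the names and the limit are illustrative, not tinygrad's actual API):

```python
MAX_BACKLOG = 4  # hypothetical cap on pending ops per kernel

class LazyGraph:
    """Toy model of a lazy op queue that flushes when it grows too large."""
    def __init__(self):
        self.pending = []

    def add(self, op):
        self.pending.append(op)
        # Force a flush before the backlog (and thus the kernel's
        # argument count) exceeds the cap.
        if len(self.pending) >= MAX_BACKLOG:
            self.flush()

    def flush(self):
        if self.pending:
            print(f"compiling kernel with {len(self.pending)} ops")
            self.pending.clear()

g = LazyGraph()
for i in range(5):
    g.add(f"op{i}")
# After 5 adds with a cap of 4, one flush has happened and one op remains.
```

The design choice is a trade-off: a lower cap means smaller, more numerous kernels; a higher cap means more fusion but risks hitting backend argument limits.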
While gathering elements with a 1-D index tensor along a single axis is somewhat possible with arange, compare, and multiply (as shown in https://github.com/geohot/tinygrad/issues/858), it does not allow for full fancy...
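The arange/compare/multiply trick mentioned above works by building a one-hot mask for each index and reducing with a sum. A plain-Python sketch of the idea (no tinygrad dependency; this is the technique, not the issue's exact code):

```python
def gather(values, indices):
    # For each requested index i, compare i against every position j
    # (a one-hot mask), multiply elementwise with values, and sum.
    # The sum picks out values[i], emulating a gather with only
    # compare, multiply, and reduce ops.
    n = len(values)
    return [sum(values[j] * (1 if i == j else 0) for j in range(n))
            for i in indices]

print(gather([10, 20, 30, 40], [3, 0, 0]))  # [40, 10, 10]
```

This is O(n) work per gathered element, which is why it covers only the simple 1-D case and not general fancy indexing.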
Haven't made any real changes yet; just wanted to have a correct baseline first.

Changes made:
- Eval needs to be done on the entire 10k eval set
- Fetch batches...
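Evaluating on the entire 10k set means iterating it in consecutive batches without dropping the remainder. A minimal, hypothetical batching helper illustrating that (not the PR's actual code):

```python
def batches(data, batch_size):
    # Yield the dataset in consecutive slices so every example is
    # covered exactly once; the last batch may be smaller.
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

print(list(batches(list(range(10)), 4)))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```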
So far, I've changed the linearizer to do calculations based on buffer dtypes, and also made grouping generic over vector dtypes.

```
a = Tensor.rand(10, 10, dtype=dtypes.float16)  # also works for int32
(a+a).realize()
```
...
Solves https://github.com/truefoundry/cognita/issues/317
Steps to reproduce:
1. Create a collection.
2. Link 2 data sources to it.

Expected behaviour:
- When one of the data sources has completed ingestion, its status changes to...