Ross Wightman
@Wauplin yup, upload case is different, was just pointing out for this download, if there is going to be a checksum fallback, or we end up deciding metadata is too...
Sorry for the very slow response, you could modify the model after creating it:

```python
def clear_inplace(module):
    res = module
    if hasattr(module, 'inplace'):
        module.inplace = False
    else:
        for name, child in module.named_children():
            ...
```
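The snippet above is cut off mid-loop; here is a completed sketch of what it is presumably doing, with the recursion into children assumed (the `Module`/`ReLU` stand-ins below are minimal mocks so the sketch runs without torch; with a real model you'd pass e.g. a timm model and every `ReLU(inplace=True)` would get switched off):

```python
class Module:          # minimal stand-in for torch.nn.Module
    def __init__(self, **children):
        self._children = children
        for name, child in children.items():
            setattr(self, name, child)

    def named_children(self):
        return self._children.items()


class ReLU(Module):    # minimal stand-in for nn.ReLU
    def __init__(self, inplace=True):
        super().__init__()
        self.inplace = inplace


def clear_inplace(module):
    # Recursively disable inplace ops (e.g. ReLU(inplace=True)) in a model.
    res = module
    if hasattr(module, 'inplace'):
        module.inplace = False
    else:
        for name, child in module.named_children():
            clear_inplace(child)   # assumed recursion; the original is cut off here
    return res


model = Module(act1=ReLU(), body=Module(act2=ReLU()))
clear_inplace(model)
print(model.act1.inplace, model.body.act2.inplace)  # False False
```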
@username2018 freezing might help the situation you describe, but an 'easy' interface (i.e. a proportion, or a generic 'number' of frozen or unfrozen layers) is, well, difficult to get right :) I'm...
@Lyken17 so, this isn't as easy as one might think. tar files are simple: you can just stash the offsets for each file entry and directly access them later. When...
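The offset-stashing idea above can be sketched with the stdlib `tarfile` module: build an index of `(data offset, size)` per member once, then seek straight to a file's bytes without re-scanning the archive (the in-memory archive and member name here are just for illustration):

```python
import io
import tarfile

# Build a small tar archive in memory for the demo.
data = io.BytesIO()
with tarfile.open(fileobj=data, mode='w') as tf:
    payload = b'hello world'
    info = tarfile.TarInfo('sample_0.txt')
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))

# One linear scan to build the index: member name -> (data offset, size).
data.seek(0)
index = {}
with tarfile.open(fileobj=data, mode='r') as tf:
    for member in tf:
        # offset_data points at the member's contents inside the archive
        index[member.name] = (member.offset_data, member.size)

# Later: random access with a plain seek + read, no tar parsing needed.
offset, size = index['sample_0.txt']
data.seek(offset)
print(data.read(size))  # b'hello world'
```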
I don't see why there is any need to downvote this, this is a contentious issue based on numerous heated issues in black, etc that you can look up. The question...
> Still, it's readable into my eyes, I think it's a personalization choose but perhaps modern editors with syntax highlighting solve your problem: I use a 'modern editor', PyCharm, and...
The webdataset pipeline doesn't have access to the string classnames at that point, it's using integer indices from the get-go, so the map capability is pretty minimal, it can...
The quickest path would be to hack the `_decode` function at this line: https://github.com/huggingface/pytorch-image-models/blob/e741370e2b95e0c2fa3e00808cd9014ee620ca62/timm/data/readers/reader_wds.py#L157 Decode the json there, get the `class_name` from the json, and then do a lookup on your...
@TheDarkKnight-21th that's going to be extremely slow, you're loading the same mapping file for every sample. You'd want to add a `class_to_idx` argument to the `_decoder` fn. If it's a valid...
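A minimal sketch of the suggestion above: build the name-to-index mapping once, then pass it into the decode fn so no file is re-read per sample. The `_decode` signature and sample layout here are illustrative, not timm's actual ones:

```python
import json

def _decode(sample, class_to_idx):
    # Illustrative decoder: parse the per-sample json, map the string
    # class name to an integer target via the mapping passed in once.
    meta = json.loads(sample['json'])
    return dict(image=sample['jpg'], target=class_to_idx[meta['class_name']])

# Built ONCE, outside the per-sample path (e.g. loaded from a mapping file).
class_to_idx = {'cat': 0, 'dog': 1}

sample = {'jpg': b'<image bytes>', 'json': json.dumps({'class_name': 'dog'})}
out = _decode(sample, class_to_idx)
print(out['target'])  # 1
```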
With sharded datasets there is no way of knowing which samples are still valid after filtering, so there is no way of knowing the dataset length without calculating it yourself....
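In other words, the only general answer is one counting pass over the shards with the same filter applied. A trivial illustration (the shard/sample structure and filter are made up for the example):

```python
def count_valid(samples, keep):
    # One pass: count the samples that survive the filter.
    return sum(1 for s in samples if keep(s))

# Hypothetical shards; label -1 marks a sample the filter would drop.
shards = [
    [{'label': 0}, {'label': -1}],
    [{'label': 3}],
]
n = sum(count_valid(shard, keep=lambda s: s['label'] >= 0) for shard in shards)
print(n)  # 2
```

You'd cache the resulting count and hand it to whatever needs a dataset length, rather than recomputing it every epoch.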