Results: 19 comments of lxysl

Hi, I would like to study the I-JEPA code and try to implement it in MMPretrain.

I am impressed by this brilliant work, but I have the same confusion about the code implementation. It seems the implementation does not completely match the paper. I wonder...

The entire model weights are saved in this way in `safe_save_model_for_hf_trainer()`:

```python
...
if trainer.deepspeed:
    torch.cuda.synchronize()
    trainer.save_model(output_dir)
    return

state_dict = trainer.model.state_dict()
if trainer.args.should_save:
    cpu_state_dict = {
        key: value.cpu() for key, ...
```

I also wonder why the model weights are not checkpointed every `save_steps`. Isn't the default `save_steps` in the Hugging Face Trainer equal to 500? Please @ me if there's any progress.
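For reference, a minimal sketch of how that interval is configured with the standard Hugging Face `TrainingArguments` (the `output_dir` value here is just a placeholder):

```python
from transformers import TrainingArguments

# A minimal sketch; only the checkpointing-related arguments are shown.
args = TrainingArguments(
    output_dir="./checkpoints",  # placeholder path
    save_strategy="steps",       # checkpoint by step count
    save_steps=500,              # the default interval asked about above
)
print(args.save_steps)  # -> 500
```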

> Oh... Ignore all the above noisy comments; a single line solves all the issues. I will try it later! :)

No, it's just a series of data points. Use `wfdb` to load it, convert it to a 1-D NumPy array, and finally convert it to a PyTorch tensor (if you are using PyTorch).
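For example, a rough sketch with `wfdb` and PyTorch (the record path and channel index are placeholders):

```python
import torch
import wfdb

# Read a WFDB record; rdrecord returns an object whose p_signal is a
# (num_samples, num_channels) NumPy array of physical values.
record = wfdb.rdrecord("path/to/record")      # placeholder record path
signal_1d = record.p_signal[:, 0]             # pick one channel -> 1-D NumPy array
tensor = torch.from_numpy(signal_1d).float()  # convert to a PyTorch tensor
print(tensor.shape)
```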

> > Sorry for the confusion. For some reason, when we developed the model, we saved all the files in OCR-VQA as `.jpg`, including some of the files that you may...
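If the point is only that the loader expects a `.jpg` extension, a rough sketch of normalizing downloaded OCR-VQA filenames could look like this (the image directory is a placeholder, and this is only an assumption about the intended fix):

```python
import os

root = "ocr_vqa/images"  # placeholder directory with the downloaded images
for name in os.listdir(root):
    base, ext = os.path.splitext(name)
    if ext.lower() != ".jpg":
        # rename e.g. foo.png / foo.gif to foo.jpg so the expected paths resolve
        os.rename(os.path.join(root, name), os.path.join(root, base + ".jpg"))
```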

```python
class InceptionResNetV2(nn.Module):

    def __init__(self, num_classes=1001):
        super(InceptionResNetV2, self).__init__()
        # Special attributes
        self.input_space = None
        self.input_size = (299, 299, 3)
        self.mean = None
        self.std = None
        # Modules
        self.conv2d_1a = BasicConv2d(3, ...
```
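If this snippet comes from the Cadene `pretrainedmodels` package, a usage sketch might look like the following (the model name, weight tag, and constructor signature are assumptions about that package):

```python
import torch
import pretrainedmodels  # assumption: the Cadene pretrainedmodels package

# Assumed constructor; the 'imagenet' weights pair with num_classes=1000.
model = pretrainedmodels.__dict__["inceptionresnetv2"](num_classes=1000, pretrained="imagenet")
model.eval()

x = torch.randn(1, 3, 299, 299)  # matches input_size = (299, 299, 3) above
with torch.no_grad():
    logits = model(x)
print(logits.shape)
```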

> Thanks!