jshermeyer
For the record, this can be fixed easily if you modify line 77 to: `optimizer = torch.optim.Adam(model.parameters(), lr=float(model.hyperparams['learning_rate']), weight_decay=float(model.hyperparams['decay']))` I understand that the Adam optimizer does modify the learning rate...
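For context, a minimal runnable sketch of the suggested change, assuming `model.hyperparams` is a dict parsed from the config file (so the values may come in as strings and need the `float()` casts); the stand-in model and hyperparameter values below are placeholders:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the project's network; in the repo, `model` is the
# network whose hyperparams dict is parsed from the config file.
model = nn.Linear(4, 4)
model.hyperparams = {'learning_rate': '0.001', 'decay': '0.0005'}  # often parsed as strings

# The suggested fix for line 77: pass both the learning rate and weight decay
# to Adam, casting to float in case the config parser returned strings.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=float(model.hyperparams['learning_rate']),
    weight_decay=float(model.hyperparams['decay']),
)
```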
Yes, I think this is actually a redundancy I forgot to remove. I've uploaded the updated code with this removed to master.
So sorry, just saw this. Yes, it would be the model that you previously trained and saved as a .pkl file. You then load that model to apply super-resolution to...
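If it helps, a minimal sketch of that loading step, assuming the .pkl file holds the full model object saved with `torch.save` (the path here is a placeholder):

```python
import torch

# Placeholder path to the model you previously trained and saved as a .pkl file.
model_path = 'checkpoints/super_res_model.pkl'

# Load the trained model and switch to evaluation mode before applying
# super-resolution to new imagery.
model = torch.load(model_path, map_location='cpu')
model.eval()
```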