Gunes Evitan
> Using the definition of the log partial hazard from [here](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#cox-s-proportional-hazard-model), for the DeepSurv, i.e., [CoxPH model](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/cox-ph.ipynb), you can simply call `model.predict(x)`, as the log partial hazard is the output...
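For reference, a minimal sketch of that call with pycox; the network architecture and input data here are placeholders, not taken from the linked notebook:

```python
import numpy as np
import torch
import torchtuples as tt
from pycox.models import CoxPH

# Placeholder network; in practice this would be the trained DeepSurv net
net = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
model = CoxPH(net, tt.optim.Adam)

# After fitting, model.predict returns the raw network output,
# which is the log partial hazard g(x)
x = np.random.randn(5, 10).astype("float32")
log_partial_hazard = model.predict(x)
```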
- UNet++: A Nested U-Net Architecture for Medical Image Segmentation
- Attention U-Net: Learning Where to Look for the Pancreas
- TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation
- UNETR: Transformers for...
Thank you for the example @Dipet. Can you please explain what the `get_transform_init_args_names`, `get_params_dependent_on_targets` and `targets_as_params` methods are used for? It is way more verbose than I expected.
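For anyone else landing here, this is roughly where those hooks fit in a custom transform. The `ChannelShift` class below is a made-up example written against the albumentations interface that defines these methods, not anything from the library itself:

```python
import albumentations as A
import numpy as np

class ChannelShift(A.ImageOnlyTransform):
    """Hypothetical transform: shifts pixel values by a random offset
    scaled to the input image's own maximum."""

    def __init__(self, limit=0.1, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.limit = limit

    @property
    def targets_as_params(self):
        # Names of targets (here "image") that must be forwarded into
        # get_params_dependent_on_targets before apply is called
        return ["image"]

    def get_params_dependent_on_targets(self, params):
        # Compute random params that depend on the actual input:
        # here the shift is scaled by this image's max value
        image = params["image"]
        return {"shift": np.random.uniform(-self.limit, self.limit) * image.max()}

    def apply(self, img, shift=0.0, **params):
        return img + shift

    def get_transform_init_args_names(self):
        # __init__ args recorded when serializing the pipeline
        # (A.to_dict / A.save)
        return ("limit",)
```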
I think the simplest way to do it is downloading the default config, changing the `pretrained` or `finetuned` URLs to absolute paths, and overriding `PRETRAINED_MODEL_CONFIG_DICT` of the model class. I'm not sure it...
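A rough sketch of what I mean, using BLIP captioning for illustration; the local path is a placeholder:

```python
from lavis.models import load_model_and_preprocess
from lavis.models.blip_models.blip_caption import BlipCaption

# Point the registry entry at a local copy of the default config whose
# `pretrained`/`finetuned` fields were rewritten to absolute local paths
BlipCaption.PRETRAINED_MODEL_CONFIG_DICT["base_coco"] = (
    "/path/to/local/blip_caption_base_coco.yaml"
)

model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True
)
```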
I'm glad that LAVIS inherently allows this with very few modifications. I tried this workaround on the BLIP model and it didn't ask for an internet connection on my local machine. `blip_caption_base_coco.yaml`,...
I haven't tried this patch on BLIP2. Why is it different?
You have to track every download in the source code and patch it with your local paths.
> > I have to use transformers 4.27 because the latest version of clip-interrogator requires that specific version. After upgrading transformers from 4.26 to 4.27, I had this issue.
>
> ...
> We have made an update to BLIP-2 OPT models so that they can work with the latest transformers with version>=4.27.

Does the BLIP model work with transformers>=4.27 too?
You can modify the code to extract image/text features in batches, but that won't yield any significant speed boost. The bottleneck is the flavor chaining part. I'm not sure that can...
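For what it's worth, here is a rough sketch of the batched image-feature extraction I mean, written directly against open_clip rather than clip-interrogator's internals; the helper function is an assumption, not part of either library:

```python
import torch
import open_clip

# ViT-L-14/openai is clip-interrogator's default CLIP model
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="openai"
)
model.eval()

@torch.no_grad()
def encode_images(pil_images, batch_size=32, device="cpu"):
    """Encode a list of PIL images in batches and return normalized features."""
    chunks = []
    for i in range(0, len(pil_images), batch_size):
        batch = torch.stack([preprocess(im) for im in pil_images[i:i + batch_size]])
        feats = model.encode_image(batch.to(device))
        chunks.append(feats / feats.norm(dim=-1, keepdim=True))
    return torch.cat(chunks)
```

Even with this, the repeated similarity passes in the flavor chain still dominate the runtime.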