XyChen


> Hi, I have one doubt about the paper and the corresponding code. In the paper, fmid = f15 + lambda(g - I) * f15, but I think the code corresponds...

> ### Tensorflow Version v1.11
> Please replace this line with Tensorflow version you are using.
>
> ### Keras Version v2.2.4
> Please replace this line with Keras version...

In Keras 2.2, you can open the _mobilenet_v2.py_ file in the _keras_applications_ folder to see how to use relu6 correctly. In fact, you only need to replace **Activation('relu6')(x)** with **layers.ReLU(6.)(x)**.
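For reference, a minimal sketch of the change (assuming Keras 2.2.4; the surrounding model code is only an illustration, not from the original issue):

```python
from keras import layers
from keras.models import Model

inputs = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, padding='same')(inputs)
# x = layers.Activation('relu6')(x)  # fails in Keras 2.2: 'relu6' is not a built-in activation
x = layers.ReLU(6.)(x)               # relu6 via the ReLU layer, as used in keras_applications/mobilenet_v2.py
model = Model(inputs, x)
model.summary()
```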

@KyriaAnnwyn What are the specific settings? A GPU OOM error may occur when the input size is too large, especially for HAT-L on SRx2.

@KyriaAnnwyn 512x512 is a very large input size, which may cost about 20 GB of memory for HAT-L on SRx2. You might consider testing the image in overlapping patches and then merging the outputs...
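The idea looks roughly like this (a minimal sketch of overlapping-patch testing, not HAT's actual tile code; `model`, `scale`, `tile`, and `tile_pad` are placeholders for your own model and settings):

```python
import torch

def tiled_inference(model, img, scale=2, tile=256, tile_pad=16):
    """Run a PyTorch SR model on overlapping tiles of img (1, C, H, W) and merge the outputs."""
    _, c, h, w = img.shape
    output = img.new_zeros(1, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # crop an input tile with extra padding around it for context
            y0, y1 = max(y - tile_pad, 0), min(y + tile + tile_pad, h)
            x0, x1 = max(x - tile_pad, 0), min(x + tile + tile_pad, w)
            with torch.no_grad():
                out_tile = model(img[:, :, y0:y1, x0:x1])
            # paste back only the unpadded center region of the output tile
            oy0, oy1 = y * scale, min(y + tile, h) * scale
            ox0, ox1 = x * scale, min(x + tile, w) * scale
            ty0, tx0 = (y - y0) * scale, (x - x0) * scale
            output[:, :, oy0:oy1, ox0:ox1] = out_tile[
                :, :, ty0:ty0 + (oy1 - oy0), tx0:tx0 + (ox1 - ox0)]
    return output
```

For HAT itself, the repo's built-in tile mode (see the config linked in the next comment) handles this for you.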

I will test the memory requirements of the models and provide a solution for testing with limited GPU resources.

The tile mode is provided for testing with limited GPU memory. For the settings, see https://github.com/XPixelGroup/HAT/blob/39eeb5c28741b05ed2f23f13ff9131efe7539fde/options/test/HAT_tile_example.yml#L7-L9

@morgen-star You don't need to change the learning rate as long as the total batch size stays the same. For the basic version of HAT, 4 A100 GPUs with batch size per gpu...

@AIisCool The provided pretrained models are trained for high-quality image super-resolution, so they cannot handle other degradations such as compression or blurring. We would consider training a HAT model...

@yumulinfeng1 Which version of basicsr are you using? Is it `1.3.4.9`? The latest version of basicsr changed the location of the function `rgb2ycbcr`.
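If the import is what fails, something like the sketch below keeps a script working across both layouts. The module paths are my assumption about where basicsr kept and later moved the function; verify them against the basicsr version you have installed.

```python
# Handle the relocation of rgb2ycbcr between basicsr releases (paths are assumptions).
try:
    from basicsr.utils.matlab_functions import rgb2ycbcr  # older releases (e.g. 1.3.4.9)
except ImportError:
    from basicsr.utils.color_util import rgb2ycbcr        # newer releases
```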