
[Bug]: - Feature Visualization for single-channel image

Open tmt1611 opened this issue 2 years ago • 2 comments

Module

Feature Visualization

Current Behavior

When using xplique.features_visualizations.optimize on a model that takes single-channel images as input, a dimension error occurs.

I suspect this is caused by line 43 in xplique/features_visualizations/transformations.py:

kernel = tf.tile(kernel, [1, 1, 3, 1])

where it should instead be kernel = tf.tile(kernel, [1, 1, num_channels, 1]), tiling across the input's actual number of channels rather than a hardcoded 3.
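The idea behind the suggested fix can be sketched without TensorFlow. The helper below (make_blur_kernel is a hypothetical name, not part of xplique) builds a normalized Gaussian blur kernel with NumPy in the depthwise-conv layout (height, width, in_channels, channel_multiplier) and tiles it to whatever channel count the input actually has:

```python
import numpy as np

def make_blur_kernel(size, sigma, num_channels):
    """Build a depthwise blur kernel of shape (size, size, num_channels, 1),
    tiled to the input's actual channel count instead of a hardcoded 3."""
    # 1-D Gaussian centered on the kernel.
    ax = np.arange(size) - (size - 1) / 2.0
    gauss = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    # Outer product gives the 2-D kernel; normalize so it sums to 1.
    kernel_2d = np.outer(gauss, gauss)
    kernel_2d /= kernel_2d.sum()
    # Depthwise conv expects one filter per input channel:
    # shape (H, W, in_channels, channel_multiplier).
    kernel = kernel_2d[:, :, None, None]
    return np.tile(kernel, (1, 1, num_channels, 1))

# A single-channel model needs the third dimension to be 1, not 3.
k = make_blur_kernel(size=10, sigma=3.0, num_channels=1)
print(k.shape)  # (10, 10, 1, 1)
```

In the library itself, num_channels would presumably be read from the model's input shape before tiling, so the same blur transformation works for grayscale and RGB inputs alike.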

Expected Behavior

xplique.features_visualizations.optimize returns the expected images visualizing the features.

Version

1.3.0

Environment

- OS: Linux
- Python version: 3.11.3
- Tensorflow version: 2.12.0
- Packages used version:

Relevant log output

ValueError                                Traceback (most recent call last)
Cell In[8], line 12
     10 # create the objective, '-1' is the last layer, and the c_id's are the ids of the classes
     11 obj_logits = Objective.neuron(model, layer=-1, neurons_ids=[c_id for c_id, c_name in classes])
---> 12 imgs, _ = optimize(obj_logits,
     13                    nb_steps=1024, # number of iterations
     14                    optimizer=tf.keras.optimizers.Adam(0.05))
     16 # show the results
     17 plt.rcParams["figure.figsize"] = [12, 8]

File ~/.local/lib/python3.11/site-packages/xplique/features_visualizations/optim.py:112, in optimize(objective, optimizer, nb_steps, use_fft, fft_decay, std, regularizers, image_normalizer, values_range, transformations, warmup_steps, custom_shape, save_every)
    110 images_optimized = []
    111 for step_i in range(nb_steps):
--> 112     grads = optimisation_step(model, inputs)
    113     optimizer.apply_gradients([(-grads, inputs)])
    115     last_iteration = step_i == nb_steps - 1

File /usr/local/insa/anaconda/lib/python3.11/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    151 except Exception as e:
    152   filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153   raise e.with_traceback(filtered_tb) from None
    154 finally:
    155   del filtered_tb

File /tmp/__autograph_generated_filedrja5ayf.py:34, in outer_factory.<locals>.inner_factory.<locals>.tf__step(model, inputs)
     32     nonlocal imgs
     33     pass
---> 34 ag__.if_stmt(ag__.ld(transformations), if_body, else_body, get_state, set_state, ('imgs',), 1)
     35 imgs = ag__.converted_call(ag__.ld(tf).image.resize, (ag__.ld(imgs), (ag__.ld(input_shape)[1], ag__.ld(input_shape)[2])), None, fscope)
     36 model_outputs = ag__.converted_call(ag__.ld(model), (ag__.ld(imgs),), None, fscope)

File /tmp/__autograph_generated_filedrja5ayf.py:29, in outer_factory.<locals>.inner_factory.<locals>.tf__step.<locals>.if_body()
     27 def if_body():
     28     nonlocal imgs
---> 29     imgs = ag__.converted_call(ag__.ld(transformations), (ag__.ld(imgs),), None, fscope)

File /tmp/__autograph_generated_filenvijkzyn.py:24, in outer_factory.<locals>.inner_factory.<locals>.tf__composed_func(images)
     22     images = ag__.converted_call(ag__.ld(func), (ag__.ld(images),), None, fscope)
     23 func = ag__.Undefined('func')
---> 24 ag__.for_stmt(ag__.ld(transformations), None, loop_body, get_state, set_state, ('images',), {'iterate_names': 'func'})
     25 try:
     26     do_return = True

File /tmp/__autograph_generated_filenvijkzyn.py:22, in outer_factory.<locals>.inner_factory.<locals>.tf__composed_func.<locals>.loop_body(itr)
     20 nonlocal images
     21 func = itr
---> 22 images = ag__.converted_call(ag__.ld(func), (ag__.ld(images),), None, fscope)

File /tmp/__autograph_generated_fileemei2n0q.py:23, in outer_factory.<locals>.inner_factory.<locals>.tf__blur(images)
     21 try:
     22     do_return = True
---> 23     retval_ = ag__.converted_call(ag__.ld(tf).nn.depthwise_conv2d, (ag__.ld(images), ag__.ld(kernel)), dict(strides=[1, 1, 1, 1], padding='SAME'), fscope)
     24 except:
     25     do_return = False

ValueError: in user code:

    File "/home/mttruong/.local/lib/python3.11/site-packages/xplique/features_visualizations/optim.py", line 164, in step  *
        imgs = transformations(imgs)
    File "/home/mttruong/.local/lib/python3.11/site-packages/xplique/features_visualizations/transformations.py", line 175, in composed_func  *
        images = func(images)
    File "/home/mttruong/.local/lib/python3.11/site-packages/xplique/features_visualizations/transformations.py", line 46, in blur  *
        padding='SAME')

    ValueError: Dimensions must be equal, but are 1 and 3 for '{{node depthwise}} = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1]](resize/ResizeBilinear, Tile)' with input shapes: [6,?,?,1], [10,10,3,1].
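For illustration, the shape rule behind this ValueError can be stated without TensorFlow: in NHWC layout, depthwise_conv2d requires the kernel's third dimension to equal the image's channel dimension. The checker below is a hypothetical pure-Python sketch, not xplique or TensorFlow code:

```python
def check_depthwise_shapes(images_shape, kernel_shape):
    """Mirror the depthwise_conv2d constraint: with NHWC images and a
    (H, W, in_channels, multiplier) kernel, the channel dims must match."""
    in_channels = images_shape[3]       # last axis of NHWC images
    kernel_channels = kernel_shape[2]   # third axis of the kernel
    if in_channels != kernel_channels:
        raise ValueError(
            f"Dimensions must be equal, but are {in_channels} "
            f"and {kernel_channels}")
    return True

# The shapes from the traceback: 1-channel images vs. a 3-channel kernel.
try:
    check_depthwise_shapes((6, 224, 224, 1), (10, 10, 3, 1))
except ValueError as e:
    print("mismatch:", e)

# A kernel tiled to the actual channel count passes the check.
print(check_depthwise_shapes((6, 224, 224, 1), (10, 10, 1, 1)))
```

This is exactly the mismatch in the log: images of shape [6, ?, ?, 1] against a kernel of shape [10, 10, 3, 1].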

To Reproduce

Apply xplique.features_visualizations.optimize to a model that takes single-channel images as input.

tmt1611 · Nov 22 '23 18:11

Hi @tmt1611,

Actually, the problem goes a bit further than that, as our implementation of feature visualization is designed specifically for RGB images (their most popular usage nowadays). Compatibility with other formats requires a bit of work, and although we are thinking about adding this functionality, it is not scheduled to be integrated into the library just yet (I'm thinking Q1-2 2024). Hold tight!

Regards,

Agustin-Picard · Dec 12 '23 10:12

Hi, thank you for the response. It's a shame that I can't use such a nifty toolbox for my project yet. I'm looking forward to using it soon.

Regards,

tmt1611 · Dec 12 '23 17:12