
ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3]

Open rakesh160 opened this issue 5 years ago • 8 comments

While running the command `python nets/test.py -g -v -se -m ./model/c3ae_model_v2_151_4.301724-0.962`, I am getting a ValueError.

ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3]

Any help is much appreciated!

rakesh160 avatar Aug 12 '20 05:08 rakesh160

Can you provide your environment (TensorFlow version)? I have tested it locally (TensorFlow 2.1), and it works well.

StevenBanama avatar Aug 12 '20 08:08 StevenBanama


Your input is invalid: the model expects an input of shape (1, 64, 64, 3), but your input has shape (32, 64, 3).
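For context, a Keras Conv2D layer expects a 4-D batched input (batch, height, width, channels), while a single RGB crop is only 3-D, which is what the "expected min_ndim=4, found ndim=3" error means. A minimal sketch of adding the missing batch dimension (using a dummy array in place of a real face crop):

```python
import numpy as np

# a single 64x64 RGB crop is 3-D: (height, width, channels)
img = np.zeros((64, 64, 3), dtype=np.float32)

# Conv2D expects 4-D input: (batch, height, width, channels)
batched = np.expand_dims(img, axis=0)
print(batched.shape)  # (1, 64, 64, 3)
```

Note that the reported shape (32, 64, 3) also suggests the crop itself is 32x64 rather than 64x64, so the resize step is worth checking as well.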

StevenBanama avatar Aug 12 '20 08:08 StevenBanama

Thanks for the quick reply @StevenBanama .

I have tensorflow 2.3.

I am new to these things and just trying to test it on one of the test images.

> inputs may invalid which needs (1, 64, 64, 3) and your inputs size shows it as (32, 64, 3)

Can you please elaborate a little on how to resolve the issue?

rakesh160 avatar Aug 12 '20 09:08 rakesh160

You can print the shape of `img` before line 102 to check the inputs, like this: `print(img.shape)`

StevenBanama avatar Aug 12 '20 11:08 StevenBanama


First, update your local repo, then run it as below:

python nets/test.py -g -se -i assets/timg.jpg -m ./model/c3ae_model_v2_151_4.301724-0.962

StevenBanama avatar Aug 12 '20 11:08 StevenBanama


Have you resolved it? Feel free to follow up on the issue~

StevenBanama avatar Aug 17 '20 10:08 StevenBanama

I have found the solution: you just need to `np.expand_dims()` each image in the `tri_imgs` array, then pass `tri_imgs` to `model.predict()`. It will work fine.
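A sketch of that fix, assuming `tri_imgs` holds the three 64x64 face crops produced during preprocessing (dummy arrays stand in for real crops here):

```python
import numpy as np

# stand-ins for the three 64x64 BGR face crops in tri_imgs
tri_imgs = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(3)]

# add a batch dimension to each crop: (64, 64, 3) -> (1, 64, 64, 3)
tri_imgs = [np.expand_dims(im, axis=0) for im in tri_imgs]

# model.predict(tri_imgs) then receives three correctly shaped inputs
print([im.shape for im in tri_imgs])  # each is (1, 64, 64, 3)
```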

KhizarAziz avatar Nov 02 '20 22:11 KhizarAziz

```python
def predict(models, img, save_image=False):
    try:
        bounds, lmarks = gen_face(MTCNN_DETECT, img, only_one=False)
        ret = MTCNN_DETECT.extract_image_chips(img, lmarks, padding=0.4)
    except Exception as ee:
        ret = None
        print(img.shape, ee)
    if not ret:
        print("no face")
        return img, None
    padding = 200
    new_bd_img = cv2.copyMakeBorder(img, padding, padding, padding, padding, cv2.BORDER_CONSTANT)

    colors = [(0, 0, 255), (0, 0, 0), (255, 0, 0)]
    for pidx, (box, landmarks) in enumerate(zip(bounds, lmarks)):
        trible_box = gen_boundbox(box, landmarks)
        tri_imgs = []
        for bbox in trible_box:
            bbox = bbox + padding
            h_min, w_min = bbox[0]
            h_max, w_max = bbox[1]

            resized = cv2.resize(new_bd_img[w_min:w_max, h_min:h_max, :], (64, 64))
            cv2.imwrite("test2222.jpg", resized)
            tri_imgs.extend([cv2.resize(new_bd_img[w_min:w_max, h_min:h_max, :], (64, 64))])

        for idx, pbox in enumerate(trible_box):
            pbox = pbox + padding
            h_min, w_min = pbox[0]
            h_max, w_max = pbox[1]
            new_bd_img = cv2.rectangle(new_bd_img, (h_min, w_min), (h_max, w_max), colors[idx], 2)

        # initialize the list of model inputs
        k = []

        # convert the image to a batched tensor: (64, 64, 3) -> (1, 64, 64, 3)
        img_tensor = np.expand_dims(resized, axis=0)
        print("shape", img_tensor.shape)

        # append the tensor to the list (the model takes three inputs)
        k.append(img_tensor)
        k.append(img_tensor)
        k.append(img_tensor)
        result = models.predict(k)
        age, gender = None, None
        if result and len(result) == 3:
            age, _, gender = result
            age_label, gender_label = age[-1][-1], "F" if gender[-1][0] > gender[-1][1] else "M"
        elif result and len(result) == 2:
            age, _ = result
            age_label, gender_label = age[-1][-1], "unknown"
        else:
            raise Exception("fatal result: %s" % result)
        cv2.putText(new_bd_img, '%s %s' % (int(age_label), gender_label),
                    (padding + int(bounds[pidx][0]), padding + int(bounds[pidx][1])),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (25, 2, 175), 2)
    if save_image:
        print(result)
        cv2.imwrite("igg.jpg", new_bd_img)
    return new_bd_img, (age_label, gender_label)
```

Replacing the corresponding code in the source file with the above solves the problem.

xiangdeyizhang avatar Feb 11 '23 08:02 xiangdeyizhang