C3D-keras
The version of Keras
@TianzhongSong
Thank you for sharing!
I would like to know which version of Keras you used, because I ran into a problem: "ImportError: cannot import name Max_Pool3D". This may not be a code problem, so I want to try changing my Keras version.
Thank you so much!
@buaa-luzhi Keras 2.0.8
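For anyone hitting the same ImportError, here is a quick sketch of how to confirm the installed Keras version and the 3D pooling layer name that Keras 2.x exposes in keras.layers; where the failing Max_Pool3D import comes from in the repo code is not confirmed here, so treat this only as a version/name check:

import keras
print(keras.__version__)  # the repo was built against 2.0.8

# Keras 2.x exposes the 3D max-pooling layer as MaxPooling3D in keras.layers
from keras.layers import MaxPooling3D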
@TianzhongSong
Thank you very much!
@buaa-luzhi You are welcome!
@TianzhongSong I am sorry to trouble you again!
If I run the program twice, I get "Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)". I do not know what happened. Have you encountered this problem?
Thank you so much!
@buaa-luzhi Sorry, I have not encountered this problem and I have no idea how to solve it.
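Not a confirmed diagnosis, but one thing sometimes worth trying when repeated Keras/TensorFlow runs in the same interpreter end in a segfault is clearing the backend session between runs. The snippet below is only a guess at a workaround, not something from this repo:

import keras.backend as K

# release the previous TensorFlow graph/session before building the model again;
# a stale session is one possible (unconfirmed) cause of SIGSEGV on a second run
K.clear_session()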
@TianzhongSong Hello! I tested the model trained with this training code on the test set, but the classification results (accuracy) do not match the results obtained by evaluating during training. Where do you think the problem lies? The test code is the same. Thanks!
@buaa-luzhi It's hard to tell from this alone. Could you share your test code?
# -*- coding: utf-8 -*-
from __future__ import print_function
import os
import cv2
import numpy as np
import scipy.io
import keras.backend as K
from glob import glob
from natsort import natsorted
from datetime import datetime
from keras.optimizers import SGD
from keras.models import model_from_json, load_model
from i3d_inception_32f_M_Pre_Factorize import Inception_Inflated3d

batch_size = 1

if __name__ == '__main__':
    # init model
    model = Inception_Inflated3d()
    # load the latest pretrained checkpoint from the w/ directory
    checkpoints = glob(os.path.join(os.getcwd(), "w/*.h5"))
    print(checkpoints)
    checkpoints = natsorted(checkpoints)
    assert len(checkpoints) != 0, 'No checkpoints found.'
    checkpoint_file = checkpoints[-1]
    SGD2 = SGD(lr=0.005, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=SGD2, metrics=['accuracy'])
    print("[Info] loading model...\n")
    model = load_model(checkpoint_file)
    print("[Info] loading model -- DONE!")
    # test samples: each line of the list file is a directory of frames
    testing_data_list = 'test_data/test_sample.txt'
    f = open(testing_data_list, 'r')
    f_lines = f.readlines()
    f.close()
    features = []
    for idx, line in enumerate(f_lines):
        line = line.strip('\n')
        test_i_pic = os.listdir(line)
        test_i_pic.sort(key=lambda x: int(x[:-6]))
        test_i_pic_len = len(test_i_pic)
        test_i_data = np.zeros([1, 32, 224, 224, 3])
        average_values = [197.7, 197.7, 197.7]
        for i in range(test_i_pic_len):
            pic_path = os.path.join(line, test_i_pic[i])
            even_ind_1 = (i % 32)
            img = cv2.imread(pic_path)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            image_h, image_w, image_c = np.shape(img)
            # central square crop, then resize to 224x224 and subtract the mean
            square_sz = min(image_h, image_w)
            crop_h = int((image_h - square_sz) / 2)
            crop_w = int((image_w - square_sz) / 2)
            image_crop = img[crop_h: crop_h + square_sz, crop_w: crop_w + square_sz]
            image_crop = cv2.resize(image_crop, (224, 224), interpolation=cv2.INTER_AREA).astype(np.float32) - average_values
            test_i_data[0, even_ind_1, :] = image_crop
        # ok
        # classes = model.predict_on_batch(test_i_data)
        # ok
        classes = model.predict(test_i_data, verbose=0, batch_size=batch_size)
        classes = [item.tolist().index(max(item.tolist())) for item in classes]
        print(classes)
    print('Predict over ...')
One more question: looking at the code you updated a couple of days ago, I noticed that the image flipping during training has changed. Instead of doubling the data as before, each sample is now randomly kept as the original or flipped. What was your reasoning behind this change? Thanks!
@buaa-luzhi
1. About the test code: your data preprocessing is not quite the same as mine. The preprocessing at test time has to stay consistent with the preprocessing used during training.
2. Each sample is randomly kept as the original or flipped, and each sample is also randomly cropped. This increases the diversity of the data; it is a data augmentation technique that improves the network's generalization (see the sketch below).
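To make point 2 concrete, here is a minimal sketch of that kind of clip-level augmentation (one random horizontal flip decision per clip plus one random crop shared by all frames); the function name and crop size are illustrative assumptions, not the repo's actual training code:

import numpy as np
import cv2

def augment_clip(frames, crop_size=224):
    # decide once per clip whether to mirror it horizontally
    if np.random.rand() < 0.5:
        frames = [cv2.flip(f, 1) for f in frames]
    # pick one random crop offset and apply it to every frame of the clip
    h, w = frames[0].shape[:2]
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return [f[top:top + crop_size, left:left + crop_size] for f in frames]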
@buaa-luzhi For video classification with 3D convolutions, you can also refer to my other repo: 3D-Dense-Residual-Network-for-Action-Recognition
@TianzhongSong Thank you very much! I modified the test code as needed. Thanks for sharing your code.
@TianzhongSong Has a paper on "3D-Dense-Residual-Network-for-Action-Recognition" been published? I'd like to read it, but I couldn't find it.
@buaa-luzhi It is written but not yet published.
Could I ask why the video2img script doesn't run through for me?
video_path = './ucf101/'
save_path = './ucfimgs/'
Nothing else was changed.
For the data, I put a few short .avi videos into the first folder, but the second folder ends up with nothing in it.
Could someone tell me what is going on here??
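I don't know why video2img wrote nothing in your case; one common cause with UCF101-style scripts is that they expect one sub-folder per class under video_path rather than loose .avi files, but that is only a guess. Below is a minimal, self-contained frame-dump sketch (not the repo's video2img script) to check whether OpenCV can decode your .avi files at all:

import os
import cv2

video_path = './ucf101/'
save_path = './ucfimgs/'

# dump every frame of every .avi directly under video_path and report the count
for name in os.listdir(video_path):
    if not name.endswith('.avi'):
        continue
    cap = cv2.VideoCapture(os.path.join(video_path, name))
    out_dir = os.path.join(save_path, os.path.splitext(name)[0])
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, '%06d.jpg' % count), frame)
        count += 1
    cap.release()
    print(name, '->', count, 'frames')  # 0 frames usually points to a codec problem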
@ucasiggcas Sorry to bother you, but may I ask how to download those videos? Thank you.