KTH-Action-Recognition
Action Recognition on the KTH Dataset
What is the final accuracy obtained from the uploaded model? My accuracy for block+flow is far from 90%.
I have installed the latest OpenCV (4.6.0.66) and PyTorch (1.12.1), and I am getting the following error:

[ WARN:[email protected]] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('..\dataset\walking\person25_walking_d4_uncomp.avi'): can't open/read file: check file path/integrity...
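That warning comes from cv2.imread, which only decodes still images; an .avi clip cannot be read with imread (and the same warning also fires when the path is simply wrong). A minimal sketch, assuming the same relative dataset layout as in the warning, that rules out a bad path and reads the clip with cv2.VideoCapture instead:

```python
import os
import cv2

# Example path mirroring the one in the warning above (an assumption --
# adjust to wherever your copy of the KTH dataset actually lives).
video_path = os.path.join("..", "dataset", "walking",
                          "person25_walking_d4_uncomp.avi")

# Rule out a bad path first: the same warning appears if the file is missing.
if not os.path.isfile(video_path):
    raise FileNotFoundError(video_path)

# Videos must be opened with VideoCapture and read frame by frame;
# cv2.imread only handles still images (png, jpg, ...).
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise IOError(f"OpenCV could not open {video_path}")

frames = []
while True:
    ok, frame = cap.read()  # ok becomes False once the stream is exhausted
    if not ok:
        break
    frames.append(frame)
cap.release()

print(f"Read {len(frames)} frames")
```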
This is not an issue, but how can I use the trained model to detect behavior with OpenCV?
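One way to do this is to load the trained weights, grab frames with cv2.VideoCapture, and run each preprocessed frame through the network. A hedged sketch: the model class CNNSingleFrame, the checkpoint name, and the 80x60 grayscale preprocessing are assumptions here, not necessarily this repository's exact names or pipeline.

```python
import cv2
import torch
import torch.nn.functional as F

from models import CNNSingleFrame  # hypothetical import; use your model class

# The six KTH action classes, in the label order used during training.
CLASSES = ["boxing", "handclapping", "handwaving",
           "jogging", "running", "walking"]

model = CNNSingleFrame()
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()

cap = cv2.VideoCapture("sample.avi")
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # KTH frames are grayscale; resize to the network's input size.
        # 80x60 (WxH) is an assumption -- match your training pipeline.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (80, 60)).astype("float32") / 255.0
        x = torch.from_numpy(gray).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
        probs = F.softmax(model(x), dim=1)
        print(CLASSES[probs.argmax(dim=1).item()])
cap.release()
```

For a stable per-video label, aggregate the per-frame scores (see the single-video snippet further down) rather than reading off each frame's prediction.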
RuntimeError: Expected object of scalar type Long but got scalar type Byte for argument #2 'target'
Loading Dataset
Start training
Traceback (most recent call last):
  File "train_cnn_single_frame.py", line 59, in <module>
    validate=True, resume=resume, use_cuda=cuda)
  File "D:\Action Recognition\KTH Program\KTH-Action-Recognition-master\KTH-Action-Recognition-master\main\train_helper.py", line 109, in train
    loss = criterion(outputs, labels)
  File...
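The two reports above are the same failure: nn.CrossEntropyLoss requires a target tensor of dtype torch.long (int64), but the labels loaded from the pickled dataset arrive as torch.uint8 (Byte). Casting the labels before the loss call fixes it; a minimal sketch of the failing line in train_helper.py, assuming Byte labels:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

outputs = torch.randn(4, 6)  # (batch, num_classes) logits
labels = torch.tensor([0, 2, 5, 1], dtype=torch.uint8)  # Byte labels as loaded

# criterion(outputs, labels) raises the RuntimeError above;
# casting the targets to int64 resolves it.
loss = criterion(outputs, labels.long())
```

Alternatively, cast the labels once when the dataset is built so every consumer receives Long tensors.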
I can't build the dataset; the following exception is raised even though I have updated the imageio and ffmpeg plugins. Any idea what else could be done?

~\Anaconda\envs\tensorflow_env\lib\site-packages\imageio_ffmpeg\_io.py in read_frames(path, pix_fmt,...
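Since the traceback is cut off, it may help to isolate a single clip before rebuilding the whole dataset: if ffmpeg cannot open one file directly, the file (or the ffmpeg backend) is the problem rather than the dataset script. A small sketch, with an example path rather than the exact failing file:

```python
import imageio

# Example clip; substitute the file your traceback points at.
path = "dataset/walking/person25_walking_d4_uncomp.avi"

reader = imageio.get_reader(path, format="FFMPEG")
print(reader.get_meta_data())         # raises here if ffmpeg cannot open the file
print("frames:", reader.count_frames())
reader.close()
```

If this fails on every clip, the download is likely incomplete or corrupted; if it fails on one clip, re-download that file.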
Can you tell me how to run the CNN+OF model with a smaller training set, so that my system doesn't crash?
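One low-effort option is to shrink the pickled training set before training and lower the batch size. A sketch, assuming data_utils.py produced a pickle holding a list of samples; the file names here are illustrative, not the repository's exact paths:

```python
import pickle
import random

# Load the full training set produced by data_utils.py
# ("data/train.p" is an assumed path -- use yours).
with open("data/train.p", "rb") as f:
    train_samples = pickle.load(f)

random.seed(0)
random.shuffle(train_samples)
subset = train_samples[: len(train_samples) // 10]  # keep ~10% of the data

with open("data/train_small.p", "wb") as f:
    pickle.dump(subset, f)
```

Point the training script at the smaller pickle, and consider reducing the batch size as well, since that is usually what determines peak memory.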
Hi Khoi, simple question here: please direct me on how I can try out my models on a sample video to detect human activity.
Currently, we can test a list of videos and get the average accuracy. I want to feed in a single video file and get a result.
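A common way to turn per-frame outputs into one result per video is to average the softmax scores over the clip and take the argmax. A hedged sketch of that aggregation step (the model, class list, and preprocessing are assumed to match the inference snippet above):

```python
import torch
import torch.nn.functional as F

def predict_video(model, frames, classes):
    """Collapse per-frame logits into a single label for the whole clip.

    frames: tensor of shape (num_frames, 1, H, W), preprocessed the same
    way as during training (an assumption -- match your pipeline).
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(frames), dim=1)  # (num_frames, num_classes)
        avg = probs.mean(dim=0)                  # average scores over the clip
    return classes[avg.argmax().item()]
```

Majority voting over the per-frame argmaxes works too; averaging the softmax scores is usually a little more stable.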
Hi, this is not actually an issue. I could successfully run data_utils.py in the main folder and generate the pickle files corresponding to the train, test, and validation data for the CNN...
Hi, when I run your code there is no problem with data_utils.py, but I cannot run eval_cnn_block_frame_flow.py successfully.

Loading dataset
Loading model
Traceback (most recent call last):...