
Cannot save videos

Open Huhaowen0130 opened this issue 3 years ago • 13 comments

Hello! When I tested MARS on my videos as below, there seemed to be something wrong with the saved video: it is only 1 KB (see first screenshot). May I ask how to save annotated videos like the one in the second screenshot? Thank you!
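For reference, an output file this small usually means the video writer opened but never encoded any frames (often a codec/container mismatch). A minimal sanity check for that symptom; the 10 kB threshold and the example path are illustrative, not part of MARS:

```python
import os

# Flag near-empty "video" files like the 1 KB / 252-byte outputs
# reported in this thread. The byte threshold is a heuristic.
def looks_like_valid_video(path, min_bytes=10_000):
    return os.path.isfile(path) and os.path.getsize(path) >= min_bytes
```

Usage: `looks_like_valid_video('results/output.mp4')` returns `False` for a file that was created but never received frames.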

Huhaowen0130 avatar Dec 27 '21 07:12 Huhaowen0130

I have a similar issue. Video is created with a size of 252 bytes and it isn't playable.

sw-dev-code avatar Dec 31 '21 12:12 sw-dev-code

> I have a similar issue. Video is created with a size of 252 bytes and it isn't playable.

Do you have any idea how to address it?

Huhaowen0130 avatar Dec 31 '21 15:12 Huhaowen0130

Sorry to hear you've been having issues with this - is this occurring on Linux or Windows?

We'll work to get this fixed, but in the meantime if you have Matlab you can also save video snippets and view your pose + annotation output using Bento: http://github.com/neuroethology/bentoMAT [edited to correct link]

annkennedy avatar Jan 02 '22 04:01 annkennedy

@annkennedy Can you please double-check the link for bento, seems like a broken link?

The issue happens on Windows in my case. You can find the error log below.

animating056    0% --  [Elapsed Time: 0:00:00] |            | (ETA:  --:--:--)
'list' object has no attribute 'emit'
'list' object has no attribute 'emit'
Finished processing all the data in the queue!
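For what it's worth, that AttributeError typically means code written for a Qt-style progress signal (which has an `.emit()` method) received a plain Python list instead. A minimal sketch of the failure mode; the names here are hypothetical, not MARS internals:

```python
class ProgressSignal:
    """Stand-in for a Qt-style signal object with an .emit() method."""
    def emit(self, value):
        print(f"progress: {value}")

def report_progress(signal, value):
    # If 'signal' is accidentally a plain list, this raises the
    # AttributeError seen in the log above.
    try:
        signal.emit(value)
    except AttributeError as err:
        print(err)

report_progress(ProgressSignal(), 50)  # progress: 50
report_progress([], 50)                # 'list' object has no attribute 'emit'
```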

sw-dev-code avatar Jan 02 '22 04:01 sw-dev-code

Apologies, that's http://github.com/neuroethology/bentoMAT

annkennedy avatar Jan 02 '22 04:01 annkennedy

> Sorry to hear you've been having issues with this - is this occurring on Linux or Windows?
>
> We'll work to get this fixed, but in the meantime if you have Matlab you can also save video snippets and view your pose + annotation output using Bento: http://github.com/neuroethology/bentoMAT [edited to correct link]

I'm now trying Bento, but I haven't found a way to save videos.

By the way, is MARS only designed for the case of a pair of mice? Can it be used to analyse the behavior of a single mouse?

Huhaowen0130 avatar Jan 02 '22 07:01 Huhaowen0130

You can save movies with Bento by selecting File->Save Movie. After setting a filename, an interface will pop up with save options, allowing you to set the start+stop times of the saved clip. Make sure the encoding format you select in the interface matches the extension you selected when saving the file, and be sure not to resize the window while the movie is being generated.

MARS is designed for pairs of interacting mice, though if you have a single mouse you can always just discard the pose data for the animal you're not interested in (which will be random nonsense), and of course the behavior classifier output won't make sense. You can also train new mouse pose models + behavior classifiers for single-mouse conditions, using http://github.com/neuroethology/MARS_developer. We're working on a version of MARS that will let you specify number+type of animals to track+detect behaviors for, it's not yet ready for release but hopefully will be out soon.
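If it helps, discarding the second animal's pose can be done directly on the pose output. A hedged sketch, assuming the JSON layout visible later in this thread (`keypoints[frame][mouse] == [[x0..x6], [y0..y6]]`); the function name is mine, not part of MARS:

```python
def keep_first_mouse(pose):
    """Return a copy of a MARS-style pose dict keeping only the first
    animal's keypoints per frame (assumed layout: keypoints[frame][mouse])."""
    out = dict(pose)
    out['keypoints'] = [frame_kpts[:1] for frame_kpts in pose['keypoints']]
    return out

# toy example: 1 frame, 2 mice, 2 keypoints each ([xs, ys] per mouse)
toy = {'keypoints': [[[[10, 20], [30, 40]], [[50, 60], [70, 80]]]]}
single = keep_first_mouse(toy)
print(len(single['keypoints'][0]))  # 1
```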

annkennedy avatar Jan 02 '22 18:01 annkennedy

> You can save movies with Bento by selecting File->Save Movie. After setting a filename, an interface will pop up with save options, allowing you to set the start+stop times of the saved clip. Make sure the encoding format you select in the interface matches the extension you selected when saving the file, and be sure not to resize the window while the movie is being generated.
>
> MARS is designed for pairs of interacting mice, though if you have a single mouse you can always just discard the pose data for the animal you're not interested in (which will be random nonsense), and of course the behavior classifier output won't make sense. You can also train new mouse pose models + behavior classifiers for single-mouse conditions, using http://github.com/neuroethology/MARS_developer. We're working on a version of MARS that will let you specify number+type of animals to track+detect behaviors for, it's not yet ready for release but hopefully will be out soon.

I see. Thank you for your reply!

Huhaowen0130 avatar Jan 03 '22 03:01 Huhaowen0130

@annkennedy Thank you so much for your help. Is there any way I can be notified when that new version of MARS is released?

sw-dev-code avatar Jan 03 '22 12:01 sw-dev-code

> @annkennedy Can you please double-check the link for bento, seems like a broken link?
>
> The issue happens on Windows in my case. You can find the error log below.
>
> animating056    0% --  [Elapsed Time: 0:00:00] |            | (ETA:  --:--:--)
> 'list' object has no attribute 'emit'
> 'list' object has no attribute 'emit'
> Finished processing all the data in the queue!

I got the same problem when I ran it on the sample video. Do you have any idea how to address it?

zhaojiachen1994 avatar Apr 08 '22 03:04 zhaojiachen1994

@annkennedy I use MARS on Linux. It outputs the video, but there are no annotations for mounting or the other information shown in the previous figure.

Archerfaded avatar Aug 11 '22 07:08 Archerfaded

@annkennedy The video looks like this, with no annotations (screenshot 2022-08-11_153446).

Archerfaded avatar Aug 11 '22 07:08 Archerfaded

I wrote a simple script to visualize the joints. Hope this can help you.

You first need to run the MARS code to get predictions, then run this script to draw the joints and write out a video. Run pip install tqdm if you don't already have tqdm installed.

import json
import os

import cv2
import numpy as np
from tqdm import tqdm

# change video_path and pred_path here
video_path = 'sample_videos/sample_clip_1.mp4'
pred_path = 'sample_videos/output_v1_8/sample_clip_1'

output_path = os.path.join(pred_path, 'output.mp4')

# one color per mouse (BGR)
color = [(0, 0, 255), (255, 0, 0)]

def drawline(image, data, pt1, pt2):
    """Draw a skeleton edge between keypoints pt1 and pt2 (data = [xs, ys])."""
    return cv2.line(image,
                    (int(data[0][pt1]), int(data[1][pt1])),
                    (int(data[0][pt2]), int(data[1][pt2])),
                    thickness=1, color=(0, 255, 255))

# read all frames into memory
video = cv2.VideoCapture(video_path)
video_data = []
while video.isOpened():
    ret, frame = video.read()
    if not ret:
        break
    video_data.append(frame)

print(np.array(video_data).shape)

# use a codec that matches the .mp4 extension: 'MJPG' in an .mp4
# container can silently produce an unplayable, near-empty file
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
fps = video.get(cv2.CAP_PROP_FPS)
size = (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter(output_path, fourcc, fps, size)
video.release()

for files in sorted(os.listdir(pred_path)):
    print(files)
    if 'pose_top' in files and '.json' in files:
        with open(os.path.join(pred_path, files)) as f:
            pose_top = json.load(f)
        print(pose_top.keys())

        for keys in sorted(pose_top.keys()):
            print(keys, len(pose_top[keys]))
            if keys == 'keypoints':
                n_frames = min(len(pose_top[keys]), len(video_data))
                for frame in tqdm(range(n_frames)):
                    image = video_data[frame]
                    for j, instances in enumerate(pose_top[keys][frame]):
                        # dots for each of the mouse's keypoints
                        for keypoints in range(len(instances[0])):
                            image = cv2.circle(image,
                                               (int(instances[0][keypoints]), int(instances[1][keypoints])),
                                               radius=5, color=color[j], thickness=-1)
                        # skeleton edges connecting the keypoints
                        for pt1, pt2 in [(0, 1), (0, 2), (1, 3), (2, 3),
                                         (3, 4), (3, 5), (4, 6), (5, 6)]:
                            image = drawline(image, instances, pt1, pt2)
                    out.write(image)
    print()

out.release()

ichbill avatar Sep 05 '22 20:09 ichbill