pypylon
How to record video
Using a Basler acA1920-25uc on Windows with Python 3.7, I can capture and save images (using sample scripts like grab.py, guiimagewindow.py, save_image.py), but how do I acquire and record video via Python/pypylon?
Check this: https://stackoverflow.com/questions/49782358/save-video-instead-of-saving-images-while-using-basler-camera-and-python
I tried running the sample code and I get the following error:
AttributeError: module 'pypylon' has no attribute 'factory'
I am getting the same error. Can anyone help with this?
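For anyone hitting this `AttributeError`: the linked StackOverflow sample targets a very old pypylon build that exposed a `pypylon.factory` module. Current pypylon releases expose everything under `pypylon.pylon` instead. A minimal sketch of the modern equivalent, guarded so it degrades gracefully when pypylon or a camera is not available:

```python
def open_first_camera():
    """Open the first attached Basler camera with the current pypylon API.

    The old sample's pypylon.factory module no longer exists; its role
    is taken by pypylon.pylon.TlFactory.
    """
    from pypylon import pylon
    tlf = pylon.TlFactory.GetInstance()
    return pylon.InstantCamera(tlf.CreateFirstDevice())

try:
    cam = open_first_camera()
    cam.Open()
except Exception as exc:  # no pypylon install, or no camera attached
    cam = None
    print(f"Could not open a camera: {exc}")
```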
As a sample for a GEV camera, this code shows:
- using an ffmpeg subprocess, started from Python, to record video;
- storing YUV422 color data straight from the camera into H.264, without conversion to RGB.
Code that uses the video writer class is in the `__main__` block at the end.
```python
import subprocess as sp
import os

### for demonstration of how to write video data
### this class is an excerpt from the project moviepy https://github.com/Zulko/moviepy.git moviepy/video/io/ffmpeg_writer.py
###
class FFMPEG_VideoWriter:
    """A class for FFMPEG-based video writing.

    A class to write videos using ffmpeg. ffmpeg will write in a large
    choice of formats.

    Parameters
    -----------
    filename
        Any filename like 'video.mp4' etc. but if you want to avoid
        complications it is recommended to use the generic extension
        '.avi' for all your videos.
    size
        Size (width, height) of the output video in pixels.
    fps
        Frames per second in the output video file.
    codec
        FFMPEG codec. It seems that in terms of quality the hierarchy is
        'rawvideo' = 'png' > 'mpeg4' > 'libx264'
        'png' manages the same lossless quality as 'rawvideo' but yields
        smaller files. Type ``ffmpeg -codecs`` in a terminal to get a list
        of accepted codecs.
        Note for default 'libx264': by default the pixel format yuv420p
        is used. If the video dimensions are not both even (e.g. 720x405)
        another pixel format is used, and this can cause problems in some
        video readers.
    audiofile
        Optional: the name of an audio file that will be incorporated
        into the video.
    preset
        Sets the time that FFMPEG will take to compress the video. The
        slower, the better the compression rate. Possibilities are:
        ultrafast, superfast, veryfast, faster, fast, medium (default),
        slow, slower, veryslow, placebo.
    bitrate
        Only relevant for codecs which accept a bitrate. "5000k" offers
        nice results in general.
    withmask
        Boolean. Set to ``True`` if there is a mask in the video to be
        encoded.
    """

    def __init__(self, filename, size, fps, codec="libx264", audiofile=None,
                 preset="medium", bitrate=None, pixfmt="rgba",
                 logfile=None, threads=None, ffmpeg_params=None):
        if logfile is None:
            logfile = sp.PIPE

        self.filename = filename
        self.codec = codec
        self.ext = self.filename.split(".")[-1]

        # order is important
        cmd = [
            "ffmpeg-4.2.1-win64-static/bin/ffmpeg",
            '-y',
            '-loglevel', 'error' if logfile == sp.PIPE else 'info',
            '-f', 'rawvideo',
            '-vcodec', 'rawvideo',
            '-s', '%dx%d' % (size[1], size[0]),
            '-pix_fmt', pixfmt,
            '-r', '%.02f' % fps,
            '-i', '-', '-an',
        ]
        cmd.extend([
            '-vcodec', codec,
            '-preset', preset,
        ])
        if ffmpeg_params is not None:
            cmd.extend(ffmpeg_params)
        if bitrate is not None:
            cmd.extend(['-b', bitrate])
        if threads is not None:
            cmd.extend(["-threads", str(threads)])

        if ((codec == 'libx264') and
                (size[0] % 2 == 0) and
                (size[1] % 2 == 0)):
            cmd.extend(['-pix_fmt', 'yuv420p'])
        cmd.extend([filename])

        popen_params = {"stdout": sp.DEVNULL,
                        "stderr": logfile,
                        "stdin": sp.PIPE}

        # This was added so that no extra unwanted window opens on windows
        # when the child process is created
        if os.name == "nt":
            popen_params["creationflags"] = 0x08000000  # CREATE_NO_WINDOW

        self.proc = sp.Popen(cmd, **popen_params)

    def write_frame(self, img_array):
        """Writes one frame in the file."""
        try:
            self.proc.stdin.write(img_array.tobytes())
        except IOError as err:
            _, ffmpeg_error = self.proc.communicate()
            error = (str(err) + ("\n\nMoviePy error: FFMPEG encountered "
                                 "the following error while writing file %s:"
                                 "\n\n %s" % (self.filename, str(ffmpeg_error))))

            if b"Unknown encoder" in ffmpeg_error:
                error = error + ("\n\nThe video export "
                                 "failed because FFMPEG didn't find the specified "
                                 "codec for video encoding (%s). Please install "
                                 "this codec or change the codec when calling "
                                 "write_videofile. For instance:\n"
                                 "  >>> clip.write_videofile('myvid.webm', codec='libvpx')") % (self.codec)
            elif b"incorrect codec parameters ?" in ffmpeg_error:
                error = error + ("\n\nThe video export "
                                 "failed, possibly because the codec specified for "
                                 "the video (%s) is not compatible with the given "
                                 "extension (%s). Please specify a valid 'codec' "
                                 "argument in write_videofile. This would be 'libx264' "
                                 "or 'mpeg4' for mp4, 'libtheora' for ogv, 'libvpx' for webm. "
                                 "Another possible reason is that the audio codec was not "
                                 "compatible with the video codec. For instance the video "
                                 "extensions 'ogv' and 'webm' only allow 'libvorbis' (default) as a "
                                 "video codec.") % (self.codec, self.ext)
            elif b"encoder setup failed" in ffmpeg_error:
                error = error + ("\n\nThe video export "
                                 "failed, possibly because the bitrate you specified "
                                 "was too high or too low for the video codec.")
            elif b"Invalid encoder type" in ffmpeg_error:
                error = error + ("\n\nThe video export failed because the codec "
                                 "or file extension you provided is not a video")

            raise IOError(error)

    def close(self):
        if self.proc:
            self.proc.stdin.close()
            if self.proc.stderr is not None:
                self.proc.stderr.close()
            self.proc.wait()
            self.proc = None

    # Support the Context Manager protocol, to ensure that resources are cleaned up.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()


if __name__ == '__main__':
    ## sample program for a GEV camera
    ## target is to write the YUV video data without further conversion
    ##
    import pypylon.pylon as py

    cam = py.InstantCamera(py.TlFactory.GetInstance().CreateFirstDevice())
    cam.Open()
    cam.PixelFormat = "YUV422Packed"

    with FFMPEG_VideoWriter("ffmpeg_demo.avi", (cam.Height(), cam.Width()),
                            fps=30, pixfmt="uyvy422") as writer:
        cam.StartGrabbingMax(1000)
        while cam.IsGrabbing():
            res = cam.RetrieveResult(1000)
            writer.write_frame(res.Array)
            print(res.BlockID)
            res.Release()
```
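One detail worth checking when piping raw frames like this: ffmpeg reads exactly width × height × 2 bytes per `uyvy422` frame, so the array handed to `write_frame` must match the `-s` geometry or frames will shear or desync. A quick, pure-Python sanity check (the 1920x1080 geometry is an assumed example; substitute your camera's `Width()`/`Height()`):

```python
def uyvy422_frame_bytes(width: int, height: int) -> int:
    """Bytes per frame for packed 4:2:2 YUV (uyvy422): 2 bytes per pixel.

    Each 2-pixel group carries two luma samples plus one Cb and one Cr
    sample, i.e. 4 bytes for 2 pixels.
    """
    return width * height * 2

# Assumed example geometry; substitute cam.Width() / cam.Height().
width, height = 1920, 1080
print(uyvy422_frame_bytes(width, height))  # bytes ffmpeg expects per frame on stdin
```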
@thiesmoeller I'm using your code to record videos from a Basler camera, but I don't know how to precisely set the total recording time, i.e. the number I need to pass to cam.StartGrabbingMax().
I calculated this number as fps * seconds_I_want_to_record, but it isn't working. When I use, for example, fps=24 and seconds=10, only ~5 seconds end up in the .avi file.
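A note on this symptom: a 10 s grab that plays back in ~5 s is what you get when the camera delivers fewer frames per second than the rate declared in the file header, because players compute duration as frames_written / declared_fps. A small sketch of that arithmetic (the 12 fps actual rate is an assumed illustration, not a measured value):

```python
def playback_seconds(frames_written: int, declared_fps: float) -> float:
    """Duration a player shows for a file with this many frames at this header fps."""
    return frames_written / declared_fps

declared_fps = 24
wall_clock_seconds = 10
actual_fps = 12  # assumed: camera/emulator delivering half the requested rate
frames = actual_fps * wall_clock_seconds

print(playback_seconds(frames, declared_fps))  # -> 5.0: the "missing" half
```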
```python
import datetime
import sys

import pypylon.pylon as pylon  # this import was missing in the original snippet


def initiate_and_setup_cam(fps=24):
    # enable emulation
    import os
    os.environ["PYLON_CAMEMU"] = "1"

    cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    cam.Open()
    cam.ImageFilename = img_dir
    cam.ImageFileMode = "On"        # enable image file test pattern
    cam.TestImageSelector = "Off"   # disable testpattern [ image file is "real-image"]
    cam.PixelFormat = "Mono8"       # choose one pixel format. camera emulation does conversion on the fly
    cam.Height = height
    cam.Width = width
    cam.AcquisitionFrameRateAbs.SetValue(fps)
    return cam


if __name__ == '__main__':
    total_record_time = 2 * 60  ## in seconds
    chunks_time = 10            ## in seconds
    fps = 24

    time_initiated = datetime.datetime.now()
    while True:
        time_initiated_chunk = datetime.datetime.now()
        chunk_formatted_time = time_initiated_chunk.strftime("%d-%m-%Y-%H-%M-%S")
        with FFMPEG_VideoWriter('./recorded-videos/' + chunk_formatted_time + '.avi',
                                (cam.Height(), cam.Width()), fps=24, pixfmt="rgba") as writer:
            while (time_initiated_chunk + datetime.timedelta(seconds=chunks_time) > datetime.datetime.now()):
                cam = initiate_and_setup_cam(fps)
                cam.StartGrabbingMax(fps)
                while cam.IsGrabbing():
                    res = cam.RetrieveResult(1000)
                    writer.write_frame(res.Array)
                    res.Release()
        if (time_initiated + datetime.timedelta(seconds=total_record_time) < datetime.datetime.now()):
            break
```
What am I doing wrong? Is there a way I can use a `while True` loop to record indefinitely (until a break condition, in this case)? If so, I could use `datetime` to break the loop.
I'm not using a real camera yet, just the camera emulator, but I don't think that should matter.
The fps problem was caused by the custom images I was using with the camera emulation. It seems the emulator can't produce frames faster than 20 fps when a custom image set is active.
To solve it, I deactivated the custom image set, and then the emulation worked fine.
@denisb411, do you mind posting how you managed to solve it? I have had issues maintaining a steady fps when turning JPGs into AVI with cv2. I have been trying to find a solution that doesn't store the images locally, but instead keeps a cache that continuously appends into the AVI file. I have tried the image_time * image_fps = num_of_images_to_take approach, but how can I store those images in an array, keep appending to it, and feed frames from that array through the cv2 writer?
Thanks in advance.
I'll share my code in case it helps someone. To cap the fps I am setting the AcquisitionFrameRate parameter to the desired value, but that might not work for higher values and might not be 100% precise; I'm not sure.
```python
import pypylon.pylon as pylon
from imageio import get_writer

fps = 5  # Hz
time_to_record = 60  # seconds
images_to_grab = fps * time_to_record

tlf = pylon.TlFactory.GetInstance()
devices = tlf.EnumerateDevices()
cam = pylon.InstantCamera(tlf.CreateDevice(devices[0]))
cam.Open()
print("Using device ", cam.GetDeviceInfo().GetModelName())
cam.AcquisitionFrameRate.SetValue(fps)

writer = get_writer(
    'output-filename.mkv',   # mkv players often support H.264
    fps=fps,                 # FPS is in units Hz; should be real-time.
    codec='libx264',         # When used properly, this is basically "PNG for video" (i.e. lossless)
    quality=None,            # disables variable compression
    ffmpeg_params=[          # compatibility with older library versions
        '-preset',           # set to fast, faster, veryfast, superfast, ultrafast
        'fast',              # for higher speed but worse compression
        '-crf',              # quality; set to 0 for lossless, but keep in mind
        '24'                 # that the camera probably adds static anyway
    ]
)

print(f"Recording {time_to_record} second video at {fps} fps")
cam.StartGrabbingMax(images_to_grab, pylon.GrabStrategy_OneByOne)
while cam.IsGrabbing():
    with cam.RetrieveResult(1000, pylon.TimeoutHandling_ThrowException) as res:
        if res.GrabSucceeded():
            img = res.Array
            writer.append_data(img)
            print(res.BlockID, end='\r')
            res.Release()
        else:
            print("Grab failed")
            # raise RuntimeError("Grab failed")

print("Saving...", end=' ')
writer.close()  # finalize the file (missing in the original snippet)
cam.StopGrabbing()
cam.Close()
print("Done")
```
I think precise image acquisition according to the desired fps might be possible using triggers but I have not looked into that. This might have some answers: https://docs.baslerweb.com/resulting-frame-rate
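As a rough intuition for the linked page: the achievable ("resulting") frame rate is capped by the slowest of several limits, typically exposure time, sensor readout, and interface bandwidth. A simplified sketch of that relationship (all the numbers below are assumptions for illustration, not specs of any particular camera):

```python
def resulting_fps(exposure_s: float, sensor_max_fps: float, bandwidth_max_fps: float) -> float:
    """Simplified model of Basler's 'resulting frame rate':
    the camera runs at the slowest of the exposure-, readout-, and
    bandwidth-imposed limits (see docs.baslerweb.com for the full story)."""
    return min(1.0 / exposure_s, sensor_max_fps, bandwidth_max_fps)

# Assumed numbers for illustration only:
print(resulting_fps(exposure_s=0.05, sensor_max_fps=30, bandwidth_max_fps=25))
# a 50 ms exposure alone caps the camera at 20 fps
```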
I am going to share a problem I encountered: using the RGB8 pixel format I could not reach the 14 fps at full resolution stated in the camera's documentation. I solved it by using another format (YCbCr422_8) and converting the grabbed image via OpenCV: `img = cv2.cvtColor(res.Array, cv2.COLOR_YUV2RGB_YUY2)`.
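For anyone wiring this up: YUY2 is a packed 4:2:2 layout where each four-byte group `Y0 U Y1 V` encodes two pixels sharing one chroma pair. A dependency-free sketch of the per-pair conversion, using a BT.601 full-range approximation (the same family of math `cv2.COLOR_YUV2RGB_YUY2` applies across a whole frame; the exact coefficients OpenCV uses may differ slightly):

```python
def yuy2_pair_to_rgb(y0, u, y1, v):
    """Convert one packed YUY2 group (Y0 U Y1 V = two pixels) to two RGB
    triples. BT.601 full-range approximation for illustration."""
    def one(y):
        r = y + 1.402 * (v - 128)
        g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
        b = y + 1.772 * (u - 128)
        return tuple(max(0, min(255, int(c))) for c in (r, g, b))
    return one(y0), one(y1)

# Neutral chroma (128) must reproduce the luma values as gray:
print(yuy2_pair_to_rgb(100, 128, 200, 128))  # -> ((100, 100, 100), (200, 200, 200))
```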
Hi @mijalapenos, frame rates are typically given for the highest-throughput scenario, which is raw Bayer transport. In your use case of recording H.264 video, transferring RGB over the wire, or even converting to RGB before encoding, wastes resources. The video encoders work in YUV format, so keeping the data in that format gives you the highest performance, both in fps on the wire and in system load on your host.
Hello @thiesmoeller,
I see your point; I spent at least half a day trying to achieve this. However, I could not find a pixel format in ffmpeg that works with any of the raw formats. I always get a message such as
Incompatible pixel format 'bayer_gbrg8' for codec 'libx264', auto-selecting format 'yuv444p'
and then the video comes out black and white. Do you have any suggestions for resolving this?
Thank you
The Bayer format would give you the highest on-the-wire frame rate, but also a high system load (Bayer interpolation is computationally expensive). What you tried is seen by libx264 as monochrome data.
So you have two options (maybe more ;-) ):
- Average fps and system load: use the camera's YUV422 output and configure libx264 to consume this raw data directly. This skips any conversion step on the received video data before encoding. See https://github.com/basler/pypylon/issues/113#issuecomment-543774545 where I documented using ffmpeg on Windows with the camera configured to output YUV422.
- High fps and high system load: use the camera's raw Bayer format and do the Bayer interpolation on your host. You could use either the pypylon ImageFormatConverter ( https://github.com/basler/pypylon/blob/master/samples/utilityimageformatconverter.py ) or an OpenCV function for the conversion step to YUV422. Then use ffmpeg as above.
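For the first option, the ffmpeg invocation boils down to declaring the raw input format so that no conversion happens before encoding. A sketch of the argument list (the binary name and the 1920x1080 @ 30 fps geometry are assumptions; adapt them to your install and camera):

```python
def ffmpeg_yuv422_cmd(width, height, fps, outfile, ffmpeg="ffmpeg"):
    """Build an ffmpeg command that reads packed 4:2:2 frames from stdin
    and encodes them to H.264 without an intermediate RGB conversion."""
    return [
        ffmpeg, "-y",
        "-f", "rawvideo",        # raw frames arrive on stdin
        "-pix_fmt", "uyvy422",   # matches the camera's YUV422Packed output
        "-s", f"{width}x{height}",
        "-r", f"{fps:.2f}",
        "-i", "-",               # read from the pipe
        "-an",                   # no audio track
        "-vcodec", "libx264",
        "-pix_fmt", "yuv420p",   # widely playable output format
        outfile,
    ]

cmd = ffmpeg_yuv422_cmd(1920, 1080, 30, "out.avi")
print(" ".join(cmd))
```

You would then pass `cmd` to `subprocess.Popen(..., stdin=subprocess.PIPE)` and write each grab result's `tobytes()` to stdin, exactly as the FFMPEG_VideoWriter class above does.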
I was aiming for the first option, but my camera (daA2500-14uc) does not support YUV422Packed, just YCbCr422_8, which sadly does not seem to have a corresponding pixel format in ffmpeg. Anyway, thank you for your assistance!
It is supported: the FourCC code is YUY2.
Seems like `-pix_fmt yuyv422` works only with `-vcodec rawvideo`, which generates extremely large files (10 minutes ~ 74 GB). Other encoders switch to another format and therefore produce black-and-white footage.
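That file size is consistent with raw 4:2:2 arithmetic: bytes = width × height × 2 × fps × seconds, with no compression and negligible container overhead. A quick check (the 2592x1944 @ 14 fps numbers are the daA2500-14uc's nominal full-resolution mode, assumed here; the reported 74 GB suggests a somewhat lower actual resolution or frame rate):

```python
def raw_422_gigabytes(width, height, fps, seconds):
    """Uncompressed 4:2:2 stream size in GB: 2 bytes/pixel, no container overhead."""
    return width * height * 2 * fps * seconds / 1e9

# Assumed: daA2500-14uc nominal full resolution at 14 fps, for 10 minutes.
print(round(raw_422_gigabytes(2592, 1944, 14, 600), 1))  # -> 84.7
```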