
[BUG] How to change the FPS with the camera API

Open krishgcek opened this issue 4 months ago • 9 comments

Describe the bug

I am trying to set up a CCTV camera using a Raspberry Pi (2B+) with a Noir OV5647 camera. My goal is to change the framerate dynamically to control exposure at night.

During daytime → FPS = 16

During nighttime → FPS = 10

However, the H.264 SPS header always claims 30 FPS, regardless of the actual framerate. This causes ffmpeg to publish the video at roughly half speed.

I also tried using -vsync vfr, but the problem persists.
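A back-of-the-envelope check of the slowdown (illustrative, not from the original report), assuming the muxer paces output by the SPS framerate rather than the true one:

```python
# If a muxer trusts the header framerate instead of the real one, the
# apparent playback/publish speed scales by actual_fps / header_fps.

def playback_speed(actual_fps: float, header_fps: float) -> float:
    """Speed factor seen when the header overstates the framerate."""
    return actual_fps / header_fps

print(round(playback_speed(16, 30), 3))  # 0.533 -- 16 fps stream labelled 30 fps
print(round(playback_speed(10, 30), 3))  # 0.333 -- 10 fps stream labelled 30 fps
```

The 16 fps case matches the `speed=0.564x` that ffmpeg reports in the log below.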

To Reproduce

Minimal script that reproduces the issue:

ffmpeg_cmd = [
    "ffmpeg",
    "-fflags", "+genpts",
    "-analyzeduration", "5000000",
    "-probesize", "5000000",
    "-vsync", "vfr",
    "-thread_queue_size", "512",
    "-dts_delta_threshold", "1",
    "-f", "h264", "-i", "pipe:0",
    "-thread_queue_size", "512",
    "-f", "pulse", "-i", pulse_device,
    "-af", (
        "aresample=async=1,adelay=5000|5000,"
        "highpass=f=200,lowpass=f=3000,"
        "afftdn=nf=-35,equalizer=f=440:t=q:w=1:g=-20,"
        "volume=10dB"
    ),
    "-c:v", "copy",
    "-fflags", "nobuffer",
    "-c:a", "libopus", "-b:a", "32k", "-ac", "1", "-ar", "16000",
    "-use_wallclock_as_timestamps", "1",
    "-f", "rtsp", "-rtsp_transport", "tcp",
    "rtsp://127.0.0.1:8554/mystream"
]

ffmpeg = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)

initial_fps = 16
frame_us = int(1_000_000 / initial_fps)

video_config = picam2.create_video_configuration(
    main={"size": (1920, 1080)},
    controls={
        "FrameDurationLimits": (frame_us, frame_us),
        "Sharpness": 4.0,
        "Contrast": 2.0,
        "Saturation": 1.3,
    }
)

video_config["transform"] = libcamera.Transform(hflip=1, vflip=1)
picam2.configure(video_config)

Expected behavior

ffmpeg should publish the RTSP stream at normal (1×) speed, matching the dynamically set framerate.

Actual behavior

At 16 FPS → works correctly.

At 10 FPS → ffmpeg publishes the RTSP stream at half speed.

When using rpicam-vid, the H.264 header is set properly and playback speed is correct. However, I need to be able to change FPS dynamically without RTSP disconnection.

Console output / screenshots

If applicable, please see attached logs/screenshots that show the error or playback issue.

Hardware

Raspberry Pi 2B+

OV5647 Noir camera

Additional context

Problem only occurs when framerate is reduced dynamically.

With a fixed framerate (e.g., 16 FPS), everything works fine.

With dynamic framerate (16 → 10 FPS), ffmpeg stream playback slows down.

krishgcek avatar Aug 28 '25 07:08 krishgcek

Hi, and thanks for the question. I wasn't sure if your example was actually a fully self-contained one that I could run, as I couldn't see exactly how frames were getting into your FFmpeg process. Is Picamera2 doing the encoding, or is FFmpeg doing that? Also, if you could use code tags (triple backticks on a line) so as to preserve the indentation, that would be helpful.

If Picamera2 is doing the encoding, the H.264 encoder has an enable_sps_framerate parameter. You could try setting that to False - which will stop a constant framerate being encoded in the compressed bitstream (which is in any case really not the place for it).

Can you also say more about how you know the stream is coming out wrong? There's always a chance the playback software is messing things up, so it might be worth capturing and inspecting the RTSP output.

But otherwise I'm not sure what to suggest as I can't claim any particular expertise with RTSP or ffmpeg.

I tend to use MediaMTX for RTSP streaming these days (see section 9.1.2 of the manual), though I can't vouch for how well it might run on a Pi 2B+. It should work with audio too (you would have to set encoder.audio = True). Though again, I don't know what it might do with variable framerates.

davidplowman avatar Aug 28 '25 08:08 davidplowman

import time
import subprocess
import numpy as np
from picamera2 import Picamera2, MappedArray
from picamera2.encoders import H264Encoder
from picamera2.outputs import FileOutput
import libcamera

# === PulseAudio device ===
pulse_device = "webrtc_mic"

# === FFmpeg command ===
ffmpeg_cmd = [
    "ffmpeg",
    "-fflags", "+genpts",
    "-analyzeduration", "5000000",
    "-probesize", "5000000",
    "-vsync", "vfr",
    "-thread_queue_size", "512",
    "-dts_delta_threshold", "1",
    "-f", "h264", "-i", "pipe:0",
    "-c:v", "copy",
    "-fflags", "nobuffer",
    "-f", "rtsp", "-rtsp_transport", "tcp",
    "rtsp://127.0.0.1:8554/mystream"
]
ffmpeg = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)

# === Camera setup ===
tuning = Picamera2.load_tuning_file("/usr/share/libcamera/ipa/rpi/vc4/ov5647_noir.json")
picam2 = Picamera2(tuning=tuning)

# Initial FPS and controls
initial_fps = 16
frame_us = int(1_000_000 / initial_fps)
video_config = picam2.create_video_configuration(
    main={"size": (1920, 1080)},
    controls={
        "FrameDurationLimits": (frame_us, frame_us),
        "Sharpness": 4.0,
        "Contrast": 2.0,
        "Saturation": 1.3,
    }
)

# Apply hflip + vflip
video_config["transform"] = libcamera.Transform(hflip=1, vflip=1)
picam2.configure(video_config)


#picam2.start()

# === Encoder and recording ===
encoder = H264Encoder(bitrate=1800000, repeat=False, enable_sps_framerate=False, framerate=16.0)

picam2.start_recording(encoder, FileOutput(ffmpeg.stdin))


# === FPS and grayscale helpers ===
def set_fps(us_per_frame):
    picam2.set_controls({"FrameDurationLimits": (us_per_frame, us_per_frame)})

def set_grayscale(enabled):
    picam2.set_controls({"Saturation": 0.0 if enabled else 1.3})


# === Image controls ===

def set_contrast(value):
    picam2.set_controls({"Contrast": value})

def set_gain(value):
    picam2.set_controls({"AnalogueGain": value})

def set_shutter(us):
    picam2.set_controls({"ExposureTime": us})

def set_auto_exposure():
    # Reset to automatic exposure/gain
    picam2.set_controls({"AeEnable": True})

def set_manual_exposure():
    # Disable auto exposure to allow manual shutter/gain
    picam2.set_controls({"AeEnable": False})
    
# === Main loop: adjust FPS dynamically ===
try:
    while True:
        now = time.localtime()
        h, m = now.tm_hour, now.tm_min

        if 6 <= h < 17:
            set_fps(62500)              # 16 fps
            set_grayscale(False)
            set_auto_exposure()         # let camera handle exposure
            set_contrast(1.4)           # strong contrast
        elif h == 17 and m >= 30:
            set_fps(83333)              # 12 fps
            set_manual_exposure()
            set_contrast(0.7)
            set_gain(9)                 # boost brightness
            set_shutter(2800000) 
            set_grayscale(False)
        else:
            # === Nighttime ===
            set_fps(100000)           # 10 fps
            set_grayscale(True)
            set_manual_exposure()
            set_contrast(0.7)
            set_gain(9)                 # boost brightness
            set_shutter(2800000)        # ~2.8s exposure

        time.sleep(30)

except KeyboardInterrupt:
    print("Stopping...")

finally:
    picam2.stop_recording()
    picam2.stop()
    ffmpeg.stdin.close()
    ffmpeg.wait()

Here is the minimal script. I tried setting enable_sps_framerate to False, but the header still has 30 fps.

log from ffmpeg:

-vsync is deprecated. Use -fps_mode
Input #0, h264, from 'pipe:0':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080, 30 fps, 30 tbr, 1200k tbn
Output #0, rtsp, to 'rtsp://127.0.0.1:8554/mystream':
  Metadata:
    encoder         : Lavf59.27.100
  Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080, q=2-31, 30 fps, 30 tbr, 90k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
[rtsp @ 0x209ba00] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
^CStopping...
frame= 2821 fps= 17 q=-1.0 Lsize=N/A time=00:01:33.99 bitrate=N/A speed=0.564x

krishgcek avatar Aug 28 '25 09:08 krishgcek

I think the first thing to do might be to remove the pipe to ffmpeg, and just capture a few of those H.264 frames to a file. Then we can look at that with an H.264 bitstream parser and see what's actually in there. Would you be able to do that and post the raw H.264 bitstream here? Thanks.
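For reference, a minimal sketch of the kind of inspection I mean (illustrative, not a full H.264 parser): scan the Annex-B bitstream for start codes and list the NAL unit types, so you can locate the SPS (type 7), which is where the VUI timing/framerate fields live.

```python
# Minimal Annex-B NAL scanner: lists NAL unit types so the SPS (type 7)
# can be located for closer inspection with a real bitstream parser.

NAL_NAMES = {1: "slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS"}

def nal_units(data: bytes):
    """Yield (offset, nal_type) for each Annex-B start code in `data`."""
    i = 0
    while True:
        i = data.find(b"\x00\x00\x01", i)
        if i == -1 or i + 3 >= len(data):
            return
        nal_type = data[i + 3] & 0x1F  # low 5 bits of the NAL header byte
        yield i, nal_type
        i += 3

# Example on a synthetic buffer: SPS, PPS, then an IDR slice header.
buf = (b"\x00\x00\x00\x01\x67\x64\x00\x28"
       b"\x00\x00\x00\x01\x68\xee\x3c\x80"
       b"\x00\x00\x00\x01\x65\x88\x84\x00")
for off, t in nal_units(buf):
    print(off, NAL_NAMES.get(t, f"type {t}"))  # prints SPS, PPS, IDR slice
```

You could run something like this over a captured .h264 file to confirm an SPS is present, then decode its VUI fields with a proper parser.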

davidplowman avatar Aug 28 '25 09:08 davidplowman

Ok, I will try to add that. But rpicam-vid is working fine and sets the header properly for ffmpeg. If I add -r 16, the frames also come through nicely, with audio in sync and no jitter or other errors. I think ffmpeg is getting the framerate from the SPS header and waiting for an additional 14 frames to arrive, which reduces the publishing speed. So if the header were set to 16 instead of 30, the issue should be resolved (just a guess). I ran ffprobe on the final RTSP stream:

streams:
  0:
    index : 0
    codec_name : h264
    codec_long_name : H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    profile : High
    codec_type : video
    codec_tag_string : [0][0][0][0]
    codec_tag : 0x0000
    width : 1920
    height : 1080
    coded_width : 1920
    coded_height : 1080
    closed_captions : 0
    film_grain : 0
    has_b_frames : 0
    pix_fmt : yuv420p
    level : 40
    color_range : tv
    color_space : bt709
    color_transfer : bt709
    color_primaries : bt709
    chroma_location : left
    field_order : progressive
    refs : 1
    is_avc : false
    nal_length_size : 0
    r_frame_rate : 30/1
    avg_frame_rate : 30/1
    time_base : 1/90000
    bits_per_raw_sample : 8
    extradata_size : 54
    disposition : default=0, dub=0, original=0, comment=0, lyrics=0, karaoke=0, forced=0, hearing_impaired=0, visual_impaired=0, clean_effects=0, attached_pic=0, timed_thumbnails=0, captions=0, descriptions=0, metadata=0, dependent=0, still_image=0
  1:
    index : 1
    codec_name : opus
    codec_long_name : Opus (Opus Interactive Audio Codec)
    codec_type : audio
    codec_tag_string : [0][0][0][0]
    codec_tag : 0x0000
    sample_fmt : fltp
    sample_rate : 48000
    channels : 2
    channel_layout : stereo
    bits_per_sample : 0
    initial_padding : 0
    r_frame_rate : 0/0
    avg_frame_rate : 0/0
    time_base : 1/48000
    start_pts : 3210
    start_time : 0.066875
    extradata_size : 19
    disposition : default=0, dub=0, original=0, comment=0, lyrics=0, karaoke=0, forced=0, hearing_impaired=0, visual_impaired=0, clean_effects=0, attached_pic=0, timed_thumbnails=0, captions=0, descriptions=0, metadata=0, dependent=0, still_image=0
format:
  filename : removed
  nb_streams : 2
  nb_programs : 0
  format_name : rtsp
  format_long_name : RTSP input
  start_time : 0.066875
  probe_score : 100
  tags :
    title : No Name
    url : removed

krishgcek avatar Aug 28 '25 10:08 krishgcek

I've had a look at this on a Pi 4 (which is the same underlying platform as earlier Pis). I've discovered the following:

  • If you specify a framerate and ask for the SPS framerate to be enabled, then the H.264 bitstream includes the framerate you asked for.
  • Actually, it lists double the framerate because I remember something about it counting fields, not frames, even with progressive video. But anyway, it uses the number you give it.
  • If you don't ask for the SPS framerate to be enabled, the H.264 encoder seems to include it anyway, defaulting to 30 fps. I'm not convinced this is super-helpful behaviour, but it's down in the firmware on the GPU and I can't change it.
  • I can't find a way of not having the SPS framerate.
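To illustrate the doubling above (a sketch based on the H.264 VUI timing semantics; the helper names are mine): the SPS carries num_units_in_tick and time_scale, and a tick corresponds to a field, so for progressive video the framerate works out as time_scale / (2 * num_units_in_tick).

```python
# H.264 VUI timing: a "tick" is half a frame period (a field), so for
# progressive content the framerate the SPS implies is:
#     fps = time_scale / (2 * num_units_in_tick)

def vui_fps(time_scale: int, num_units_in_tick: int) -> float:
    """Framerate implied by the SPS VUI timing fields (progressive video)."""
    return time_scale / (2 * num_units_in_tick)

def vui_fields_for(fps: int):
    """One convention an encoder might use for a given framerate."""
    return 2 * fps, 1  # (time_scale, num_units_in_tick) -- hence the "double" number

print(vui_fps(*vui_fields_for(16)))  # 16.0
print(vui_fps(60, 1))                # 30.0 -- consistent with a 30 fps default
```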

A large part of the difficulty here is, I think, that H.264 elementary streams are not a proper container format. Frames don't have their own timestamps and ffmpeg re-timestamps everything (with added jitter!) when it first sees them. Only it may be believing the SPS framerate that always appears to be present.

FFmpeg versions >= 6.1 apparently have the ability to strip out this timing data (without re-encoding), but the version on the Pi is older than that. Not sure what the situation will be when Trixie comes out.

I wonder if you'd have more luck using an actual container format? The H.264 bitstream will still have the timing information, but the receiver might believe the true timestamps over the elementary bitstream. For example, you could use a PyavOutput to write an mpegts stream, and pipe that out to ffmpeg (though if using a PyavOutput, you might find you could output RTSP directly, as per the documentation I linked earlier).

davidplowman avatar Aug 28 '25 13:08 davidplowman

If it's of any interest, I tried the following:

  1. I downloaded the latest MediaMtx for my Pi 4. The Pi 2B+ would want the armv7 version, I think.

  2. Put this into my mediamtx.yaml file:

paths:
  cam:
    runOnDemand: python /home/pi/stream.py
  3. And created the file /home/pi/stream.py:
import time
from picamera2 import Picamera2
from picamera2.encoders import H264Encoder
from picamera2.outputs import PyavOutput

picam2 = Picamera2()
main = {'size': (1280, 720), 'format': 'YUV420'}
controls = {'FrameRate': 30}
config = picam2.create_video_configuration(main, controls=controls)
picam2.configure(config)

encoder = H264Encoder(bitrate=10000000)
encoder.audio = True
output = PyavOutput("rtsp://127.0.0.1:8554/cam", format="rtsp")

print("Camera starting")
picam2.start_recording(encoder, output)

try:
    while True:
        time.sleep(0.5) 
except KeyboardInterrupt:
    print("Camera stopping")

picam2.stop_recording()
  4. Then just run mediamtx mediamtx.yaml, after which you should be able to play the RTSP stream using ffplay or vlc (or you could play it in a browser using WebRTC if you preferred; this gives you lower latency).

I think it plays OK at other framerates, even though the SPS is probably saying 30fps. I'm not able to try it on a Pi 2B+ - your mileage may vary.

davidplowman avatar Aug 28 '25 14:08 davidplowman

I tried the PyavOutput, but it maxes out the CPU; the Pi 2B+ doesn't have enough CPU to keep up with the operation.

krishgcek avatar Aug 28 '25 17:08 krishgcek

Hmm, well that will probably require a bit of trouble-shooting. The PyavOutput is just Python bindings for ffmpeg/libav, so it shouldn't do anything worse than ffmpeg itself, but one never knows without investigating.

Were you running MediaMTX as well? Presumably your original ffmpeg command is pushing the RTSP stream to an RTSP server on your system (is that right?), and if so, which server is it?

I also assume you're running Raspberry Pi OS Lite? I expect booting the desktop OS is hopeless on a Pi 2B these days.

Another thought might be to use rpicam-vid instead of Python - at least you avoid the Python overheads. It ought to be straightforward to tune the AGC/AEC to give you long exposures automatically when the light levels drop. Let me know if you'd like to try that.

From what you say, I see that you've tried rpicam-vid - what command line were you using for it? I'd be interested to see why that didn't have the same problems.

davidplowman avatar Aug 28 '25 18:08 davidplowman

I compiled ffmpeg locally to the latest version and the issue is resolved. It uses the GPU and works nicely.

krishgcek avatar Aug 29 '25 07:08 krishgcek