Significant delay when sending RTP audio via Ingress
Hi,
I have a .NET application that receives an RTP audio stream in PCM mulaw format (8000 Hz).
My goal is to send this audio stream into a LiveKit room using an Ingress.
Steps performed
- Successfully created an Ingress via the LiveKit API.
- Used FFmpeg to send the RTP audio stream to the Ingress endpoint with the following command:
ffmpeg -fflags nobuffer -flags low_delay -f mulaw -ar 8000 -ac 1 -i pipe:0 -c:a aac -b:a 24k -f flv {rtmpUrl}
where {rtmpUrl} is the URL returned by the Ingress creation.
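For reference, the piping setup above can be sketched as follows. Python is used here only for illustration (the original app is .NET); the RTMP URL and the RTP receive loop are placeholders:

```python
import subprocess  # for the (commented-out) usage sketch below

def build_ffmpeg_cmd(rtmp_url: str) -> list[str]:
    """The FFmpeg command from the post: raw mu-law 8 kHz mono on stdin,
    AAC 24 kbps out over FLV/RTMP (what the RTMP Ingress expects)."""
    return [
        "ffmpeg",
        "-fflags", "nobuffer",   # don't pre-fill the demuxer buffer
        "-flags", "low_delay",   # prefer low-delay codec paths
        "-f", "mulaw",           # raw mu-law, no container
        "-ar", "8000",           # 8000 Hz sample rate
        "-ac", "1",              # mono
        "-i", "pipe:0",          # read the audio from stdin
        "-c:a", "aac",
        "-b:a", "24k",
        "-f", "flv",
        rtmp_url,
    ]

# Hypothetical usage from an RTP receive loop (rtp_payloads() is a placeholder
# for however your app yields the raw mu-law bytes of each RTP packet):
#
# proc = subprocess.Popen(build_ffmpeg_cmd(rtmp_url), stdin=subprocess.PIPE)
# for payload in rtp_payloads():
#     proc.stdin.write(payload)
```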
Issue
FFmpeg connects successfully to the Ingress endpoint.
However, there is a delay of approximately 10-15 seconds between sending the audio to the Ingress and hearing it inside the room.
This delay occurs both when running the setup locally and when using LiveKit’s cloud service.
By comparison, when sending the same RTP stream to a standard RTMP server running in Docker and playing the stream back, the delay is at most 1 second.
Environment
Using the latest LiveKit server and Ingress Docker images.
The stream is raw RTP PCM mulaw 8000 Hz.
RTMP streaming typically introduces roughly 2-20 seconds of delay; use WHIP to get millisecond-level latency.
But WHIP currently does not support TURN when I publish from OBS; only STUN is available, and it fails to connect.
When you publish via RTMP to a LiveKit Ingress, the server has to decode and transcode the audio because it doesn’t support receiving Opus directly. This makes some delay unavoidable. You can try to reduce latency by lowering the publisher buffers (e.g., in FFmpeg or OBS), but in practice, even with very low buffers, audio-only latency rarely drops below about 6 seconds.
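As a sketch of the "lower the publisher buffers" suggestion, these are FFmpeg options commonly added to shrink client-side queueing. Note they only reduce FFmpeg's own buffering; the server-side decode/transcode delay in the Ingress is unaffected:

```python
def low_latency_flags() -> list[str]:
    """FFmpeg flags often used to reduce publisher-side buffering.
    Effectiveness varies; verify against your FFmpeg version."""
    return [
        "-fflags", "nobuffer",   # start without filling the input buffer
        "-flags", "low_delay",   # low-delay codec paths
        "-flush_packets", "1",   # flush muxer output after every packet
        "-max_delay", "0",       # don't let the muxer queue packets for interleaving
    ]
```

These would be spliced into the ffmpeg argument list before the input/output options.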
I’m not sure about your exact use case, but if you need lower latency, you could use FFmpeg or GStreamer to publish the audio via WHIP/WebRTC. This allows sending Opus natively and avoids the RTMP → Ingress transcoding, reducing latency to sub-second levels.
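One possible shape of the WHIP route, assuming GStreamer with the `whipsink` element from gst-plugins-rs (the webrtchttp plugin) is installed; the element and property names here should be verified against your GStreamer version, and the endpoint/token are placeholders:

```python
def build_whip_pipeline(whip_url: str, token: str) -> str:
    """A gst-launch pipeline that encodes a test tone to Opus and publishes
    it over WHIP, so no RTMP -> Ingress transcoding is needed. 'whipsink'
    comes from gst-plugins-rs; check its docs for your version."""
    return (
        "gst-launch-1.0 audiotestsrc is-live=true "
        "! audioconvert ! audioresample "
        "! opusenc "        # Opus is sent natively over WebRTC
        "! rtpopuspay "
        f"! whipsink whip-endpoint={whip_url} auth-token={token}"
    )
```

In a real setup you would replace `audiotestsrc` with your actual audio source (e.g. a UDP/RTP source for the incoming mu-law stream).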