
No video/audio between clients over the Internet

Status: Open. BlinkyStitt opened this issue 5 years ago • 8 comments

I set up an Ubuntu 18.04 server on AWS with ports 22, 80, 443, 4443, and 10000/udp open to the world.
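
For anyone reproducing this, the equivalent inbound rules via the AWS CLI would look roughly like the following sketch; the security-group ID is a placeholder.

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 10000 --cidr 0.0.0.0/0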

Then I set up a DNS A record to point meet.myfqdn.com to the server's external IP.

On the server, I ran apt-get update && apt-get upgrade, rebooted, then installed Docker. Then I followed the documentation to get everything running:

git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet
cp env.example .env
./gen-passwords.sh
vim .env

Then I edited the config:

HTTP_PORT=80
HTTPS_PORT=443

# this defaulted to UTC, but that gave a warning in the logs. This might deserve a separate issue.
TZ=Etc/UTC

PUBLIC_URL=https://meet.myfqdn.com
DOCKER_HOST_ADDRESS=my.public.ip.address
ENABLE_LETSENCRYPT=1
LETSENCRYPT_DOMAIN=meet.myfqdn.com
[email protected]
ENABLE_AUTH=0
ENABLE_GUESTS=1
AUTH_TYPE=internal
JVB_TCP_HARVESTER_DISABLED=false
ENABLE_RECORDING=0
ENABLE_HTTP_REDIRECT=1

(EDIT: Actually, the first time I tried it, JVB_TCP_HARVESTER_DISABLED was not modified, and it defaults to true.)
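
A quick way to confirm which values actually reached the container after editing .env; a sketch, assuming the compose service is named jvb as in the ps output below (changes to .env only apply once the containers are recreated):

docker-compose up -d --force-recreate
docker-compose exec jvb env | grep -E 'DOCKER_HOST_ADDRESS|JVB_TCP_HARVESTER_DISABLED'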

I set up fresh configs:

rm -rf ~/.jitsi-meet-cfg
mkdir -p ~/.jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody,jicofo,jvb,jigasi,jibri}
docker-compose up -d

Everything is up:

$ docker-compose ps
          Name              Command   State                        Ports                      
----------------------------------------------------------------------------------------------
dockerjitsimeet_jicofo_1    /init     Up                                                      
dockerjitsimeet_jvb_1       /init     Up      0.0.0.0:10000->10000/udp, 0.0.0.0:4443->4443/tcp
dockerjitsimeet_prosody_1   /init     Up      5222/tcp, 5269/tcp, 5280/tcp, 5347/tcp          
dockerjitsimeet_web_1       /init     Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp     
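
Since UDP has no handshake, the dependable way to verify that port 10000 is actually reachable end-to-end is to capture on the server while probing from an outside client; a sketch, reusing the hostname from above:

# on the server: watch for the probe arriving (any interface)
sudo tcpdump -ni any udp port 10000

# on an outside client: send one datagram
echo probe | nc -u -w1 meet.myfqdn.com 10000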

At this point, I can connect with 3 clients and they can all chat, but none of them receive the others' video or audio.

I've tried with both 2 and 3 clients, with peer-to-peer enabled and disabled, with no success.

I looked through some GitHub issues and then added some config for NAT_HARVESTER (found in a few GitHub issues, not the docs):

$ docker-compose stop
$ vim ~/.jitsi-meet-cfg/jvb/sip-communicator.properties
org.jitsi.videobridge.NAT_HARVESTER_LOCAL_ADDRESS=my.local.ip.address
org.jitsi.videobridge.NAT_HARVESTER_PUBLIC_ADDRESS=my.external.ip.address
$ docker-compose up -d
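
To confirm which addresses the bridge actually picked up after the restart, the JVB log can be filtered; a sketch (the exact log wording varies by version):

docker-compose logs jvb | grep -Ei 'harvester|discovered|public address'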

Let me know what logs would be useful. I'm not seeing any warnings or errors. The only relevant-looking line is:

"Not merging A/V streams from [email protected]/67897fa4 to [email protected]/6e23a5b7"

That definitely looks related to me. I'm pretty sure I need the streams merged, though I'm new to this and not totally sure, and I have no idea why the A/V streams are not being merged.

I was originally trying to get this working on a server on my home network. I followed the same steps there and was able to get clients sharing video/audio, but only if they were both behind my firewall. If they were both outside the firewall (or one inside and one outside), then this setup failed the same way my current AWS setup is failing. This makes me think that the NAT traversal stuff is definitely part of the problem.

I'm seeing old docs saying to open ports 10000-20000 UDP. That's outdated, right? That does not seem safe.

I do not think my browser (Brave / #85) is the problem since the video worked when I had the server running on my LAN.

BlinkyStitt avatar Apr 22 '20 19:04 BlinkyStitt

Hello @WyseNynja. Have you looked at the custom config for running behind NAT or on a LAN (https://github.com/jitsi/docker-jitsi-meet#running-behind-nat-or-on-a-lan-environment) and set the DOCKER_HOST_ADDRESS value?

Another thing is that you can share the JVB log to identify the public IP that the STUN server discovers for your installation. Also try removing the STUN server settings. If you are behind a router, it's probably the router that blocks the packets (that's a supposition).

mamiapatrick avatar Apr 24 '20 10:04 mamiapatrick

Thanks for the reply. I think my original post already covers all your recommendations though.

I set DOCKER_HOST_ADDRESS according to those docs. Search for DOCKER_HOST_ADDRESS=my.public.ip.address above to see it among the config changes I made.

My expected public IP is in JVB's logs.

What do you mean by "try removing the STUN server settings"? Do you mean NAT_HARVESTER_LOCAL_ADDRESS? I've tried the server both with and without that setting, and neither worked.

This is on AWS. I've listed the ports that I opened.

BlinkyStitt avatar Apr 24 '20 17:04 BlinkyStitt

I'm seeing the same issue. The web page loads, but video fails to start. I see a "Connection Refused" error in the browser while accessing "https://localhost/http-bind?room=xyz". How do I debug the problem?
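
A first debugging step could be checking whether the web container answers the BOSH endpoint at all; a sketch, with -k because the bundled certificate may be self-signed:

# any HTTP response (even an error page) means the web container is listening;
# "connection refused" means nothing is bound on port 443
curl -vk https://localhost/http-bind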

justvisiting avatar Apr 25 '20 23:04 justvisiting

That sounds like a different issue. I don't see "connection refused".

BlinkyStitt avatar Apr 27 '20 16:04 BlinkyStitt

I have a similar issue (including the "not merging A/V" message) on a non-AWS setup. I installed stable-4416 on a CentOS virtual server. A local coturn is running, set in config.js and prosody, and p2p mode is enabled in config.js. I use two test clients at home behind a common NAT (from my ISP). A test with a public Jitsi Meet server was a success.

DOCKER_HOST_ADDRESS=178.x.x.x
JVB_STUN_SERVERS=178.x.x.x:3478
JVB_TCP_HARVESTER_DISABLED=true

STUN discovery works:

jvb_1      | org.ice4j.ice.harvest.StunMappingCandidateHarvester discover
jvb_1      | INFO: Discovered public address 178.x.x.x:38740/udp from STUN server 178.x.x.x:3478/udp using local address 172.18.0.6:0/udp
jvb_1      | org.ice4j.ice.harvest.MappingCandidateHarvesters maybeAdd
jvb_1      | INFO: Discarding a mapping harvester with duplicate addresses: org.ice4j.ice.harvest.StunMappingCandidateHarvester, face=/172.18.0.6, mask=/178.x.x.x. Kept: org.ice4j.ice.harvest.MappingCandidateHarvester, face=/172.18.0.6, mask=/178.x.x.x
jvb_1      | org.ice4j.ice.harvest.MappingCandidateHarvesters initialize
jvb_1      | INFO: Using org.ice4j.ice.harvest.MappingCandidateHarvester, face=/172.18.0.6, mask=/178.x.x.x
jvb_1      | org.ice4j.ice.harvest.MappingCandidateHarvesters initialize
jvb_1      | INFO: Initialized mapping harvesters (delay=1141ms).  stunDiscoveryFailed=false
jvb_1      | org.ice4j.ice.harvest.AbstractUdpListener <init>
jvb_1      | INFO: Initialized AbstractUdpListener with address 172.18.0.6:10000/udp. Receive buffer size 212992 (asked for 10485760)
jvb_1      | org.ice4j.ice.harvest.SinglePortUdpHarvester <init>
jvb_1      | INFO: Initialized SinglePortUdpHarvester with address 172.18.0.6:10000/udp
jvb_1      | org.jitsi.utils.logging2.LoggerImpl log
jvb_1      | INFO: Connected.
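
To double-check the coturn listener from outside the Docker network, coturn ships a small STUN client; a sketch, using the same placeholder address (default port 3478):

# should print the reflexive address coturn sees for this client
turnutils_stunclient 178.x.x.x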

If a guest connects to the room, ICE says it has succeeded:

jvb_1      | INFO: The remote side is acting as DTLS client, we'll act as server
jvb_1      | INFO: Starting the Agent without remote candidates.
jvb_1      | INFO: Start ICE connectivity establishment.
jvb_1      | INFO: Init checklist for stream stream-65d28019
jvb_1      | INFO: ICE state changed from Waiting to Running.
jvb_1      | INFO: ICE state changed old=Waiting new=Running
jvb_1      | INFO: Start connectivity checks.
jvb_1      | INFO: Transport description: (…)
jvb_1      | INFO: Gathering candidates for component stream-2a239c2d.RTP.
jvb_1      | INFO: Ignoring empty DtlsFingerprint extension: <transport xmlns='urn:xmpp:jingle:transports:ice-udp:1'><fingerprint xmlns='urn:xmpp:jingle:apps:dtls:0' required='false'/></transport>
jvb_1      | INFO: Transport description: (...)
jvb_1      | INFO: Add remote candidate for stream-2a239c2d.RTP: 192.168.100.102:35241/udp/host
jvb_1      | INFO: Starting the agent with remote candidates.
jvb_1      | INFO: Start ICE connectivity establishment.
jvb_1      | INFO: Init checklist for stream stream-2a239c2d
jvb_1      | INFO: ICE state changed from Waiting to Running.
jvb_1      | INFO: ICE state changed old=Waiting new=Running
jvb_1      | INFO: Trigger checks for pairs that were received before running state
jvb_1      | INFO: Add peer CandidatePair with new reflexive address to checkList: CandidatePair (State=Frozen Priority=7962116751041232895):
jvb_1      |    LocalCandidate=candidate:1 1 udp 2130706431 172.18.0.6 10000 typ host
jvb_1      |    RemoteCandidate=candidate:10000 1 udp 1853824767 y.y.y.y 35241 typ prflx
jvb_1      | INFO: Start connectivity checks.
jvb_1      | INFO: Transport description: (...)
jvb_1      | INFO: The remote side is acting as DTLS server, we'll act as client
jvb_1      | INFO: Pair succeeded: 172.18.0.6:10000/udp/host -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: Adding allowed address: 178.x.x.x:35241/udp
jvb_1      | INFO: Pair validated: 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: Nominate (first valid): 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: verify if nominated pair answer again
jvb_1      | INFO: IsControlling: true USE-CANDIDATE:false.
jvb_1      | INFO: Pair failed: 172.18.0.6:10000/udp/host -> 192.168.100.102:35241/udp/host (stream-2a239c2d.RTP)
jvb_1      | INFO: Pair succeeded: 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: Pair validated: 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: Nominate (first valid): 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: IsControlling: true USE-CANDIDATE:true.
jvb_1      | INFO: Nomination confirmed for pair: 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP).
jvb_1      | INFO: Selected pair for stream stream-2a239c2d.RTP: 178.x.x.x:10000/udp/srflx -> y.y.y.y:35241/udp/prflx (stream-2a239c2d.RTP)
jvb_1      | INFO: CheckList of stream stream-2a239c2d is COMPLETED
jvb_1      | INFO: ICE state changed from Running to Completed.
jvb_1      | INFO: ICE state changed old=Running new=Completed
jvb_1      | INFO: ICE connected
jvb_1      | INFO: Starting DTLS.
jvb_1      | INFO: Harvester used for selected pair for stream-2a239c2d.RTP: srflx
jvb_1      | INFO: Initialized a new PartitionedByteBufferPool with 8 partitions.
jvb_1      | INFO: Initialized a new PartitionedByteBufferPool with 8 partitions.
jvb_1      | INFO: Initialized a new PartitionedByteBufferPool with 8 partitions.
jvb_1      | INFO: Negotiated DTLS version DTLS 1.2
jvb_1      | INFO: DTLS handshake complete. Got SRTP profile 1
jvb_1      | INFO: Attempting to establish SCTP socket connection
jvb_1      | INFO: jitsisrtp successfully loaded

I can see the audio/video data go upstream to the server. Shortly after that I get:

jicofo_1   | INFO: [60] org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending session-initiate to: [email protected]/d0c93f98
jicofo_1   | INFO: [60] org.jitsi.jicofo.LipSyncHack.log() Not merging A/V streams from [email protected]/8e79b5f2 to [email protected]/d0c93f98
jicofo_1   | INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Received session-accept from [email protected]/d0c93f98 with accepted sources:Sources{ video: [ssrc=90245538 ssrc=1888856970 ssrc=735512764 ssrc=3649039467 ssrc=1750684991 ssrc=812700357 ] }@1314709742
jicofo_1   | WARNING: [28] org.jitsi.jicofo.LipSyncHack.log() No corresponding audio found for: [email protected]/d0c93f98 'source-add' to: [email protected]/8e79b5f2
jvb_1      | INFO: ICE state changed from Completed to Terminated.
jvb_1      | INFO: ICE state changed old=Completed new=Terminated
jvb_1      | Got sctp association state update: 1
jvb_1      | sctp is now up.  was ready? false
jvb_1      | INFO: SCTP connection is ready, creating the Data channel stack
jvb_1      | INFO: Will wait for the remote side to open the data channel.
jvb_1      | INFO: Received data channel open message
jvb_1      | INFO: Remote side opened a data channel.
jvb_1      | INFO: create_conf, id=d1fc9d1a6192d581 gid=null logging=false
jvb_1      | INFO: Performed a successful health check in 20ms. Sticky failure: false
jvb_1      | INFO: ds_change ds_id=2a239c2d
jvb_1      | INFO: create_conf, id=1cec290d75d48efb gid=null logging=false
jvb_1      | INFO: Performed a successful health check in 13ms. Sticky failure: false
jvb_1      | INFO: Running expire()

The firewall allows all needed inbound ports, outbound is unrestricted, and Docker set its own forwarding rules. What makes me wonder is that DNAT counts the packets, but MASQUERADE does not. I suppose the packets flow TO the server but cannot get BACK? Or are they passed through, given that the packet counter of the POSTROUTING policy is not zero? (A debugging sketch follows the counter dump below.)

Chain PREROUTING (policy ACCEPT 877 packets, 50717 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1432 81243 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 456 packets, 25690 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 674 packets, 45070 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   59  3540 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 700 packets, 46646 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.18.0.5           172.18.0.5           tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       172.18.0.5           172.18.0.5           tcp dpt:443
    0     0 MASQUERADE  udp  --  *      *       172.18.0.6           172.18.0.6           udp dpt:10000
    0     0 MASQUERADE  tcp  --  *      *       172.18.0.6           172.18.0.6           tcp dpt:4443

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
    2   104 DNAT       tcp  --  !br-401ee4728730 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8000 to:172.18.0.5:80
   68  4020 DNAT       tcp  --  !br-401ee4728730 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5443 to:172.18.0.5:443
    1   132 DNAT       udp  --  !br-401ee4728730 *       0.0.0.0/0            0.0.0.0/0            udp dpt:10000 to:172.18.0.6:10000
    0     0 DNAT       tcp  --  !br-401ee4728730 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:4443 to:172.18.0.6:4443
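
To narrow down whether the media really makes it back out, one could watch the NAT counters live during a test call and capture the return leg on the host; a sketch, where y.y.y.y stands for the client's public IP from the ICE log above:

# refresh the NAT counters every second while a call is running
sudo watch -n1 'iptables -t nat -L POSTROUTING -v -n'

# capture both directions of the media stream on the host
sudo tcpdump -ni any udp port 10000 and host y.y.y.y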

AnanasPfirsichSaft avatar Apr 27 '20 20:04 AnanasPfirsichSaft

After some hours of testing I solved my issue. I unset JVB_STUN_SERVERS and removed a line from my prosody configuration that I had pasted from a howto:

turncredentials_secret = "xxx";
turncredentials = {
  { type = "stun", host = "178.x.x.x" },
  { type = "turn", host = "178.x.x.x", port = "3478" },
  { type = "turns", host = "178.x.x.x", port = "5349", transport = "tcp" }
};

The "turns" entry has to be removed. UDP only should be sufficient and I guess TCP is useful if it is on port 443 reachable even by conservative firewalls :)

AnanasPfirsichSaft avatar Apr 28 '20 20:04 AnanasPfirsichSaft

Hello, I have been trying to configure my jitsi-meet docker for several weeks and I cannot find any solution, so I would like to ask you some questions.

DOCKER_HOST_ADDRESS=178.x.x.x: is this the public (external) IP, the Docker VM's internal IP, or an IP from the jitsi-meet virtual network?

JVB_STUN_SERVERS=178.x.x.x:3478
JVB_TCP_HARVESTER_DISABLED=true

I have tried almost everything, and I am about to give it up as impossible.

I have a Proxmox host with a Docker VM, and all the VMs run behind NAT with a subdomain, but this one I am close to giving up on.

I have no camera and no sound.

Thx

xicuc avatar Jun 24 '20 16:06 xicuc

Did you reach any results?

cod3r0k avatar May 05 '25 20:05 cod3r0k

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Aug 16 '25 02:08 github-actions[bot]