nextcloud-spreed-signaling
[Question] GuzzleHttp\Exception\ClientException: Client error: `POST https://signaling.<my_server>/signaling/api/v1/room/waa4ph5a` resulted in a `403 Forbidden` response: Authentication check failed
I'm trying the signaling server with my (working, private, non-commercial) Nextcloud instance on the internet. I mainly followed this guide, but my installation is completely podman-compose based.
I'm able to run the suggested tests on the signaling server:
$ curl -i https://<myserver>/signaling/api/v1/welcome
HTTP/2 200
server: nginx/1.21.1
date: Thu, 05 Aug 2021 13:16:30 GMT
content-type: application/json; charset=utf-8
content-length: 94
referrer-policy: no-referrer
x-content-type-options: nosniff
x-download-options: noopen
x-frame-options: SAMEORIGIN
x-permitted-cross-domain-policies: none
x-robots-tag: none
x-xss-protection: 1; mode=block
strict-transport-security: max-age=15768000; includeSubDomains; preload;
{"nextcloud-spreed-signaling":"Welcome","version":"64faa1c4990347424583c8626255e408df6bf793"}
$ curl -i http://signaling.<my_server>:8080/api/v1/welcome
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Server: nextcloud-spreed-signaling/64faa1c4990347424583c8626255e408df6bf793
Date: Thu, 05 Aug 2021 13:15:36 GMT
Content-Length: 94
{"nextcloud-spreed-signaling":"Welcome","version":"64faa1c4990347424583c8626255e408df6bf793"}
Here https://<myserver>/signaling is served by nginx with a reverse proxy configuration like the one suggested, and http://signaling.<my_server>:8080 is the exposed port of the (podman-run) signaling container. As you can see, nginx is configured to allow HTTP/2 as well.
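For reference, the relevant nginx part is essentially the following sketch, based on the suggested configuration but adapted to the /signaling prefix used above (the upstream address 127.0.0.1:8080 is the published container port; hostnames and paths are simplified):

# WebSocket endpoint for clients (needs the Upgrade headers):
location /signaling/spreed {
    proxy_pass http://127.0.0.1:8080/spreed;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

# Everything else (welcome endpoint, backend API):
location /signaling/ {
    proxy_pass http://127.0.0.1:8080/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}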
However, I always encounter a 'Failed to establish signaling connection. Retrying …' message in Talk on Nextcloud, and I (sometimes) find the following in the logs ('Logging'):
GuzzleHttp\Exception\ClientException: Client error: `POST https://<my_server>/signaling/api/v1/room/waa4ph5a` resulted in a `403 Forbidden` response: Authentication check failed
Is there any way to get more debug info on what is going wrong? (I already looked into the nginx-debug logs but could not find anything remarkable.) Is there a way to switch on debugging in the signaling server?
Please double-check that the shared secrets match (Nextcloud Talk admin settings / backend configuration of the signaling server in https://github.com/strukturag/nextcloud-spreed-signaling/blob/2ac58a3360e0059ff6f12ccec4987211c6906840/server.conf.in#L83).
Also, the signaling server must be able to access the URL of Nextcloud.
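As a sketch with placeholder values (hostnames and secret are hypothetical), the secret in the backend section of server.conf has to be the exact same string that is registered for the high-performance backend on the Nextcloud side, e.g. in the Talk admin settings or via the occ command the Talk app provides:

# server.conf on the signaling server:
[backend-1]
url = https://cloud.example.com
secret = 0123456789abcdef

# On the Nextcloud host (run from the Nextcloud directory;
# --verify makes Nextcloud contact the signaling server right away):
$ sudo -u www-data php occ talk:signaling:add \
      https://signaling.example.com/signaling/ 0123456789abcdef --verify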
Dear @fancycode,
sorry for answering late. I checked my secret over and over, but it does not look like the culprit to me.
What is different w.r.t. the guide mentioned is that I don't run nextcloud-spreed-signaling (and all the other containers needed by spreed) in network_mode: host. Instead, I just 'expose' the ports, i.e. for the nextcloud-spreed-signaling image: ports: - "8080:8080", which is the port given in my server.conf for [http] listen=... (The main reason for this difference is that 'host mode' is problematic with podman, and hence I only use that mode for the coturn image.)
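Concretely, the relevant part of my compose file looks roughly like this (a sketch from memory; image name and tag are placeholders):

services:
  nextcloud-spreed-signaling:
    image: strukturag/nextcloud-spreed-signaling:latest
    ports:
      - "8080:8080"
    # Note: inside the container, [http] listen in server.conf has to bind
    # 0.0.0.0:8080 (not 127.0.0.1), otherwise the published port is unreachable.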
However, this means that all ports not 'exposed' are on a private network (and hence not accessible from a remote host). I wonder if I have fully understood the nextcloud-spreed-signaling image. Your readme suggests that it consists of (at least) two parts: frontend and backend. So I wonder if the curl test above only tests the 'frontend'?
Or, in other words: in server.conf, what is the right URL for the url=... setting in the [backend-1] section? Should it point to my reverse (nginx) proxy? Or should it point to the (internal) podman container name? Should I give a port? And if so, which port?
Kind regards,
aanno
PS: If there is interest, I'm able to give a complete podman-compose example of the problem.
Hi,
I am facing the same issue. Any solution?
Error in Talk: Failed to establish signaling connection. Retrying …
Dear @zohaib09,
no, this is still an issue for me. It looks to me like spreed is difficult to run in a podman environment.
However, running spreed is of minor importance for my private Nextcloud installation, so I am not working on the problem any more.
Perhaps you feel like trying https://github.com/nextcloud/all-in-one ?
The url configured for the backends must be the public URL of Nextcloud, i.e. the same URL users enter in their browser when connecting to Nextcloud. Clients include their Nextcloud URL in the hello request to the signaling server, and the server then selects the backend based on this URL. The same applies to internal backend requests from Nextcloud to the signaling server.
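So, as a sketch (hostname is a placeholder), the backend section should look like this, regardless of how the signaling server itself is deployed:

[backend-1]
# The public Nextcloud URL, exactly as users enter it in the browser;
# not the nginx proxy in front of the signaling server, and not an
# internal podman container name:
url = https://cloud.example.com
secret = <the shared secret configured in the Talk admin settings>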
Could also be this issue: https://github.com/nextcloud/spreed/pull/7437
Feel free to reopen if the problem still exists.
It still exists. {code: 'invalid_request', message: 'The request could not be authenticated.'}
Hello!
This is still an issue for me; I have the same error in the Nextcloud logs:
GuzzleHttp\Exception\ClientException: Client error: POST https://signalingserver.domain.com/standalone-signaling/api/v1/room/imx8frtt resulted in a 403 Forbidden response: Authentication check failed
Signaling logs:
oiko@sssl:~/nextcloud-spreed-signaling$ sudo ./bin/signaling --config /etc/signaling/server.conf
[sudo] password for oiko:
main.go:133: Starting up version 8bf0b53d0bb32dbc38b6cdb73655cc073cbb98c3/go1.18.1 as pid 1402
main.go:142: Using a maximum of 1 CPUs
natsclient.go:108: Connection established to nats://localhost:4222
grpc_common.go:167: WARNING: No GRPC server certificate and/or key configured, running unencrypted
grpc_common.go:169: WARNING: No GRPC CA configured, expecting unencrypted connections
hub.go:200: Using a maximum of 8 concurrent backend connections per host
hub.go:207: Using a timeout of 10s for backend connections
hub.go:303: Not using GeoIP database
main.go:193: WARNING: Old-style MCU configuration detected with url but no type, defaulting to type janus
mcu_janus.go:294: Connected to Janus WebRTC Server 0.11.8 by Meetecho s.r.l.
mcu_janus.go:300: Found JANUS VideoRoom plugin 0.0.9 by Meetecho s.r.l.
mcu_janus.go:305: Data channels are supported
mcu_janus.go:309: Full-Trickle is enabled
mcu_janus.go:311: Maximum bandwidth 1048576 bits/sec per publishing stream
mcu_janus.go:312: Maximum bandwidth 2097152 bits/sec per screensharing stream
mcu_janus.go:318: Created Janus session 6537124536058702
mcu_janus.go:325: Created Janus handle 8971974268218711
main.go:263: Using janus MCU
hub.go:385: Using a timeout of 10s for MCU requests
backend_server.go:95: Using configured TURN API key
backend_server.go:96: Using configured shared TURN secret
backend_server.go:98: Adding "turn:cloud.domain.com:3478?transport=udp" as TURN server
backend_server.go:98: Adding "turn:cloud.domain.com:3478?transport=tcp" as TURN server
backend_server.go:111: No IPs configured for the stats endpoint, only allowing access from 127.0.0.1
main.go:339: Listening on 127.0.0.1:8080
client.go:284: Client from XX.XX.XX.XX has RTT of 51 ms (51.176132ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 64 ms (64.287484ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 231 ms (231.87662ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 96 ms (96.033098ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 44 ms (44.874561ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 133 ms (133.495209ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 69 ms (69.191126ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 64 ms (64.574188ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 68 ms (68.120593ms)
client.go:284: Client from XX.XX.XX.XX has RTT of 119 ms (119.176912ms)
client.go:303: Error reading from XX.XX.XX.XX: websocket: close 1006 (abnormal closure): unexpected EOF
^Cmain.go:362: Interrupted
janus_client.go:470: Unable to deliver message { "janus": "detached", "session_id": 6537124536058702, "sender": 8971974268218711 }. Handle 8971974268218711 gone?
janus_client.go:405: conn.NextReader: read tcp 127.0.0.1:49700->127.0.0.1:8188: use of closed network connection
mcu_janus.go:271: Connection to Janus gateway was interrupted, reconnecting in 1s
Conf file: server.txt
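For what it's worth, the authentication that fails here can also be exercised by hand. As far as I understand the protocol, Nextcloud signs every backend request with an HMAC-SHA256 checksum over a random string concatenated with the request body, keyed with the shared secret, and sends it in the Spreed-Signaling-Random / Spreed-Signaling-Checksum headers. A sketch (secret, URL and body are placeholders; the point is only whether the response is something other than 'Authentication check failed'):

$ SECRET='0123456789abcdef'   # placeholder, use the shared secret from server.conf
$ RND=$(openssl rand -hex 32)
$ BODY='{"type":"ping","ping":{"serverid":"test"}}'
$ SUM=$(printf '%s%s' "$RND" "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
$ curl -i -X POST https://signalingserver.domain.com/standalone-signaling/api/v1/room/imx8frtt \
      -H 'Content-Type: application/json' \
      -H "Spreed-Signaling-Random: $RND" \
      -H "Spreed-Signaling-Checksum: $SUM" \
      -d "$BODY"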