Spotify Connection Issues
Context
Under the hood, AmpliPi uses spotifyd to stream music from Spotify. The version of spotifyd we are currently using (0.3.4) is about a year old. It lets Spotify Premium users connect to AmpliPi from a PC or phone via the Spotify Connect interface.
Problem
Users have been reporting intermittent problems connecting to AmpliPi's Spotify streams via Spotify Connect.
Potential Solution
Upgrade spotifyd to the latest version and validate that the connection process and overall interface are more stable.
Notes:
- Compile and test 0.3.5 -> Build Failure
- Compile and test tip -> Build :+1: -> Panic after 4 hours
May 06 15:09:59 amplipi-inhouse authbind[3093]: The application panicked (crashed).
May 06 15:09:59 amplipi-inhouse authbind[3093]: Message: called `Result::unwrap()` on an `Err` value: PoisonError { .. }
May 06 15:09:59 amplipi-inhouse authbind[3093]: Location: src/dbus_mpris.rs:84
May 06 15:09:59 amplipi-inhouse authbind[3093]: Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
May 06 15:09:59 amplipi-inhouse authbind[3093]: Run with RUST_BACKTRACE=full to include source snippets.
May 06 15:09:59 amplipi-inhouse authbind[3093]: INFO: 172.18.2.69:3770 - "GET /api HTTP/1.1" 200 OK
May 06 15:09:59 amplipi-inhouse authbind[3093]: alsaloop: pcm.c:1654: snd_pcm_poll_descriptors_count: Assertion `pcm' failed.
May 06 15:09:59 amplipi-inhouse authbind[3093]: Subprocess exited with non-zero code: -6. Restarting...
May 06 15:10:00 amplipi-inhouse authbind[3093]: INFO: 172.18.1.219:58030 - "GET /api HTTP/1.1" 200 OK
May 06 15:10:00 amplipi-inhouse authbind[3093]: INFO: 172.18.2.69:3770 - "GET /api HTTP/1.1" 200 OK
May 06 15:10:01 amplipi-inhouse authbind[3093]: INFO: 172.18.1.221:57736 - "GET /api HTTP/1.1" 200 OK
May 06 15:10:01 amplipi-inhouse CRON[7619]: pam_unix(cron:session): session opened for user pi by (uid=0)
May 06 15:10:01 amplipi-inhouse CRON[7620]: (pi) CMD (/home/pi/amplipi-dev/scripts/check-online)
May 06 15:10:01 amplipi-inhouse CRON[7619]: pam_unix(cron:session): session closed for user pi
This crash may be related to https://github.com/Spotifyd/spotifyd/issues/1246
After talking with @klay2000, we may be able to solve the instability by building a better process monitor (a rough sketch of the restart-with-backoff idea is below).
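For illustration only, a minimal Python sketch of what such a monitor could look like, restarting spotifyd with exponential backoff. The command line and thresholds here are assumptions, not AmpliPi's actual stream manager:

```python
import subprocess
import time

# Assumed command line for illustration; AmpliPi launches spotifyd differently.
SPOTIFYD_CMD = ["spotifyd", "--no-daemon"]
MAX_BACKOFF_S = 60
STABLE_UPTIME_S = 300  # consider the process "healthy" after 5 minutes

def monitor():
    backoff = 1
    while True:
        start = time.monotonic()
        result = subprocess.run(SPOTIFYD_CMD)
        uptime = time.monotonic() - start
        # Reset the backoff if spotifyd stayed up for a while before dying,
        # otherwise back off exponentially to avoid a tight restart loop.
        backoff = 1 if uptime > STABLE_UPTIME_S else min(backoff * 2, MAX_BACKOFF_S)
        print(f"spotifyd exited with code {result.returncode}, restarting in {backoff}s")
        time.sleep(backoff)

if __name__ == "__main__":
    monitor()
```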
Jason is going to speak to Spotify and find out what a collaborative relationship looks like.
For the current spotifyd client, there may be another route around this bug: refactor our MPRIS interface to use the event-based PropertiesChanged signal instead of polling D-Bus. This would avoid the bug entirely, since a stale Spotify instance would not generate any new events, and by the time it does generate one the client will have already refreshed its credentials. A minimal sketch of the event-based approach follows the spec links below.
See the spec for more details:
- http://specifications.freedesktop.org/mpris-spec/2.2/
- (cached) http://web.archive.org/web/20240208153942/http://specifications.freedesktop.org/mpris-spec/2.2/
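For reference, a minimal sketch of the event-based approach using the dbus-python bindings, listening for PropertiesChanged on the standard MPRIS object path. The bus setup (session bus) is an assumption and this is not AmpliPi's actual stream code:

```python
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def on_properties_changed(interface, changed, invalidated):
    # Playback state and metadata arrive on the Player interface.
    if interface != "org.mpris.MediaPlayer2.Player":
        return
    if "PlaybackStatus" in changed:
        print("PlaybackStatus:", changed["PlaybackStatus"])
    if "Metadata" in changed:
        print("Now playing:", changed["Metadata"].get("xesam:title"))

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()  # assumption: spotifyd is reachable on the session bus
# Subscribe once; no periodic polling, so an idle/stale spotifyd instance is
# never queried and the credential-refresh race is never triggered.
bus.add_signal_receiver(
    on_properties_changed,
    signal_name="PropertiesChanged",
    dbus_interface="org.freedesktop.DBus.Properties",
    path="/org/mpris/MediaPlayer2",
)
GLib.MainLoop().run()
```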
As @klay2000 found, spotifyd does not appear to broadcast PropertiesChanged events for metadata or playback status. However, we can change how we poll metadata: if we only poll while something is playing, maybe we won't hit this issue (a rough sketch of that approach is below).
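Roughly, the conditional-polling idea looks like this (dbus-python again; the spotifyd bus name is an assumption):

```python
import time

import dbus

BUS_NAME = "org.mpris.MediaPlayer2.spotifyd"  # assumed well-known name
OBJ_PATH = "/org/mpris/MediaPlayer2"
PLAYER_IFACE = "org.mpris.MediaPlayer2.Player"

def get_prop(bus, prop):
    obj = bus.get_object(BUS_NAME, OBJ_PATH)
    props = dbus.Interface(obj, "org.freedesktop.DBus.Properties")
    return props.Get(PLAYER_IFACE, prop)

def poll_loop():
    bus = dbus.SessionBus()
    while True:
        # Only ask for metadata while something is actually playing, so an
        # idle spotifyd instance is left alone.
        if get_prop(bus, "PlaybackStatus") == "Playing":
            meta = get_prop(bus, "Metadata")
            print("Now playing:", meta.get("xesam:title"))
        time.sleep(1)

if __name__ == "__main__":
    poll_loop()
```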
After refactoring the MPRIS code to only poll metadata while playing, we still get a slightly different error (it smells like the same bug):
Jun 10 10:53:13 amplipi-inhouse authbind[3704]: Token cache write ignored (not configured)
Jun 10 10:53:13 amplipi-inhouse authbind[3704]: The application panicked (crashed).
Jun 10 10:53:13 amplipi-inhouse authbind[3704]: Message: RSpotify not authenticated
Jun 10 10:53:13 amplipi-inhouse authbind[3704]: Location: /home/pi/.cargo/registry/src/index.crates.io-1cd66030c949c28d/rspotify-0.12.0/src/clients/base.rs:106
Jun 10 10:53:13 amplipi-inhouse authbind[3704]: Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Jun 10 10:53:13 amplipi-inhouse authbind[3704]: Run with RUST_BACKTRACE=full to include source snippets.
I figured out how to completely stop polling by using PropertiesChanged; I'll try that and see if it fixes the issue.
So far this seems to fix the issue we were having. I'd like to keep testing and shake out some remaining small bugs.