
[Feature Request] lessen async js complexity

Open · puckey opened this issue on Aug 30, 2022

What is the need and use case of this feature?

Whenever I write code using RNTP, I find myself needing to worry about:

  • chaining async calls
  • making sure they do not cause race conditions
  • where to place the setupPlayer call so that it runs (and completes) before I call anything else
  • making sure that if anything throws, I set up the player again (because the service is not running) and then rerun the function I originally called to see if it works (a sketch of this boilerplate follows below)
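
To make the pain concrete, here is roughly the defensive wrapper an app ends up writing today. This is only an illustration: the helper names (ensureSetup, withPlayer) are made up, and the retry-once policy is just one possible choice.

import TrackPlayer from 'react-native-track-player';

// Hypothetical user-land boilerplate, not part of RNTP: make sure
// setupPlayer has run (and completed) before any other call, and if a
// call throws, set the player up again and retry once.
let setupPromise: Promise<void> | null = null;

const ensureSetup = (): Promise<void> => {
  if (setupPromise == null) {
    setupPromise = TrackPlayer.setupPlayer();
  }
  return setupPromise;
};

const withPlayer = async <T>(fn: () => Promise<T>): Promise<T> => {
  await ensureSetup();
  try {
    return await fn();
  } catch {
    // Assume the playback service is gone: set up again and retry once.
    setupPromise = null;
    await ensureSetup();
    return fn();
  }
};

// Every call site has to remember to go through the wrapper, e.g.
// await withPlayer(() => TrackPlayer.add(tracks));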

Describe the ideal solution

By bringing the number of async calls down to as few as possible, we can reduce async complexity as well as improve performance (less back and forth between JS and native).

Here is an example of a new function signature for TrackPlayer.add:

await TrackPlayer.add(tracks, {
  /** (optional) Whether the tracks should replace the current (or upcoming when 'upcoming') queue */
  replace: 'upcoming',
  /** (optional) The repeat mode */
  repeat: RepeatMode.Queue,
  /** (optional) the index at which to add the tracks */
  index: 5,
  /** Whether to skip to the first of the added tracks (optional, can also be boolean) */  
  skip: {
    /** (optional) Whether to play after skipping */
    play: true,
    /** (optional) The position at which to start playing */
    position: 10
  }
});

Which replaces something like:

await TrackPlayer.removeUpcomingTracks();
const index = await TrackPlayer.add([track]);
await TrackPlayer.setRepeatMode(RepeatMode.Queue);
await TrackPlayer.skip(index, 10);
await TrackPlayer.play();
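
For clarity, here is the proposed options object written out as a TypeScript type. This is purely a sketch of the idea above; none of these option names exist in the library today, and the return type is left as in the current add:

import { RepeatMode, Track } from 'react-native-track-player';

// Hypothetical options type for the proposed TrackPlayer.add signature.
interface AddOptions {
  /** Replace the whole queue (true) or only the upcoming part ('upcoming') */
  replace?: boolean | 'upcoming';
  /** Repeat mode to apply as part of the same call */
  repeat?: RepeatMode;
  /** Index at which to insert the tracks */
  index?: number;
  /** Skip to the first added track; the object form controls play/position */
  skip?: boolean | { play?: boolean; position?: number };
}

declare function add(
  tracks: Track | Track[],
  options?: AddOptions
): Promise<number | void>;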

To avoid race conditions, we could set up an internal promise queue in RNTP where we wait for any queue-related function call to resolve before calling the next. We could also catch any errors there, set up the player again, and rerun the failed calls. This could happen either on the JS side or on the native side.

Also, I believe all of this could be done in a backward compatible way.

Would love to hear your thoughts on this.

puckey avatar Aug 30 '22 10:08 puckey

I will read this a bit more in depth later, but for sure the async aspect of RN and the bridge communication has been a suboptimal part of this audio library. This is something that might be fixed by TurboModules, but that is not something we can rely on quite yet.

One of our core missions with v4 will be to re-evaluate the API we provide and refine it to what makes the most sense for the React Native platform. I think your proposal here is in line with that goal and looks interesting. I think, however, it'll be very important to try to understand the more common use cases of the library to develop an appropriate API.

I know, for example, that a play followed by a seek is also a very common thing.

dcvz avatar Aug 30 '22 13:08 dcvz

This is a cool concept of a DSL/Builder pattern and I really like it. It definitely provides the templates for everything a person needs to worry about while working with a player. We should look into this.

mpivchev avatar Aug 30 '22 14:08 mpivchev

If we were to move in this direction, my feedback would be to provide an even higher-level abstraction like:

await TrackPlayer.dispatchCommands([
  { name: "add", args: [{...}, ...] },
  ...
]);

If we did that, we could simply map those arguments directly onto the existing native-side commands without having to change too much.
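
As an illustration only, a JS-side version of that mapping could be as small as the loop below (the Command shape and dispatchCommands are hypothetical; the comment above suggests doing this mapping on the native side instead):

import TrackPlayer from 'react-native-track-player';

// Hypothetical command descriptor matching the shape proposed above.
interface Command {
  name: keyof typeof TrackPlayer;
  args?: unknown[];
}

// Sketch: look each command up on the existing TrackPlayer object and
// await them strictly in order, collecting the return values.
async function dispatchCommands(commands: Command[]): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const { name, args = [] } of commands) {
    const method = TrackPlayer[name] as (...a: unknown[]) => Promise<unknown>;
    results.push(await method(...args));
  }
  return results;
}

// Usage:
// await dispatchCommands([
//   { name: 'add', args: [[track]] },
//   { name: 'play' },
// ]);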

But to @dcvz 's point, I'm not sure how much of this TurboModules will fix for us.

jspizziri avatar Aug 30 '22 14:08 jspizziri

So here’s my thinking on this:

Async is a big pain point for this library. Currently we cannot confidently dispatch a chain of actions and have them execute sequentially. I've already illustrated some examples of where this turns around and bites us. In my mind there are two options for solving this.

TurboModules

TurboModules are now at a stage where you can enable them in a project and migrate your libraries to them. I think this should allow us to treat the player API the way we expect to (among the other improvements the new architecture provides us), since methods can now be treated as synchronous. I have only superficially researched this, but that is my takeaway so far.
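
For context, this is the shape of a Turbo Native Module spec under the new architecture; methods declared with a plain (non-Promise) return type can be called synchronously over JSI. The module and method names below are purely illustrative and are not RNTP's actual spec:

import type { TurboModule } from 'react-native';
import { TurboModuleRegistry } from 'react-native';

// Illustrative spec only (hypothetical names). Because calls go through
// JSI instead of the async bridge, a method with a plain return type
// behaves synchronously from the JS point of view.
export interface Spec extends TurboModule {
  play(): void;
  seekTo(position: number): void;
  getPosition(): number; // synchronous read, no Promise round trip
}

export default TurboModuleRegistry.getEnforcing<Spec>('AudioPlayerModule');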

The problem with this approach is that it's unclear to me how quickly people will move to the new architecture, so we may end up with something that works, but users will be stuck on the last version we offered before TurboModules.

Generic Execution API Method

Two forms of this have been proposed in this thread. Essentially, we provide an API for you to tell the library what actions you want to dispatch, and we have some kind of internal interpreter mapping those definitions to serial player actions.

This will likely ease a lot of usage, but it has the downside that it's up to the user to be very crafty about how they dispatch commands, and this async issue will still occur between two dispatch calls.

Edit: there’s a third method I’d forgotten

Use a command buffer

This approach is sometimes seen in graphics libraries like OpenGL or Metal. The idea is that you have one method that starts buffering commands and another that flushes (executes) those commands. It would look something like this:

TrackPlayer.startPlayerActionBuffer() // starts recording actions
TrackPlayer.play() // is recorded for later execution
TrackPlayer.seek(XX) // is recorded for later execution
TrackPlayer.flushPlayerActionBuffer() // executes the buffered actions

This is different from method two above, but it has the same problems, and since it still involves multiple function calls it will likely add a bit more delay before final execution.
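
On the JS side, a toy version of that start/flush buffer might look like the sketch below. Only the two buffer method names come from the example above; everything else (the record helper, the fallthrough to immediate execution) is assumed:

import TrackPlayer from 'react-native-track-player';

// Toy JS-side command buffer (assumed implementation, not part of RNTP).
type Action = () => Promise<unknown>;

let buffer: Action[] | null = null;

const startPlayerActionBuffer = (): void => {
  buffer = [];
};

// Record an action while buffering, otherwise run it immediately.
const record = (action: Action): void => {
  if (buffer) {
    buffer.push(action);
  } else {
    void action();
  }
};

const flushPlayerActionBuffer = async (): Promise<void> => {
  const actions = buffer ?? [];
  buffer = null;
  // Execute strictly in order, one at a time.
  for (const action of actions) {
    await action();
  }
};

// Usage:
// startPlayerActionBuffer();
// record(() => TrackPlayer.play());
// record(() => TrackPlayer.seekTo(30));
// await flushPlayerActionBuffer();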

dcvz avatar Sep 02 '22 08:09 dcvz

While TurboModules are definitely the future, a downside seems to be that all packages must adopt them or it won't work. It is going to take quite some time to reach that degree of adoption (compare it, for example, to the slow adoption of ESM, which has been a pain on the Node.js side).

This chaining of commands is very interesting, and internally it would probably look like the data structure @jspizziri is describing. I wonder, however, how return types would work. Or perhaps this would cover only the queue-logic subset of functionality?

How about:

TrackPlayer.run(
    Playlist.play().add(track).seek(XX)
)

or

Playlist.play().add(track).seek(XX).run()
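
For illustration, such a chainable builder could be layered on top of the existing API roughly like this; the Playlist object and the builder itself are hypothetical, and seek is mapped onto the existing seekTo:

import TrackPlayer, { Track } from 'react-native-track-player';

// Hypothetical chainable builder: each call only records a step, and
// run() executes the recorded steps sequentially against TrackPlayer.
class PlaylistBuilder {
  private steps: Array<() => Promise<unknown>> = [];

  play(): this {
    this.steps.push(() => TrackPlayer.play());
    return this;
  }

  add(tracks: Track | Track[]): this {
    this.steps.push(() => TrackPlayer.add(tracks));
    return this;
  }

  seek(position: number): this {
    this.steps.push(() => TrackPlayer.seekTo(position));
    return this;
  }

  async run(): Promise<void> {
    for (const step of this.steps) {
      await step();
    }
  }
}

const Playlist = {
  play: () => new PlaylistBuilder().play(),
};

// Usage:
// await Playlist.play().add(track).seek(30).run();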

On the issue of async ordering between dispatch calls: the way I have been ensuring the order of async calls is to use a local promise queue with a concurrency of one async command at a time. Something like:

let p = Promise.resolve();

const play = () => {
  return p = p.then(
    () => TrackPlayer.play()
  )
};

const seek = (time) => {
  return p = p.then(
    () => TrackPlayer.seek(time)
  )
};

const add = (track) => {
  return p = p.then(
    () => TrackPlayer.add(track)
  )
}

play()
add(myTrack)
seek(10)

That way any async call must resolve before the next is performed, ensuring call order. We could have something like this running in the background on the JS side to guarantee call order. You can even add error-catching logic in there that calls setupPlayer on failure, which is then something else users would not need to worry about.

The (theoretical) downside of this might be that queueing is slow – but I have yet to experience this on my side.

puckey avatar Sep 02 '22 10:09 puckey

The two could even be combined: you would keep the existing API as is, but internally a buffer is built up, and every tick (or x ms) the command list is sent to the native side and the results are returned as an array. Then each promise can resolve individually with its return value. So imagine:

TrackPlayer.play()
TrackPlayer.add(track).then(index => console.log(index))
TrackPlayer.skip(10)

leads to the following internal buffer once the tick (or x ms) has passed:

[['play'], ['add', track], ['skip', 10]]

This is automatically sent to the native side (after which the internal buffer is reset to [] and waits for new commands), the commands are performed, and the list of return values is sent back to the JS side:

[null, 0, null]

Then the JS side resolves all the promises with their corresponding return values and you see the console.log output.

Each time the native side resolves a buffer of commands, it can perform the next set and return the results to JS, until the queue is empty again.
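
A sketch of how that batching could work on the JS side, assuming a hypothetical native batch entry point (called runCommandsNative here) that executes a command list and returns the array of results; no such method exists in RNTP today:

// All names below are hypothetical. Commands pile up in a buffer; on the
// next microtask the whole batch is sent in a single native call, and each
// caller's promise resolves with its own entry from the results array.
type PendingCommand = {
  /** [commandName, ...args], as in the buffer example above */
  command: unknown[];
  resolve: (value: unknown) => void;
  reject: (error: unknown) => void;
};

// Assumed native batch entry point.
declare function runCommandsNative(commands: unknown[][]): Promise<unknown[]>;

let pending: PendingCommand[] = [];
let flushScheduled = false;

function dispatch(name: string, ...args: unknown[]): Promise<unknown> {
  return new Promise((resolve, reject) => {
    pending.push({ command: [name, ...args], resolve, reject });
    if (!flushScheduled) {
      flushScheduled = true;
      Promise.resolve().then(flush);
    }
  });
}

async function flush(): Promise<void> {
  const batch = pending;
  pending = [];
  flushScheduled = false;
  try {
    const results = await runCommandsNative(batch.map((p) => p.command));
    batch.forEach((p, i) => p.resolve(results[i]));
  } catch (error) {
    batch.forEach((p) => p.reject(error));
  }
}

// Mirroring the example above:
// dispatch('play');
// dispatch('add', track).then((index) => console.log(index));
// dispatch('skip', 10);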

puckey avatar Sep 02 '22 11:09 puckey

@jspizziri is there a PR where those changes landed?

qmx avatar Sep 23 '22 12:09 qmx

@qmx, we're now using our project board to manage our workflow. See more on discord.

This feature is going in our backlog, which means we want to implement it at some point in the future, but it's not on a concrete roadmap. All such features are organized in our project and closed; once they're on a concrete roadmap we'll reopen them.

jspizziri avatar Sep 23 '22 12:09 jspizziri