Ben Wagner
I think we want `{ synchronized: true }` to be the default for the `ReadableStream` from `MediaStreamTrackProcessor`. @alvestrand
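For context, a minimal sketch of the reader loop that option would affect. Note that `synchronized` is the proposal under discussion here, not a shipped constructor option, so it is left as a commented-out placeholder:

```javascript
// Consume VideoFrames from a MediaStreamTrackProcessor's ReadableStream.
async function consumeFrames(track) {
  const processor = new MediaStreamTrackProcessor({
    track,
    // synchronized: true,  // proposed option, not in the current API
  });
  const reader = processor.readable.getReader();
  while (true) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    // ... process the VideoFrame here ...
    frame.close(); // frames hold scarce media resources; close promptly
  }
}
```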
Yes, that would make sense. In that case `createProcessedMediaStreamTrack` (and `createProcessedMediaStream`) could accept the same type. It would change the timing of the init/start and flush/destroy calls, so a few...
I think it would be a good idea, since the final spec that reached consensus (https://www.w3.org/TR/mediacapture-transform/) exposes `MediaStreamTrackProcessor` and `VideoTrackGenerator` only on `DedicatedWorker`. Making the sample fully spec-compliant...
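A spec-compliant pipeline would run entirely in a worker, with the track transferred in from the page. A rough sketch (the transform body is a placeholder):

```javascript
// worker.js — per https://www.w3.org/TR/mediacapture-transform/,
// MediaStreamTrackProcessor and VideoTrackGenerator are exposed only
// in DedicatedWorker. The page transfers the MediaStreamTrack here via
// worker.postMessage({ track }, [track]).
self.onmessage = async ({ data: { track } }) => {
  const processor = new MediaStreamTrackProcessor({ track });
  const generator = new VideoTrackGenerator();
  await processor.readable
    .pipeThrough(new TransformStream({
      transform(frame, controller) {
        // ... transform or analyze the VideoFrame here ...
        controller.enqueue(frame);
      },
    }))
    .pipeTo(generator.writable);
  // generator.track can be transferred back to the page for rendering.
};
```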
One use case to consider is https://ai.googleblog.com/2018/04/looking-to-listen-audio-visual-speech.html
> > One use case to consider is https://ai.googleblog.com/2018/04/looking-to-listen-audio-visual-speech.html
>
> This is indeed a good use case.

It seems covered AFAIK by getUserMedia+MediaStreamAudioSourceNode+AudioWorklet. Apologies if I'm missing something obvious,...
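The audio path mentioned above can be sketched as follows; `'processor.js'` and `'my-processor'` are placeholder names for an AudioWorklet module and the processor it registers:

```javascript
// getUserMedia -> MediaStreamAudioSourceNode -> AudioWorkletNode pipeline.
async function setUpAudioProcessing() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  // 'processor.js' must call registerProcessor('my-processor', ...).
  await ctx.audioWorklet.addModule('processor.js');
  const source = ctx.createMediaStreamSource(stream);
  const worklet = new AudioWorkletNode(ctx, 'my-processor');
  source.connect(worklet).connect(ctx.destination);
}
```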
There is quite a bit of discussion in https://github.com/gpuweb/gpuweb/issues/1380 about import. @Kangz can confirm, but since [VideoFrame can now be created from a CanvasImageSource](https://github.com/w3c/webcodecs/issues/158), that should cover WebGPU as well.
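Assuming a canvas that a WebGPU pipeline has rendered into, constructing a `VideoFrame` from it would look roughly like this; note `timestamp` is in microseconds and is required for canvas sources:

```javascript
// Wrap a CanvasImageSource (here, a canvas with a 'webgpu' context)
// in a VideoFrame, per https://github.com/w3c/webcodecs/issues/158.
const canvas = document.querySelector('canvas');
const frame = new VideoFrame(canvas, {
  timestamp: performance.now() * 1000, // microseconds
});
// ... enqueue the frame into a track generator or encoder, then:
frame.close(); // release the frame's resources when done
```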