client-sdk-android
                        Enable apps using ARCore to use LiveKit
Intention
Currently, camera access and capturing are handled within the LiveKit SDK. For AR applications that use ARCore and render 3D objects into the camera image, the camera is accessed by ARCore itself. To allow such applications to use LiveKit for an audio and video call, the SDK needs to provide an option to configure which image frames are used and sent to WebRTC.
Solution
One possible solution is to allow a custom VideoCapturer in the LocalVideoTrack. The application using the LiveKit SDK can then implement this VideoCapturer to define how the image is retrieved, e.g. using PixelCopy to grab the view from a surface (see the sketch below). I'm fully open to other ideas and suggestions.
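As a rough illustration of what such a capturer could look like, here's a minimal sketch assuming the ARCore scene is rendered into a SurfaceView. The class name ArCoreVideoCapturer, the captureFrame() entry point, and the bitmapToI420() helper are all hypothetical, not SDK API; only the org.webrtc VideoCapturer/CapturerObserver interfaces and android.view.PixelCopy are real.

import android.content.Context
import android.graphics.Bitmap
import android.os.Handler
import android.view.PixelCopy
import android.view.SurfaceView
import org.webrtc.CapturerObserver
import org.webrtc.SurfaceTextureHelper
import org.webrtc.VideoCapturer
import org.webrtc.VideoFrame

// Hypothetical capturer: copies the ARCore-rendered SurfaceView via PixelCopy
// and hands the frames to WebRTC.
class ArCoreVideoCapturer(private val surfaceView: SurfaceView) : VideoCapturer {

    private var observer: CapturerObserver? = null
    private var handler: Handler? = null
    @Volatile private var capturing = false

    override fun initialize(
        surfaceTextureHelper: SurfaceTextureHelper,
        applicationContext: Context,
        capturerObserver: CapturerObserver
    ) {
        observer = capturerObserver
        // Reuse the capture thread's handler for the PixelCopy callback.
        handler = surfaceTextureHelper.handler
    }

    override fun startCapture(width: Int, height: Int, framerate: Int) {
        capturing = true
        observer?.onCapturerStarted(true)
        // A real implementation would schedule captureFrame() at the requested framerate.
    }

    // Call this whenever ARCore has rendered a new frame.
    fun captureFrame(width: Int, height: Int) {
        if (!capturing) return
        val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
        PixelCopy.request(surfaceView, bitmap, { result ->
            if (result == PixelCopy.SUCCESS) {
                // WebRTC expects I420 (or texture) buffers, so the ARGB bitmap
                // must be converted first; bitmapToI420() is left as a stub here.
                val frame = VideoFrame(bitmapToI420(bitmap), 0 /* rotation */, System.nanoTime())
                observer?.onFrameCaptured(frame)
                frame.release()
            }
            bitmap.recycle()
        }, handler!!)
    }

    override fun stopCapture() {
        capturing = false
        observer?.onCapturerStopped()
    }

    override fun changeCaptureFormat(width: Int, height: Int, framerate: Int) {}
    override fun dispose() {}
    override fun isScreencast(): Boolean = false

    private fun bitmapToI420(bitmap: Bitmap): VideoFrame.Buffer =
        TODO("Convert the ARGB bitmap to an I420 buffer, e.g. with libyuv")
}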
I think we can add a hook into the VideoCapturer (using a custom listener in onFrameCaptured for this), though I don't know if it's thread-safe or has any potential timing issues. A rough sketch of the idea follows below.
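To make the hook idea concrete (the wrapper class and listener type here are hypothetical, not existing SDK API), a delegating CapturerObserver could notify a listener before forwarding each frame to the original observer:

import org.webrtc.CapturerObserver
import org.webrtc.VideoFrame

// Hypothetical wrapper: lets the app observe frames before they reach WebRTC.
class ListeningCapturerObserver(
    private val delegate: CapturerObserver,
    private val listener: (VideoFrame) -> Unit
) : CapturerObserver {

    override fun onCapturerStarted(success: Boolean) = delegate.onCapturerStarted(success)

    override fun onCapturerStopped() = delegate.onCapturerStopped()

    override fun onFrameCaptured(frame: VideoFrame) {
        // onFrameCaptured runs on the capture thread, so the listener must be
        // thread-safe -- this is the timing concern mentioned above.
        listener(frame)
        delegate.onFrameCaptured(frame)
    }
}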
Not sure when this can be done, but in the meantime, LocalVideoTracks can be constructed manually with a custom VideoCapturer. Here's an example:
class FakeLocalVideoTrackFactory : LocalVideoTrack.Factory {
    override fun create(
        capturer: VideoCapturer,
        source: VideoSource,
        name: String,
        options: LocalVideoTrackOptions,
        rtcTrack: VideoTrack
    ): LocalVideoTrack {
        TODO() // Shouldn't be called anyway.
    }
}
// Connect to the room as usual.
room = LiveKit.connect(
    CGCApplication.applicationContext(),
    url,
    token,
    ConnectOptions(),
    RoomOptions(),
    this@LiveKitManager
)
val localParticipant = room?.localParticipant
// Create a separate PeerConnectionFactory to build the WebRTC-level source and track.
val peerConnectionFactory = PeerConnectionFactory.builder()
    .setOptions(PeerConnectionFactory.Options())
    .setVideoEncoderFactory( /* ... */ )
    .createPeerConnectionFactory()
val videoSource = peerConnectionFactory.createVideoSource(false)
val webRTCLocalVideoTrack = peerConnectionFactory.createVideoTrack(
    "video0",
    videoSource
)
// Thread for frame capture; eglBase is assumed to be created elsewhere.
val surfaceTextureHelper =
    SurfaceTextureHelper.create("CaptureThread", eglBase.eglBaseContext)
val videoCapturer = ... /* create custom videoCapturer here */
val localVideoTrackFactory = FakeLocalVideoTrackFactory()
// Construct the LocalVideoTrack directly with the custom capturer.
val localVideoTrack = LocalVideoTrack(
    videoCapturer,
    videoSource,
    "video",
    LocalVideoTrackOptions(
        false,
        null,
        CameraPosition.BACK,
        VideoPreset169.QHD.capture
    ),
    webRTCLocalVideoTrack,
    peerConnectionFactory,
    CGCApplication.applicationContext(),
    eglBase,
    DefaultsManager(),
    localVideoTrackFactory
)
// Publish the track to the room.
localParticipant?.let {
    it.publishVideoTrack(localVideoTrack, VideoTrackPublishOptions())
    liveKitIsConnected = true
}
Using a custom listener can also work, you're right.
Regarding your other suggestion for a temporary solution: I was able to make it work following that approach, with only a few changes here and there. However, I used the SimulcastVideoEncoderFactoryWrapper, which is internal, and I had to change it to public to make it work.
Looks like this was addressed in #68. VideoTracks with custom VideoCapturers can be created using LocalParticipant.createVideoTrack with the VideoCapturer overload:
https://github.com/livekit/client-sdk-android/blob/a8abbefa06ecf2a15b01e3c76e9ee99d80d13502/livekit-android-sdk/src/main/java/io/livekit/android/room/participant/LocalParticipant.kt#L67-L71
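For completeness, a minimal usage sketch, assuming the linked overload takes a track name, a VideoCapturer, and LocalVideoTrackOptions (check the permalink for the exact signature); myCustomCapturer is a placeholder for an application-specific capturer like the one sketched above:

// Sketch only; parameter order follows the linked signature.
val videoTrack = room.localParticipant.createVideoTrack(
    "arcore",                // track name
    myCustomCapturer,        // hypothetical application-specific VideoCapturer
    LocalVideoTrackOptions()
)
videoTrack.startCapture()
// publishVideoTrack may be a suspend function depending on SDK version,
// in which case it needs to be called from a coroutine.
room.localParticipant.publishVideoTrack(videoTrack)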