Can other transports be used
Such as LiveKit: an end-to-end stack for WebRTC.
Yes, absolutely, as soon as someone adds the LiveKit transport. But currently, that's not implemented.
Big +1 on this request! Would be super interested in livekit support for pipecat!
Gotta say, their API looks familiar:
```python
async def entrypoint(ctx: JobContext):
    vad = silero.VAD()
    stt = deepgram.STT()
    llm = openai.LLM()
    tts = elevenlabs.TTS()
    assistant = VoiceAssistant(vad, stt, llm, tts, allow_interruptions=True)
    assistant.set_system_message(["You are a voice assistant created by LiveKit."])
    await assistant.say("Hello, how can I help you today?")

    @ctx.room.on("participant_connected")
    def _on_participant_connected(participant: rtc.RemoteParticipant):
        assistant.start(ctx.room, participant)
```
I'm planning to work on LiveKit whenever I have some free time. Has anybody started working on it already?
Hi @joachimchauvet, do you have any progress? I'd love to do the same!
I'd love to see this happen as well! @kwindla and I were talking about collaborating on this and would love to help however I can.
In #396 I tried adding a simple serializer to help bridge LiveKit Agents via WebSocket.
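For anyone curious what such a bridge might look like, here's a minimal self-contained sketch of a serializer that frames raw PCM audio for a WebSocket connection. The frame and class names here are illustrative placeholders, not Pipecat's actual API:

```python
import json
import struct
from dataclasses import dataclass


# Hypothetical stand-in for a Pipecat-style audio frame.
@dataclass
class AudioRawFrame:
    audio: bytes
    sample_rate: int
    num_channels: int


class WebSocketAudioSerializer:
    """Pack an audio frame as a length-prefixed JSON header followed by
    raw PCM, so the process on the other end can reconstruct it."""

    def serialize(self, frame: AudioRawFrame) -> bytes:
        header = json.dumps({
            "type": "audio",
            "sample_rate": frame.sample_rate,
            "num_channels": frame.num_channels,
        }).encode()
        # 4-byte big-endian header length, then header, then PCM payload.
        return struct.pack(">I", len(header)) + header + frame.audio

    def deserialize(self, data: bytes) -> AudioRawFrame:
        (hlen,) = struct.unpack(">I", data[:4])
        header = json.loads(data[4:4 + hlen])
        return AudioRawFrame(
            audio=data[4 + hlen:],
            sample_rate=header["sample_rate"],
            num_channels=header["num_channels"],
        )
```

The length-prefixed header keeps the payload opaque, so the same framing works for binary WebSocket messages in either direction.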
Yes, but I'm not quite there yet. I've been able to connect to a LiveKit room and send TTS audio from my Pipecat pipeline, but I'm running into a few issues with some callback handlers and receiving frames from participants. Once I've tidied things up and have something decent enough to share, I'll post an update here. Still haven't gotten around to video yet, though.
Edit: I haven't had the chance to clean that up yet, but if anybody already wants some inspiration, I'll push my progress here.
@davidzhao any plans on shipping LiveKit support for Pipecat? The lack of a LiveKit transport is the main reason I'm unable to use Pipecat and LiveKit together.
I would like to add my voice to the others: a LiveKit transport would be fantastic, especially with video. That would be the last missing puzzle piece for a fully local solution. I hope I'm not missing anything, but I believe the local transport does not support video or image input.
LiveKit would be a good option.
+1 for LiveKit here. If help is needed, I can pitch in on building a LiveKit Pipecat module.
@joachimchauvet: Could you send me a test main.py or an example showing how to use your LiveKit work so far? I know it's still in development, but I could build on your code and help you optimize it.
Sure! I'm on my phone right now but I'll send that over in the morning (CET) with the latest changes I have locally.
I added an example here: https://github.com/joachimchauvet/pipecat-livekit/blob/main/examples/foundational/01b-livekit-audio.py It requires livekit-api to generate the token.
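For reference, the token that livekit-api generates is an ordinary HS256 JWT, so if you just want to mint one without an extra dependency, here's a stdlib-only sketch. The claim names (`iss`, `sub`, and the `video` grant) are my reading of LiveKit's token docs, so double-check against the official livekit-api package:

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def livekit_token(api_key: str, api_secret: str, identity: str,
                  room: str, ttl: int = 3600) -> str:
    """Build an HS256-signed JWT granting join access to one room."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "iss": api_key,        # API key identifies the signing credential
        "sub": identity,       # participant identity
        "nbf": now,
        "exp": now + ttl,
        "video": {"roomJoin": True, "room": room},  # assumed grant layout
    }
    signing_input = (
        _b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + _b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    sig = hmac.new(api_secret.encode(), signing_input.encode(),
                   hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)
```

In practice the livekit-api package is the safer choice since it tracks any changes to the grant schema; this is just to show there's no magic in the token step.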
We're happy to merge this into pipecat main any time.
@joachimchauvet Is there any way for the LiveKit transport to also receive images, similar to how the Daily transport works, by actively retrieving frames from the other participant's camera?
Right now it only supports audio. It's definitely possible to implement video/images with LiveKit but that's not implemented in my LiveKit transport yet.
I think this can now be closed since we support WebSockets, FastAPI WebSockets, LiveKit, and Daily.