
Build real-time multimodal AI applications 🤖🎙️📹

Results: 430 issues

Has anyone measured CPU/RAM requirements for the minimal example? Has anyone done load testing, or does anyone have recommendations for an autoscaling plan? It'd also be good to know where the sweet spot is:...
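No published numbers appear in the thread, but one straightforward starting point is to measure your own agent process under a representative load. A minimal stdlib-only sketch for reading peak resident memory (the function name is ours, not a livekit API):

```python
import resource
import sys

def peak_memory_mib() -> float:
    """Return this process's peak resident set size in MiB.

    ru_maxrss is reported in KiB on Linux and in bytes on macOS,
    so normalize before converting.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        peak /= 1024  # bytes -> KiB
    return peak / 1024  # KiB -> MiB

if __name__ == "__main__":
    print(f"peak RSS: {peak_memory_mib():.1f} MiB")
```

Sampling this (plus CPU time from the same `getrusage` call) while ramping up concurrent rooms gives you the per-agent footprint an autoscaling plan would be built on.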

Hi all, I'm encountering issues while testing agents. The server appears to assign multiple jobs to the same room when using the agent `connect --room` command, causing the agent to...

Hi there. I have a problem: when I add TTS with the ElevenLabs plugin, calling `Plugin.register_plugin(ElevenLabsPlugin())` raises `TypeError: Can't instantiate abstract class ElevenLabsPlugin with abstract method download_files`.

    Error in main_task
    Traceback (most recent call last):
      File "/root/pythonenv/enve/lib/python3.10/site-packages/livekit/agents/utils/log.py", line 16, in async_fn_logs
        return await fn(*args, **kwargs)
      File "/root/pythonenv/enve/lib/python3.10/site-packages/livekit/plugins/azure/tts.py", line 90, in main_task
        raise ValueError(
    ValueError: failed to synthesize audio:...

Hello, when I set the mode to `JobType.JT_PUBLISHER`: after one human user joined the room and two agents were started, I found that the agent service received two job requests....
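If the goal is to run at most one agent per room regardless of how many job requests arrive, the deduplication can be sketched independently of the dispatch API. This is a hypothetical helper, not part of livekit-agents; the real request handler would call into something like it before accepting:

```python
class RoomJobGate:
    """Accept at most one job per room name.

    A minimal sketch of duplicate-job suppression; the actual
    livekit-agents request/accept interface may differ.
    """

    def __init__(self) -> None:
        self._active: set[str] = set()

    def should_accept(self, room_name: str) -> bool:
        """Return True only for the first job seen for this room."""
        if room_name in self._active:
            return False
        self._active.add(room_name)
        return True

    def release(self, room_name: str) -> None:
        """Free the room once the agent's job finishes."""
        self._active.discard(room_name)
```

The second request for the same room is then rejected instead of spawning a second agent.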

Adding support for UtteranceEnd to the Deepgram agent plugin. UtteranceEnd is another feature offered by Deepgram to detect the end of speech. The Deepgram documentation is here: https://developers.deepgram.com/docs/utterance-end
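On the wire, Deepgram's streaming API delivers JSON messages tagged with a `type` field, and UtteranceEnd events arrive as `"type": "UtteranceEnd"` per the docs linked above. A small sketch of the check such a plugin would perform on each incoming message (the function name is ours):

```python
import json

def is_utterance_end(raw: str) -> bool:
    """Return True if a Deepgram streaming message signals end of speech.

    UtteranceEnd messages carry {"type": "UtteranceEnd", ...}; any
    non-JSON or differently typed message is ignored.
    """
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return msg.get("type") == "UtteranceEnd"
```

A plugin would treat a True result as the cue to flush the current utterance to the LLM, complementing endpointing based on silence alone.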

    Error in _str_synthesis_task
    Traceback (most recent call last):
      File "/root/pythonenv/enve/lib/python3.10/site-packages/livekit/agents/utils/log.py", line 16, in async_fn_logs
        return await fn(*args, **kwargs)
      File "/root/pythonenv/enve/lib/python3.10/site-packages/livekit/agents/voice_assistant/agent_output.py", line 195, in _str_synthesis_task
        handle.tts_forwarder.push_text(transcript)
      File "/root/pythonenv/enve/lib/python3.10/site-packages/livekit/agents/transcription/tts_forwarder.py", line 200, in...

I have an LLM deployed and ready on a server. Can I use my own APIs to get responses instead of `openai.LLM()`?
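Many self-hosted LLM servers expose an OpenAI-compatible `/v1/chat/completions` endpoint, in which case the same request shape works against your own base URL. A stdlib-only sketch of building such a request (the base URL, model name, and function name here are placeholders, not anything from the thread):

```python
import json
import urllib.request

def chat_completion_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request
    against a self-hosted endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage sketch (requires a running server):
# with urllib.request.urlopen(chat_completion_request(
#         "http://localhost:8000", "my-model", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If your server is not OpenAI-compatible, the alternative is wrapping your API behind whatever LLM interface the agents framework expects, with this request logic inside.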

Is there an integration that uses LlamaIndex vector stores and its chat engine directly?
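Absent a confirmed first-party integration, one option is a thin adapter that wraps anything exposing a LlamaIndex-style `chat()` method behind a plain completion call. Everything below (the `ChatEngineLLM` name, the `complete()` interface) is hypothetical, not an existing API:

```python
from typing import Protocol

class ChatEngine(Protocol):
    """Anything with a chat() method, e.g. a LlamaIndex chat engine."""
    def chat(self, message: str): ...

class ChatEngineLLM:
    """Hypothetical adapter exposing a chat engine as a completion call.

    A real agents integration would stream tokens rather than
    return a single string; this only shows the wiring.
    """

    def __init__(self, engine: ChatEngine) -> None:
        self._engine = engine

    def complete(self, prompt: str) -> str:
        response = self._engine.chat(prompt)
        # LlamaIndex chat responses stringify to their text content.
        return str(response)
```

The adapter keeps retrieval (vector store + chat engine) entirely on the LlamaIndex side, so the agent only sees text in and text out.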