DeepResearch Agent with video search support.
This PR introduces an upgraded version of the Deep Research Agent that improves the research process by integrating web and video searches into a parallel workflow. Unlike the previous version, which performed only raw web searches, the new agent decides whether a user's query needs video content and, if so, how many video searches to perform.
When a user provides a query, the agent first plans the research by determining how many web searches are needed and generating the corresponding search queries. It also evaluates whether the query requires video searches and, if so, decides how many to perform. Both web and video searches then run in parallel, with each individual search query also executed concurrently, which significantly reduces end-to-end latency.
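A minimal sketch of this plan-then-fan-out flow is shown below (Python with `asyncio`). The planner logic, the search stubs, and all function names here are assumptions for illustration only, not the actual implementation in this PR:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical sketch of the parallel fan-out described above.
# The planner and search backends are placeholder stubs, not the PR's real code.

@dataclass
class ResearchPlan:
    web_queries: list[str]
    video_queries: list[str]  # empty if the planner decides video isn't needed

def plan_research(user_query: str) -> ResearchPlan:
    # Placeholder planner: a real agent would call an LLM here to decide
    # how many web/video searches to run and to generate the query strings.
    web_queries = [f"{user_query} overview", f"{user_query} recent developments"]
    video_queries = [f"{user_query} tutorial"] if "how to" in user_query.lower() else []
    return ResearchPlan(web_queries, video_queries)

async def web_search(query: str) -> list[dict]:
    # Stub standing in for a real web-search API call.
    await asyncio.sleep(0.1)
    return [{"source": "web", "query": query, "title": f"Web result for {query}"}]

async def video_search(query: str) -> list[dict]:
    # Stub standing in for a real video-search API call.
    await asyncio.sleep(0.1)
    return [{"source": "video", "query": query, "title": f"Video result for {query}"}]

async def run_searches(plan: ResearchPlan) -> list[dict]:
    # All web and video queries are dispatched concurrently, so total latency
    # is roughly that of the slowest single search rather than the sum of all.
    tasks = [web_search(q) for q in plan.web_queries]
    tasks += [video_search(q) for q in plan.video_queries]
    results = await asyncio.gather(*tasks)
    return [item for batch in results for item in batch]

if __name__ == "__main__":
    plan = plan_research("how to fine-tune an LLM")
    print(asyncio.run(run_searches(plan)))
```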
Once the web and video searches complete, the agent merges the results and returns the top K most relevant items from both sources, so the user gets the most pertinent information quickly.
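For the top-K step, a simple merge-and-rank could look like the sketch below; the relevance scoring here is a placeholder assumption, since the description above does not specify how results are actually ranked:

```python
import heapq

# Hypothetical top-K selection over the merged web + video results.

def score(result: dict, user_query: str) -> float:
    # Toy relevance score: fraction of query words that appear in the title.
    words = user_query.lower().split()
    title = result.get("title", "").lower()
    return sum(w in title for w in words) / max(len(words), 1)

def top_k(results: list[dict], user_query: str, k: int = 5) -> list[dict]:
    # Rank web and video results together and keep only the K most relevant.
    return heapq.nlargest(k, results, key=lambda r: score(r, user_query))
```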
This new version of the Deep Research Agent makes the research process faster, more efficient, and much more comprehensive by combining both text-based web results and video content, tailored to the user’s query.
Hey @NeuralNoble this sounds incredible - very high value! Would it be possible to add this as code within the repo rather than a submodule? Or perhaps a notebook with an explanation and a link to your repo?
Yes, I can do that. Is it fine if it stays as a separate folder in the community contributions folder? I'll make a notebook explaining my approach and provide a link to my repo.
that would be perfect - thanks so much @NeuralNoble