[Feature Request]: Submit Single Prompt to Multiple Language Models with Unified Results Stream
The suggested feature would allow users to submit a single prompt to multiple language models simultaneously. Instead of manually submitting the same prompt to each model and receiving separate threads, this feature would streamline the process by presenting the responses from all models in a single, contiguous output. Users could then interact with the combined results as if they came from one model, making the workflow more efficient.
Use Case
Comparing Model Outputs: Users could easily compare how different language models respond to the same query without manually managing multiple threads.
Consolidated Follow-ups: After receiving responses from multiple models, users could submit follow-up prompts against the unified result, making it easier to refine or expand on the information.
Consolidated Reports: Users could quickly generate reports that highlight key differences and contradictions between models, saving time and effort. A rough sketch of how the fan-out could work follows below.
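As a minimal sketch of the idea (not a proposal for the actual implementation), the snippet below fans one prompt out to several models concurrently and merges the replies into a single labelled block. The `call_model` helper is a hypothetical stand-in for whatever provider SDK or endpoint each model uses, and the model names are just examples:

```python
import concurrent.futures

def call_model(model: str, prompt: str) -> str:
    # Stub for illustration only: in practice this would call the
    # provider's API for `model` and return the reply text.
    return f"[{model}] response to: {prompt!r}"

def fan_out(prompt: str, models: list[str]) -> str:
    """Send one prompt to several models concurrently and merge the
    replies into a single, contiguous output labelled per model."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {pool.submit(call_model, m, prompt): m for m in models}
        replies = {futures[f]: f.result()
                   for f in concurrent.futures.as_completed(futures)}
    return "\n\n".join(f"### {m}\n{replies[m]}" for m in models)

if __name__ == "__main__":
    unified = fan_out(
        "Summarize the trade-offs between REST and gRPC.",
        ["claude-3-5-sonnet", "gpt-4o"],  # example model names
    )
    print(unified)
    # A follow-up prompt could include `unified` as context, so the user
    # refines the combined answer instead of managing separate threads.
```

Labelling each reply with its model name keeps provenance visible when comparing outputs, while still letting follow-up prompts treat the merged block as a single result.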
Alternatives Considered
No response
That is cool, especially since you never know which model will give you the best answer on Perplexity. Sometimes Claude is good, sometimes GPT-4o.