Results: 4 issues by moritalous

Support for Bedrock inference providers was added in the following merge: https://github.com/meta-llama/llama-stack/commit/95abbf576b4b078e72b779f534cbaf696e30ecab However, it was overwritten by the next merge: https://github.com/meta-llama/llama-stack/commit/56aed59eb4c9915676c6fc7aac009dad97e7ead2 As a result, Bedrock is not displayed as...


I ran it with the topic "Compare iPhone 16 vs iPhone 16e".

```python
# Fast config with DeepSeek-R1-Distill-Llama-70B
thread = {"configurable": {"thread_id": str(uuid.uuid4()), "search_api": "tavily", "planner_provider": "openai", "planner_model": "gpt-4o", "writer_provider":...
```

It would be more versatile and better if we could make chat_model configurable, so that chat_model-specific parameters (such as extended thinking in Claude 3.7 Sonnet...
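A minimal sketch of what such a configurable chat model could look like — the names `ChatConfig` and `build_model_kwargs` are hypothetical illustrations, not part of the actual codebase, and the extended-thinking parameters shown are assumptions:

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical sketch: a config entry carrying the model name plus
# arbitrary model-specific parameters (e.g. an extended-thinking option
# for Claude 3.7 Sonnet), so callers can swap models freely.
@dataclass
class ChatConfig:
    provider: str
    model: str
    model_kwargs: dict[str, Any] = field(default_factory=dict)

def build_model_kwargs(cfg: ChatConfig) -> dict[str, Any]:
    """Merge the provider/model identifiers with model-specific kwargs."""
    return {"provider": cfg.provider, "model": cfg.model, **cfg.model_kwargs}

# Usage: the same call site works for any model; only the config changes.
cfg = ChatConfig(
    provider="anthropic",
    model="claude-3-7-sonnet",
    model_kwargs={"thinking": {"type": "enabled", "budget_tokens": 1024}},
)
print(build_model_kwargs(cfg)["model"])  # claude-3-7-sonnet
```

Keeping model-specific options in a free-form `model_kwargs` dict avoids hard-coding per-provider parameters into the shared configuration schema.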

#40 I made changes to add support for custom chat models in the configuration. However, I was unable to get it to work as expected with the OpenAI o1-mini model...