Kayvan Sylvan

190 comments by Kayvan Sylvan

Old issue and no response from the original poster @bartmeuris - closing this.

With the latest versions of fabric, we can do this:

```text
fabric -L
Available models:
[1] Anthropic|claude-3-5-haiku-20241022
[2] Anthropic|claude-3-5-haiku-latest
[3] Anthropic|claude-3-5-sonnet-20240620
[4] Anthropic|claude-3-5-sonnet-20241022
[5] Anthropic|claude-3-5-sonnet-latest
[6] Anthropic|claude-3-7-sonnet-20250219
[7] Anthropic|claude-3-7-sonnet-latest
```
...

@garnus @everaldo especially for LiteLLM and other locally run (like LM Studio) model providers, there were cases where we could not easily disambiguate which model was meant.

Wouldn't setting an environment variable like this do what you need?

```text
export FOD=/my/long/fabric/output/directory
```

The idea being that FOD represents your Fabric Output Directory. Set it in your shell...
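A minimal sketch of how that variable could be used day to day. The `FOD` name and the redirect pattern are illustrative assumptions, not built-in Fabric behavior, and `summarize` is just a placeholder pattern name:

```shell
# Define the Fabric Output Directory once, e.g. in ~/.bashrc or ~/.zshrc
export FOD="$HOME/fabric-output"
mkdir -p "$FOD"

# Redirect any fabric run's output into that directory by hand
fabric --pattern summarize < notes.txt > "$FOD/notes-summary.md"
```
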

Use "fabric --serve" to create the Fabric API server.

You're right, but there's nothing in Fabric's handling that's different for DeepSeek versus OpenAI, for example:

```plaintext
$ time fabric -m deepseek-reasoner 'Why is the sky blue?'
The sky appears...
```

I don't think there's anything for us to do about this. I recently fixed a long-standing bug, however, that had doubled our token usage. @liangzhenqiao any other thoughts? Or can you...

As of v1.4.252, you can use the "--suppress-think" flag to suppress DeepSeek thinking blocks.
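For example (the model name here is just the one from the DeepSeek comment above; adjust it to your own configuration):

```shell
# Hide the model's <think>...</think> reasoning block from the output
fabric --suppress-think -m deepseek-reasoner 'Why is the sky blue?'
```
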

The idea of a conversation mode is cool, but it's outside the scope of Fabric. However, you could do this with a shell script, using sessions.
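A rough sketch of such a wrapper, assuming a `--session` flag that names a persistent session (check `fabric --help` for the exact flag in your version; the session name is arbitrary):

```shell
#!/usr/bin/env bash
# chat.sh - a naive conversation loop built on Fabric sessions.
# Each turn is sent to the same named session so context carries over.
SESSION="chat-$$"

while IFS= read -r -p "you> " line; do
  # Skip empty input rather than sending a blank prompt
  [ -z "$line" ] && continue
  fabric --session "$SESSION" "$line"
done
```

Run it with `bash chat.sh` and end the conversation with Ctrl-D.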