Fabric
[Question]: Is there an easy way to select a model from a different vendor on the command line?
What is your question?
After reading the documentation, I am still not clear on how to switch to a model from a different vendor. I tried using the --model parameter, but it didn't work.
What worked:
Redefining the env vars DEFAULT_VENDOR and DEFAULT_MODEL.
For instance, my DEFAULT_VENDOR is Mistral and my DEFAULT_MODEL is open-mixtral-8x22b-2404.
If I try `fabric --model=gpt-4o`, it doesn't work.
What I have to do instead: `DEFAULT_VENDOR=OpenAI DEFAULT_MODEL=gpt-4o fabric "prompt"`
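In the meantime, I wrap those overrides in small shell functions, one per vendor (just a sketch, assuming a POSIX shell and that fabric keeps reading DEFAULT_VENDOR and DEFAULT_MODEL from the environment; the function names are my own):

```
# Hypothetical wrappers (names are examples only).
# Each one sets the vendor/model for a single fabric invocation.
fabric_openai() {
  DEFAULT_VENDOR=OpenAI DEFAULT_MODEL=gpt-4o fabric "$@"
}

fabric_mistral() {
  DEFAULT_VENDOR=Mistral DEFAULT_MODEL=open-mixtral-8x22b-2404 fabric "$@"
}
```

Usage: `fabric_openai "prompt"`.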
I would like to know if there is an easier way to do that, since I have multiple vendors configured: Ollama, Mistral, OpenAI, Groq, etc.
Thanks a lot, and congrats on this great project.
Please provide the output of `fabric --listmodels`.
Hello, I'm in the same boat. I have Ollama and OpenAI configured.

```
fabric --listmodels
Available models:
Ollama
[1] mistral:latest
[2] llama3.1:latest
[3] medllama2:latest
[4] llama3.2:latest
[5] codellama:latest
OpenAI
[6] gpt-4o-audio-preview-2024-10-01
[7] gpt-4o-realtime-preview
[8] gpt-4o-realtime-preview-2024-10-01
[9] o1-mini-2024-09-12
[10] dall-e-2
[11] gpt-4-turbo
[12] gpt-4-1106-preview
[13] gpt-3.5-turbo
[14] gpt-3.5-turbo-0125
[15] gpt-3.5-turbo-instruct
[16] babbage-002
[17] whisper-1
[18] dall-e-3
[19] text-embedding-3-small
[20] gpt-3.5-turbo-16k
[21] gpt-4-0125-preview
[22] gpt-4-turbo-preview
[23] omni-moderation-latest
[24] gpt-4o-2024-05-13
[25] omni-moderation-2024-09-26
[26] tts-1-hd-1106
[27] chatgpt-4o-latest
[28] gpt-4
[29] gpt-4-0613
[30] o1-mini
[31] o1-preview
[32] o1-preview-2024-09-12
[33] tts-1-hd
[34] text-embedding-ada-002
[35] gpt-3.5-turbo-1106
[36] gpt-4o-audio-preview
[37] tts-1
[38] tts-1-1106
[39] gpt-3.5-turbo-instruct-0914
[40] davinci-002
[41] text-embedding-3-large
[42] gpt-4o-mini-2024-07-18
[43] gpt-4o-mini
[44] gpt-4o-realtime-preview-2024-12-17
[45] gpt-4o-mini-realtime-preview
[46] gpt-4o-mini-realtime-preview-2024-12-17
[47] gpt-4o-2024-08-06
[48] gpt-4o
[49] gpt-4o-2024-11-20
[50] gpt-4o-audio-preview-2024-12-17
[51] gpt-4o-mini-audio-preview
[52] gpt-4o-mini-audio-preview-2024-12-17
[53] gpt-4-turbo-2024-04-09
```
The default is set to OpenAI's gpt-4o, but there is some data I would like to send to an Ollama model instead, for example:

```
fabric -u "https://xxx.pl/start.html" --pattern summarize_paper -m llama3.1
```

which fails with:

```
Requested Model = llama3.1 Default Model = gpt-4o Default Vendor = OpenAI.
could not find vendor.
```
Please use `... -m "llama3.1:latest" ...`
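Applied to the command above, that would be, for example:

```
fabric -u "https://xxx.pl/start.html" --pattern summarize_paper -m "llama3.1:latest"
```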
It's working. Thank you.
@eugeis, sorry for the late response.
```
Available models:
Anthropic
[1] claude-3-5-haiku-latest
[2] claude-3-5-haiku-20241022
[3] claude-3-5-sonnet-latest
[4] claude-3-5-sonnet-20241022
[5] claude-3-5-sonnet-20240620
[6] claude-3-opus-latest
[7] claude-3-opus-20240229
[8] claude-3-sonnet-20240229
[9] claude-3-haiku-20240307
[10] claude-2.1
[11] claude-2.0
[12] claude-instant-1.2
Ollama
[13] llama3.2:1b-instruct-fp16
[14] deepseek-coder-v2:16b-lite-instruct-q8_0
[15] mistral:v0.2
[16] qwen2.5-coder:latest
[17] llama3.2:1b
[18] llama3.1:latest
[19] llama3.2:latest
Mistral
[20] ministral-3b-2410
[21] ministral-3b-latest
[22] ministral-8b-2410
[23] ministral-8b-latest
[24] open-mistral-7b
[25] mistral-tiny
[26] mistral-tiny-2312
[27] open-mistral-nemo
[28] open-mistral-nemo-2407
[29] mistral-tiny-2407
[30] mistral-tiny-latest
[31] open-mixtral-8x7b
[32] mistral-small
[33] mistral-small-2312
[34] open-mixtral-8x22b
[35] open-mixtral-8x22b-2404
[36] mistral-small-2402
[37] mistral-small-2409
[38] mistral-small-latest
[39] mistral-medium-2312
[40] mistral-medium
[41] mistral-medium-latest
[42] mistral-large-2402
[43] mistral-large-2407
[44] mistral-large-2411
[45] mistral-large-latest
[46] pixtral-large-2411
[47] pixtral-large-latest
[48] codestral-2405
[49] codestral-latest
[50] codestral-mamba-2407
[51] open-codestral-mamba
[52] codestral-mamba-latest
[53] pixtral-12b-2409
[54] pixtral-12b
[55] pixtral-12b-latest
[56] mistral-embed
[57] mistral-moderation-2411
[58] mistral-moderation-latest
Groq
[59] llama-3.2-1b-preview
[60] llama-3.3-70b-versatile
[61] llama-3.1-70b-versatile
[62] llama3-groq-8b-8192-tool-use-preview
[63] gemma2-9b-it
[64] llama-3.2-90b-vision-preview
[65] distil-whisper-large-v3-en
[66] llama-3.1-8b-instant
[67] llama-3.2-3b-preview
[68] llama3-groq-70b-8192-tool-use-preview
[69] whisper-large-v3
[70] llama-guard-3-8b
[71] llama-3.2-11b-vision-preview
[72] llama3-8b-8192
[73] llama3-70b-8192
[74] whisper-large-v3-turbo
[75] llama-3.3-70b-specdec
[76] mixtral-8x7b-32768
OpenAI
[77] gpt-4o-audio-preview-2024-10-01
[78] gpt-4o-realtime-preview
[79] gpt-4o-realtime-preview-2024-10-01
[80] dall-e-2
[81] gpt-4-turbo
[82] gpt-4o-mini-2024-07-18
[83] gpt-4-1106-preview
[84] gpt-4o-mini
[85] gpt-3.5-turbo
[86] gpt-3.5-turbo-0125
[87] gpt-3.5-turbo-instruct
[88] babbage-002
[89] whisper-1
[90] dall-e-3
[91] text-embedding-3-small
[92] gpt-3.5-turbo-16k
[93] gpt-4-0125-preview
[94] gpt-4-turbo-preview
[95] omni-moderation-latest
[96] gpt-4o-2024-05-13
[97] omni-moderation-2024-09-26
[98] tts-1-hd-1106
[99] chatgpt-4o-latest
[100] gpt-4
[101] gpt-4-0613
[102] o1-preview
[103] o1-preview-2024-09-12
[104] tts-1-hd
[105] text-embedding-ada-002
[106] gpt-3.5-turbo-1106
[107] gpt-4o-audio-preview
[108] tts-1
[109] tts-1-1106
[110] gpt-3.5-turbo-instruct-0914
[111] davinci-002
[112] text-embedding-3-large
[113] gpt-4o-realtime-preview-2024-12-17
[114] gpt-4o-mini-realtime-preview
[115] gpt-4o-mini-realtime-preview-2024-12-17
[116] gpt-4o-2024-08-06
[117] gpt-4o
[118] o1-mini
[119] o1-mini-2024-09-12
[120] gpt-4o-2024-11-20
[121] gpt-4o-audio-preview-2024-12-17
[122] gpt-4o-mini-audio-preview
[123] gpt-4o-mini-audio-preview-2024-12-17
[124] gpt-4-turbo-2024-04-09
```
With the latest versions of fabric, we can do this:

```
fabric -L
Available models:
[1] Anthropic|claude-3-5-haiku-20241022
[2] Anthropic|claude-3-5-haiku-latest
[3] Anthropic|claude-3-5-sonnet-20240620
[4] Anthropic|claude-3-5-sonnet-20241022
[5] Anthropic|claude-3-5-sonnet-latest
[6] Anthropic|claude-3-7-sonnet-20250219
[7] Anthropic|claude-3-7-sonnet-latest
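```

The full list can get long, so it can be filtered with standard tools, e.g.:

```
fabric -L | grep -i sonnet
```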
The option to use is `-V` (or `--vendor`):
```
fabric -h | grep -B 3 -A 3 vendor
  -U, --updatepatterns        Update patterns
  -c, --copy                  Copy to clipboard
  -m, --model=                Choose model
  -V, --vendor=               Specify vendor for the selected model (e.g., -V "LM
                              Studio" -m openai/gpt-oss-20b)
      --modelContextLength=   Model context length (only affects ollama)
  -o, --output=               Output to file
```
@garnus @everaldo Especially for LiteLLM and other locally run model providers (like LM Studio), there were cases where we could not easily disambiguate which model was meant.
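For example, to make sure the earlier llama3.1 request goes to the Ollama copy of that model (reusing the URL and pattern from above), something like this should work:

```
fabric -V Ollama -m "llama3.1:latest" --pattern summarize_paper -u "https://xxx.pl/start.html"
```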