John Mai
How about setting a fixed scan path, so the plugin only scans directories that contain a model?
> Hi guys - which branch should i be pinning/using?

@narciszait You can specify the version with a commit hash.
This was caused by this commit: https://github.com/hiyouga/LLaMA-Factory/pull/3829/commits/b55fb611c57be03fb38218c7da1d96f6848496ba (see https://github.com/hiyouga/LLaMA-Factory/blob/c4f50865ad798e1e99044480e1ab05abefc30224/src/llamafactory/data/loader.py#L122-L124). For now I have changed it back locally to:

```py
if data_args.max_samples is not None:  # truncate dataset
    num_samples = min(data_args.max_samples, len(dataset))
    dataset = dataset.select(range(num_samples))
```

@hiyouga
Because part of the AI output is in Markdown format, json.loads cannot parse it directly. You can first extract the JSON with a regex: pattern = r'```json\n([\s\S]*?)\n```'. This is my temporary workaround; a complete solution could borrow from langchain: https://github.com/langchain-ai/langchain/blob/c6350d636e139bd9018f7225d3257c4be6139d54/libs/core/langchain_core/utils/json.py#L124
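A minimal sketch of that temporary workaround (the function name is mine, not from the codebase):

```py
import json
import re

def parse_ai_json(text: str):
    """Parse JSON that the model may have wrapped in a Markdown code fence."""
    match = re.search(r"```json\n([\s\S]*?)\n```", text)
    if match:
        text = match.group(1)  # strip the fence before parsing
    return json.loads(text)

print(parse_ai_json('```json\n{"answer": 42}\n```'))  # {'answer': 42}
```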
> > Base models don't usually have a chat template. Is there something like https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/generate.py#L213C1-L214C1 in Swift Transformers?
>
> No, nothing like that: https://github.com/huggingface/swift-transformers/blob/main/Sources/Tokenizers/Tokenizer.swift#L359
>
> Perhaps there should...
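For reference, the fallback being discussed in the linked mlx_lm code looks roughly like this in Python (a sketch assuming a Hugging Face transformers tokenizer; the model id is only an example):

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlx-community/Llama-3.2-3B-Instruct-8bit")
prompt = "Write a haiku about MLX."

# Only apply a chat template when the tokenizer actually ships one;
# base models without a template fall back to the raw prompt.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```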
Llama-3.2-3B-8bit is not a chat model; it is recommended to use Instruct or Chat models, e.g. https://huggingface.co/mlx-community/Llama-3.2-3B-Instruct-8bit. I will add compatibility for such models.
Thank you for your suggestions. I will adopt them.
Thank you for your kind words about our project! We plan to support the OpenAI-compatible API, but it's not a top priority right now since many apps already offer this...
ChatMLX mainly relies on the support provided by [MLX Swift Examples](https://github.com/ml-explore/mlx-swift-examples). Phi 3.5 MoE support is being added in https://github.com/ml-explore/mlx-swift-examples/pull/116; I will update promptly once that PR is merged.