Start two instances with the commands below; one of them dies after a while:

```
w2 start -S /home/www/whistle/8888 -C -p 8888
w2 start -S /home/www/whistle/9999 -C -p 9999
```
> Renamed the issue. Let me know if this is accurate: we need an option to entirely disable the request to huggingface and only load from local disk. If it...
> What device?

Mali GPU, arm64.
> Are you using a terminal emulator? Which device? Which version of android?

Termux, Xiaomi 14 Pro, Android 14.
Debian installed via proot-distro can run exo, but when multiple phones are connected to the same network, the exo instances cannot join together; all nodes are displayed as one.
You can hardcode the model path; three places need modifying. In `exo/exo/api/chatgpt_api.py`, the `resolve_tinygrad_tokenizer` function:

```python
def resolve_tinygrad_tokenizer(model_id: str):
  if model_id == "llama3-8b-sfr":
    # here i modified
    # return AutoTokenizer.from_pretrained("TriAiExperiments/SFR-Iterative-DPO-LLaMA-3-8B-R")
    return AutoTokenizer.from_pretrained("/nasroot/models/Meta-Llama-3-8B")
  elif model_id == "llama3-70b-sfr":
    return AutoTokenizer.from_pretrained("TriAiExperiments/SFR-Iterative-DPO-LLaMA-3-8B-R")
  else:
    raise ValueError(f"tinygrad doesnt...
```
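Since the same local path has to be repeated at each of the three call sites, a single lookup table keeps them consistent. This is a hypothetical helper, not part of exo; the function and table names are made up for illustration:

```python
# Hypothetical helper (not in exo): one table for all hardcoded local paths.
LOCAL_MODEL_PATHS = {
    "llama3-8b-sfr": "/nasroot/models/Meta-Llama-3-8B",
}

def resolve_local_model_path(model_id: str) -> str:
    """Return the local checkpoint directory for model_id, or raise."""
    try:
        return LOCAL_MODEL_PATHS[model_id]
    except KeyError:
        raise ValueError(f"no local path configured for {model_id!r}")
```

Each modified call site can then pass `resolve_local_model_path(model_id)` to `AutoTokenizer.from_pretrained(...)` instead of a repo id.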
> @artistlu Thanks for the suggestion. I still haven't succeeded; I can't connect to huggingface, so I couldn't get the project running.

I can't reach huggingface either. I downloaded the model from https://www.modelscope.cn/models/LLM-Research/Meta-Llama-3-8B/files, placed it in a fixed directory on each node, and after hardcoding the three places it loaded. Hope this helps.
> Try running with `SUPPORT_BF16=0` e.g. `SUPPORT_BF16=0 python3 main.py`. Can you let me know if that works? > > Ideally we detect this automatically. In order to load a local...
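The `SUPPORT_BF16=0` workaround follows the usual environment-variable override pattern: default on, with `0` disabling the feature. A minimal sketch of that pattern (assumed, the function name is hypothetical):

```python
import os

def support_bf16() -> bool:
    # Default to bf16 enabled; SUPPORT_BF16=0 in the environment disables it.
    return os.environ.get("SUPPORT_BF16", "1") != "0"
```

Automatic detection would replace the default with a probe of the device's capabilities, falling back to the environment variable as a manual override.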