Eric Curtin

Results 479 comments of Eric Curtin

> @rhatdan I've updated the project build to only accept python 3.11+!
>
> I'm trying to consolidate the dev dependencies into the `pyproject.toml` so they can be centrally managed....

> At least in my case I think such a tool could be quite useful. When designing the new cache format (which has the metadata saved along with the model file), I...

It's also sometimes debatable whether we should go there in llama.cpp or leave this functionality to higher-level tools like Docker Model Runner, which also supports pulling from HuggingFace and DockerHub...

> > [@ServeurpersoCom](https://github.com/ServeurpersoCom) don't u think this could be added to [#16335](https://github.com/ggml-org/llama.cpp/pull/16335)?
>
> Yes, the model selector could definitely evolve into a more complete system, but as it stands...

We do have an issue with the tommarques56 GitHub account in the llama-pull PR; he's left roughly 30 AI bot comments. Sometimes it's not even clear whether he's speaking as...

> The naming on `llama-cli` does suggest that it should be a general purpose tool, rather than just a chat utility. I don't have a strong preference on which tool...

Also curious: what's the model of your CPU, @kraxel? I don't recall seeing anyone report this... Even if we can turn off these instructions, it may make sense...

Let's close; we can re-open if necessary.

Related to this, I really want to build portable binaries for all flavors of Linux (CentOS, Ubuntu, etc.). Do the statically built binaries work well for these use cases?
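As a rough sketch of the portability question above: a fully static binary has no runtime dependency on the host distro's shared libraries, which is what makes it usable across CentOS, Ubuntu, and friends. The helper below is a hypothetical check (the function name and paths are illustrative, not from the original comment); it relies on glibc's `ldd` reporting "not a dynamic executable" for non-dynamically-linked files.

```shell
# Hypothetical helper: classify a binary as "static" or "dynamic".
# A static result suggests it should run on any Linux distro with a
# compatible kernel, without matching system libraries.
check_static() {
  # glibc's ldd prints "not a dynamic executable" (and exits non-zero)
  # when the file has no dynamic linking information.
  if ldd "$1" 2>&1 | grep -q "not a dynamic executable"; then
    echo "static"
  else
    echo "dynamic"
  fi
}

# Example: most distro /bin/sh builds are dynamically linked,
# so this would typically report "dynamic".
check_static /bin/sh
```

Note that a "static" result only covers library linkage; things like hard-coded paths or kernel-version requirements can still limit portability.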

@Vaibhavs10 @ggerganov @vignesh1507 did we ever fix the "ship libcurl on Windows" problem in the end?