Zachary Cross
Are there any plans for Ollama to support this type of hardware setup (AMD GPUs on Intel Mac)?
> Some possibly relevant data: on my Intel iMac Pro with an AMD Radeon Pro Vega (8 GB VRAM), if I build the current head of llama.cpp with `make CUBLAS=1` the resulting...
@leobenkel It seems there's a potential adjustment the devs could make to one of Ollama's dependency builds to take advantage of AMD GPUs' support for Metal 3...
Any updates on this?
> llama3 compiled from sources on my MacPro with AMD Radeon RX 6950 XT 16GB successfully utilized GPU.

@xakrume Can you explain in more detail how you...
@vlbosch Are you getting GPU acceleration with the AMD eGPU? If so, how? I'm also running the Intel Mac build. I have 64 GB RAM, and Jan is only doing CPU inference...
Commenting in support, would be a huge fan of this integration
@ishaan-jaff Thanks for the reply! In short: the ability to use a GPT-4-capable LLM for products that would otherwise be costly through the OpenAI API (particularly autonomous agents, which make a ton of API calls)...