
LLamaSharp v0.15.0 broke cuda backend

Open SymoHTL opened this issue 1 year ago • 16 comments

Description

I have a Linux server with a Quadro RTX 4000 and the NVIDIA drivers installed. My app runs in a Docker container; as the base image I used `FROM nvidia/cuda:12.5.0-runtime-ubuntu22.04 AS base`. With v0.13.0 this worked with GGUF models and GPU support, but now I want to run Llama 3.1, so I need to upgrade to v0.15.0. After upgrading it can't load the library anymore. If I install only the CPU backend it works, but my server has a GPU for a reason.

Edit: full error

SymoHTL avatar Aug 28 '24 11:08 SymoHTL

Can you try testing with the current master branch? We've just merged new binaries, which will become the 0.16.0 release soon.

martindevans avatar Aug 28 '24 13:08 martindevans

How can I do that?

SymoHTL avatar Aug 28 '24 13:08 SymoHTL

Just clone this repo and build an application to run in your server environment (e.g. one of the examples).
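A rough sketch of that workflow, assuming the examples live in a `LLama.Examples` project (the project name is an assumption about the repo layout):

```shell
# Clone LLamaSharp and run the bundled examples on the target machine.
# Requires git and the .NET 8 SDK; prints a hint instead if either is missing.
if command -v git >/dev/null && command -v dotnet >/dev/null; then
    git clone https://github.com/SciSharp/LLamaSharp.git
    cd LLamaSharp
    dotnet run --project LLama.Examples -c Release
else
    echo "git and the .NET 8 SDK are required"
fi
```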

martindevans avatar Aug 28 '24 13:08 martindevans

So just run an example? Is it preconfigured with CUDA?

SymoHTL avatar Aug 28 '24 13:08 SymoHTL

By default the examples have WithCuda() called in the initial setup (see here).

martindevans avatar Aug 28 '24 13:08 martindevans

Hmm, I tried the code assistant example, but it ran on the GPU image.

Edit: wait, I don't have CUDA installed on the host, only in the Docker container.

SymoHTL avatar Aug 28 '24 13:08 SymoHTL

OK, the GPU is working now, but only at about 25%. How can I test the master branch in my app?

SymoHTL avatar Aug 28 '24 13:08 SymoHTL

@martindevans Please fix this bug in release 0.16.0: https://github.com/SciSharp/LLamaSharp/issues/891

Otherwise, I'll stay on 0.13.0 and KM 0.62.240605.1 :)

aropb avatar Aug 28 '24 15:08 aropb

Why are you tagging him here about another issue?

SymoHTL avatar Aug 28 '24 16:08 SymoHTL

Please fix this bug in release

It's an open source project, issues will get fixed when someone who wants them fixed puts in the work!

how can i now test the master branch in my app?

Easiest way is probably to remove the nuget reference from your main project, and add a reference to your cloned copy of LLamaSharp.
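With the .NET CLI that swap might look like the following; the relative path to the clone and the `LLama/LLamaSharp.csproj` project-file location are assumptions:

```shell
# Replace the NuGet package with a project reference to the cloned source.
if command -v dotnet >/dev/null; then
    dotnet remove WebUi/WebUi.csproj package LLamaSharp
    dotnet add WebUi/WebUi.csproj reference ../LLamaSharp/LLama/LLamaSharp.csproj
else
    echo "the .NET SDK is required"
fi
```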

martindevans avatar Aug 28 '24 18:08 martindevans

I'm sorry if I broke the rules.

aropb avatar Aug 28 '24 19:08 aropb

When will v0.16.0 be released?

SymoHTL avatar Aug 28 '24 19:08 SymoHTL

Hopefully this weekend. I'm going to be busy for the rest of September so I want to get it released before then if possible.

martindevans avatar Aug 28 '24 20:08 martindevans

Hmm, it's running on 0.16.0 now, but it's not working in Docker. It works fine outside Docker, though. Are the libraries maybe not copied correctly?

Edit: my Docker image is the NVIDIA one set up with CUDA, and my compose file also passes the GPU through.


```dockerfile
# Runtime base stage (as stated above): NVIDIA CUDA runtime on Ubuntu 22.04
FROM nvidia/cuda:12.5.0-runtime-ubuntu22.04 AS base

# Install .NET dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    wget \
    apt-transport-https && \
    wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb && \
    dpkg -i packages-microsoft-prod.deb && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
    aspnetcore-runtime-8.0 \
    libxml2 && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["WebUi/WebUi.csproj", "WebUi/"]
COPY ["Infrastructure/Infrastructure.csproj", "Infrastructure/"]
COPY ["Application/Application.csproj", "Application/"]
COPY ["Domain/Domain.csproj", "Domain/"]
RUN dotnet restore "WebUi/WebUi.csproj"
COPY . .
WORKDIR "/src/WebUi"
RUN dotnet build "WebUi.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "WebUi.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .

ENTRYPOINT ["dotnet", "WebUi.dll"]
```

[screenshot of the error attached]

SymoHTL avatar Sep 03 '24 12:09 SymoHTL

I don't personally know much about Docker, but I know some people have reported issues before with the binaries not loading in certain Docker environments. In those cases I think it was due to missing dependencies.
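One way to confirm a missing-dependency failure is to run `ldd` against the native library inside the container. The path below assumes the default backend package layout under `runtimes/` and is only an illustration:

```shell
# Any "not found" lines are shared libraries the loader cannot resolve.
LIB=/app/runtimes/linux-x64/native/cuda12/libllama.so
if [ -f "$LIB" ]; then
    ldd "$LIB" | grep "not found" || echo "all dependencies resolved"
else
    echo "no native library at $LIB"
fi
```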

Try cloning llama.cpp inside the container and compiling it, then using those binaries (make sure you build exactly the right llama.cpp version; see the bottom of the README).
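A minimal sketch of such a build stage; the CMake flag and the `devel` image tag are assumptions, and the commit to check out should be the one pinned at the bottom of the LLamaSharp README:

```dockerfile
# Build llama.cpp with CUDA inside the same CUDA environment used at runtime,
# so the resulting libllama.so links against libraries that actually exist there.
FROM nvidia/cuda:12.5.0-devel-ubuntu22.04 AS llamacpp
RUN apt-get update && \
    apt-get install -y --no-install-recommends git cmake build-essential && \
    rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/ggerganov/llama.cpp /llama.cpp
WORKDIR /llama.cpp
# Check out the exact llama.cpp commit this LLamaSharp release was built against
# (listed in the LLamaSharp README) before configuring.
RUN cmake -B build -DGGML_CUDA=ON && \
    cmake --build build --config Release
```

The built `libllama.so` can then be copied into the final stage in place of the binaries shipped by the backend NuGet package.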

martindevans avatar Sep 04 '24 01:09 martindevans

This issue has been automatically marked as stale due to inactivity. If no further activity occurs, it will be closed in 7 days.

github-actions[bot] avatar Apr 28 '25 00:04 github-actions[bot]