
Feature Request: SYCL CI online

Open airMeng opened this issue 1 year ago • 2 comments

Prerequisites

  • [X] I am running the latest code. Mention the version if possible as well.
  • [X] I carefully followed the README.md.
  • [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [X] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

@ggerganov @slaren hi folks, you should have received emails from Intel Developer Cloud about access to Intel GPUs. Could you finally set up the CI for SYCL? Feel free to reach out to me and @arthw if we can help.

Motivation

The SYCL backend gets broken from time to time; CI/CD is needed.

cc @arthw @OuadiElfarouki @joeatodd discussed in https://github.com/ggerganov/llama.cpp/discussions/6399#discussioncomment-9778501

Possible Implementation

No response

airMeng avatar Oct 03 '24 03:10 airMeng

@tonym97 FYI

airMeng avatar Oct 03 '24 03:10 airMeng

@airMeng Thank you - just responded to the email. Let's continue the discussion there.

ggerganov avatar Oct 03 '24 13:10 ggerganov

@ggerganov have you checked your Gmail? There should be an invitation email from Intel Developer Cloud with related instructions; I think it includes SSH access.

airMeng avatar Oct 04 '24 02:10 airMeng

@airMeng Yup, I just signed up for the Intel cloud, but I'm having trouble getting access to the instances. I sent you an e-mail with a question.

ggerganov avatar Oct 04 '24 08:10 ggerganov

@ggerganov have you got access to the machine?

airMeng avatar Oct 05 '24 09:10 airMeng

@ggerganov do you need more support from us?

airMeng avatar Oct 21 '24 05:10 airMeng

@airMeng Don't think anything is needed from your side for now. I'm working with Intel Customer Support to resolve some connectivity issues. Feel free to reach out over email for more info.

ggerganov avatar Oct 21 '24 05:10 ggerganov

> @airMeng Don't think anything is needed from your side for now. I'm working with Intel Customer Support to resolve some connectivity issues. Feel free to reach over email for more info.

OK, I think you already got a Meteor Lake Linux machine, so can we expect the SYCL CI online soon?

airMeng avatar Oct 23 '24 14:10 airMeng

@airMeng The ggml-ci is now running on the provided cloud instance with SYCL enabled. It runs on all commits to the master branch, and also on commits whose message contains the ggml-ci string.

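As a minimal sketch of the trigger mechanism described above (the commit message and the local check are hypothetical; only the "ggml-ci" token itself comes from the thread):

```shell
# Hypothetical local check, not part of llama.cpp: the heavier ggml-ci
# pipeline is opted into by putting the literal token "ggml-ci" in the
# commit message.
msg="sycl: fix fp16 conversion

ggml-ci"

if printf '%s' "$msg" | grep -q "ggml-ci"; then
  echo "this commit would trigger ggml-ci"
fi
```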

Here is a sample run from current master:

https://github.com/ggml-org/ci/tree/results/llama.cpp/40/f2555797f97314de749873cdc29dc102be66e2/ggml-7-sycl

ggerganov avatar Oct 24 '24 18:10 ggerganov

@ggerganov Do we really have a CI for GGML SYCL? I was discussing internally within our company about a dedicated Intel GPU-based CI/CD instance (we have an A770 GPU Ubuntu instance, which should be available online now).

So far I can see the PRs only do the building and not the testing; can you please verify what the situation is here?
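For context, the build-vs-test distinction raised here can be sketched roughly as follows. This is a build recipe under assumptions: the compiler names and the `GGML_SYCL` flag follow the upstream SYCL build instructions, and the oneAPI install path is a guess.

```shell
# Sketch of a SYCL-enabled build plus test run for llama.cpp.
# Assumptions: standard oneAPI install location, icx/icpx compilers.
source /opt/intel/oneapi/setvars.sh

cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j

# Running the test suite is what turns "build-only" CI into actual testing.
ctest --test-dir build --output-on-failure
```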

qnixsynapse avatar Mar 20 '25 10:03 qnixsynapse

Unfortunately, the ggml-ci for SYCL didn't work out - there were some intermittent connectivity issues with the node that we deployed on Intel AI Cloud (it would work for a while and then suddenly lose connection).

I was discussing internally within our company for a dedicated Intel GPU based CI/CD instance (we have an A770 GPU Ubuntu instance and should be available online now).

Sounds good. If you can provide SSH access to it, I can set it up. Or I can send you instructions over e-mail. Let me know.

ggerganov avatar Mar 20 '25 10:03 ggerganov

Can you please send an email to [email protected] so that we can give you SSH access?

Edit: Let's test it out first to make sure it is stable.

qnixsynapse avatar Mar 20 '25 10:03 qnixsynapse