# Feature Request: SYCL CI online
## Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the README.md.
- [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [X] I reviewed the Discussions, and have a new and useful enhancement to share.
## Feature Description
@ggerganov @slaren hi folks, you should have received emails from Intel Developer Cloud about access to an Intel GPU. Could you finally set up the CI for SYCL? You can turn to me and @arthw if we can help.
## Motivation
The SYCL backend breaks from time to time; CI/CD is needed to catch regressions early.
cc @arthw @OuadiElfarouki @joeatodd discussed in https://github.com/ggerganov/llama.cpp/discussions/6399#discussioncomment-9778501
## Possible Implementation
No response
@tonym97 FYI
@airMeng Thank you - just responded to the email. Let's continue the discussion there.
@ggerganov have you checked your Gmail? There should be an invitation email from Intel Developer Cloud with related instructions; I think it will include SSH access.
@airMeng Yup, I just signed up for the Intel cloud, but having trouble getting access to the instances. Sent you e-mail with a question.
@ggerganov have you got access to the machine?
@ggerganov do you need more support from us?
@airMeng Don't think anything is needed from your side for now. I'm working with Intel Customer Support to resolve some connectivity issues. Feel free to reach over email for more info.
Ok, I think you already got a Meteor Lake Linux machine, so can we expect the SYCL CI online soon?
@airMeng The ggml-ci is now running on the provided cloud instance with SYCL enabled. It will run on all commits on the master branch, and also on commits that have the `ggml-ci` string in the commit message.
Here is a sample run from current master:
https://github.com/ggml-org/ci/tree/results/llama.cpp/40/f2555797f97314de749873cdc29dc102be66e2/ggml-7-sycl
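To illustrate the trigger described above: the CI runs on any commit whose message contains the `ggml-ci` marker. A minimal sketch of that check (the commit message below is hypothetical; in a real checkout you would use `msg="$(git log -1 --pretty=%B)"`):

```shell
# Check whether a commit message would trigger ggml-ci.
# "msg" stands in for the output of: git log -1 --pretty=%B
msg="sycl : fix build (ggml-ci)"

case "$msg" in
  *ggml-ci*) result="CI will run" ;;
  *)         result="CI skipped"  ;;
esac
echo "$result"
```

So a contributor can opt a PR commit into the full CI run simply by appending `ggml-ci` to the commit message.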
@ggerganov Do we really have a CI for GGML SYCL? I was discussing internally within our company about a dedicated Intel GPU based CI/CD instance (we have an A770 GPU Ubuntu instance that should be available online now).
So far I can see the PRs only do the building and not the testing; can you please verify what's the case here?
Unfortunately, the ggml-ci for SYCL didn't work out - there were some intermittent connectivity issues with the node that we deployed on Intel AI Cloud (it would work for a while and then suddenly lose connection).
> I was discussing internally within our company for a dedicated Intel GPU based CI/CD instance (we have an A770 GPU Ubuntu instance and should be available online now).
Sounds good. If you can provide SSH access to it, I can set it up. Or I can send you instructions over e-mail. Let me know.
Can you please send an email to [email protected] so that we can give you ssh access?
Edit: Let's test it out first to make sure it is stable.