
Adding GPU support to VTL

Open · ibrandiay opened this issue 1 year ago · 4 comments

The goal is to add different backends, such as CUDA, Vulkan, and others, to enable training AI models on GPUs. I also plan to add GPU support for ARM (Android and other platforms). I have initialized this part and will develop and update it from time to time. Thank you.

Summary by CodeRabbit

  • New Features
    • Added comprehensive documentation for multiple backends (CUDA, OpenCL, OpenMP, Vulkan) of the VTL Engine, including installation instructions and usage examples.
    • Introduced a README for the Image module, enhancing user understanding of its functionality.

These updates improve accessibility and usability for developers integrating different backend technologies within their applications.

ibrandiay · Aug 05 '24 14:08

Walkthrough

This update introduces comprehensive README files for each backend of the VTL Engine (CUDA, OpenCL, OpenMP, and Vulkan), as well as a README for the Image module. These documents provide installation instructions, usage examples, and functionality overviews, improving user understanding and accessibility. The consistent format across backends makes it easier for developers to switch between them and to implement GPU acceleration and parallel processing in their applications.

Changes

| Files | Change Summary |
| --- | --- |
| backends/*/README.md | New README files added for the CUDA, OpenCL, OpenMP, and Vulkan backends, detailing installation and usage. |
| tools/Image/Readme.md | New README file added for the Image module, providing information on its purpose and functionality. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant VTL Engine
    participant Backend

    User->>VTL Engine: Initialize tensor
    VTL Engine->>Backend: Select appropriate backend (e.g., cuda())
    Backend-->>VTL Engine: Device management and tensor creation
    VTL Engine-->>User: Return tensor ready for use
```
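
For concreteness, here is a minimal V sketch of the flow the diagram describes. The `cuda()` device-transfer method is the hypothetical call named in the diagram, so its name and signature are assumptions about the future API rather than code that exists in this PR; only `vtl.from_array` follows VTL's current CPU-side usage.

```v
import vtl

fn main() {
	// Build a 2x2 tensor on the host first, from flat data plus a shape.
	t := vtl.from_array([1.0, 2.0, 3.0, 4.0], [2, 2])!

	// Hypothetical backend hand-off: cuda() would select the CUDA device,
	// allocate GPU memory, copy the data over, and return a
	// device-backed tensor ready for training or inference.
	gpu_t := t.cuda()!

	println(gpu_t.shape)
}
```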

🐇 In the meadow, the code does bloom,
With backends ready to lift the gloom.
Cuda, OpenCL, they dance and play,
Making computations bright as day!
Documentation clear, all users cheer,
A hop of joy, for changes here! 🌼



Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository; a minimal example follows this list.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
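
For illustration, a minimal .coderabbit.yaml might look like the following. The `language` and `reviews.auto_review` keys are common options, but treat this as a sketch and check the schema linked above for the authoritative structure.

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "en-US"
reviews:
  # Turn off automatic reviews; trigger them manually with "@coderabbitai review".
  auto_review:
    enabled: false
```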

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai[bot] · Aug 05 '24 14:08

That's amazing, good luck with this!

medvednikov · Aug 06 '24 09:08

Hey @ibrandiay! Nice to know you are planning to work on this. There is already some work initiated here, since we have VCL integrated into VTL: for example, here https://github.com/vlang/vtl/blob/main/storage/vcl_d_vcl.v we define the storage using VCL, and here we have the instantiation method: https://github.com/vlang/vtl/blob/ad0161891a4795fa99274d795cc2089c5c41743c/src/tensor_vcl_d_vcl.v#L17

(VCL is the official OpenCL wrapper for V, part of VSL.)
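
As an illustration of that entry point, handing an existing tensor to the VCL (OpenCL) storage might look roughly like this; the `vcl()` method mirrors the instantiation code linked above, but its exact signature here is an assumption.

```v
import vtl

fn main() {
	// Start from an ordinary CPU tensor.
	t := vtl.from_array([1.0, 2.0, 3.0, 4.0], [2, 2])!

	// Hand it off to the VCL-backed storage; the OpenCL device and
	// buffers are managed behind this call (signature assumed).
	cl_t := t.vcl()!

	println(cl_t)
}
```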

I would like to discuss the design of this solution with you before you start the implementation, and maybe implement one backend per PR, just to make it easier to test 😊

ulises-jeremias · Aug 18 '24 07:08

@ulises-jeremias Thanks for the info; I didn't know an OpenCL wrapper already existed, so there's no need to re-implement it. For now, I'm working on the CUDA part for Nvidia GPU support. Once each backend is implemented, I'll submit a pull request. Thanks again.
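
To make the CUDA direction concrete, here is a rough sketch of how a CUDA backend could bind the CUDA runtime through V's C interop. cudaMalloc, cudaMemcpy, and cudaFree are the real CUDA runtime API, but the linker flag and the exact V FFI spelling are assumptions, not code from this PR.

```v
// Link against the CUDA runtime; the flag is an assumption and
// will vary by installation.
#flag -lcudart
#include <cuda_runtime.h>

// Minimal bindings to the CUDA runtime API via V's C interop.
fn C.cudaMalloc(dev_ptr &voidptr, size usize) int
fn C.cudaMemcpy(dst voidptr, src voidptr, size usize, kind int) int
fn C.cudaFree(dev_ptr voidptr) int

// Value of cudaMemcpyHostToDevice in the cudaMemcpyKind enum.
const cuda_memcpy_host_to_device = 1

fn main() {
	data := [f32(1.0), 2.0, 3.0, 4.0]
	nbytes := usize(data.len) * usize(sizeof(f32))

	// Allocate a device buffer and copy the host data into it;
	// a zero return code means cudaSuccess.
	mut dev := voidptr(0)
	assert C.cudaMalloc(&dev, nbytes) == 0
	assert C.cudaMemcpy(dev, data.data, nbytes, cuda_memcpy_host_to_device) == 0

	// ... kernel launches would go here ...

	C.cudaFree(dev)
}
```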

ibrandiay · Aug 18 '24 14:08