
Add support for converting to OCI artifacts

Open rhatdan opened this issue 2 months ago • 9 comments

Just showing this off; I am still working on getting these PRs into the Podman 5.7 release.

  • https://github.com/containers/podman/pull/27329
  • https://github.com/containers/podman/pull/27328
  • https://github.com/containers/podman/pull/27325
  • https://github.com/containers/podman/pull/27324
  • https://github.com/containers/podman/pull/27319
  • https://github.com/containers/podman/pull/27253

Summary by Sourcery

Add comprehensive support for storing and managing AI models as OCI artifacts, extending the transport, CLI, and configuration layers, and covering the new functionality with extensive documentation and end-to-end system tests.

New Features:

  • Add support for OCI 'artifact' type for model conversion, push, pull, list, inspect, and removal operations
  • Introduce CLI options and configuration for specifying 'artifact' conversion type alongside 'car' and 'raw'

Enhancements:

  • Extend OCI transport to handle artifact-specific commands (_add_artifact, _rm_artifact, inspect, mount_cmd, is_artifact)
  • Integrate artifact listing into oci_tools and combine with existing image and manifest listings
  • Update base transport for correct mounting and inspect behavior based on artifact flag
  • Allow default conversion type to be set via configuration and environment variable with correct precedence

Documentation:

  • Update command-line help and man pages to include 'artifact' type and describe its behavior
  • Add configuration documentation for the new 'convert_type' setting

Tests:

  • Add comprehensive system tests for artifact workflows including conversion, push, pull, listing, removal, error handling, configuration precedence, and size reporting
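
The summary mentions that the default conversion type can come from configuration or an environment variable with correct precedence. A minimal sketch of how such a resolution order (CLI flag, then environment, then config file, then built-in default) can work; the variable name RAMALAMA_CONVERT_TYPE and the config shape here are illustrative assumptions, not necessarily what the PR implements:

```python
import os

# Hypothetical precedence resolver: CLI flag > env var > config file > default.
# RAMALAMA_CONVERT_TYPE and the plain-dict config are assumptions for this sketch.
VALID_TYPES = {"raw", "car", "artifact"}

def resolve_convert_type(cli_value=None, config=None):
    for candidate in (cli_value,
                      os.environ.get("RAMALAMA_CONVERT_TYPE"),
                      (config or {}).get("convert_type")):
        if candidate is not None:
            # Validate early, mirroring the PR's startup validation of convert_type.
            if candidate not in VALID_TYPES:
                raise ValueError(f"invalid convert_type: {candidate!r}")
            return candidate
    return "raw"
```

The key design point is that each source is only consulted if every higher-priority source is unset, so an explicit CLI flag always wins.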

rhatdan avatar Oct 20 '25 14:10 rhatdan

Reviewer's Guide

This PR extends RamaLama to support OCI artifacts as a new target type by implementing artifact-specific logic across transport and CLI modules, updating configuration and documentation, and adding comprehensive system tests.

File-Level Changes

Extend OCI transport layer for artifact operations (ramalama/transports/oci.py)
  • Add an artifact detection flag and mount-command logic
  • Implement _create_artifact, _add_artifact, and _rm_artifact methods
  • Branch _convert, push, remove, exists, and inspect to handle the artifact type

Refactor base transport to support artifacts and unify inspect output (ramalama/transports/base.py)
  • Introduce the artifact flag and mount adjustments in the base class
  • Split print_inspect into a get_inspect that returns JSON
  • Ensure inspect returns consistent JSON regardless of model type

Add artifact listing and size parsing in the utility layer (ramalama/oci_tools.py)
  • Implement list_artifacts with JSON parsing and type filtering
  • Add a convert_from_human_readable_size helper
  • Integrate the artifact list into the overall model listing

Expand the CLI to include the 'artifact' type and override behavior (ramalama/cli.py)
  • Add the 'artifact' choice to the convert and push commands
  • Respect config and environment defaults for convert_type
  • Adjust the push_cli and rm_cli flows for artifact removal

Introduce the convert_type config option with validation (ramalama/config.py)
  • Add a convert_type field to BaseConfig
  • Default CLI flags to CONFIG.convert_type and allow environment overrides
  • Validate convert_type values at startup

Update documentation for artifact support (docs/ramalama-convert.1.md, docs/ramalama.conf.5.md, docs/ramalama.conf)
  • Describe the 'artifact' type in the convert man page and config docs
  • List the available types and their semantics
  • Provide sample config snippets for the default convert_type

Add system tests covering artifact workflows (test/system/056-artifact.bats)
  • Create comprehensive Bats tests for convert, push, list, and remove
  • Cover error handling, config precedence, size reporting, and concurrency
  • Verify integration with the podman artifact commands
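
Since the utility layer gains a convert_from_human_readable_size helper, here is a rough sketch of what such a parser can look like; the exact unit table, rounding, and error handling in the PR's ramalama/oci_tools.py may differ:

```python
import re

# Sketch of a human-readable-size parser, assuming both SI (KB, MB, ...)
# and binary (KiB, MiB, ...) suffixes; the PR's actual helper may differ.
_UNITS = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12,
          "KIB": 2**10, "MIB": 2**20, "GIB": 2**30, "TIB": 2**40}

def convert_from_human_readable_size(text):
    """Turn strings like '282.9 MB' or '1.5GiB' into a byte count."""
    match = re.fullmatch(r"\s*([\d.]+)\s*([A-Za-z]+)\s*", text)
    if not match:
        raise ValueError(f"unparsable size: {text!r}")
    value, unit = float(match.group(1)), match.group(2).upper()
    if unit not in _UNITS:
        raise ValueError(f"unknown unit: {unit!r}")
    return int(value * _UNITS[unit])
```

Normalizing the unit to upper case keeps the lookup table small while accepting both "GiB" and "GIB" spellings.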

Possibly linked issues

  • #123: The PR introduces comprehensive OCI artifact support in Ramalama, including new commands for convert, push, list, and remove, and enables mounting artifacts into containers, fulfilling the issue's requirements.
  • #N/A: The PR introduces comprehensive OCI artifact support, adding new commands like convert --type artifact and updating existing functionalities for AI models.
  • #New OCI artifact type "docker model runner": The PR adds comprehensive support for OCI artifacts, addressing the issue's request for a new OCI artifact type.


sourcery-ai[bot] avatar Oct 20 '25 14:10 sourcery-ai[bot]

Summary of Changes

Hello @rhatdan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the ramalama tool by integrating full support for OCI artifacts. It allows users to convert, manage, and interact with AI models packaged as OCI artifacts, providing a more flexible and potentially more efficient way to handle models compared to traditional OCI images. The changes span documentation, command-line interface, configuration, core logic for model handling, and include a robust set of new system tests to ensure functionality and reliability.

Highlights

  • OCI Artifact Support: Introduced 'artifact' as a new type for converting and managing OCI models, allowing AI models to be stored as OCI artifacts rather than traditional OCI images.
  • CLI and Configuration Updates: The --type option for convert and push commands now includes 'artifact', and a new convert_type configuration option has been added to specify the default conversion type.
  • Artifact Management Logic: Implemented new functions and modified existing ones to list, create, push, remove, and inspect OCI artifacts, integrating them seamlessly into the ramalama tool's lifecycle.
  • Comprehensive Testing: Added a new test suite (056-artifact.bats) with extensive test cases covering basic artifact conversion, push operations, listing, error handling, configuration precedence, and performance with large and multi-file artifacts.

gemini-code-assist[bot] avatar Oct 20 '25 14:10 gemini-code-assist[bot]

/gemini retest

rhatdan avatar Oct 24 '25 15:10 rhatdan

It looks like you tried to use the /gemini retest command, but this is not a recognized command. If you intended to trigger a new code review, please use /gemini review.

gemini-code-assist[bot] avatar Oct 24 '25 15:10 gemini-code-assist[bot]

@sourcery-ai review

rhatdan avatar Oct 24 '25 15:10 rhatdan

/gemini review

rhatdan avatar Oct 24 '25 15:10 rhatdan

[!WARNING] Gemini encountered an error creating the review. You can try again by commenting /gemini review.

gemini-code-assist[bot] avatar Oct 24 '25 15:10 gemini-code-assist[bot]

/gemini review

rhatdan avatar Oct 27 '25 17:10 rhatdan

Fixes: https://github.com/containers/ramalama/issues/1152

rhatdan avatar Nov 05 '25 15:11 rhatdan

Hey @rhatdan is there anything I can help with to push this through? Also, congrats on the official retirement! I tried emailing you but it looks like your RedHat address disappeared.

ieaves avatar Dec 09 '25 17:12 ieaves

For some reason they changed my email to [email protected], now. Trying to get the alias dwalsh->dawalsh back.

I will work on fixing this up today.

rhatdan avatar Dec 10 '25 13:12 rhatdan

@ieaves @olliewalsh @engelmi this is finally ready to go in. PTAL /gemini review

rhatdan avatar Dec 11 '25 23:12 rhatdan

It might be better to just merge and iterate, but from what I can tell:

  1. Artifact detection relies on having access to podman inspect, meaning the artifact has to already be local. I've already solved this in my follow-on PR, so it shouldn't be a blocker.

  2. It looks like this currently uses the top-level org.opencontainers.image.title annotation to construct the file mount path (/mnt/models/<title>), but if I'm reading the CNAI spec correctly we need to be looking at the layer annotation org.cnai.model.filepath. I think this is a real issue, because relying on the top-level annotation will block us from handling multi-file artifacts (like split safetensors or gguf). It would make sense to look for both the CNAI annotation and org.opencontainers.image.title, but the lookup has to happen at the layer level.

EDIT: I've bundled a bunch of model artifacts you can test with (each repo has a :gguf tag which is an artifact). They aren't fully standards-compliant either, but in case it's useful:

podman artifact inspect rlcr.io/ramalama/gemma3-270m:gguf                                                                                                     
{
     "Manifest": {
          "schemaVersion": 2,
          "mediaType": "application/vnd.oci.image.manifest.v1+json",
          "artifactType": "application/vnd.cnai.model.manifest.v1+json",
          "config": {
               "mediaType": "application/vnd.cnai.model.config.v1+json",
               "digest": "sha256:516af74e2d0b0634d2d565f4c7b380777975a952fdbd6e3a368b70dbe075ae06",
               "size": 453
          },
          "layers": [
               {
                    "mediaType": "application/vnd.cnai.model.layer.gguf",
                    "digest": "sha256:9826846190dea7bdd334fb834d5a1d3b8bf95b14a9833fe792e0abbc49b4927f",
                    "size": 282975264,
                    "annotations": {
                         "org.opencontainers.image.title": "gemma-3-270m-it-Q6_K.gguf"
                    }
               }
          ],
          "annotations": {
               "com.ramalama.build.run_id": "19554594157",
               "com.ramalama.build.workflow": "Build Model Artifacts",
               "com.ramalama.model.file.format": "gguf",
               "com.ramalama.model.file.location": "/models/gemma-3-270m-it-Q6_K.gguf",
               "com.ramalama.model.file.name": "gemma-3-270m-it-Q6_K.gguf",
               "com.ramalama.model.file.sha256": "9826846190dea7bdd334fb834d5a1d3b8bf95b14a9833fe792e0abbc49b4927f",
               "com.ramalama.model.file.size": "282975264",
               "com.ramalama.model.files.all_files": "gemma-3-270m-it-Q6_K.gguf",
               "com.ramalama.model.files.total_size": "282975264",
               "com.ramalama.model.name": "gemma3-270m",
               "com.ramalama.model.source": "https://huggingface.co/unsloth/gemma-3-270m-it-GGUF",
               "com.ramalama.source.commit": "c90975dbd40c0c7b275fefaae758c3415c906238",
               "org.opencontainers.image.authors": "[email protected]",
               "org.opencontainers.image.created": "2025-11-20T23:28:48Z",
               "org.opencontainers.image.description": "gemma3-270m model file(s)",
               "org.opencontainers.image.title": "gemma3-270m",
               "org.opencontainers.image.vendor": "RamaLama Labs"
          }
     },
     "Name": "rlcr.io/ramalama/gemma3-270m:gguf",
     "Digest": "sha256:02a97d0fb0a0952f6a8df657b66cf1ad8b6f9d2a683b77fc4a7948df5af587c1"
}
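
To make the layer-level suggestion concrete, a rough sketch of the lookup against a parsed manifest like the one above; the fallback order (CNAI filepath first, then the layer's image.title) is my proposal, not what the PR currently does:

```python
# Sketch: derive one mount path per layer, preferring the CNAI per-layer
# org.cnai.model.filepath annotation and falling back to the layer's
# org.opencontainers.image.title. This handles multi-file artifacts
# (split safetensors, gguf shards), unlike a single top-level title.
def layer_mount_paths(manifest, root="/mnt/models"):
    paths = []
    for layer in manifest.get("layers", []):
        annotations = layer.get("annotations", {})
        name = (annotations.get("org.cnai.model.filepath")
                or annotations.get("org.opencontainers.image.title"))
        if name:
            paths.append(f"{root}/{name.lstrip('/')}")
    return paths
```

Against the gemma3-270m manifest above, this yields /mnt/models/gemma-3-270m-it-Q6_K.gguf from the layer's title annotation, and would yield per-shard paths for a multi-layer artifact carrying org.cnai.model.filepath annotations.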

ieaves avatar Dec 12 '25 20:12 ieaves

Yes, let's get this merged and then iterate; the PR is already too big. I will fix the test conflicts.

rhatdan avatar Dec 15 '25 13:12 rhatdan

@ieaves Now it is in your court.

rhatdan avatar Dec 15 '25 15:12 rhatdan