Add support for converting to OCI artifacts
Just showing this off; I am still working on PRs to get this into the Podman 5.7 release:
- https://github.com/containers/podman/pull/27329
- https://github.com/containers/podman/pull/27328
- https://github.com/containers/podman/pull/27325
- https://github.com/containers/podman/pull/27324
- https://github.com/containers/podman/pull/27319
- https://github.com/containers/podman/pull/27253
Summary by Sourcery
Add comprehensive support for storing and managing AI models as OCI artifacts, extending the transport, CLI, and configuration layers, and covering the new functionality with extensive documentation and end-to-end system tests.
New Features:
- Add support for OCI 'artifact' type for model conversion, push, pull, list, inspect, and removal operations
- Introduce CLI options and configuration for specifying 'artifact' conversion type alongside 'car' and 'raw'
Enhancements:
- Extend OCI transport to handle artifact-specific commands (_add_artifact, _rm_artifact, inspect, mount_cmd, is_artifact)
- Integrate artifact listing into oci_tools and combine with existing image and manifest listings
- Update base transport for correct mounting and inspect behavior based on artifact flag
- Allow default conversion type to be set via configuration and environment variable with correct precedence
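The precedence described above (CLI option over environment variable over configuration file over built-in default) can be sketched as follows. This is an illustrative assumption, not RamaLama's actual code; in particular the environment variable name `RAMALAMA_CONVERT_TYPE` and the default of `"raw"` are guesses based on the PR summary.

```python
import os

# Valid values per this PR's summary: 'artifact' joins 'car' and 'raw'.
VALID_CONVERT_TYPES = {"artifact", "car", "raw"}


def resolve_convert_type(cli_value=None, config_value=None):
    """Resolve the conversion type with CLI > env > config > default precedence.

    RAMALAMA_CONVERT_TYPE is an assumed name for illustration only.
    """
    for candidate in (cli_value, os.environ.get("RAMALAMA_CONVERT_TYPE"), config_value):
        if candidate is not None:
            if candidate not in VALID_CONVERT_TYPES:
                raise ValueError(f"invalid convert type: {candidate!r}")
            return candidate
    return "raw"  # assumed built-in default
```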
Documentation:
- Update command-line help and man pages to include 'artifact' type and describe its behavior
- Add configuration documentation for the new 'convert_type' setting
Tests:
- Add comprehensive system tests for artifact workflows including conversion, push, pull, listing, removal, error handling, configuration precedence, and size reporting
Reviewer's Guide
This PR extends RamaLama to support OCI artifacts as a new target type by implementing artifact-specific logic across transport and CLI modules, updating configuration and documentation, and adding comprehensive system tests.
File-Level Changes
| Change | Files |
|---|---|
| Extend OCI transport layer for artifact operations | `ramalama/transports/oci.py` |
| Refactor base transport to support artifacts and unify inspect output | `ramalama/transports/base.py` |
| Add artifact listing and size parsing in utility layer | `ramalama/oci_tools.py` |
| Expand CLI to include 'artifact' type and override behavior | `ramalama/cli.py` |
| Introduce `convert_type` config option with validation | `ramalama/config.py` |
| Update documentation for artifact support | `docs/ramalama-convert.1.md`, `docs/ramalama.conf.5.md`, `docs/ramalama.conf` |
| Add system tests covering artifact workflows | `test/system/056-artifact.bats` |
Possibly linked issues
- #123: The PR introduces comprehensive OCI artifact support in RamaLama, including new commands for convert, push, list, and remove, and enables mounting artifacts into containers, fulfilling the issue's requirements.
- #N/A: The PR introduces comprehensive OCI artifact support, adding new commands like `convert --type artifact` and updating existing functionality for AI models.
- New OCI artifact type "docker model runner": The PR adds comprehensive support for OCI artifacts, addressing the issue's request for a new OCI artifact type.
Summary of Changes
Hello @rhatdan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.
This pull request significantly enhances the ramalama tool by integrating full support for OCI artifacts. It allows users to convert, manage, and interact with AI models packaged as OCI artifacts, providing a more flexible and potentially more efficient way to handle models compared to traditional OCI images. The changes span documentation, command-line interface, configuration, core logic for model handling, and include a robust set of new system tests to ensure functionality and reliability.
Highlights
- OCI Artifact Support: Introduced 'artifact' as a new type for converting and managing OCI models, allowing AI models to be stored as OCI artifacts rather than traditional OCI images.
- CLI and Configuration Updates: The `--type` option for the `convert` and `push` commands now includes 'artifact', and a new `convert_type` configuration option has been added to specify the default conversion type.
- Artifact Management Logic: Implemented new functions and modified existing ones to list, create, push, remove, and inspect OCI artifacts, integrating them seamlessly into the `ramalama` tool's lifecycle.
- Comprehensive Testing: Added a new test suite (`056-artifact.bats`) with extensive test cases covering basic artifact conversion, push operations, listing, error handling, configuration precedence, and performance with large and multi-file artifacts.
Fixes: https://github.com/containers/ramalama/issues/1152
Hey @rhatdan, is there anything I can help with to push this through? Also, congrats on the official retirement! I tried emailing you, but it looks like your Red Hat address disappeared.
For some reason they changed my email to [email protected] now. Trying to get the alias dwalsh->dawalsh back.
I will work on fixing this up today.
@ieaves @olliewalsh @engelmi this is finally ready to go in. PTAL /gemini review
It might be better to just merge and iterate, but from what I can tell:

- Artifact detection relies on having access to `podman inspect`, meaning the artifact has to already be local. I've already solved for this in my follow-on PR, so it shouldn't be a blocker.
- It looks like this currently uses the top-level `org.opencontainers.image.title` annotation to construct the file mount path (`/mnt/models/<title>`), but if I'm reading the CNAI spec correctly we need to be looking at the layer annotation `org.cnai.model.filepath`. I think this is actually an issue, because relying on the top-level annotation will block us from handling multi-file artifacts (like split safetensors or GGUF). It would make sense to look for the CNAI annotation alongside `org.opencontainers.image.title`, but I think it has to be at the layer level.
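The layer-level lookup suggested above could be sketched as follows. This is a hedged illustration, not RamaLama's actual implementation; it assumes a manifest dict shaped like `podman artifact inspect` output, and prefers the CNAI per-layer annotation with `org.opencontainers.image.title` as a fallback.

```python
def layer_mount_paths(manifest):
    """Return one mount path per layer, preferring the CNAI per-layer
    annotation org.cnai.model.filepath and falling back to the per-layer
    org.opencontainers.image.title annotation.

    Working per layer (rather than from the top-level manifest annotations)
    is what allows multi-file artifacts such as split safetensors.
    """
    paths = []
    for layer in manifest.get("layers", []):
        annotations = layer.get("annotations", {})
        name = annotations.get("org.cnai.model.filepath") or annotations.get(
            "org.opencontainers.image.title"
        )
        if name is None:
            raise ValueError(f"layer {layer.get('digest')} has no usable name annotation")
        paths.append(f"/mnt/models/{name}")
    return paths
```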
EDIT: I've bundled a bunch of model artifacts you can test with (each repo has a `:gguf` tag, which is an artifact). These aren't fully standards-compliant either, but in case it's useful:
```console
$ podman artifact inspect rlcr.io/ramalama/gemma3-270m:gguf
{
  "Manifest": {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "artifactType": "application/vnd.cnai.model.manifest.v1+json",
    "config": {
      "mediaType": "application/vnd.cnai.model.config.v1+json",
      "digest": "sha256:516af74e2d0b0634d2d565f4c7b380777975a952fdbd6e3a368b70dbe075ae06",
      "size": 453
    },
    "layers": [
      {
        "mediaType": "application/vnd.cnai.model.layer.gguf",
        "digest": "sha256:9826846190dea7bdd334fb834d5a1d3b8bf95b14a9833fe792e0abbc49b4927f",
        "size": 282975264,
        "annotations": {
          "org.opencontainers.image.title": "gemma-3-270m-it-Q6_K.gguf"
        }
      }
    ],
    "annotations": {
      "com.ramalama.build.run_id": "19554594157",
      "com.ramalama.build.workflow": "Build Model Artifacts",
      "com.ramalama.model.file.format": "gguf",
      "com.ramalama.model.file.location": "/models/gemma-3-270m-it-Q6_K.gguf",
      "com.ramalama.model.file.name": "gemma-3-270m-it-Q6_K.gguf",
      "com.ramalama.model.file.sha256": "9826846190dea7bdd334fb834d5a1d3b8bf95b14a9833fe792e0abbc49b4927f",
      "com.ramalama.model.file.size": "282975264",
      "com.ramalama.model.files.all_files": "gemma-3-270m-it-Q6_K.gguf",
      "com.ramalama.model.files.total_size": "282975264",
      "com.ramalama.model.name": "gemma3-270m",
      "com.ramalama.model.source": "https://huggingface.co/unsloth/gemma-3-270m-it-GGUF",
      "com.ramalama.source.commit": "c90975dbd40c0c7b275fefaae758c3415c906238",
      "org.opencontainers.image.authors": "[email protected]",
      "org.opencontainers.image.created": "2025-11-20T23:28:48Z",
      "org.opencontainers.image.description": "gemma3-270m model file(s)",
      "org.opencontainers.image.title": "gemma3-270m",
      "org.opencontainers.image.vendor": "RamaLama Labs"
    }
  },
  "Name": "rlcr.io/ramalama/gemma3-270m:gguf",
  "Digest": "sha256:02a97d0fb0a0952f6a8df657b66cf1ad8b6f9d2a683b77fc4a7948df5af587c1"
}
```
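Given a manifest like the one above, the total artifact size for size reporting can be derived by summing the per-layer sizes. A minimal sketch under that assumption, not the actual RamaLama implementation:

```python
def artifact_total_size(manifest: dict) -> int:
    """Sum the sizes of all layers in an OCI artifact manifest dict."""
    return sum(layer.get("size", 0) for layer in manifest.get("layers", []))


def human_size(nbytes: float) -> str:
    """Render a byte count in binary units, e.g. 282975264 -> '269.9 MiB'."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}" if unit == "B" else f"{nbytes:.1f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.1f} TiB"
```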
Yes, let's get this merged and then iterate; the PR is already too big. I will fix the test conflicts.
@ieaves Now it is in your court.