
add GITSHA1 value to supported digest functions

asartori86 opened this issue 1 year ago · 11 comments

see #255 for more details

asartori86 avatar May 26 '23 10:05 asartori86

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

google-cla[bot] avatar May 26 '23 10:05 google-cla[bot]

In my honest opinion, this change makes the semantics of the protocol a bit hairy. Because Git tree objects don't record the sizes of the objects they reference, Digest.size_bytes will need to be set to zero, both for files and directories. Approaching this just from the Buildbarn side: it is unlikely that we can ever support this, as we use the size_bytes field quite intensively to allocate space and track transfer progress. bb_clientd would also be unable to display/list these files, as it relies on accurate file sizes to instantiate inodes.

Thanks, @EdSchouten, for your thoughts. We do appreciate it. I know how much Buildbarn relies on size_bytes. I implemented some "workarounds" that I would love to discuss with you, but they only work with hard-linking rather than with FUSE.

Note that an object's size is known when a blob (or a tree) is uploaded. It is only when we traverse a tree already in the CAS that we don't know the sizes of the referenced objects. We might eventually add a dedicated cache for this.

One could also come up with a "lite" version of this proposal, where merely the Git-style SHA-1 hashing algorithm is added, while regular REv2 Directory nodes continue to be used. I can imagine that it would be easier to get traction for that.

What do you mean by regular Directory nodes being used? Will they contain just the Digest of the Git tree, or the full list of contained objects? For the latter, I am not sure it is worth it. The main advantage is avoiding traversing a large tree only to discover that the tree is already present in the remote CAS. Think, for example, about external dependencies. On the client side, that dependency can be traversed only once to upload all the blobs, and never again (at least as long as the whole tree is kept in the remote CAS). Afterward, it is sufficient to check that a single hash (the Git tree hash) is present to be sure that the whole dependency is still there.
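
A rough sketch of that single-hash presence check, assuming Python stubs generated from this repository's protos and an already-established gRPC channel (`channel` and `instance` below are placeholders):

```python
# Sketch only: assumes the Python stubs generated from this repository's
# protos and a pre-established gRPC channel.
from build.bazel.remote.execution.v2 import (
    remote_execution_pb2,
    remote_execution_pb2_grpc,
)

def dep_fully_present(channel, instance, git_tree_hash):
    """Return True if the Git tree digest is already known to the remote CAS."""
    cas = remote_execution_pb2_grpc.ContentAddressableStorageStub(channel)
    request = remote_execution_pb2.FindMissingBlobsRequest(
        instance_name=instance,
        blob_digests=[
            # Under this proposal size_bytes would have to be 0, because
            # Git trees do not record the sizes of referenced objects.
            remote_execution_pb2.Digest(hash=git_tree_hash, size_bytes=0),
        ],
    )
    return len(cas.FindMissingBlobs(request).missing_blob_digests) == 0
```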

Would you be interested in attending the 2023-06-13 working group meeting to discuss this proposal in more detail?

I think @aehlig, who is also knowledgeable about the protocol, is the best person for this :)

asartori86 avatar May 26 '23 13:05 asartori86

I know how much Buildbarn relies on size_bytes. I implemented some "workarounds" that I would love to discuss with you, but they only work with hard-linking rather than with FUSE.

Yeah, I already imagined that that would be the case. Unfortunately, I consider the hardlinking use case to be a dead end. Especially now that Buildbarn's NFSv4 support is rock solid (on macOS Ventura 13.3 and later), there isn't really an appealing use case for workers to use hardlinking. Supporting both FUSE and NFSv4 is good enough. At some point in the future we might as well just kick hardlinking support out of the tree.

The main advantage is avoiding traversing a large tree only to discover that the tree is already present in the remote CAS. Think, for example, about external dependencies. On the client side, that dependency can be traversed only once to upload all the blobs, and never again (at least as long as the whole tree is kept in the remote CAS). Afterward, it is sufficient to check that a single hash (the Git tree hash) is present to be sure that the whole dependency is still there.

You might already be able to achieve that by having some kind of separate cache that can map between Git SHA-1 and a hash function of your choice, right? Whenever you're about to clone a Git repository, you could check this cache to obtain the SHA-256/MD5/SHA-1/.... hash of the corresponding REv2 Directory containing the same data...
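
As a sketch, such a side cache could be as simple as a key-value mapping; everything below (the store, the key scheme) is made up for illustration:

```python
# Illustrative sketch of the side cache described above: a plain
# key-value mapping from a Git commit/tree SHA-1 to the digest of the
# REv2 Directory (in the instance's configured digest function) that
# holds the same content. `kv` is a placeholder for any string->string
# store (a dict, Redis, ...).
class GitToREv2Cache:
    def __init__(self, kv):
        self._kv = kv

    def put(self, git_sha1: str, rev2_hash: str, rev2_size: int) -> None:
        self._kv[f"git/{git_sha1}"] = f"{rev2_hash}/{rev2_size}"

    def get(self, git_sha1: str):
        """Return (hash, size_bytes) of the equivalent REv2 Directory, or None."""
        value = self._kv.get(f"git/{git_sha1}")
        if value is None:
            return None
        rev2_hash, size = value.rsplit("/", 1)
        return rev2_hash, int(size)
```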

EdSchouten avatar May 26 '23 13:05 EdSchouten

Would you be interested in attending the 2023-06-13 working group meeting to discuss this proposal in more detail?

I think @aehlig, who is also knowledgeable about the protocol, is the best person for this :)

just to be more clear, I also would love to participate :)

asartori86 avatar May 26 '23 13:05 asartori86

I think @aehlig, who is also knowledgeable about the protocol, is the best person for this :)

just to be more clear, I also would love to participate :)

Great! @bergsieker, could you please add @asartori86 and @aehlig to the invite? Thanks!

EdSchouten avatar May 26 '23 14:05 EdSchouten

You might already be able to achieve that by having some kind of separate cache that can map between Git SHA-1 and a hash function of your choice, right? Whenever you're about to clone a Git repository, you could check this cache to obtain the SHA-256/MD5/SHA-1/.... hash of the corresponding REv2 Directory containing the same data...

@aehlig, what do you think?

asartori86 avatar May 26 '23 14:05 asartori86

You might already be able to achieve that by having some kind of separate cache that can map between Git SHA-1 and a hash function of your choice, right? Whenever you're about to clone a Git repository, you could check this cache to obtain the SHA-256/MD5/SHA-1/.... hash of the corresponding REv2 Directory containing the same data...

I could also switch to a content-addressable version-control system based on REv2 Directory messages; however, Git is definitely more popular. The idea is to make use of the fact that Git already provides suitable hashes to identify objects, so that we can avoid rehashing while still having a suitable "file system" for unmodified sources.

So, if the only concern is the missing size, I would consider it more natural for the remote side to internally keep a cache from Git object identifier to size. No object ever enters the CAS without its size being declared ahead of time.
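
A minimal sketch of that bookkeeping, with a hypothetical underlying blob store standing in for the real CAS:

```python
# Sketch of the server-side bookkeeping suggested above: every object
# enters the CAS through an upload whose length is known, so the server
# can record git-object-id -> size in a side table and consult it later
# when a client references the object with size_bytes == 0.
class SizeTrackingCAS:
    def __init__(self, blobs):
        self._blobs = blobs  # hypothetical underlying blob store
        self._sizes = {}     # Git SHA-1 -> size in bytes

    def put(self, git_sha1: str, data: bytes) -> None:
        self._sizes[git_sha1] = len(data)  # size is known at upload time
        self._blobs.put(git_sha1, data)

    def size_of(self, git_sha1: str) -> int:
        # By construction, every object reachable from the CAS was
        # uploaded at some point, so its size is in the table.
        return self._sizes[git_sha1]
```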

aehlig avatar May 26 '23 14:05 aehlig

First of all, I just want to say that I am very happy to see a VCS-aware proposal. There is a lot of overlap between a build system and a version control system, and closing the gap would definitely benefit both sides.

I wonder if we could handle Git support a bit differently. For example, it's not common to store or transfer Git objects (trees or blobs) one by one. Instead, objects are delta-compressed into a packfile, and the packfile is what's usually transferred and stored. See https://git-scm.com/docs/pack-format for the technical details.

So my hunch is that we should do the same here: store and transfer packfiles as special CAS objects that can be chunked into smaller objects (see #233 for the chunking discussion).

In truth, the FindMissingBlobs RPC is very similar to the Git protocol, where the client and server negotiate which objects exist on each side to determine the delta that needs to be transferred. The only difference is that instead of a BatchReadBlobs RPC returning multiple objects, the Git protocol combines (archives and compresses) all those objects into a single pack and returns that pack instead.

sluongng avatar May 26 '23 15:05 sluongng

I wonder if we could handle Git support a bit differently. For example, it's not common to store or transfer Git objects (trees or blobs) one by one. Instead, objects are delta-compressed into a packfile, and the packfile is what's usually transferred and stored. See https://git-scm.com/docs/pack-format for the technical details.

I'm not convinced that's worth the effort. Keep in mind that in the case of Git repositories, you're dealing with trees of commits that form very long chains. For those long chains it makes a lot of sense to store them in pack files, as you don't want to make constant trips back and forth to iterate over the full commit chain.

As far as I know, @asartori86 and @aehlig are not interested in pushing a full repository including its commit history into the CAS. They want to upload the contents of a given repository at a single commit, so that it may be built. Such trees tend to be relatively shallow.

EdSchouten avatar May 28 '23 20:05 EdSchouten

I wonder if we could handle Git support a bit differently. For example, it's not common to store or transfer Git objects (trees or blobs) one by one. Instead, objects are delta-compressed into a packfile, and the packfile is what's usually transferred and stored. See https://git-scm.com/docs/pack-format for the technical details.

I'm not convinced that's worth the effort. Keep in mind that in the case of Git repositories, you're dealing with trees of commits that form very long chains. For those long chains it makes a lot of sense to store them in pack files, as you don't want to make constant trips back and forth to iterate over the full commit chain.

As far as I know, @asartori86 and @aehlig are not interested in pushing a full repository including its commit history into the CAS. They want to upload the contents of a given repository at a single commit, so that it may be built. Such trees tend to be relatively shallow.

Indeed. Moreover, an "earlier version" of that tree (i.e., the same directory at an earlier commit or in a different branch) was most likely already used in remote execution. So if the tree is not already known (the typical case), it is worth using FindMissingBlobs to find the small part that the remote side does not yet know, rather than proactively sending everything.
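
In sketch form, the upload then becomes a pruned traversal; `find_missing`, `upload`, and `children_of` below are hypothetical helpers standing in for FindMissingBlobs, the upload RPCs, and local Git object access:

```python
# Sketch of the pruned upload: walk the Git tree top-down, but stop
# descending into any subtree the remote already knows. A real client
# would batch the FindMissingBlobs calls; this keeps it minimal.
def upload_tree(tree_sha1, find_missing, upload, children_of):
    if not find_missing([tree_sha1]):
        return  # the whole subtree is already in the CAS; prune here
    for child_sha1 in children_of(tree_sha1):
        # children_of returns [] for blobs, so recursion bottoms out there.
        upload_tree(child_sha1, find_missing, upload, children_of)
    upload(tree_sha1)  # upload the tree object itself last
```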

aehlig avatar Jun 13 '23 13:06 aehlig

I just want to note that while it's true that a Git tree does not, by default, store the size of each child object, obtaining the sizes is relatively easy.

For a repository with a checked-out working copy, all files in the working copy have their size stored in Git's index file, which you can read very quickly with:

> git ls-files --debug

That means you would only need to manually check the size of (1) git-ignored files and (2) newly added or modified files. Both of these lists can be obtained fairly quickly with standard Git commands.

For a repository without a working copy (a bare repository), you would have to call git ls-tree to ask Git to read the file sizes from the object database (loose or packed). But Git forge vendors have no problem doing this (and caching the result) to power their web UIs, as it's a relatively fast operation.
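
For the bare-repository case, a sketch of that lookup using git ls-tree's --long flag (which prints each blob's size):

```python
# Sketch: recover per-object sizes from a bare repository by shelling
# out to `git ls-tree -r --long`, whose output lines look like
# "<mode> <type> <oid> <size>\t<path>".
import subprocess

def object_sizes(repo_path: str, tree_ish: str) -> dict:
    out = subprocess.run(
        ["git", "-C", repo_path, "ls-tree", "-r", "--long", tree_ish],
        capture_output=True, text=True, check=True,
    ).stdout
    sizes = {}
    for line in out.splitlines():
        meta, _path = line.split("\t", 1)
        _mode, otype, oid, size = meta.split()
        if otype == "blob":  # trees would print "-" for size; skip them
            sizes[oid] = int(size)
    return sizes
```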

So I suspect that we would still be able to fit the size into GITSHA1 digests at a relatively small cost.

Realistically, I think it would be very hard to implement server-side support for GITSHA1 without knowing object sizes. Server implementations want to use this information to bin-pack work onto workers efficiently. For example: without knowing how big a tree is, I could only schedule onto a worker based on some heuristic, and potentially run out of disk space while downloading the tree, leading to execution failures.

So including the size in the digest would be very important if we want server implementations to adopt this proposal down the line.
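
To make the scheduling point concrete, a toy sketch of the admission check a scheduler could perform when sizes are known up front (all names here are hypothetical):

```python
# Toy sketch: with sizes in the digests, a scheduler can check whether a
# worker has enough free disk for an action's input tree before assigning
# it, instead of guessing and failing mid-download.
def pick_worker(workers, input_tree_size_bytes):
    for worker in workers:
        if worker.free_disk_bytes >= input_tree_size_bytes:
            return worker
    return None  # queue the action until capacity frees up
```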

sluongng avatar Jun 13 '23 18:06 sluongng