
`fatal: fetch-pack: invalid index-pack` on large repos

Open · keith opened this issue 2 years ago · 4 comments

% sl clone https://github.com/llvm/llvm-project.git
remote: Enumerating objects: 5061204, done.
remote: Counting objects: 100% (6460/6460), done.
remote: Compressing objects: 100% (587/587), done.
fatal: fetch-pack: invalid index-pack output3.45 MiB | 11.47 MiB/s
% sl clone https://github.com/apple/swift
remote: Enumerating objects: 1267322, done.
remote: Counting objects: 100% (55/55), done.
remote: Compressing objects: 100% (39/39), done.
fatal: fetch-pack: invalid index-pack output7.25 MiB | 11.12 MiB/s

Once cloned, the repos are empty aside from a .sl directory that does have some contents.

keith avatar Nov 16 '22 01:11 keith

sl currently runs git to perform a clone. Would you get the same error if you run the following git commands directly?

git -c 'init.defaultBranch=_unused_branch' init -q --bare llvm-git
git '--git-dir=llvm-git' fetch --no-tags --prune --no-write-fetch-head 'https://github.com/llvm/llvm-project.git' '+a214c521f8763b36dd400b89017f74ad5ae4b6c7:refs/remotes/remote/main' '+8b754e2f756775c9ac20363753fc51d011f164db:refs/remotes/remote/master'

What is your git version?

From https://stackoverflow.com/questions/70250904/git-error-fatal-fetch-pack-invalid-index-pack-output it seems the system ulimit might affect this too. Assuming you're using macOS, if you add:

ulimit -n unlimited
ulimit -f unlimited

to your ~/.zshrc and start a new terminal window, would the issue reproduce?
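
As a side note, `unlimited` for `-n` can be rejected on systems where the hard limit is finite. A small sketch (assuming a POSIX shell) that raises the soft limit to whatever the hard limit is, and prints what actually took effect:

```shell
# Raise the soft open-file limit to the hard limit, then show the
# values actually in effect. "ulimit -n unlimited" can fail for
# non-root users where the hard limit is finite, so query it first.
hard=$(ulimit -Hn)
ulimit -n "$hard"
echo "open files: $(ulimit -n)"
echo "file size:  $(ulimit -f)"
```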

quark-zju avatar Nov 16 '22 02:11 quark-zju

Nice, it seems like it was the ulimits. I followed the initial setup instructions, which recommended just `ulimit -n 1048576 1048576` — should we change them to these commands?

keith avatar Nov 16 '22 03:11 keith

I cannot get a large repository to clone even with ulimit -n unlimited and ulimit -f unlimited in my .zshrc. This is on Linux (Ubuntu 20.04). I always get

error.RustError: "/home/user/code/reponame/contrib/googletest/.sl/../../../.sl/store/gitmodules/769aad[...]/.sl/store/segments/v1/idmap2/index2-group-name": cannot duplicate
in Log:try_clone
  Log.dir = SharedMeta { path: Filesystem("/home/user/code/reponame/contrib/googletest/.sl/../../../.sl/store/gitmodules/769aad[...]/.sl/store/segments/v1/idmap2"), meta: Mutex { data: LogMetadata { primary_len: 12, indexes: {}, epoch: 15171938287129757762 }, poisoned: false, .. } }
Caused by 1 errors:
- Too many open files (os error 24)

Is this the same bug, or should I file a new one? The others I found seem to be macOS-related, not Linux.
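
One thing worth checking (a sketch — `$$` below is a stand-in for the real PID): the limit that matters is the one of the process doing the work, not the current shell, and on Linux a running process's effective limits can be read from /proc:

```shell
# Read the effective open-file limit of a running process on Linux
# from /proc/<pid>/limits. This shell's own PID ($$) is used as a
# placeholder; substitute the PID of the process that hit EMFILE.
pid=$$
grep 'Max open files' "/proc/$pid/limits"
```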

lorenzhs avatar Nov 18 '22 12:11 lorenzhs

@lorenzhs There is a known issue where setting a new ulimit does not take effect because the daemon still uses the old ulimit. You can try sl --kill-chg-daemon to kill the daemon and try again. If the problem persists, try a concrete number like 10000 instead of unlimited, and run ulimit -a to confirm that the limit is applied.
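
Putting the steps above together, a rough sequence (the `sl` invocation is left as a comment since it is specific to a sapling checkout):

```shell
# 1. Restart the daemon so it inherits the new limit:
#      sl --kill-chg-daemon
# 2. If "unlimited" is rejected by the shell, set a concrete number:
ulimit -n 10000 2>/dev/null || true
# 3. Confirm the limit actually in effect before retrying the clone:
ulimit -n
```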

quark-zju avatar Nov 22 '22 00:11 quark-zju