
import: local cache is ignored when importing data that already exist in cache

Open Honzys opened this issue 1 year ago • 14 comments

Bug Report

DVC local cache is ignored when importing data that already exists in the local cache

Description

When using dvc import with a shared local cache, the cache is ignored when the .dvc file doesn't exist yet.

Reproduce

Steps to reproduce the bug:

Example:

$ cd /tmp
$ dvc init --no-scm
$ echo "[core]
    no_scm = True
[cache]
    dir = /srv/dvc_cache/datasets_import
    type = "reflink,symlink,copy"
    shared = group" > .dvc/config
$ dvc import some_url some_data --force
$ rm -rf *.dvc
# Here I would expect to just link it from the local cache and not pull it from remote
$ dvc import some_url some_data --force

Expected

I would expect the second import to first check whether the data is already in the cache directory and simply link it, since it was pulled previously.
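
(For reference, one way to verify that the data actually landed in the shared cache after the first import - a sketch assuming the dvc 3.x cache layout, where objects live under files/md5/ keyed by the hash recorded in the .dvc file, and that some_data.dvc records a top-level md5:)

# read the md5 recorded after the first import, then look it up in the shared cache
md5=$(grep -m1 'md5:' some_data.dvc | awk '{print $NF}')
ls /srv/dvc_cache/datasets_import/files/md5/${md5:0:2}/${md5:2}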

Environment information

This is required to ensure that we can reproduce the bug.

Output of dvc doctor:

$ dvc doctor
DVC version: 3.42.0 (pip)
-------------------------
Platform: Python 3.10.11 on Linux-5.15.0-89-generic-x86_64-with-glibc2.31
Subprojects:
        dvc_data = 3.8.0
        dvc_objects = 3.0.6
        dvc_render = 1.0.1
        dvc_task = 0.3.0
        scmrepo = 2.0.4
Supports:
        http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
        https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
        s3 (s3fs = 2023.12.2, boto3 = 1.34.22)
Config:
        Global: /root/.config/dvc
        System: /etc/xdg/dvc
Cache types: reflink, hardlink, symlink
Cache directory: xfs on /dev/mapper/data-srv
Caches: local
Remotes: None
Workspace directory: xfs on /dev/mapper/data-srv
Repo: dvc (no_scm)
Repo.site_cache_dir: /var/tmp/dvc/repo/736a03a99cffc314a8a900b728659998

Additional Information (if any):

Honzys avatar Jan 24 '24 19:01 Honzys

I guess it's similar to an already resolved regression from the past: https://github.com/iterative/dvc/issues/9385?

Honzys avatar Jan 25 '24 09:01 Honzys

Another note:

When I import data that was created with dvc==2.x using dvc==3.x, it works as expected.

If I try to do the same with data created with dvc==3.x, it downloads the data from the remote every time (even if the exact files are in the cache).
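
(If useful for triage: dvc 2.x and 3.x use different cache layouts, so an object cached by one version is not found under the other version's path. A sketch of the two layouts for a hypothetical hash abcdef0123:)

# dvc 2.x layout: two-character prefix directly under the cache dir
/srv/dvc_cache/datasets_import/ab/cdef0123

# dvc 3.x layout: objects live under files/md5/
/srv/dvc_cache/datasets_import/files/md5/ab/cdef0123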

Honzys avatar Jan 25 '24 10:01 Honzys

This just bit me as well. I thought the cause was my mix of dvc 2/3 caches, so I recreated everything in 3.0. Glad you found easy repro steps.

12michi34 avatar Jan 25 '24 12:01 12michi34

Hello, is there any update on this issue, please? Were you able to reproduce it, or do you need more input on my part?

Honzys avatar Feb 06 '24 08:02 Honzys

I'm unable to reproduce this in the latest DVC release (3.45.0). Can you try updating and verify whether or not you still see the issue?

pmrowla avatar Feb 20 '24 05:02 pmrowla

This is so odd. I am still seeing a "Downloading" message in my test case with 3.45.0, and I am running out of disk space, so I am quite sure the data gets downloaded from the remote and only later symlinked. But this happens only in my case, not with the repro mentioned above (though it looks to me like the exact same situation). Not sure if I'll have more time to look into this; I have been suffering from this issue for many months: https://discuss.dvc.org/t/help-with-upgrading-imported-via-dvc2-x-dvc-data-with-dvc3-0/1750/22

12michi34 avatar Feb 20 '24 09:02 12michi34

Not sure if the following just confuses this issue, but I noticed that on the first import (similar to the repro steps) I get asked for the remote's password and then the download starts. On the second import (similar to the repro steps) I do NOT get asked for the password; "Downloading" is displayed, BUT it is a lot faster, which makes me think it copies/downloads from the external cache. BUT (and this is the main issue) it still copies the files before creating a symlink, which is bad because I run out of HDD space in my real-world use case, although the data could just be symlinked from the shared cache.

12michi34 avatar Feb 20 '24 16:02 12michi34

Yes, as you noted, it's not actually downloading from a remote; it's being copied from your existing cache. Ideally we should be preserving the link type in this case, and we can look into fixing that (but not using the symlink here is not a regression).
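
(A possible interim workaround, assuming the copy has already landed in the workspace: dvc checkout --relink should recreate the workspace files using the configured link type and reclaim the duplicated space:)

dvc config cache.type symlink   # prefer symlinks over copies
dvc checkout --relink           # recreate links from the cache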

cc @efiop

pmrowla avatar Feb 21 '24 03:02 pmrowla

Thanks for your quick reply. Was that behaviour already in place with dvc 2.0? The good news is that once the import happens, dvc checkout behaves much better, as it does not copy from the cache before creating the symlink.

So the problem I am facing is:

a) I have a projectA that stores ~4TB of data in a shared external cache (on a large external drive).
b) Many different projects live on a smaller drive (<4TB) and would like to import the data from projectA.

This all should just "work" because the data is already in the cache. But because import "downloads" the data from the cache before creating the symlink, I run out of disk space and the import fails.

Ideally we should be preserving the link type in this case and we can look into fixing that (but not using the symlink here is not a regression)

That would be really great. I can't see a workflow for importing large sets of data from a shared external cache otherwise. How likely is this to happen? I am still confused why I didn't run into the issue with dvc 2.x.
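
(One workaround sketch in the meantime, assuming your DVC version supports the --no-download flag on dvc import: create the .dvc file without fetching anything, then let checkout link straight from the shared cache. some_url and some_data are placeholders as in the repro steps above:)

dvc import some_url some_data --no-download   # writes some_data.dvc, transfers nothing
dvc checkout some_data.dvc                    # links some_data from the shared cache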

Kindest regards

12michi34 avatar Feb 21 '24 10:02 12michi34

Sorry for my late response, I've been quite busy lately. From my experiments, it's the other way around now.

With dvc 3.x, I am able to skip downloading a file created by dvc 3.x if it already exists in the cache - which is great.

But if I want to import files created by dvc 2.x, the cache is ignored and it always downloads from the remote.

So I guess this state is actually better than the previously reported one, since we can migrate the data from 2.x to 3.x. But I guess it would be worth fixing this issue too?
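
(For the 2.x-to-3.x migration itself, a sketch of one approach, assuming the dvc cache migrate command available in 3.x releases, which moves old cache entries into the 3.x layout:)

dvc cache migrate      # migrate 2.x cache objects into the 3.x files/md5 layout
dvc checkout --relink  # re-link workspace files against the migrated cache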

Honzys avatar Mar 01 '24 10:03 Honzys

Here's the output of dvc doctor:

DVC version: 3.48.0 (pip)
-------------------------
Platform: Python 3.11.7 on Linux-5.15.0-97-generic-x86_64-with-glibc2.31
Subprojects:
        dvc_data = 3.13.0
        dvc_objects = 5.1.0
        dvc_render = 1.0.1
        dvc_task = 0.3.0
        scmrepo = 3.2.0
Supports:
        http (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
        https (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
        s3 (s3fs = 2024.2.0, boto3 = 1.34.51)
Config:
        Global: /root/.config/dvc
        System: /etc/xdg/dvc
Cache types: symlink
Cache directory: xfs on /dev/mapper/data-srv
Caches: local
Remotes: None
Workspace directory: overlay on overlay
Repo: dvc (no_scm)
Repo.site_cache_dir: /var/tmp/dvc/repo/de89edf83a919aae8b7ee93ba17c75e0

Honzys avatar Mar 01 '24 10:03 Honzys

I'm able to reproduce and confirm that this is a regression introduced in https://github.com/iterative/dvc/pull/9246. See the reproduction script below. It creates a source repo with a shared cache; the initial dvc add takes ~10 minutes on my machine. Then it creates a new repo to import the source data, and the import takes ~15 minutes, even when using a shared cache that already contains all the data. Before that regression, it took seconds.

set -eux

echo "setup source repo"
CACHE=$(mktemp -d)
REPO_SOURCE=$(mktemp -d)
cd $REPO_SOURCE
git init
dvc init -q
dvc cache dir $CACHE
dvc config cache.shared group
dvc config cache.type symlink

echo "generate data"
mkdir dir
for i in {1..100}; do
    head -c 1000000000 < /dev/urandom > dir/${i};
done

echo "dvc add data"
time dvc add dir
git add .
git commit -m "add data"

echo "dvc import data with shared cache"
REPO_IMPORT=$(mktemp -d)
cd $REPO_IMPORT
git init
dvc init -q
dvc cache dir $CACHE
dvc config cache.shared group
dvc config cache.type symlink
time dvc import $REPO_SOURCE dir

dberenbaum avatar Apr 04 '24 13:04 dberenbaum

I don't think the local cache is being ignored here. @dberenbaum, in the above script, DVCFileSystem is copying from the workspace files. If you add rm -rf dir after git commit, it'll use the cache.

Using --rev will force dvc to import from a certain git revision. And this behaviour does not happen in the case of remote repositories.
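
(Concretely, that adjustment to the reproduction script would look like:)

git commit -m "add data"
rm -rf dir   # drop the workspace copy so the import must resolve via the cache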

skshetry avatar Apr 29 '24 04:04 skshetry

If you add rm -rf dir after git commit, it'll use the cache.

Yes, it will copy from the cache. However, this doesn't solve the underlying problem that this copy operation takes way longer than checkout. You can adjust the script above to use just a few files (as long as each is large) and still see the difference between now and either before #9246 or with #10388.
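
(For example, a hypothetical adjustment to the data-generation loop - fewer files, each still large:)

for i in {1..3}; do
    head -c 1000000000 < /dev/urandom > dir/${i};
done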

dberenbaum avatar Apr 29 '24 21:04 dberenbaum