
AttributeError: 'Volume' object has no attribute 'live' (error with multiple filesystems)

nefelim4ag opened this issue 10 years ago • 10 comments

Traceback (most recent call last):
  File "/usr/bin/bedup", line 9, in <module>
    load_entry_point('bedup==0.9.0', 'console_scripts', 'bedup')()
  File "/usr/lib/python3.3/site-packages/bedup/__main__.py", line 482, in script_main
    sys.exit(main(sys.argv))
  File "/usr/lib/python3.3/site-packages/bedup/__main__.py", line 471, in main
    return args.action(args)
  File "/usr/lib/python3.3/site-packages/bedup/__main__.py", line 197, in vol_cmd
    dedup_tracked(sess, volset, tt, defrag=args.defrag)
  File "/usr/lib/python3.3/site-packages/bedup/tracking.py", line 393, in dedup_tracked
    dedup_tracked1(ds, ofile_reserved, query)
  File "/usr/lib/python3.3/site-packages/bedup/tracking.py", line 459, in dedup_tracked1
    with open_by_inode(inode, ds.sess, query) as rfile:
  File "/usr/lib/python3.3/contextlib.py", line 48, in __enter__
    return next(self.gen)
  File "/usr/lib/python3.3/site-packages/bedup/tracking.py", line 403, in open_by_inode
    pathb = inode.vol.live.lookup_one_path(inode)
AttributeError: 'Volume' object has no attribute 'live'

nefelim4ag avatar Jan 19 '14 18:01 nefelim4ag

Same for me. When rerunning, it gets a little further, then stops with the same error.

phiresky avatar Jan 19 '14 18:01 phiresky

Same problem here.

neopeak avatar Feb 09 '14 15:02 neopeak

Found a workaround: delete the SQLite databases under share/bedup and restart the process.

I might have had an old SQLite db schema from a previous btrfs experiment lying around.
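A minimal sketch of that workaround, assuming bedup keeps its tracking database under the XDG data directory (`~/.local/share/bedup` by default; the exact path may differ per setup):

```shell
# bedup's tracking db lives under the XDG data dir by default.
db_dir="${XDG_DATA_HOME:-$HOME/.local/share}/bedup"
ls -l "$db_dir"             # inspect the stale database files first
rm -f "$db_dir"/db.sqlite*  # remove them; bedup rebuilds the db on the next run
```

Note this throws away all tracking state, so the next run re-scans every volume from scratch.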

neopeak avatar Feb 09 '14 15:02 neopeak

Thanks, this seems to have solved the problem for me as well.

phiresky avatar Feb 13 '14 22:02 phiresky

I can reproduce this with Python 3 only (though @adamryczkowski / #41 saw this in Python 2). I'm guessing it has to do with the way strings are stored in the db.

g2p avatar Mar 07 '14 02:03 g2p

Got this with PyPy 2.2.1. Not always! It did one FS fine, then got this partway through a run of another FS.

The one it gave this message on was specified by UUID.

I'd also interrupted a previous run with Ctrl-C during the scan phase, when it told me "/ is not the root, use a UUID instead." I don't know if interrupting it corrupted something.

kernel 3.13.0-16-generic #36-Ubuntu x86_64

keturn avatar Mar 12 '14 06:03 keturn

I made a temporary workaround. https://github.com/alanfairless/bedup/commit/f8e56ed0d1827bb5c237a3db55f8c5ffbe312b8a

alanfairless avatar Mar 15 '14 21:03 alanfairless

I thought deleting the cache db and using the newest git version, while not interrupting scanning, would fix it, but I still get the same error: (screenshot attached)

phiresky avatar Aug 12 '14 22:08 phiresky

Also reproducible every time:

$ sudo rm ~/.local/share/bedup/db.sqlite*
$ sudo ~/.local/bin/bedup dedup --defrag /dev/sda3 
Scanning volume {e0fcba5f-71a4-404e-9a25-a11ccc745f01} generations from 0 to 69077, with size cutoff 8388608
...
01:04.0 Size group 105/105 sampled 348 hashed 65 freed 95994243
00.00 Committing tracking stateNo handlers could be found for logger "sqlalchemy.pool.SingletonThreadPool"
00.15 Committing tracking state

$ sudo ~/.local/bin/bedup dedup --defrag /dev/sda4
Scanning volume {7f788e51-7f6b-470c-90a9-59fcb980c05c} generations from 0 to 7, with size cutoff 8388608
...
Deduplicating filesystem {7f788e51-7f6b-470c-90a9-59fcb980c05c}
01:00.6 Size group 23/48 sampled 48 hashed 10 freed 0
Traceback (most recent call last):
  File "/home/nastja/.local/bin/bedup", line 9, in <module>
    load_entry_point('bedup==0.9.0', 'console_scripts', 'bedup')()
  File "/home/nastja/.local/lib/python2.7/site-packages/bedup/__main__.py", line 483, in script_main
    sys.exit(main(sys.argv))
  File "/home/nastja/.local/lib/python2.7/site-packages/bedup/__main__.py", line 472, in main
    return args.action(args)
  File "/home/nastja/.local/lib/python2.7/site-packages/bedup/__main__.py", line 198, in vol_cmd
    dedup_tracked(sess, volset, tt, defrag=args.defrag)
  File "/home/nastja/.local/lib/python2.7/site-packages/bedup/tracking.py", line 394, in dedup_tracked
    dedup_tracked1(ds, ofile_reserved, query)
  File "/home/nastja/.local/lib/python2.7/site-packages/bedup/tracking.py", line 460, in dedup_tracked1
    with open_by_inode(inode, ds.sess, query) as rfile:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/nastja/.local/lib/python2.7/site-packages/bedup/tracking.py", line 404, in open_by_inode
    pathb = inode.vol.live.lookup_one_path(inode)
AttributeError: 'Volume' object has no attribute 'live'

Versions: Kubuntu 14.04 with all updates.

$ pip install -U --user bedup
Requirement already up-to-date: bedup in ./.local/lib/python2.7/site-packages
Requirement already up-to-date: alembic in ./.local/lib/python2.7/site-packages (from bedup)
Requirement already up-to-date: cffi>=0.4.2 in ./.local/lib/python2.7/site-packages (from bedup)
Requirement already up-to-date: pyxdg in /usr/lib/python2.7/dist-packages (from bedup)
Requirement already up-to-date: SQLAlchemy in ./.local/lib/python2.7/site-packages (from bedup)
Requirement already up-to-date: contextlib2 in ./.local/lib/python2.7/site-packages (from bedup)
Requirement already up-to-date: Mako in ./.local/lib/python2.7/site-packages (from alembic->bedup)
Requirement already up-to-date: pycparser in ./.local/lib/python2.7/site-packages (from cffi>=0.4.2->bedup)
Requirement already up-to-date: MarkupSafe>=0.9.2 in ./.local/lib/python2.7/site-packages (from Mako->alembic->bedup)
Cleaning up...

tonal avatar Jan 06 '15 07:01 tonal

I've finally got some ideas on this issue.

When I dedup only one filesystem it works perfectly, but since I use the same database for multiple filesystems this issue can occur: two different filesystems share the same volume id 5 (the root subvolume). In the example below, I'm deduping fs_id=2, but one Inode with fs_id=1 crept in.

>>> inodes
[Inode(ino=10930, volume=109), Inode(ino=48100, volume=5), Inode(ino=96584, volume=109)]
>>> inodes[1].vol.fs_id
1
>>> inodes[2].vol.fs_id
2

This explains why in #46 someone says removing the database works around this (though it doesn't work for me). The "real" workaround might be to use one database per filesystem. I'll try that later.
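The ambiguity described above can be sketched with a toy SQLite schema (hypothetical, not bedup's actual tables): btrfs assigns every filesystem's root subvolume the id 5, so a lookup keyed on volume id alone can return inodes from the wrong filesystem, while scoping the query by fs_id as well keeps it unambiguous.

```python
import sqlite3

# Hypothetical schema mirroring the problem: vol_id 5 (the root
# subvolume) exists on every btrfs filesystem, so it is not unique
# across filesystems sharing one tracking database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE inodes (ino INTEGER, fs_id INTEGER, vol_id INTEGER);
    INSERT INTO inodes VALUES (48100, 1, 5);  -- root subvol of fs 1
    INSERT INTO inodes VALUES (10930, 2, 5);  -- root subvol of fs 2
""")

# Keying on vol_id alone (the suspected bug) pulls in the foreign inode:
by_vol = con.execute("SELECT ino FROM inodes WHERE vol_id = 5").fetchall()
print(by_vol)    # inodes from both filesystems

# Scoping to the filesystem being deduped avoids the mix-up:
scoped = con.execute(
    "SELECT ino FROM inodes WHERE vol_id = 5 AND fs_id = 2").fetchall()
print(scoped)    # only fs 2's inode
```

One database per filesystem sidesteps the same collision by construction, since no two rows with the same vol_id can then belong to different filesystems.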

lilydjwg avatar Jan 21 '15 07:01 lilydjwg