GDriveFS
'du' returns 0 for files, though 'ls' and 'stat' report correct value.
'du' must get the size of a file from a different location than 'stat' or 'ls':
du -sh /mnt/uw-gdrive-seys/backups/tardis01-duplicity/*
0	/mnt/uw-gdrive-seys/backups/tardis01-duplicity/duplicity-full.20150529T183953Z.manifest
0	/mnt/uw-gdrive-seys/backups/tardis01-duplicity/duplicity-full.20150529T183953Z.vol10.difftar.gz
ls -alh /mnt/uw-gdrive-seys/backups/tardis01-duplicity/*
-rw-rw-rw- 1 root root 3.2K May 29 13:48 /mnt/uw-gdrive-seys/backups/tardis01-duplicity/duplicity-full.20150529T183953Z.manifest
-rw-rw-rw- 1 root root  25M May 29 13:43 /mnt/uw-gdrive-seys/backups/tardis01-duplicity/duplicity-full.20150529T183953Z.vol10.difftar.gz
stat /mnt/uw-gdrive-seys/backups/tardis01-duplicity/duplicity-full-signatures.20150529T183953Z.sigtar.gz
  File: ‘/mnt/uw-gdrive-seys/backups/tardis01-duplicity/duplicity-full-signatures.20150529T183953Z.sigtar.gz’
  Size: 23061558   Blocks: 0          IO Block: 4096   regular file
Device: 15h/21d    Inode: 22          Links: 1
Access: (0666/-rw-rw-rw-)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2015-05-29 13:54:01.393340110 -0500
Modify: 2015-05-29 13:48:31.000000000 -0500
Change: 2015-05-29 13:48:31.000000000 -0500
 Birth: -
Have fun! C.
"stat" and "ls" show file size. "du" shows disk usage. Technically the files on GoogleDrive take 0 bytes on your disk, so some people would argue that 0 is the correct value.
If you are not one of those weirdos, you can change gdfuse.py to return a close approximation of disk usage:
Add at line 10:
import math
Add at line 166:
stat_result["st_blocks"] = int(math.ceil(stat_result["st_size"] / 512.0))
return stat_result
(I use 512 because that is what block_size is defined as in statfs... I guess it should be moved to a global.)
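Put together, the change amounts to something like this (a sketch of the idea with illustrative names, not a verbatim excerpt of gdfuse.py; BLOCK_SIZE is the global suggested above):

import math

BLOCK_SIZE = 512  # matches the block_size reported by statfs

def add_block_count(stat_result):
    # Round the apparent size up to whole 512-byte blocks so that
    # du reports a close approximation of the file's size.
    stat_result["st_blocks"] = int(
        math.ceil(stat_result["st_size"] / float(BLOCK_SIZE)))
    return stat_result

# A 3.2K manifest: ceil(3277 / 512) = 7 blocks, so du shows 3.5K instead of 0.
print(add_block_count({"st_size": 3277})["st_blocks"])  # -> 7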
Cheers,
Michael
Hi Michael, Thanks for the patch! Yeah, those would be weirdos. 'du' also works on NFS and AFS network filesystems. I think it is commonly used to add up the sizes of the files in a directory.
I'm testing gdrivefs on terabytes of small files in the 0-100 KB range. It is about 2-3x faster at uploading files than google-drive-ocamlfuse, but possibly stats files more slowly (not measured exactly). Sadly, I can only get about 60 KB/sec out of it. For larger files (25 MB) I get about 940 KB/sec. A lot faster, but still far slower than a 1 Gb/sec link! My guess is that Google is throttling in some way.
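To put those rates in perspective, here is the quick arithmetic (just restating the figures above, nothing newly measured):

link_bytes_per_sec = 1e9 / 8            # a 1 Gb/sec link moves 125 MB/sec
observed = 940 * 1024                   # ~940 KB/sec seen for the 25 MB files
print(observed / link_bytes_per_sec)    # ~0.0077, i.e. under 1% of the link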
Thanks again and hope you're having fun! Chad.
"stat" and "ls" show file size. "du" shows disk usage. Technically the files on GoogleDrive take 0 bytes on your disk, so some people would argue that 0 is the correct value.
If you are not one of those weirdos, you can change gdfuse.py to return a close approximation of disk usage:
Add line 10:
import math
Add line 166
stat_result["st_blocks"] = int(math.ceil(stat_result["st_size"] / 512)) return stat_result
I use 512 is what block_size is defined as in statfs... I guess it should be moved to a global.
Cheers,
Michael
Reply to this email directly or view it on GitHub: https://github.com/dsoprea/GDriveFS/issues/133#issuecomment-107007438
As far as I can tell, the current code includes the changes @arniotis proposed. Is this issue resolved?
gdrivefs 0.14.2, which is what pip installs, does not have the working code.
Possibly a newer, unreleased version will work.
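If pip is still serving 0.14.2, one standard way to try the unreleased code is to install directly from the repository (ordinary pip behavior, nothing project-specific):

pip install git+https://github.com/dsoprea/GDriveFS.git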
I've just released it and pushed.
Currently, du crawls because we seem to be doing individual lookups for every member file. This shouldn't be the case, though. I'll try to take a closer look in the next week.
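The usual fix for that pattern looks something like the sketch below: fetch a folder's children with one list call and answer each per-file stat from a cache. This is a hypothetical outline only; the names and the Drive v3 client call are assumptions, not GDriveFS internals.

attr_cache = {}

def prime_directory(drive, folder_id):
    # 'drive' is assumed to be an authorized google-api-python-client service.
    # One batched files.list call fetches sizes for every child, replacing
    # the per-file network round trips that make du crawl.
    resp = drive.files().list(
        q="'%s' in parents" % folder_id,
        fields="files(id,name,size)").execute()
    for f in resp.get("files", []):
        attr_cache[f["name"]] = int(f.get("size", 0))

def cached_size(name):
    # du's stat() of each member file now hits the cache, not the network.
    return attr_cache.get(name, 0)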