Move between drives does not end with the files arriving at the destination.
- Version (`cloudcmd -v`): 17.4.0
- Node Version (`node -v`): 20.12.2
- OS (`uname -a` on Linux): Debian 6.1.85-1
- Browser name/version: Firefox 125.0.2
- Used Command Line Parameters: F6 between 2 volumes
- Changed Config:
- [ ] I'm ready to donate on Patreon 🎁
- [X] I'm willing to work on this issue 💪
Moving files between sda1 (4 TB HDD) and sdb1 (4 TB HDD) in fact filled the system drive (a 64 GB SD card) to capacity, everything ending up in the ./srv/dev-disk-by-uuid-"destination drive" folder.
I'm now stuck with a locked server, still accessible by console (SSH), but with data stuck between here and there, not where intended, and I fear losing it in the transfer.
I will add that copying from the NTFS-formatted sda to the ext4 sdb worked like a charm; it went wrong when trying to move the data back from sdb to sda after reformatting sda to ext4. So if we don't figure out the what and the where, I may have lost data just by moving it. As a last resort I may be able to restore it with drive forensics (testdisk), provided we don't mess things up before then.
Could you provide more information? As I understand it, you moved files from one directory to another? What size are the files? Do you have any errors?
Here is the code: when copying is done, the files are removed from the source; that is what a move is. Are you suggesting we add an option to disable move?
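For context, here is a minimal sketch of how a cross-device move generally works in Node.js (an illustration of the pattern, not Cloud Commander's actual code): `rename()` only works within one filesystem, so across devices it fails with `EXDEV` and the move falls back to copy-then-remove. The risk is that if the destination path exists but points at the wrong filesystem, the copy "succeeds" into the wrong place and the source still gets removed.

```js
const fs = require('node:fs/promises');

// Move src to dest: fast rename on the same filesystem,
// copy-then-remove across filesystems (EXDEV).
async function move(src, dest) {
    try {
        await fs.rename(src, dest); // fast path: same filesystem
    } catch (err) {
        if (err.code !== 'EXDEV')
            throw err;
        
        await fs.cp(src, dest, {recursive: true}); // cross-device: copy first,
        await fs.rm(src, {recursive: true});       // then remove the source
    }
}
```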
Hello, where do I need to go to find logs, if there are any? I didn't get any error; I just didn't find the files where I expected them to be, and at the same time I couldn't log in to openmediavault. I then investigated and found that the 64 GB system drive was full, and that some of the data I had intended to transfer back from slave A to B was the culprit. At the moment there are 48 GB in "./srv/disk-", and I was trying to transfer a batch of files totalling almost 1.5 TB (if I recall correctly). My main goal is to avoid messing up again, because I am going to run testdisk on drive B to restore the files I wanted to roll back to slave A after switching it from NTFS to ext4.
Record a video of what you are doing in the file manager and what the error is. All these filesystem types and volume sizes give me no information at all.
There are no additional logs in Cloud Commander, only what it writes to stdout.
What errors did you have during the move?
Here is the container log; I hope it will clarify things for you. I don't remember the error message; an error may have popped up, but I don't recall.
What exactly went wrong? You moved files, and then not all of them were copied before the source was removed?
Not a single file reached the destination.
It may be related to the first drive having been erased and not re-indexed properly in the container parameters, but in that case shouldn't cloudcmd have raised a "destination unreachable" error, and shouldn't the destination have been missing from the GUI?
I extracted config.v2.json, and it is related to the container parameters:

```json
"/dev/sda1": {
    "Source": "/srv/dev-disk-by-uuid-acfcc3d1-07a2-4eb0-a40f-b0b4c3597b6a",
    "Destination": "/dev/sda1",
    "RW": true,
    "Name": "",
    "Driver": "",
    "Type": "bind",
    "Relabel": "rw",
    "Propagation": "rprivate",
    "Spec": {
        "Type": "bind",
        "Source": "/srv/dev-disk-by-uuid-acfcc3d1-07a2-4eb0-a40f-b0b4c3597b6a",
        "Target": "/dev/sda1"
    },
    "SkipMountpointCreation": false
},
"/dev/sdb1": {
    "Source": "/srv/dev-disk-by-uuid-6C32CA2B03A0E2C5",
    "Destination": "/dev/sdb1",
    "RW": true,
    "Name": "",
    "Driver": "",
    "Type": "bind",
    "Relabel": "rw",
    "Propagation": "rprivate",
    "Spec": {
        "Type": "bind",
        "Source": "/srv/dev-disk-by-uuid-6C32CA2B03A0E2C5",
        "Target": "/dev/sdb1"
    },
    "SkipMountpointCreation": false
},
"/mount/fs": {
    "Source": "/",
    "Destination": "/mount/fs",
    "RW": true,
    "Name": "",
    "Driver": "",
    "Type": "bind",
    "Relabel": "rw",
    "Propagation": "rslave",
    "Spec": {
        "Type": "bind",
        "Source": "/",
        "Target": "/mount/fs"
    },
    "SkipMountpointCreation": false
}
```
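What stands out: each drive is bind-mounted from a /srv/dev-disk-by-uuid-… path on the host. If no filesystem is actually mounted at that path (for example because the UUID changed when the drive was reformatted), the bind source is just a plain directory on the system drive. Here is a hypothetical host-side check (my own sketch, not part of Cloud Commander or Docker) to spot that:

```js
// If a bind-mount source shares its device ID with "/", nothing is mounted
// there: it is a plain directory living on the root filesystem (the SD card).
const {statSync} = require('node:fs');

const rootDev = statSync('/').dev;
const sources = [
    '/srv/dev-disk-by-uuid-acfcc3d1-07a2-4eb0-a40f-b0b4c3597b6a',
    '/srv/dev-disk-by-uuid-6C32CA2B03A0E2C5',
];

for (const src of sources) {
    const phantom = statSync(src).dev === rootDev;
    console.log(`${src}: ${phantom ? 'NOT a mount point (on the system drive)' : 'separate filesystem mounted'}`);
}
```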
I can't understand why the container used a "server local" copy of the real drive, instead of ~~using the physical disk~~ not displaying the drive that wasn't there anymore (its UUID changed after the ext4 formatting), and how the container and cloudcmd worked together to ~~get there~~ use a phantom drive and manage to write a terabyte into the phantom space.
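My understanding of the mechanism (an assumption about Docker's `-v` bind-mount behavior, worth verifying): when the source path of a bind mount is missing at container start, Docker creates it as an empty directory on the host, so the container still sees a writable /dev/sda1 even though no drive is behind it, and every write lands on the system drive. Cloud Commander then has no reason to raise "destination unreachable": the path exists and is writable right up until the SD card fills. One guard that would have caught this is a free-space check before the move; a hypothetical sketch, not an existing Cloud Commander feature (`fs.statfs` needs Node.js 18.15+):

```js
const {statfs} = require('node:fs/promises');

// Refuse to move when the destination filesystem reports less free
// space than the amount of data about to be written.
async function assertFreeSpace(dest, bytesNeeded) {
    const stats = await statfs(dest);
    const freeBytes = stats.bavail * stats.bsize; // available blocks * block size
    
    if (freeBytes < bytesNeeded)
        throw Error(`${dest}: only ${freeBytes} bytes free, need ${bytesNeeded}`);
}
```

A 64 GB phantom destination would fail this check long before a 1.5 TB move starts.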
"For I was conscious that I knew practically nothing..." (Plato, Apology 22d)
Looks like it is related to the containers you are using. Cloud Commander is Node.js based; it knows nothing about any containers.