
[ERR] System.DllNotFoundException: Dll was not found.

Open AndrzejXYZ opened this issue 2 years ago • 7 comments

This is the error I get:

[ERR] System.DllNotFoundException: Dll was not found.
   at DokanNet.Native.NativeMethods.DokanInit()
   at DokanNet.Dokan..ctor(ILogger logger)
   at libClonezilla.VFS.OnDemandVFS.<>c__DisplayClass0_1.<.ctor>b__2()
Unhandled exception. System.DllNotFoundException: Dll was not found.
   at DokanNet.Native.NativeMethods.DokanShutdown()
   at DokanNet.Dokan.Dispose(Boolean disposing)
   at DokanNet.Dokan.Finalize()

I tried on 2 different computers, with CMD running with admin rights. Please advise. Best regards, Andrzej

AndrzejXYZ avatar Jul 01 '23 15:07 AndrzejXYZ
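As an aside (an assumption, not stated in the thread): `DokanNet` is a managed wrapper around the native Dokan driver, so a `DllNotFoundException` from `DokanNet.Native` typically means the Dokan driver itself is not installed on the Windows machine. A minimal check, assuming a Git-Bash-style shell and the Dokan 2 DLL name (`dokan2.dll`; Dokan 1.x uses `dokan1.dll`):

```shell
# Minimal sketch: check whether the native Dokan DLL is present.
# The path notation and DLL name are assumptions (Git Bash, Dokan 2.x).
DLL="/c/Windows/System32/dokan2.dll"
if [ -e "$DLL" ]; then
  echo "Dokan native library found: $DLL"
else
  echo "Dokan native library missing: install the Dokan driver first"
fi
```

If the driver is missing, installing the Dokan release matching the DokanNet version the tool was built against would be the first thing to try.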

Clonezilla is run on GNU/Linux. Please make sure you use it in the way we provide: https://clonezilla.org/clonezilla-live.php

Steven

stevenshiau avatar Jul 02 '23 05:07 stevenshiau

Hi Steven,

Thank you for your answer, but frankly I don’t understand it.

I use Clonezilla as a live USB stick and it works perfectly fine.

I created a disk image backup from a CentOS 7 server with an XFS file system; the image was restorable (as checked by Clonezilla at the end).

Now the issue is that I can’t restore this image using Clonezilla, so I was really hoping that a program like the one you wrote would let me mount the image and restore the files “manually” – I need some directories, that is all.

Is there any way I can get into this backup and restore important files?

Best regards,

Andrzej


AndrzejXYZ avatar Jul 03 '23 09:07 AndrzejXYZ

Now the issue is that I can’t restore this image using clonezilla

Please explain your issue fully with as much detail as possible, it is probably an easy fix.

JonnyTech avatar Jul 03 '23 10:07 JonnyTech

Hi,

thanks again for your time. I will try to explain, but a lot of the info is in the attached logs.

My friend runs a small business and has an old HP ML110 G6 server – no RAID controller, just 4 plain SATA HDDs.

He asked me to check its condition last week – I started with a full drive image using the latest stable Clonezilla, booted from a USB stick.

I chose to check whether the backups were restorable and yes, at the end there was such a confirmation (see the attached screenshot).

First of all, I’m not a Linux guy.

The HP server did not report any errors, missing HDDs, etc. during boot or normal operation.

Looking at the Clonezilla logs I see that there were 4 x 500GB HDDs connected in software RAID 10, using LVM and Volume Groups.

It looks like 1 drive was already dead at the time I made the backup.

The server was CentOS 7, with SAMBA and ZIMBRA on XFS partitions.

I don’t need to restore the entire server – all I need is to get the SAMBA and ZIMBRA files.

I tried many times to restore the backup, but I get different error messages – basically the restore doesn’t even start.

From what I read, restoring LVM and VG backups requires manual intervention/modification – but this is beyond me.

That is why I had so much hope that I would be able to retrieve files or folders using some other utility, like yours.

Let me know if I should explain anything more.

Thanks a lot for your effort,

Andrzej


AndrzejXYZ avatar Jul 03 '23 16:07 AndrzejXYZ

You cannot attach logs when replying from email, we cannot see them.

Looking at the Clonezilla logs I see that there were 4 x 500GB HDDs connected in software RAID 10, using LVM and Volume Groups.

Did you back up the volume, or the individual drives? If the volume, then simply restore to any single drive. If the drives, then try to create an identical LVM on similar drives and restore to them.

JonnyTech avatar Jul 03 '23 17:07 JonnyTech
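JonnyTech's second suggestion (recreate an identical array, then restore into it) could be sketched as below. This is a hypothetical dry run: `run=echo` only prints each command, `/dev/sdw1`–`/dev/sdz1` are placeholder partitions, and the RAID parameters (raid10, 4 devices, metadata 1.2) are taken from the `md127.txt` metadata posted later in the thread.

```shell
# Dry-run sketch of recreating a similar md RAID10 before restoring into it.
# run=echo prints the commands instead of executing them; dropping it makes
# them destructive. Device names are placeholders.
run=echo
$run mdadm --create /dev/md127 --level=10 --raid-devices=4 --metadata=1.2 \
    /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1
$run vgchange -ay centos   # activate the VG once its metadata is restored
```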

Hi again,

Sorry for the delay, I’m trying different ways to solve the issue.

Below is just one of the errors I got:

target_parts is NOT assigned in function task_restoreparts.

Program terminated!.

"ocs-live-general" finished with error!

Check /var/log/clonezilla.log for more details.

I will paste more of the Clonezilla error messages, but I will have to try the restore again to recreate them.

But let me tell you – I tried every option I could – restore disks, restore partitions, restore to 1, 2, or multiple disks, every feasible option.

I used different drives – 500GB, 1TB, a 3TB drive (the entire array, even in RAID 10, could not hold more than 2 TB (4x500GB)).

Please allow me to paste some of the logs below – if you can, please take a look; you are an expert, maybe this will help direct me:

parts

sdd1 md127

dmraid.table

dmraid: dm

disk

md127 sda sdb sdc sdd

sdd-chs.sf

cylinders=60801

heads=255

sectors=63

md127-chs.sf

cylinders=243500032

heads=2

sectors=4

swappt-centos-swap.info

UUID="69b85f71-9f10-4e62-b040-d39226d274dc"

LABEL=""

lvm_vg_dev.list

centos /dev/md127 fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz

Info-img-size.txt

Image size (Bytes):

274G /home/partimag/2023-06-26-15-img

Info-saved-by-cmd.txt

/usr/sbin/ocs-sr -q2 -c -j2 -z1p -i 0 -sfsck -senc -p choose savedisk 2023-06-26-15-img md127 sda sdb sdc sdd

mdadm.conf

ARRAY /dev/md/localhost.localdomain:pv00 metadata=1.2 name=localhost.localdomain:pv00 UUID=35154af7:6e81e388:1e644c7b:2be31bb4

md127-pt.parted.compact

Model: Linux Software RAID Array (md)

Disk /dev/md127: 997GB

Sector size (logical/physical): 512B/4096B

Partition Table: unknown

Disk Flags:

md127-pt.parted

Model: Linux Software RAID Array (md)

Disk /dev/md127: 1948000256s

Sector size (logical/physical): 512B/4096B

Partition Table: unknown

Disk Flags:

Info-packages.txt

Image was saved by these Clonezilla-related packages:

drbl-5.2.10-drbl1 clonezilla-5.4.6-drbl1 partclone-0.3.23-drbl-1 util-linux-2.38.1-5+b1 gdisk-1.0.9-2.1

Saved by clonezilla-live-3.1.0-22-amd64.

sdd-pt.sf

label: dos

label-id: 0x000e0796

device: /dev/sdd

unit: sectors

sector-size: 512

/dev/sdd1 : start= 2048, size= 2097152, type=83, bootable

/dev/sdd2 : start= 2099200, size= 974264320, type=fd

dev-fs.list

<Device name> <File system> <Size>

File system is got from ocs-get-dev-info. It might be different from that

of blkid or parted.

/dev/sdd1 xfs 1G

/dev/centos/root xfs 475G

/dev/centos/home xfs 450G

/dev/centos/swap swap 3.9G

mdstat.txt

Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]

md127 : active (auto-read-only) raid10 sdd2[0] sdb1[2] sda1[1]

  974000128 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]

  bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices:

lvm_logv.list

/dev/centos/root SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

/dev/centos/home SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

/dev/centos/swap Linux swap file, 4k page size, little endian, version 1, size 1015807 pages, 0 bad pages, no label, UUID=69b85f71-9f10-4e62-b040-d39226d274dc

sdd-pt.parted.compact

Model: ATA TOSHIBA DT01ACA0 (scsi)

Disk /dev/sdd: 500GB

Sector size (logical/physical): 512B/4096B

Partition Table: msdos

Disk Flags:

Number Start End Size Type File system Flags

1 1049kB 1075MB 1074MB primary xfs boot

2 1075MB 500GB 499GB primary raid

md127.txt

MD_LEVEL=raid10

MD_DEVICES=4

MD_METADATA=1.2

MD_UUID=35154af7:6e81e388:1e644c7b:2be31bb4

MD_DEVNAME=localhost.localdomain:pv00

MD_NAME=localhost.localdomain:pv00

MD_DEVICE_dev_sda1_ROLE=1

MD_DEVICE_dev_sda1_DEV=/dev/sda1

MD_DEVICE_dev_sdd2_ROLE=0

MD_DEVICE_dev_sdd2_DEV=/dev/sdd2

MD_DEVICE_dev_sdb1_ROLE=2

MD_DEVICE_dev_sdb1_DEV=/dev/sdb1

sdd-pt.parted

Model: ATA TOSHIBA DT01ACA0 (scsi)

Disk /dev/sdd: 976773168s

Sector size (logical/physical): 512B/4096B

Partition Table: msdos

Disk Flags:

Number Start End Size Type File system Flags

1 2048s 2099199s 2097152s primary xfs boot

2 2099200s 976363519s 974264320s primary raid

Info-OS-prober.txt

This OS-related info was saved from this machine with os-prober at 2023-0626-1754:

/dev/mapper/centos-root:CentOS Linux 7 (Core):CentOS:linux

*****************************************************.

This Linux boot related info was saved from this machine with linux-boot-prober at 2023-0626-1754:

/dev/sdd1:/dev/sdd1::/vmlinuz-3.10.0-957.el7.x86_64:/initramfs-3.10.0-957.el7.x86_64.img:root=/dev/sdd1

/dev/sdd1:/dev/sdd1::/vmlinuz-3.10.0-957.1.3.el7.x86_64:/initramfs-3.10.0-957.1.3.el7.x86_64.img:root=/dev/sdd1

/dev/sdd1:/dev/sdd1::/vmlinuz-0-rescue-406026e54f964d928d10801d3b5d2144:/initramfs-0-rescue-406026e54f964d928d10801d3b5d2144.img:root=/dev/sdd1

blkid.list

/dev/loop0: TYPE="squashfs"

/dev/mapper/centos-home: UUID="ab2a319c-c59a-4896-a6e7-016c119a3fa6" BLOCK_SIZE="4096" TYPE="xfs"

/dev/mapper/centos-root: UUID="c1e50c83-2c65-4f0a-a0c0-480e51905a6e" BLOCK_SIZE="4096" TYPE="xfs"

/dev/mapper/centos-swap: UUID="69b85f71-9f10-4e62-b040-d39226d274dc" TYPE="swap"

/dev/md127: UUID="fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz" TYPE="LVM2_member"

/dev/sda1: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="4515b864-16ae-0ecd-fbb2-0e508c8952f4" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="0005d8f0-01"

/dev/sdb1: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="caae36bc-3136-7916-cd7c-f5a95f05cf23" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="00009e23-01"

/dev/sdc1: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="a9ce22ba-510b-132f-3acd-9576f0763fef" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="000c4a5f-01"

/dev/sdd1: UUID="06964f6f-cc9e-4bce-90cc-0610735fa45c" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="000e0796-01"

/dev/sdd2: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="969ef299-35bf-b34f-a53d-f6b5c0daaf59" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="000e0796-02"

/dev/sde1: LABEL="USB_SEAGATE_1TB" BLOCK_SIZE="512" UUID="6A4257A2425771B5" TYPE="ntfs" PARTUUID="245184ae-01"

/dev/sdf1: LABEL="3_1_0-22-AM" UUID="5888-8FBD" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="009a8d92-01"

blkdev.list

KNAME NAME SIZE TYPE FSTYPE MOUNTPOINT MODEL

loop0 loop0 327M loop squashfs /usr/lib/live/mount/rootfs/filesystem.squashfs

sda sda 465.8G disk WDC WD5003ABYX-01WERA0

sda1 `-sda1 464.6G part linux_raid_member

md127 `-md127 928.9G raid10 LVM2_member

dm-0 |-centos-root 475G lvm xfs

dm-1 |-centos-home 450G lvm xfs

dm-2 `-centos-swap 3.9G lvm swap

sdb sdb 465.8G disk WDC WD5003ABYX-01WERA0

sdb1 `-sdb1 464.6G part linux_raid_member

md127 `-md127 928.9G raid10 LVM2_member

dm-0 |-centos-root 475G lvm xfs

dm-1 |-centos-home 450G lvm xfs

dm-2 `-centos-swap 3.9G lvm swap

sdc sdc 465.8G disk MB0500EBNCR

sdc1 `-sdc1 464.6G part linux_raid_member

sdd sdd 465.8G disk TOSHIBA DT01ACA050

sdd1 |-sdd1 1G part xfs

sdd2 `-sdd2 464.6G part linux_raid_member

md127 `-md127 928.9G raid10 LVM2_member

dm-0 |-centos-root 475G lvm xfs

dm-1 |-centos-home 450G lvm xfs

dm-2 `-centos-swap 3.9G lvm swap

sde sde 931.5G disk Parotable

sde1 `-sde1 931.5G part ntfs /home/partimag

sdf sdf 3.7G disk USB Flash Drive

sdf1 `-sdf1 3.7G part vfat /usr/lib/live/mount/medium

sr0 sr0 1024M rom hp DVDROM DH40N

lvm_centos.conf

Generated by LVM2 version 2.03.16(2) (2022-05-18): Mon Jun 26 15:33:33 2023

contents = "Text Format Volume Group"

version = 1

description = "vgcfgbackup -f /tmp/vgcfg_tmp.zBARGc centos"

creation_host = "debian" # Linux debian 6.1.0-8-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.25-1 (2023-04-22) x86_64

creation_time = 1687793613 # Mon Jun 26 15:33:33 2023

centos {

   id = "DZ3zP7-81Rd-j3YP-5jCy-y8J2-A9CF-7WBTau"

   seqno = 4

   format = "lvm2"                  # informational

   status = ["RESIZEABLE", "READ", "WRITE"]

   flags = []

   extent_size = 8192        # 4 Megabytes

   max_lv = 0

   max_pv = 0

   metadata_copies = 0



   physical_volumes {



         pv0 {

                id = "fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz"

                device = "/dev/md127"     # Hint only



                status = ["ALLOCATABLE"]

                flags = []

                dev_size = 1948000256     # 928.879 Gigabytes

                pe_start = 2048

                pe_count = 237792  # 928.875 Gigabytes

         }

   }



   logical_volumes {



         root {

                id = "X0yGOy-Qq5E-fdDq-SRBi-N9to-zqP2-imv2P0"

                status = ["READ", "WRITE", "VISIBLE"]

                flags = []

                creation_time = 1544360063      # 2018-12-09 12:54:23 +0000

                creation_host = "localhost.localdomain"

                segment_count = 1



                segment1 {

                       start_extent = 0

                       extent_count = 121600     # 475 Gigabytes



                       type = "striped"

                       stripe_count = 1   # linear



                       stripes = [

                              "pv0", 0

                       ]

                }

         }



         home {

                id = "aGhTQ3-Le4Y-96Je-GQG9-UvB1-ZINt-0TWIvf"

                status = ["READ", "WRITE", "VISIBLE"]

                flags = []

                creation_time = 1544360071      # 2018-12-09 12:54:31 +0000

                creation_host = "localhost.localdomain"

                segment_count = 1



                segment1 {

                       start_extent = 0

                       extent_count = 115200     # 450 Gigabytes



                       type = "striped"

                       stripe_count = 1   # linear



                       stripes = [

                              "pv0", 121600

                       ]

                }

         }



         swap {

                id = "H7F2zt-1KmB-K5yU-3jTX-qnVw-iyzN-HOZJfj"

                status = ["READ", "WRITE", "VISIBLE"]

                flags = []

                creation_time = 1544360080      # 2018-12-09 12:54:40 +0000

                creation_host = "localhost.localdomain"

                segment_count = 1



                segment1 {

                       start_extent = 0

                       extent_count = 992 # 3.875 Gigabytes



                       type = "striped"

                       stripe_count = 1   # linear



                       stripes = [

                              "pv0", 236800

                       ]

                }

         }

   }

}


AndrzejXYZ avatar Jul 04 '23 18:07 AndrzejXYZ
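The metadata files pasted above already describe the whole layout, so an alternative to Clonezilla's automatic restore is to rebuild it by hand on replacement disks. A hypothetical dry-run sketch (`run=echo` only prints each command; `/dev/sdX` is a placeholder target disk, and the file names and UUID are the ones from the image directory above):

```shell
# Hypothetical dry-run sketch (run=echo only prints; dropping it executes
# the commands, which is destructive to the target disks).
run=echo
# 1. Replay the saved DOS partition table from sdd-pt.sf:
$run sh -c 'sfdisk /dev/sdX < sdd-pt.sf'
# 2. Recreate the PV with its original UUID (from lvm_vg_dev.list) and
#    restore the volume group from the saved config:
$run pvcreate --uuid fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz \
    --restorefile lvm_centos.conf /dev/md127
$run vgcfgrestore -f lvm_centos.conf centos
$run vgchange -ay centos
```

With the volumes active, the partclone images could then be restored into `/dev/centos/root` and `/dev/centos/home`.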

Sorry, I forgot about the most important one:

clonezilla-img

This image was saved by Clonezilla at 2023-06-26 17:55:13 UTC.

Saved by clonezilla-live-3.1.0-22-amd64.

The log during saving:


Starting /usr/sbin/ocs-sr at 2023-06-26 15:29:33 UTC...

*****************************************************.

Clonezilla image dir: /home/partimag

Shutting down the Logical Volume Manager

Shutting Down logical volume: /dev/centos/home

Shutting Down logical volume: /dev/centos/root

Shutting Down logical volume: /dev/centos/swap

Shutting Down volume group: centos

Finished Shutting down the Logical Volume Manager

The selected devices: md127 sda sdb sdc sdd

PS. Next time you can run this command directly:

/usr/sbin/ocs-sr -q2 -c -j2 -z1p -i 0 -sfsck -senc -p choose savedisk 2023-06-26-15-img md127 sda sdb sdc sdd

*****************************************************.

The selected devices: md127 sda sdb sdc sdd

Searching for data/swap/extended partition(s)...

Searching for data/swap/extended partition(s)...

Searching for data/swap/extended partition(s)...

Searching for data/swap/extended partition(s)...

Searching for data/swap/extended partition(s)...

The data partition to be saved: sdd1 md127

The selected devices: sdd1 md127

The following step is to save the hard disk/partition(s) on this machine as an image:

*****************************************************.

Machine: ProLiant ML110 G6

sdd (500GB_TOSHIBA_DT01ACA0_TOSHIBA_DT01ACA050_Y7C392HBS)

md127 (997GB_LVM2_member_Unknown_model_lvm-pv-uuid-fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz)

sdd1 (1G_xfs(In_TOSHIBA_DT01ACA0)_TOSHIBA_DT01ACA050_Y7C392HBS)

md127 (997GB_LVM2_member_Unknown_model_lvm-pv-uuid-fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz)

*****************************************************.

-> "/home/partimag/2023-06-26-15-img".

Shutting down the Logical Volume Manager

Shutting Down volume group: centos

Finished Shutting down the Logical Volume Manager

Starting saving /dev/sdd1 as /home/partimag/2023-06-26-15-img/sdd1.XXX...

/dev/sdd1 filesystem: xfs.

*****************************************************.

*****************************************************.

Use partclone with pigz to save the image.

Image file will not be split.

*****************************************************.

If this action fails or hangs, check:

  • Is the disk full ?

*****************************************************.

Running: partclone.xfs -z 10485760 -N -L /var/log/partclone.log -c -s /dev/sdd1 --output - | pigz -c --fast -b 1024 --rsyncable > /home/partimag/2023-06-26-15-img/sdd1.xfs-ptcl-img.gz 2> /tmp/img_out_err.o4GkSn

Partclone v0.3.23 http://partclone.org

Starting to clone device (/dev/sdd1) to image (-)

Reading Super Block

xfsclone.c: Open /dev/sdd1 successfully

xfsclone.c: fs_close

memory needed: 21004292 bytes

bitmap 32768 bytes, blocks 2*10485760 bytes, checksum 4 bytes

Calculating bitmap... Please wait...

xfsclone.c: Open /dev/sdd1 successfully

xfsclone.c: bused = 39677, bfree = 222467

xfsclone.c: fs_close

done!

File system: XFS

Device size: 1.1 GB = 262144 Blocks

Space in use: 162.5 MB = 39677 Blocks

Free Space: 911.2 MB = 222467 Blocks

Block size: 4096 Byte

Total block 262144

Syncing... OK!

Partclone successfully cloned the device (/dev/sdd1) to the image (-)

Time elapsed: 9.68 secs (~ .161 mins)

*****************************************************.

Finished saving /dev/sdd1 as /home/partimag/2023-06-26-15-img/sdd1.xfs-ptcl-img.gz

*****************************************************.

Parsing LVM layout for sdd1 md127 ...

centos /dev/md127 fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz

Parsing logical volumes...

Saving the VG config...

Volume group "centos" successfully backed up.

done.

Checking if the VG config was saved correctly...

done.

Saving /dev/centos/root as filename: centos-root. /dev/centos/root info: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

Starting saving /dev/centos/root as /home/partimag/2023-06-26-15-img/centos-root.XXX...

/dev/centos/root filesystem: xfs.

*****************************************************.

*****************************************************.

Use partclone with pigz to save the image.

Image file will not be split.

*****************************************************.

If this action fails or hangs, check:

  • Is the disk full ?

*****************************************************.

Running: partclone.xfs -z 10485760 -N -L /var/log/partclone.log -c -s /dev/centos/root --output - | pigz -c --fast -b 1024 --rsyncable > /home/partimag/2023-06-26-15-img/centos-root.xfs-ptcl-img.gz 2> /tmp/img_out_err.uVcQsX

Partclone v0.3.23 http://partclone.org

Starting to clone device (/dev/centos/root) to image (-)

Reading Super Block

xfsclone.c: Open /dev/centos/root successfully

xfsclone.c: fs_close

memory needed: 36536068 bytes

bitmap 15564544 bytes, blocks 2*10485760 bytes, checksum 4 bytes

Calculating bitmap... Please wait...

xfsclone.c: Open /dev/centos/root successfully

xfsclone.c: bused = 43640140, bfree = 80876212

xfsclone.c: fs_close

done!

File system: XFS

Device size: 510.0 GB = 124516352 Blocks

Space in use: 178.7 GB = 43639980 Blocks

Free Space: 331.3 GB = 80876372 Blocks

Block size: 4096 Byte

Total block 124516352

Syncing... OK!

Partclone successfully cloned the device (/dev/centos/root) to the image (-)

Time elapsed: 3466.53 secs (~ 57.775 mins)

*****************************************************.

Finished saving /dev/centos/root as /home/partimag/2023-06-26-15-img/centos-root.xfs-ptcl-img.gz

*****************************************************.

Saving /dev/centos/home as filename: centos-home. /dev/centos/home info: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

Starting saving /dev/centos/home as /home/partimag/2023-06-26-15-img/centos-home.XXX...

/dev/centos/home filesystem: xfs.

*****************************************************.

*****************************************************.

Use partclone with pigz to save the image.

Image file will not be split.

*****************************************************.

If this action fails or hangs, check:

  • Is the disk full ?

*****************************************************.

Running: partclone.xfs -z 10485760 -N -L /var/log/partclone.log -c -s /dev/centos/home --output - | pigz -c --fast -b 1024 --rsyncable > /home/partimag/2023-06-26-15-img/centos-home.xfs-ptcl-img.gz 2> /tmp/img_out_err.dTrSLD

Partclone v0.3.23 http://partclone.org

Starting to clone device (/dev/centos/home) to image (-)

Reading Super Block

xfsclone.c: Open /dev/centos/home successfully

xfsclone.c: fs_close

memory needed: 35716868 bytes

bitmap 14745344 bytes, blocks 2*10485760 bytes, checksum 4 bytes

Calculating bitmap... Please wait...

xfsclone.c: Open /dev/centos/home successfully

xfsclone.c: bused = 45991954, bfree = 71970798

xfsclone.c: fs_close

done!

File system: XFS

Device size: 483.2 GB = 117962752 Blocks

Space in use: 188.4 GB = 45991741 Blocks

Free Space: 294.8 GB = 71971011 Blocks

Block size: 4096 Byte

Total block 117962752

Syncing... OK!

Partclone successfully cloned the device (/dev/centos/home) to the image (-)

Time elapsed: 5003.00 secs (~ 83.383 mins)

*****************************************************.

Finished saving /dev/centos/home as /home/partimag/2023-06-26-15-img/centos-home.xfs-ptcl-img.gz

*****************************************************.

Saving /dev/centos/swap as filename: centos-swap. /dev/centos/swap info: Linux swap file, 4k page size, little endian, version 1, size 1015807 pages, 0 bad pages, no label, UUID=69b85f71-9f10-4e62-b040-d39226d274dc

Dumping the device mapper table in /home/partimag/2023-06-26-15-img/dmraid.table...

Saving block devices info in /home/partimag/2023-06-26-15-img/blkdev.list...

Saving block devices attributes in /home/partimag/2023-06-26-15-img/blkid.list...

Checking the integrity of partition table in the disk /dev/sdd...

Reading the partition table for /dev/sdd...RETVAL=0

*****************************************************.

The first partition of disk /dev/sdd starts at 2048.

Saving the hidden data between MBR (1st sector, i.e. 512 bytes) and 1st partition, which might be useful for some recovery tool, by:

dd if=/dev/sdd of=/home/partimag/2023-06-26-15-img/sdd-hidden-data-after-mbr skip=1 bs=512 count=2047

2047+0 records in

2047+0 records out

1048064 bytes (1.0 MB, 1.0 MiB) copied, 0.125926 s, 8.3 MB/s

*****************************************************.

Checking the integrity of partition table in the disk /dev/md127...

Reading the partition table for /dev/md127...RETVAL=1

Saving the MBR data for sdd...

1+0 records in

1+0 records out

512 bytes copied, 0.000711783 s, 719 kB/s

Saving the MBR data for md127...

1+0 records in

1+0 records out

512 bytes copied, 0.000866152 s, 591 kB/s

End of saveparts job for image /home/partimag/2023-06-26-15-img.

*****************************************************.

*****************************************************.

End of savedisk job for image 2023-06-26-15-img.

Checking if udevd rules have to be restored...

This program is not started by Clonezilla server, so skip notifying it the job is done.

Finished!

This program is not started by Clonezilla server, so skip notifying it the job is done.

Finished!

End of log

Image created time: 2023-0626-1755

From: AndrzejXYZ | Sent: Tuesday, July 4, 2023 8:37 PM | Subject: RE: [stevenshiau/clonezilla] [ERR] System.DllNotFoundException: Dll was not found. (Issue #94)

Hi again,

Sorry for the delay, I’m trying different ways to solve the issue.

Below is just one of the errors I got:

target_parts is NOT assigned in function task_restoreparts.

Program terminated!.

"ocs-live-general" finished with error!

Check /var/log/clonezilla.log for more details.

I will paste more of the Clonezilla error messages, but I will have to try the restore again to recreate them.

But let me tell you, I tried every option I could: restore disks, restore partitions, restore to one, two, or multiple disks; every feasible option.

I used different drives: 500 GB, 1 TB, and 3 TB (even in RAID 10, the entire array of 4 x 500 GB could not have more than 2 TB).

Please allow me to paste some of the logs below. If you can, please take a look; you are an expert, and maybe this will help direct me:

parts

sdd1 md127

dmraid.table

dmraid: dm

disk

md127 sda sdb sdc sdd

sdd-chs.sf

cylinders=60801

heads=255

sectors=63

md127-chs.sf

cylinders=243500032

heads=2

sectors=4

swappt-centos-swap.info

UUID="69b85f71-9f10-4e62-b040-d39226d274dc"

LABEL=""

lvm_vg_dev.list

centos /dev/md127 fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz

Info-img-size.txt

Image size (Bytes):

274G /home/partimag/2023-06-26-15-img

Info-saved-by-cmd.txt

/usr/sbin/ocs-sr -q2 -c -j2 -z1p -i 0 -sfsck -senc -p choose savedisk 2023-06-26-15-img md127 sda sdb sdc sdd

mdadm.conf

ARRAY /dev/md/localhost.localdomain:pv00 metadata=1.2 name=localhost.localdomain:pv00 UUID=35154af7:6e81e388:1e644c7b:2be31bb4

md127-pt.parted.compact

Model: Linux Software RAID Array (md)

Disk /dev/md127: 997GB

Sector size (logical/physical): 512B/4096B

Partition Table: unknown

Disk Flags:

md127-pt.parted

Model: Linux Software RAID Array (md)

Disk /dev/md127: 1948000256s

Sector size (logical/physical): 512B/4096B

Partition Table: unknown

Disk Flags:

Info-packages.txt

Image was saved by these Clonezilla-related packages:

drbl-5.2.10-drbl1 clonezilla-5.4.6-drbl1 partclone-0.3.23-drbl-1 util-linux-2.38.1-5+b1 gdisk-1.0.9-2.1

Saved by clonezilla-live-3.1.0-22-amd64.

sdd-pt.sf

label: dos

label-id: 0x000e0796

device: /dev/sdd

unit: sectors

sector-size: 512

/dev/sdd1 : start= 2048, size= 2097152, type=83, bootable

/dev/sdd2 : start= 2099200, size= 974264320, type=fd

dev-fs.list

<Device name> <File system> <Size>

File system is got from ocs-get-dev-info. It might be different from that of blkid or parted.

/dev/sdd1 xfs 1G

/dev/centos/root xfs 475G

/dev/centos/home xfs 450G

/dev/centos/swap swap 3.9G

mdstat.txt

Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]

md127 : active (auto-read-only) raid10 sdd2[0] sdb1[2] sda1[1]

  974000128 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]

  bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices:

lvm_logv.list

/dev/centos/root SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

/dev/centos/home SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

/dev/centos/swap Linux swap file, 4k page size, little endian, version 1, size 1015807 pages, 0 bad pages, no label, UUID=69b85f71-9f10-4e62-b040-d39226d274dc

sdd-pt.parted.compact

Model: ATA TOSHIBA DT01ACA0 (scsi)

Disk /dev/sdd: 500GB

Sector size (logical/physical): 512B/4096B

Partition Table: msdos

Disk Flags:

Number Start End Size Type File system Flags

1 1049kB 1075MB 1074MB primary xfs boot

2 1075MB 500GB 499GB primary raid

md127.txt

MD_LEVEL=raid10

MD_DEVICES=4

MD_METADATA=1.2

MD_UUID=35154af7:6e81e388:1e644c7b:2be31bb4

MD_DEVNAME=localhost.localdomain:pv00

MD_NAME=localhost.localdomain:pv00

MD_DEVICE_dev_sda1_ROLE=1

MD_DEVICE_dev_sda1_DEV=/dev/sda1

MD_DEVICE_dev_sdd2_ROLE=0

MD_DEVICE_dev_sdd2_DEV=/dev/sdd2

MD_DEVICE_dev_sdb1_ROLE=2

MD_DEVICE_dev_sdb1_DEV=/dev/sdb1

sdd-pt.parted

Model: ATA TOSHIBA DT01ACA0 (scsi)

Disk /dev/sdd: 976773168s

Sector size (logical/physical): 512B/4096B

Partition Table: msdos

Disk Flags:

Number Start End Size Type File system Flags

1 2048s 2099199s 2097152s primary xfs boot

2 2099200s 976363519s 974264320s primary raid

Info-OS-prober.txt

This OS-related info was saved from this machine with os-prober at 2023-0626-1754:

/dev/mapper/centos-root:CentOS Linux 7 (Core):CentOS:linux

*****************************************************.

This Linux boot related info was saved from this machine with linux-boot-prober at 2023-0626-1754:

/dev/sdd1:/dev/sdd1::/vmlinuz-3.10.0-957.el7.x86_64:/initramfs-3.10.0-957.el7.x86_64.img:root=/dev/sdd1

/dev/sdd1:/dev/sdd1::/vmlinuz-3.10.0-957.1.3.el7.x86_64:/initramfs-3.10.0-957.1.3.el7.x86_64.img:root=/dev/sdd1

/dev/sdd1:/dev/sdd1::/vmlinuz-0-rescue-406026e54f964d928d10801d3b5d2144:/initramfs-0-rescue-406026e54f964d928d10801d3b5d2144.img:root=/dev/sdd1

blkid.list

/dev/loop0: TYPE="squashfs"

/dev/mapper/centos-home: UUID="ab2a319c-c59a-4896-a6e7-016c119a3fa6" BLOCK_SIZE="4096" TYPE="xfs"

/dev/mapper/centos-root: UUID="c1e50c83-2c65-4f0a-a0c0-480e51905a6e" BLOCK_SIZE="4096" TYPE="xfs"

/dev/mapper/centos-swap: UUID="69b85f71-9f10-4e62-b040-d39226d274dc" TYPE="swap"

/dev/md127: UUID="fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz" TYPE="LVM2_member"

/dev/sda1: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="4515b864-16ae-0ecd-fbb2-0e508c8952f4" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="0005d8f0-01"

/dev/sdb1: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="caae36bc-3136-7916-cd7c-f5a95f05cf23" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="00009e23-01"

/dev/sdc1: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="a9ce22ba-510b-132f-3acd-9576f0763fef" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="000c4a5f-01"

/dev/sdd1: UUID="06964f6f-cc9e-4bce-90cc-0610735fa45c" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="000e0796-01"

/dev/sdd2: UUID="35154af7-6e81-e388-1e64-4c7b2be31bb4" UUID_SUB="969ef299-35bf-b34f-a53d-f6b5c0daaf59" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="000e0796-02"

/dev/sde1: LABEL="USB_SEAGATE_1TB" BLOCK_SIZE="512" UUID="6A4257A2425771B5" TYPE="ntfs" PARTUUID="245184ae-01"

/dev/sdf1: LABEL="3_1_0-22-AM" UUID="5888-8FBD" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="009a8d92-01"

blkdev.list

KNAME NAME SIZE TYPE FSTYPE MOUNTPOINT MODEL

loop0 loop0 327M loop squashfs /usr/lib/live/mount/rootfs/filesystem.squashfs

sda sda 465.8G disk WDC WD5003ABYX-01WERA0

sda1 `-sda1 464.6G part linux_raid_member

md127 `-md127 928.9G raid10 LVM2_member

dm-0 |-centos-root 475G lvm xfs

dm-1 |-centos-home 450G lvm xfs

dm-2 `-centos-swap 3.9G lvm swap

sdb sdb 465.8G disk WDC WD5003ABYX-01WERA0

sdb1 `-sdb1 464.6G part linux_raid_member

md127 `-md127 928.9G raid10 LVM2_member

dm-0 |-centos-root 475G lvm xfs

dm-1 |-centos-home 450G lvm xfs

dm-2 `-centos-swap 3.9G lvm swap

sdc sdc 465.8G disk MB0500EBNCR

sdc1 `-sdc1 464.6G part linux_raid_member

sdd sdd 465.8G disk TOSHIBA DT01ACA050

sdd1 |-sdd1 1G part xfs

sdd2 `-sdd2 464.6G part linux_raid_member

md127 `-md127 928.9G raid10 LVM2_member

dm-0 |-centos-root 475G lvm xfs

dm-1 |-centos-home 450G lvm xfs

dm-2 `-centos-swap 3.9G lvm swap

sde sde 931.5G disk Parotable

sde1 `-sde1 931.5G part ntfs /home/partimag

sdf sdf 3.7G disk USB Flash Drive

sdf1 `-sdf1 3.7G part vfat /usr/lib/live/mount/medium

sr0 sr0 1024M rom hp DVDROM DH40N

lvm_centos.conf

Generated by LVM2 version 2.03.16(2) (2022-05-18): Mon Jun 26 15:33:33 2023

contents = "Text Format Volume Group"

version = 1

description = "vgcfgbackup -f /tmp/vgcfg_tmp.zBARGc centos"

creation_host = "debian" # Linux debian 6.1.0-8-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.25-1 (2023-04-22) x86_64

creation_time = 1687793613 # Mon Jun 26 15:33:33 2023

centos {

   id = "DZ3zP7-81Rd-j3YP-5jCy-y8J2-A9CF-7WBTau"

   seqno = 4

   format = "lvm2"                  # informational

   status = ["RESIZEABLE", "READ", "WRITE"]

   flags = []

   extent_size = 8192        # 4 Megabytes

   max_lv = 0

   max_pv = 0

   metadata_copies = 0



   physical_volumes {



         pv0 {

                id = "fFCoT5-TJaX-dN80-GDh7-8aCZ-5abR-DcYqCz"

                device = "/dev/md127"     # Hint only



                status = ["ALLOCATABLE"]

                flags = []

                dev_size = 1948000256     # 928.879 Gigabytes

                pe_start = 2048

                pe_count = 237792  # 928.875 Gigabytes

         }

   }



   logical_volumes {



         root {

                id = "X0yGOy-Qq5E-fdDq-SRBi-N9to-zqP2-imv2P0"

                status = ["READ", "WRITE", "VISIBLE"]

                flags = []

                creation_time = 1544360063      # 2018-12-09 12:54:23 +0000

                creation_host = "localhost.localdomain"

                segment_count = 1



                segment1 {

                       start_extent = 0

                       extent_count = 121600     # 475 Gigabytes



                       type = "striped"

                       stripe_count = 1   # linear



                       stripes = [

                              "pv0", 0

                       ]

                }

         }



         home {

                id = "aGhTQ3-Le4Y-96Je-GQG9-UvB1-ZINt-0TWIvf"

                status = ["READ", "WRITE", "VISIBLE"]

                flags = []

                creation_time = 1544360071      # 2018-12-09 12:54:31 +0000

                creation_host = "localhost.localdomain"

                segment_count = 1



                segment1 {

                       start_extent = 0

                       extent_count = 115200     # 450 Gigabytes



                       type = "striped"

                       stripe_count = 1   # linear



                       stripes = [

                              "pv0", 121600

                       ]

                }

         }



         swap {

                id = "H7F2zt-1KmB-K5yU-3jTX-qnVw-iyzN-HOZJfj"

                status = ["READ", "WRITE", "VISIBLE"]

                flags = []

                creation_time = 1544360080      # 2018-12-09 12:54:40 +0000

                creation_host = "localhost.localdomain"

                segment_count = 1



                segment1 {

                       start_extent = 0

                       extent_count = 992 # 3.875 Gigabytes



                       type = "striped"

                       stripe_count = 1   # linear



                       stripes = [

                              "pv0", 236800

                       ]

                }

         }

   }

}
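For reference, a restore counterpart of the save command recorded in Info-saved-by-cmd.txt above could be tried from a root shell in Clonezilla live. This is only a hedged sketch based on general ocs-sr usage; the option set and the target device names (md127, sda..sdd) are assumptions that must be adapted to the actual target hardware.

```shell
# Hypothetical sketch only: restore the whole saved image back to a matching
# set of devices. The options mirror common ocs-sr restoredisk invocations
# and are NOT taken from a verified run against this image.
ocs-sr -g auto -e1 auto -e2 -r -j2 -p choose restoredisk 2023-06-26-15-img md127 sda sdb sdc sdd
```

The `target_parts is NOT assigned` error quoted earlier comes from the restoreparts path; for an image saved with savedisk, as this one was, restoredisk is the mode that matches how it was created.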

From: JonnyTech | Sent: Monday, July 3, 2023 7:27 PM | Subject: Re: [stevenshiau/clonezilla] [ERR] System.DllNotFoundException: Dll was not found. (Issue #94)

You cannot attach logs when replying by email; we cannot see them.

Looking at the Clonezilla logs I see that there were 4 x 500GB HDDs connected in software RAID 10, using LVM and Volume Groups.

Did you back up the volume, or the individual drives? If the volume, then simply restore it to any single drive. If the drives, then try to create an identical LVM layout on similar drives and try restoring to them.

— Reply to this email directly, or view it on GitHub: https://github.com/stevenshiau/clonezilla/issues/94#issuecomment-1618917826

From: AndrzejXYZ | Sent: Monday, July 3, 2023 6:38 PM | Subject: RE: [stevenshiau/clonezilla] [ERR] System.DllNotFoundException: Dll was not found. (Issue #94)

Hi,

Thanks again for your time. I will try to explain, but a lot of the info can be seen in the attached logs.

My friend runs a small business, has an old HP ML110 G6 server – no RAID controller, just plain 4 SATA HDDs.

He asked me to check the condition last week – I started with full drive image with latest stable Clonezilla, booted from USB stick.

I chose to check whether the backup was restorable, and yes, at the end there was such a confirmation (see attached screenshot).

First of all, I’m not a Linux guy.

The HP server did not report any errors, missing HDDs, etc. during boot or normal operation.

Looking at the Clonezilla logs I see that there were 4 x 500GB HDDs connected in software RAID 10, using LVM and Volume Groups.

It looks like one drive was already dead at the time I made the backup.

The server ran CentOS 7, with SAMBA and ZIMBRA on XFS partitions.

I don’t need to restore the entire server; all I need is to recover the SAMBA and ZIMBRA files.

I tried many times to restore the backup, but I got different error messages; basically, the restore doesn’t even start.

From what I read, restoring LVM and VG backups requires manual intervention/modification, but this is beyond me.
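If the original disks are still readable, one alternative that does not involve the image at all is to assemble the degraded RAID 10 and activate the volume group from any live Linux, then mount the home volume read-only. The member devices below are taken from mdstat.txt in the logs (sdd2, sda1, sdb1); treat them as assumptions, since device letters can change between boots.

```shell
# Hypothetical sketch: start the degraded 3-of-4 RAID10, activate LVM,
# and mount the home filesystem read-only to copy data off.
sudo mdadm --assemble --run /dev/md127 /dev/sdd2 /dev/sda1 /dev/sdb1
sudo vgchange -ay centos
sudo mkdir -p /mnt/home
sudo mount -o ro /dev/centos/home /mnt/home
```

`--run` tells mdadm to start the array even though one member is missing, which matches the degraded state shown in mdstat.txt.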

That is why I had so much hope that I would be able to retrieve the files or folders using some other utility, like yours.
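What such a utility does can also be approximated by hand: a partclone image can be restored into a plain raw file and loop-mounted read-only to copy files out. A minimal sketch, assuming the split pieces of centos-home.xfs-ptcl-img.gz sit in the image directory and that the target has roughly 450 GB free (the raw size of the home volume); the path /mnt/big is a made-up placeholder, not something from the logs.

```shell
# Hypothetical sketch: rebuild the partclone stream into a raw file,
# then loop-mount the XFS filesystem read-only.
cd /home/partimag/2023-06-26-15-img
cat centos-home.xfs-ptcl-img.gz.* | gunzip -c | \
  sudo partclone.restore -C -s - -o /mnt/big/home.img
sudo mkdir -p /mnt/home
sudo mount -o loop,ro,norecovery /mnt/big/home.img /mnt/home
```

Here `-C` skips partclone's target-size check (needed when restoring into a file rather than a device), and `norecovery` lets XFS mount read-only without replaying the log.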

Let me know if I should explain anything more.

Thanks a lot for your effort,

Andrzej

From: JonnyTech | Sent: Monday, July 3, 2023 12:44 PM | Subject: Re: [stevenshiau/clonezilla] [ERR] System.DllNotFoundException: Dll was not found. (Issue #94)

Now the issue is that I can’t restore this image using clonezilla

Please explain your issue fully with as much detail as possible, it is probably an easy fix.

— Reply to this email directly, or view it on GitHub: https://github.com/stevenshiau/clonezilla/issues/94#issuecomment-1617896523


AndrzejXYZ avatar Jul 04 '23 19:07 AndrzejXYZ