
Restore Error 'Failed to install splat'

irainbw opened this issue 4 months ago • 5 comments


Hello all, I ran into a problem when flashing iOS 18.5 onto an iPhone 11: the restore fails with the error "failed to install splat".

ipsw: iPhone12,1_18.5_22F76_Restore.ipsw

Some log output from the failed restore:

```
ramrod_dump_mounted_filesystem_info:**DUMPING MOUNTED FILESYSTEMS
ramrod_dump_mounted_filesystem_info: 3 filesystems are mounted
tmpfs is mounted at /mnt5
devfs is mounted at /dev
/dev/md0 is mounted at /
ramrod_dump_mounted_filesystem_info: *DONE DUMPING MOUNTED FILESYSTEMS
invert_apfs_image : inverting : /System/Library/Filesystems/apfs.fs/apfs_invert /dev/disk2 0 apfs_invert_asr_img
entering ramrod_execute_command_with_config: /System/Library/Filesystems/apfs.fs/apfs_invert
executing /System/Library/Filesystems/apfs.fs/apfs_invert -d /dev/disk2 -s 1 -n apfs_invert_asr_img -f
ASR: *** Mounting outer volume (/dev/disk2 s1)...
ASR: nx_mount:1308: disk2 initializing cache w/hash_size 32768 and cache size 65536
ASR: nx_mount:1454: disk2 container cleanly-unmounted flag set.
ASR: nx_mount:1630: disk2 checkpoint search: largest xid 37, best xid 37 @ 73
ASR: nx_mount:1657: disk2 stable checkpoint indices: desc 72 data 142
ASR: spaceman_datazone_init:611: disk2 allocation zone on dev 0 for allocations of 1 blocks starting at paddr 4096000
ASR: spaceman_datazone_init:611: disk2 allocation zone on dev 0 for allocations of 2 blocks starting at paddr 32768
ASR: spaceman_datazone_init:611: disk2 allocation zone on dev 0 for allocations of 3 blocks starting at paddr 65536
ASR: spaceman_datazone_init:611: disk2 allocation zone on dev 0 for allocations of 4 blocks starting at paddr 98304
ASR: spaceman_scan_free_blocks:4106: disk2 scan took 0.007333 s, trims took 0.000000 s
ASR: spaceman_scan_free_blocks:4110: disk2 13753997 blocks free in 35 extents, avg 392971.34
ASR: spaceman_scan_free_blocks:4119: disk2 13753997 blocks trimmed in 35 extents (0 us/trim, 35000000 trims/s)
ASR: spaceman_scan_free_blocks:4122: disk2 trim distribution 1:9 2+:10 4+:2 16+:3 64+:2 256+:9
ASR: spaceman_fxc_print_stats:477: disk2 dev 0 smfree 13753997/15624989 table 32/186 blocks 13753997 1:429812:11250545 100.00% range 31332:15593657 99.79% scans 1
ASR: *** Getting image dstream info...
ASR: apfs_invert_asr_img: dstream_id=16, size=7377780736
ASR: *** Mounting inner volume (apfs_invert_asr_img)...
ASR: nx_mount:1308: initializing cache w/hash_size 32768 and cache size 65536
ASR: nx_mount:1630: checkpoint search: largest xid 452, best xid 452 @ 5
ASR: nx_mount:1657: stable checkpoint indices: desc 4 data 9
ASR: *** Copying inner volume extentref tree into outer volume...
ASR: Copied 45682 original extents and created 0 new extents
ASR: *** Copying inner volume fsroot tree into outer volume...
ASR: Copied 1134073 fs_root records plus 0 new file extents
ASR: *** Cleaning up unused blocks...
ASR: Freed 139630 data blocks and kept 1661586 data blocks
ASR: *** Updating the superblock...
ASR: *** Deleting old fs_root...
ASR: *** Deleting old extentref tree...
mDNS [ ]: no usable interfaces found
ASR: *** Finishing transaction...
ASR: *** Unmounting...
ASR: sanity_check_alloced_blocks:730: disk2s1 fs_alloc_count mismatch: fs root nodes 1 extent 1 omap 363 snap_meta 1 doc_id 0 prev_doc_id 0 fext: 0 pfkur: 0 er: 0 udata: 1661586 fs_alloc_count 1702536 != count 1661953
ASR: *** Success!
waiting for child to exit
child exited
exit status: 0
invert_apfs_image : succeeded inverting : /System/Library/Filesystems/apfs.fs/apfs_invert /dev/disk2 0 apfs_invert_asr_img
ASR succeed on initial attempt
IODeviceTree:/arm-io/sep/iop-sep-nub/xART found
We should have an xART partition.
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s3 /mnt7
waiting for child to exit
child exited
exit status: 0
/dev/disk2s3 mounted on /mnt7
xART mounted read-write
entering ramrod_init_gigalocker
IODeviceTree:/arm-io/sep/iop-sep-nub/xART found
We should have an xART partition.
entering ramrod_execute_command_with_config: /usr/libexec/seputil
executing /usr/libexec/seputil --gigalocker-init
seputil: Gigalocker file (/mnt7/744A9F09-1B82-5C78-A173-00E5E8BCBCFA.gl) exists
seputil: Gigalocker initialization completed
waiting for child to exit
child exited
exit status: 0
gigalocker: ONLINE
entering mount_filesystems
ramrod_display_set_granular_progress_forced: 98.000000
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s1 /mnt1
waiting for child to exit
child exited
exit status: 0
/dev/disk2s1 mounted on /mnt1
System mounted read-write
Not counting data volume as required.
Not counting user volume as required.
ramrod_display_set_granular_progress_forced: 98.000000
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s4 /mnt3
waiting for child to exit
child exited
exit status: 0
/dev/disk2s4 mounted on /mnt3
Baseband Data mounted read-write
ramrod_display_set_granular_progress_forced: 98.000000
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s5 /mnt6
waiting for child to exit
child exited
exit status: 0
/dev/disk2s5 mounted on /mnt6
Hardware mounted read-write
Skipping mount of update partition
ramrod_display_set_granular_progress_forced: 98.000000
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s6 /mnt9
waiting for child to exit
child exited
exit status: 0
/dev/disk2s6 mounted on /mnt9
Preboot mounted read-write
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s7 /mnt4
waiting for child to exit
child exited
exit status: 0
/dev/disk2s7 mounted on /mnt4
Update mounted read-write
[00:34:53.0678-GMT]{3>6} CHECKPOINT END: (null):[0x06B4] await_system_image_invert_retry
restore-step-ids = {0x1103067B:66;0x1103132C:112}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian}
restore-step-uptime = 309
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:34:53.0679-GMT]{3>6} CHECKPOINT BEGIN: (null):[0x0674] create_protected_filesystems
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x11030674:127}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x11030674:create_protected_filesystems}
restore-step-uptime = 309
restore-step-user-progress = 98
entering create_protected_filesystems
ramrod_display_set_granular_progress_forced: 98.000000
creating class d key for /mnt2
creating encrypted Data partition
unable to open /dev/disk2 to get block size: Resource busy
block size for /dev/disk2: 0
/System/Library/Filesystems/apfs.fs/newfs_apfs -A -D -o role=d -v Data -P /dev/disk2
entering ramrod_execute_command_with_config: /System/Library/Filesystems/apfs.fs/newfs_apfs
executing /System/Library/Filesystems/apfs.fs/newfs_apfs -A -D -o role=d -v Data -P /dev/disk2
waiting for child to exit
child exited
exit status: 0
entering ramrod_probe_media_internal
entering wait_for_device: 'EmbeddedDeviceTypeRoot'
Using device path /dev/disk0 for EmbeddedDeviceTypeRoot
device partitioning scheme is GPT
APFS Container 'Container' /dev/disk0s1
Found synthesized APFS container.
Using disk2 instead of /dev/disk0s1
device is APFS formatted
found system volume not at /dev/disk0s1s1: System
Captured preboot partition on main OS container 2
find_filesystem_partitions: storage=/dev/disk0 container=/dev/disk2 system=/dev/disk2s1 data=/dev/disk2s2 user= baseband data=/dev/disk2s4 log= update=/dev/disk2s7 xart=/dev/disk2s3 hardware=/dev/disk2s5 scratch= preboot=/dev/disk2s6
find_filesystem_partitions: recovery os container= volume=
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s2 /mnt2
waiting for child to exit
child exited
exit status: 0
/dev/disk2s2 mounted on /mnt2
Data mounted read-write
create_protected_filesystems: created data partition
ramrod_is_data_volume_split_required_block_invoke: YES, enhanced apfs is supported
creating encrypted User partition
unable to open /dev/disk2 to get block size: Resource busy
block size for /dev/disk2: 0
/System/Library/Filesystems/apfs.fs/newfs_apfs -A -D -o role=u -v User -P /dev/disk2
entering ramrod_execute_command_with_config: /System/Library/Filesystems/apfs.fs/newfs_apfs
executing /System/Library/Filesystems/apfs.fs/newfs_apfs -A -D -o role=u -v User -P /dev/disk2
waiting for child to exit
child exited
exit status: 0
entering ramrod_probe_media_internal
entering wait_for_device: 'EmbeddedDeviceTypeRoot'
Using device path /dev/disk0 for EmbeddedDeviceTypeRoot
device partitioning scheme is GPT
APFS Container 'Container' /dev/disk0s1
Found synthesized APFS container.
Using disk2 instead of /dev/disk0s1
device is APFS formatted
found system volume not at /dev/disk0s1s1: System
Captured preboot partition on main OS container 2
find_filesystem_partitions: storage=/dev/disk0 container=/dev/disk2 system=/dev/disk2s1 data=/dev/disk2s2 user=/dev/disk2s8 baseband data=/dev/disk2s4 log= update=/dev/disk2s7 xart=/dev/disk2s3 hardware=/dev/disk2s5 scratch= preboot=/dev/disk2s6
find_filesystem_partitions: recovery os container= volume=
entering mount_partition
entering ramrod_execute_command_with_config: /sbin/mount_apfs
executing /sbin/mount_apfs -R /dev/disk2s8 /mnt10
waiting for child to exit
child exited
exit status: 0
/dev/disk2s8 mounted on /mnt10
User mounted read-write
create_protected_filesystems: created user partition
configure_data_volumes: primary user create with uuid: B1DEB362-11D9-4D95-AE15-0E39D33FF01D and session uid:501
mDNS [ ]: no usable interfaces found
configure_data_volumes: AKSIdentityCreateFirst success, loading the identity
configure_data_volumes: AKSIdentityLoad Succeeded, calling SetPrimary
configure_data_volumes: AKSIdentitySetPrimary succeded, binding System Data Volume
[bindAPFSSystemDataVolume] binding System Data Volume to PrimaryIdentity
[bindAPFSSystemDataVolume] Calling APFSVolumeEnableUserProtectionWithOptions with device_node:/dev/disk2s2 userUUID:B1DEB362-11D9-4D95-AE15-0E39D33FF01D
[bindAPFSSystemDataVolume] System Data Volume, bound to AKS with primary
configure_data_volumes: Shared Data Volume, bound to AKS with primary
configure_data_volumes: User Data Volume, calling volume map on disk2s8
configure_data_volumes: AKS VolumeMapPath Success
configure_data_volumes: createPrimaryUserLayoutWithOnUserVolumePath Success
[00:35:14.0956-GMT]{3>6} CHECKPOINT END: (null):[0x0674] create_protected_filesystems
restore-step-ids = {0x1103067B:66;0x1103132C:112}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0957-GMT]{3>6} CHECKPOINT BEGIN: (null):[0x065F] reserve_overprov_space
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x1103065F:128}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x1103065F:reserve_overprov_space}
restore-step-uptime = 330
restore-step-user-progress = 98
Overprovisioning model is 2. Not reserving space for overprov file
[00:35:14.0957-GMT]{3>6} CHECKPOINT END: (null):[0x065F] reserve_overprov_space
restore-step-ids = {0x1103067B:66;0x1103132C:112}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0958-GMT]{3>6} CHECKPOINT BEGIN: (null):[0x0680] read_new_os_build_version
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x11030680:129}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x11030680:read_new_os_build_version}
restore-step-uptime = 330
restore-step-user-progress = 98
entering mount_partition
System already mounted read-write (mount ignored)
ramrod_read_new_os_build_version: new OS version: 22F76 (user)
[00:35:14.0960-GMT]{3>6} CHECKPOINT END: (null):[0x0680] read_new_os_build_version
restore-step-ids = {0x1103067B:66;0x1103132C:112}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0960-GMT]{3>6} CHECKPOINT BEGIN: (null):[0x06A6] install_splat
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
ramrod_copy_device_identity_static_info: Attempting to read device identity info
ramrod_copy_device_identity_static_info: image4CryptoHashMethod: sha2-384
Need to entangle the nonce with UID 1. Aborting because we don't know how to do that yet.
Retrieved APNonce (without regeneration) was NULL.
ApNonce (retrieved): NULL
Successfully copied device Identity
DeviceInfo: BoardID: 4 ChipID: 32816
_personalize_splat_ticket: prerolling splat nonce
_personalize_splat_ticket: failed to preroll nonce: 28 (No space left on device)
[00:35:14.0981-GMT]{3>6} CHECKPOINT FAILURE:(FAILURE:-1) (null):[0x06A6] install_splat [0]D(failed to install splat)
restore-step-results = {0x110706A6:{0:-1};0x1107132C:{0:-1}}
restore-step-codes = {0x110706A6:{0:-1};0x1120132C:{0:1017}}
restore-step-domains = {0x110706A6:{0:"AMRestoreErrorDomain"};0x1120132C:{0:"RamrodErrorDomain"}}
restore-step-error = {0x110706A6:"[0]D(failed to install splat)"}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0981-GMT]{3>6} CHECKPOINT FAILURE:(FAILURE:-1) RESTORED:[0x067B] perform_restore_installing [0]D(failed to install splat)
restore-step-results = {0x1107067B:{0:-1};0x110706A6:{0:-1};0x1107132C:{0:-1}}
restore-step-codes = {0x1107067B:{0:-1};0x110706A6:{0:-1};0x1120132C:{0:1017}}
restore-step-domains = {0x1107067B:{0:"AMRestoreErrorDomain"};0x110706A6:{0:"AMRestoreErrorDomain"};0x1120132C:{0:"RamrodErrorDomain"}}
restore-step-error = {0x1107067B:"[0]D(failed to install splat)"}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0982-GMT]{3>6} CHECKPOINT BEGIN: RESTORED:[0x067C] cleanup_boot_command
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130;0x1103067C:131}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
entering reset_boot_command_if_in_values
recovery-boot-mode =
iboot-failure-reason =
[00:35:14.0983-GMT]{3>6} CHECKPOINT END: RESTORED:[0x067C] cleanup_boot_command
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0983-GMT]{3>6} CHECKPOINT BEGIN: RESTORED:[0x1613] cleanup_recovery_os_volume
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130;0x11031613:132}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
[00:35:14.0983-GMT]{3>6} CHECKPOINT END: RESTORED:[0x1613] cleanup_recovery_os_volume
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0983-GMT]{3>6} CHECKPOINT BEGIN: RESTORED:[0x0647] cleanup_check_result
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130;0x11030647:133}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
[00:35:14.0983-GMT]{3>6} CHECKPOINT END: RESTORED:[0x0647] cleanup_check_result
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0984-GMT]{3>6} CHECKPOINT BEGIN: RESTORED:[0x06C2] cleanup_send_crash_logs
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130;0x110306C2:134}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
[00:35:14.0984-GMT]{3>6} CHECKPOINT END: RESTORED:[0x06C2] cleanup_send_crash_logs
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98
Considering saving tolerated failures: true /mnt4/lastOTA/ota_tolerated_failures.plist false
[00:35:14.0984-GMT]{3>6} CHECKPOINT BEGIN: RESTORED:[0x0648] cleanup_send_final_status
restore-step-ids = {0x1103067B:66;0x1103132C:112;0x110306A6:130;0x11030648:135}
restore-step-names = {0x1103067B:perform_restore_installing;0x1103132C:await_update_update_veridian;0x110306A6:install_splat}
restore-step-uptime = 330
restore-step-user-progress = 98

ERROR: Unable to successfully restore device
Checkpoint completed id: 0x648 (cleanup_send_final_status) result=0
Checkpoint FAILURE id: 0x648 result=0: [0]D(failed to install splat)
ReverseProxy[Ctrl]: Terminating
ReverseProxy[Ctrl]: (status=2) Terminated
ERROR: Unable to restore device
```
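For anyone triaging a log like this: the surface message ("failed to install splat") is less informative than the errno reported a few lines earlier (`_personalize_splat_ticket: failed to preroll nonce: 28 (No space left on device)`), which suggests the splat-nonce preroll ran out of space somewhere, not a generic install failure. A small sketch for pulling both details out of a saved restore log — this is not part of idevicerestore, and the regex patterns are my own assumptions based on the log format above:

```python
import re

def summarize_restore_failure(log_text: str) -> dict:
    """Scan a pasted idevicerestore/ramrod log for the failing checkpoint
    and any underlying errno reported before it."""
    summary = {"failed_step": None, "underlying_error": None}

    # The failing restore step, e.g.
    # "CHECKPOINT FAILURE:(FAILURE:-1) (null):[0x06A6] install_splat"
    step = re.search(
        r"CHECKPOINT FAILURE:\S*\s+\S*\[0x[0-9A-Fa-f]+\]\s+(\w+)", log_text
    )
    if step:
        summary["failed_step"] = step.group(1)

    # The real cause is often an errno line such as
    # "failed to preroll nonce: 28 (No space left on device)"
    errno_line = re.search(r"failed to [^:]+: \d+ \(([^)]+)\)", log_text)
    if errno_line:
        summary["underlying_error"] = errno_line.group(1)

    return summary
```

Run against the log above, this reports `install_splat` as the failed step and `No space left on device` as the underlying error, which is a much better starting point for debugging (or for a bug report) than the checkpoint name alone.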

irainbw commented Jul 30 '25 09:07

Bumping this. I have an unrestorable device thanks to this issue (iPad Air 5, iOS 26b4).

themacintoshnerd commented Aug 04 '25 17:08

I am also currently experiencing this problem (iPhone 11 Pro Max, iOS 26b3). I'm now attempting to get out of restore mode by re-updating to 26b3 or above so I don't lose my data, as I have no backups :[

CrusherNotDrip commented Aug 24 '25 21:08

@CrusherNotDrip did you manage to fix it? I have an iPhone 16 Pro 256 GB on 26.0.1 and I'm having a similar issue. I was thinking of updating to a beta version of iOS 26 in the hope that it will work and I can recover my data. I am in the same position as you.

redoninho commented Oct 25 '25 00:10

No, unfortunately I haven't. I just ended up resetting my phone; the only thing I lost was all my photos, but the rest was backed up to iCloud.

CrusherNotDrip commented Oct 25 '25 00:10

That's a shame. I will take it into Apple but don't really expect much. I have 11 years of photos that I would ideally like to save. It's pathetic that a trillion-dollar company has such a common issue that wipes people's phones. I have never heard of any other device doing something like this.

redoninho commented Oct 25 '25 11:10