Unable to upgrade from 5.21 to 6.6: Assertion `header.wal_size == 0' failed
Please confirm
- [x] I have searched existing issues to check if an issue already exists for the bug I encountered.
Distribution
Ubuntu
Distribution version
22.04
Output of "snap list --all lxd core20 core22 core24 snapd"
# snap list --all lxd core20 core22 core24 snapd
Name Version Rev Tracking Publisher Notes
core20 20250730 2669 latest/stable canonical✓ base,disabled
core20 20250822 2682 latest/stable canonical✓ base
core22 20250923 2139 latest/stable canonical✓ base,disabled
core22 20251009 2163 latest/stable canonical✓ base
lxd 5.21.4-8a3cf61 36579 5.21/stable canonical✓ disabled
lxd 5.21.4-9eb1368 36971 5.21/stable canonical✓ -
snapd 2.71 25202 latest/stable canonical✓ snapd,disabled
snapd 2.72 25577 latest/stable canonical✓ snapd
Output of "lxc info" or system info if it fails
config:
images.auto_update_interval: "0"
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- init_preseed_storage_volumes
- metrics_instances_count
- server_instance_type_info
- resources_disk_mounted
- server_version_lts
- oidc_groups_claim
- loki_config_instance
- storage_volatile_uuid
- import_instance_devices
- instances_uefi_vars
- instances_migration_stateful
- container_syscall_filtering_allow_deny_syntax
- access_management
- vm_disk_io_limits
- storage_volumes_all
- instances_files_modify_permissions
- image_restriction_nesting
- container_syscall_intercept_finit_module
- device_usb_serial
- network_allocate_external_ips
- explicit_trust_token
- instance_import_conversion
- instance_create_start
- devlxd_images_vm
- instance_protection_start
- disk_io_bus_virtio_blk
- metadata_configuration_entity_types
- network_allocations_ovn_uplink
- network_ovn_uplink_vlan
- shared_custom_block_volumes
- metrics_api_requests
- projects_limits_disk_pool
- access_management_tls
- state_logical_cpus
- vm_limits_cpu_pin_strategy
- gpu_cdi
- metadata_configuration_scope
- unix_device_hotplug_ownership_inherit
- unix_device_hotplug_subsystem_device_option
- storage_ceph_osd_pool_size
- network_get_target
- network_zones_all_projects
- vm_root_volume_attachment
- projects_limits_uplink_ips
- entities_with_entitlements
- profiles_all_projects
- storage_driver_powerflex
- storage_driver_pure
- cloud_init_ssh_keys
- oidc_scopes
- project_default_network_and_storage
- ubuntu_pro_guest_attach
- images_all_projects
- client_cert_presence
- resources_device_fs_uuid
- clustering_groups_used_by
- container_bpf_delegation
- override_snapshot_profiles_on_copy
- backup_metadata_version
- storage_buckets_all_projects
- network_acls_all_projects
- networks_all_projects
- clustering_restore_skip_mode
- disk_io_threads_virtiofsd
- oidc_client_secret
- pci_hotplug
- device_patch_removal
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
client_certificate: false
auth_user_name: root
auth_user_method: unix
environment:
addresses: []
architectures:
- x86_64
- i686
backup_metadata_version_range:
- 1
- 2
certificate: |
-----BEGIN CERTIFICATE-----
MIICBjCCAY2gAwIBAgIRAMbCPL1qJT79VQ18T7n9iFIwCgYIKoZIzj0EAwMwNTEc
MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBob21l
bnVjMB4XDTIxMDEwMzIyMjQ0NFoXDTMxMDEwMTIyMjQ0NFowNTEcMBoGA1UEChMT
bGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBob21lbnVjMHYwEAYH
KoZIzj0CAQYFK4EEACIDYgAEnwocWRVYkWkBilczbOsWA+7sy9nQV81dMZhb4Ajb
mRTT+ANznQH7LbEePPJfFYGJH+Ec5hqSIk+IikWo0oCKsSDKMXISoZPk36oDTDYE
ukU0sEQCl2UqMHEfqZ2pBEKQo2EwXzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAqBgNVHREEIzAhggdob21lbnVjhwR/
AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2cAMGQCMHOAoHDVj82h
cJnw4xrBMFqz50y7UgE+5a4YjscNxmS5P7k35MGFIPrT7UzUO+iIFQIwdFxvOO9e
HNHZ1yGhcP6FQs8z0SLRnaozV5NlbyzxRZyPpatzBox7PljZy/Vtkssv
-----END CERTIFICATE-----
certificate_fingerprint: 47781315172481dee99d2bd9a518b3dbd37ac056ce63f387c711a4b80670ea7b
driver: lxc | qemu
driver_version: 6.0.4 | 8.2.2
instance_types:
- container
- virtual-machine
firewall: nftables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
bpf_token: "false"
idmapped_mounts: "true"
netnsid_getifaddrs: "true"
seccomp_listener: "true"
seccomp_listener_continue: "true"
uevent_injection: "true"
unpriv_binfmt: "false"
unpriv_fscaps: "true"
kernel_version: 5.15.0-163-generic
lxc_features:
cgroup2: "true"
core_scheduling: "true"
devpts_fd: "true"
idmapped_mounts_v2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
pidfd: "true"
seccomp_allow_deny_syntax: "true"
seccomp_notify: "true"
seccomp_proxy_send_notify_fd: "true"
os_name: Ubuntu
os_version: "22.04"
project: default
server: lxd
server_clustered: false
server_event_mode: full-mesh
server_name: homenuc
server_pid: 22474
server_version: 5.21.4
server_lts: true
storage: btrfs
storage_version: 5.16.2
storage_supported_drivers:
- name: cephfs
version: 17.2.9
remote: true
- name: dir
version: "1"
remote: false
- name: lvm
version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.45.0
remote: false
- name: powerflex
version: 1.16 (nvme-cli)
remote: true
- name: pure
version: 2.1.5 (iscsiadm) / 1.16 (nvme-cli)
remote: true
- name: zfs
version: 2.1.5-1ubuntu6~22.04.6
remote: false
- name: btrfs
version: 5.16.2
remote: false
- name: ceph
version: 17.2.9
remote: true
- name: cephobject
version: 17.2.9
remote: true
Issue description
I'm using LXD 5.21 from Snap.
I'm unable to upgrade to LXD 6.6 (current stable) from Snap because it fails to start due to the following error:
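For completeness, the refresh that triggers this is just a snap channel change; the following is only the approximate form (channel name assumed, since the exact command isn't shown in the logs):

```
# Approximate form of the upgrade attempt (channel assumed).
sudo snap refresh lxd --channel=latest/stable
# The daemon then fails to start; the assertion shows up in its journal:
sudo journalctl -u snap.lxd.daemon -n 50
```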
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Loading snap configuration
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Setting up mntns symlink (mnt:[4026532617])
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Setting up kmod wrapper
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Preparing /boot
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Preparing a clean copy of /run
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Preparing /run/bin
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Preparing a clean copy of /etc
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Preparing a clean copy of /usr/share/misc
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Setting up ceph configuration
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Setting up LVM configuration
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Setting up OVN configuration
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Rotating logs
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Unsupported ZFS version (0.8)
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: Consider installing ZFS tools in the host and use zfs.external
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Escaping the systemd cgroups
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ====> Detected cgroup V1
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Escaping the systemd process resource limits
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Exposing LXD documentation
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: => Re-using existing LXCFS
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> snap base has changed, restart system to upgrade LXCFS
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: ==> Cleaning up existing LXCFS namespace
Dec 06 17:25:34 homenuc lxd.daemon[4143871]: => Starting LXD
Dec 06 17:25:35 homenuc lxd.daemon[4144539]: time="2025-12-06T17:25:35+03:00" level=warning msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored"
Dec 06 17:25:35 homenuc lxd.daemon[4144539]: time="2025-12-06T17:25:35+03:00" level=warning msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
Dec 06 17:25:35 homenuc lxd.daemon[4144539]: lxd: src/fsm.c:204: decodeDatabase: Assertion `header.wal_size == 0' failed.
Dec 06 17:25:37 homenuc lxd.daemon[4143871]: Killed
Dec 06 17:25:37 homenuc lxd.daemon[4143871]: => LXD failed to start
Steps to reproduce
# lxd --debug
DEBUG [2025-12-06T19:13:17+03:00] Connecting to a local LXD over a Unix socket
DEBUG [2025-12-06T19:13:17+03:00] Sending request to LXD etag= method=GET url="http://unix.socket/1.0"
INFO [2025-12-06T19:13:17+03:00] LXD is starting mode=normal path=/var/snap/lxd/common/lxd version=6.6
INFO [2025-12-06T19:13:17+03:00] Kernel uid/gid map:
INFO [2025-12-06T19:13:17+03:00] - u 0 0 4294967295
INFO [2025-12-06T19:13:17+03:00] - g 0 0 4294967295
INFO [2025-12-06T19:13:17+03:00] Configured LXD uid/gid map:
INFO [2025-12-06T19:13:17+03:00] - u 0 1000000 1000000000
INFO [2025-12-06T19:13:17+03:00] - g 0 1000000 1000000000
INFO [2025-12-06T19:13:17+03:00] AppArmor support is enabled version=4.0.1
DEBUG [2025-12-06T19:13:17+03:00] AppArmor feature cache directory detected cache_dir=/var/snap/lxd/common/lxd/security/apparmor/cache/72e9179b.0
INFO [2025-12-06T19:13:17+03:00] Kernel features:
INFO [2025-12-06T19:13:17+03:00] - closing multiple file descriptors efficiently: yes
DEBUG [2025-12-06T19:13:17+03:00] fsconfig succeed to set delegate_cmds, but must fail
INFO [2025-12-06T19:13:17+03:00] - netnsid-based network retrieval: yes
INFO [2025-12-06T19:13:17+03:00] - pidfds: yes
INFO [2025-12-06T19:13:17+03:00] - core scheduling: no
INFO [2025-12-06T19:13:17+03:00] - uevent injection: yes
INFO [2025-12-06T19:13:17+03:00] - seccomp listener: yes
INFO [2025-12-06T19:13:17+03:00] - seccomp listener continue syscalls: yes
INFO [2025-12-06T19:13:17+03:00] - seccomp listener add file descriptors: yes
INFO [2025-12-06T19:13:17+03:00] - attach to namespaces via pidfds: yes
INFO [2025-12-06T19:13:17+03:00] - safe native terminal allocation: yes
INFO [2025-12-06T19:13:17+03:00] - unprivileged binfmt_misc: no
INFO [2025-12-06T19:13:17+03:00] - BPF Token: no
INFO [2025-12-06T19:13:17+03:00] - unprivileged file capabilities: yes
INFO [2025-12-06T19:13:17+03:00] - cgroup layout: cgroup2
WARNING[2025-12-06T19:13:17+03:00] - Couldn't find the CGroup hugetlb controller, hugepage limits will be ignored
WARNING[2025-12-06T19:13:17+03:00] - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead
INFO [2025-12-06T19:13:17+03:00] - idmapped mounts kernel support: yes
INFO [2025-12-06T19:13:17+03:00] Instance type operational driver=lxc features="map[]" type=container
ERROR [2025-12-06T19:13:17+03:00] Unable to run feature checks during QEMU initialization: Unable to locate a VM UEFI firmware
WARNING[2025-12-06T19:13:17+03:00] Instance type not operational driver=qemu err="QEMU failed to run feature checks" type=virtual-machine
INFO [2025-12-06T19:13:17+03:00] Initializing local database
DEBUG [2025-12-06T19:13:17+03:00] Refreshing identity cache with local trusted certificates
INFO [2025-12-06T19:13:17+03:00] Set client certificate to server certificate fingerprint=47781315172481dee99d2bd9a518b3dbd37ac056ce63f387c711a4b80670ea7b
INFO [2025-12-06T19:13:17+03:00] Loading daemon configuration
DEBUG [2025-12-06T19:13:17+03:00] Initializing database gateway
INFO [2025-12-06T19:13:17+03:00] Starting database node id=1 local=1 role=voter
lxd: src/fsm.c:204: decodeDatabase: Assertion `header.wal_size == 0' failed.
Aborted (core dumped)
Information to attach
- [ ] Any relevant kernel output (`dmesg`)
- [ ] Instance log (`lxc info NAME --show-log`)
- [ ] Instance configuration (`lxc config show NAME --expanded`)
- [ ] Main daemon log (at `/var/log/lxd/lxd.log` or `/var/snap/lxd/common/lxd/logs/lxd.log`)
- [x] Output of the client with `--debug`
- [ ] Output of the daemon with `--debug` (or use `lxc monitor` while reproducing the issue)
/var/snap/lxd/common/lxd/database/global# ls -lah
total 88M
drwxr-x--- 1 root root 1.5K Dec 6 19:18 .
drwx------ 1 root root 52 Dec 6 19:17 ..
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000182273-0000000000183075
-rw-r--r-- 1 root root 2.3M Dec 6 17:27 0000000000183076-0000000000183296
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000183297-0000000000184095
-rw-r--r-- 1 root root 2.3M Dec 6 17:27 0000000000184096-0000000000184320
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000184321-0000000000185121
-rw-r--r-- 1 root root 2.2M Dec 6 17:27 0000000000185122-0000000000185344
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000185345-0000000000186143
-rw-r--r-- 1 root root 2.4M Dec 6 17:27 0000000000186144-0000000000186368
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000186369-0000000000187167
-rw-r--r-- 1 root root 2.2M Dec 6 17:27 0000000000187168-0000000000187392
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000187393-0000000000188191
-rw-r--r-- 1 root root 2.3M Dec 6 17:27 0000000000188192-0000000000188416
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000188417-0000000000189215
-rw-r--r-- 1 root root 2.4M Dec 6 17:27 0000000000189216-0000000000189440
-rw-r--r-- 1 root root 8.0M Dec 6 17:27 0000000000189441-0000000000190239
-rw-r--r-- 1 root root 2.2M Dec 6 17:27 0000000000190240-0000000000190464
-rw-r--r-- 1 root root 4.0M Dec 6 17:27 0000000000190465-0000000000190866
-rw-r--r-- 1 root root 190K Dec 6 17:27 0000000000190867-0000000000190886
-rw-r--r-- 1 root root 948K Dec 6 17:27 db.bin
-rw-r--r-- 1 root root 32K Dec 6 19:17 db.bin-shm
-rw-r--r-- 1 root root 0 Dec 6 17:27 db.bin-wal
-rw-r--r-- 1 root root 0 Dec 6 17:27 dqlite-lock
-rw-r--r-- 1 root root 32 Dec 6 17:27 metadata1
-rw-r--r-- 1 root root 130K Dec 6 17:27 snapshot-1-189440-10022891640
-rw-r--r-- 1 root root 56 Dec 6 17:27 snapshot-1-189440-10022891640.meta
-rw-r--r-- 1 root root 77K Dec 6 17:27 snapshot-1-190464-10830866836
-rw-r--r-- 1 root root 56 Dec 6 17:27 snapshot-1-190464-10830866836.meta
Does refreshing to the 6/candidate channel work?
Might be related to https://github.com/canonical/lxd/issues/17130
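If it helps, that would be roughly the following on the affected host (exact invocation assumed):

```
# Assumed command to test the 6/candidate channel.
sudo snap refresh lxd --channel=6/candidate
# Then watch whether the daemon manages to start:
sudo journalctl -u snap.lxd.daemon -f
```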
> Does refreshing to the `6/candidate` channel work?
Dec 06 19:55:34 homenuc lxd.daemon[112750]: => Starting LXD
Dec 06 19:55:35 homenuc lxd.daemon[112897]: time="2025-12-06T19:55:35+03:00" level=warning msg=" - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead"
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f427060f1b8 dqlite_print_trace
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f427060f1f3 dqlite_fail
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f4270607f62 ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f427063392b ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f4270633e6d ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f4270614df7 ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f427044baa3 ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0x7f42704d8c6b ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 0xffffffffffffffff ???
Dec 06 19:55:35 homenuc lxd.daemon[112897]: ???:0
Dec 06 19:55:35 homenuc lxd.daemon[112897]: Tentatively showing last 62 crash trace records:
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135180354083 112933 src/vfs.c:2585 VfsInit vfs init
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135180593458 112933 src/transport.c:242 raftProxyInit raft proxy init
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135180630553 112933 src/fsm.c:506 fsm__init fsm init
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135238891172 112933 src/transport.c:45 impl_init impl init
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135239510587 112933 src/server.c:831 dqlite_node_start dqlite node start
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135239881596 112936 src/raft/start.c:159 raft_start starting version:1800 revision:unknown
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135239998568 112936 src/raft/uv_list.c:91 UvList segment 00000000001822
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240007895 112936 src/raft/uv_list.c:91 UvList segment 00000000001830
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240011763 112936 src/raft/uv_list.c:91 UvList segment 00000000001832
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240015577 112936 src/raft/uv_list.c:91 UvList segment 00000000001840
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240019398 112936 src/raft/uv_list.c:91 UvList segment 00000000001843
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240023198 112936 src/raft/uv_list.c:91 UvList segment 00000000001851
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240026994 112936 src/raft/uv_list.c:91 UvList segment 00000000001853
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240030845 112936 src/raft/uv_list.c:91 UvList segment 00000000001861
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240034695 112936 src/raft/uv_list.c:91 UvList segment 00000000001863
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240038518 112936 src/raft/uv_list.c:91 UvList segment 00000000001871
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240042379 112936 src/raft/uv_list.c:91 UvList segment 00000000001873
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240046188 112936 src/raft/uv_list.c:91 UvList segment 00000000001881
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240050075 112936 src/raft/uv_list.c:91 UvList segment 00000000001884
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240053909 112936 src/raft/uv_list.c:91 UvList segment 00000000001892
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240057711 112936 src/raft/uv_list.c:91 UvList segment 00000000001894
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240061518 112936 src/raft/uv_list.c:91 UvList segment 00000000001902
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240065327 112936 src/raft/uv_list.c:91 UvList segment 00000000001904
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240069127 112936 src/raft/uv_list.c:91 UvList segment 00000000001908
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240072961 112936 src/raft/uv_list.c:91 UvList segment 00000000001908
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240076785 112936 src/raft/uv_list.c:91 UvList segment 00000000001909
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240080028 112936 src/raft/uv_list.c:95 UvList ignore db.bin
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240082897 112936 src/raft/uv_list.c:95 UvList ignore db.bin-shm
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240085734 112936 src/raft/uv_list.c:95 UvList ignore db.bin-wal
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240092470 112936 src/raft/uv_list.c:95 UvList ignore dqlite-lock
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240094356 112936 src/raft/uv_list.c:68 UvList ignore metadata1
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240097955 112936 src/raft/uv_list.c:95 UvList ignore snapshot-1-189
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240113927 112936 src/raft/uv_list.c:80 UvList snapshot snapshot-1-189
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240117455 112936 src/raft/uv_list.c:95 UvList ignore snapshot-1-190
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135240131463 112936 src/raft/uv_list.c:80 UvList snapshot snapshot-1-190
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135243273754 112936 src/raft/uv.c:398 uvLoadSnapshotAndEntries most recent snapshot at 190464
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135243275683 112936 src/raft/uv.c:278 uvFilterSegments most recent closed segment is 00000000001909
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135243277774 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001822
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135272572152 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001830
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135281051578 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001832
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135310421006 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001840
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135318750168 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001843
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135348005968 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001851
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135356189761 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001853
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135385330227 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001861
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135394099696 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001863
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135423234881 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001871
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135431508884 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001873
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135460671042 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001881
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135469070363 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001884
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135498342904 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001892
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135507250590 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001894
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135536484246 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001902
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135544941230 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001904
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135559786990 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001908
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135560470203 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001908
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135562302398 112936 src/raft/uv_segment.c:908 uvSegmentLoadAll load segment 00000000001909
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135563149299 112936 src/raft/uv.c:509 uvLoad start index 182273, 8664 entries
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135563151021 112936 src/raft/start.c:168 raft_start current_term: 1 voted_for: 0 start_index: 182273 n_entries: 8664
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135563152492 112936 src/raft/start.c:174 raft_start restore snapshot with last index 190464 and last term 1
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135563154732 112936 src/fsm.c:463 fsm__restore fsm restore
Dec 06 19:55:35 homenuc lxd.daemon[112897]: 1765040135563157114 112936 src/fsm.c:196 decodeDatabase main_size:970752 wal_size:856992
Dec 06 19:55:35 homenuc lxd.daemon[112897]: lxd: src/fsm.c:205: decodeDatabase: Assertion `header.wal_size == 0' failed.
OK, thanks for the details. I'll refer this to @marco6 from the dqlite team to look into.
This happens when the dqlite version jumps from 1.16 (or earlier) to 1.18 without having had the chance to create a new snapshot. While skipping two versions is in general not supported, there is already a fix for this problem, as it is clearly unreasonable to ask the user to "wait long enough" on version 1.17. The fix, however, is not tested yet (that's why it didn't make it into 1.18.4). We are currently close to being able to write a proper test, but not there yet.
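Since the problematic state only persists until a new snapshot is written, it may be worth keeping a copy of the dqlite directory before retrying any refresh, so the state that reproduces the assertion isn't lost. This is only a suggestion, not an official recovery procedure; the path is taken from the listing above:

```
# Hedged suggestion: preserve the dqlite state that reproduces the assertion.
sudo snap stop lxd
sudo cp -a /var/snap/lxd/common/lxd/database /root/lxd-database-backup
sudo snap start lxd
```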
@ValdikSS did you upgrade from LXD 5.0 to 5.21 and then to 6.6?
I don't remember. I was using 5.21 since at least April 2024: https://github.com/canonical/lxd/issues/13326#issuecomment-2078013117
@marco6 it sounds like this could be affecting upgrades from 1.17 to 1.18 too.
I tested the upgrade from 5.21.4-9eb1368 36971 to 6.6-3c9aa6d 36883 and it worked fine, so it looks like there are additional timing or environmental factors at play.
It sounds like we need to wait for the next release of dqlite which has a fix for this known issue.
Confirmed also that 5.21 -> 6.6 using 6/candidate works OK too.
LXD with dqlite v1.18.4 (https://github.com/canonical/dqlite/releases/tag/v1.18.4) has been released to the 6/stable and latest/stable channels.
This issue depends on a fix from the dqlite team that will hopefully land in v1.18.5.
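For anyone hitting this later: once a snap build containing the fixed dqlite is published, picking it up should just be a normal refresh (generic form, not tied to a specific revision):

```
snap info lxd                                  # see what each channel currently points at
sudo snap refresh lxd --channel=latest/stable  # or 6/stable
snap list lxd                                  # confirm the active revision afterwards
```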