
`lxc list` does not respect filtering when JSON or YAML output is being used

perlun opened this issue on May 12, 2022

Required information

  • Distribution: Debian GNU/Linux
  • Distribution version: 11
  • The output of "lxc info":
config:
  core.https_address: 127.0.0.1
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 127.0.0.1:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICCDCCAY2gAwIBAgIRAMWgBXs4ePN18bALIodKjPEwCgYIKoZIzj0EAwMwNTEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBhbGxh
    ZGluMB4XDTIxMDkwMzA1NTc1NloXDTMxMDkwMTA1NTc1NlowNTEcMBoGA1UEChMT
    bGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBhbGxhZGluMHYwEAYH
    KoZIzj0CAQYFK4EEACIDYgAEQEip/SgQXvi0cEjvvXrn8R9a7d2T16cnd5NcTq2Q
    U6tcIoU6XlLqWMwEaDgOFeaHOQE29P5+cSsbhTxIfQHQg9/Jg8H4xsTy0mwed0ug
    wzdLDB9FKeCFUN44NiSEdbb3o2EwXzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
    CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAqBgNVHREEIzAhggdhbGxhZGluhwR/
    AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2kAMGYCMQCLCIHI6XyJ
    jmk5/t002rSxOC8GIMbr6gsqlNKtGH9LX21njYtF5MQ+1nD8aLjMNA0CMQDQnSUU
    6FwaiTAVP4lPzZiTWfgAdgJlnXQxN/HPVt8IuELlXuxY4A+KG95+6w9IsJw=
    -----END CERTIFICATE-----
  certificate_fingerprint: 8ed1a33b3fb2d979ae9ed524a3752530bd334de17644bc0534b8dafc6340c246
  driver: lxc | qemu
  driver_version: 4.0.12 | 6.1.1
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "false"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.10.0-8-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "11"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: alladin
  server_pid: 669213
  server_version: "5.1"
  storage: dir
  storage_version: "1"
  storage_supported_drivers:
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.43.0
    remote: false
  - name: ceph
    version: 15.2.16
    remote: true
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: cephfs
    version: 15.2.16
    remote: true
  - name: dir
    version: "1"
    remote: false

Issue description

I just noticed that `lxc list` behaves differently depending on the selected output format. The IP address 192.168.97.58 used below does not belong to any container on my machine.

In table (default), CSV and compact modes, it behaves correctly:

$ lxc list ipv4=192.168.97.58 
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ lxc list ipv4=192.168.97.58 -f csv
$ lxc list ipv4=192.168.97.58 -f compact
  NAME  STATE  IPV4  IPV6  TYPE  SNAPSHOTS  

But if I switch to either JSON or YAML mode, the filter is ignored altogether. Note how the returned container has IP address 192.168.97.59, not 192.168.97.58.

$ lxc list ipv4=192.168.97.58 -f yaml
- architecture: x86_64
  config:
    image.architecture: amd64
    image.description: ubuntu 22.04 LTS amd64 (release) (20220506)
    image.label: release
    image.os: ubuntu
    image.release: jammy
    image.serial: "20220506"
    image.type: squashfs
    image.version: "22.04"
    volatile.base_image: dda8ea8622eab3df7f71b274a436ee972610efc85d66e0fee14bdb3b4d492072
    volatile.cloud-init.instance-id: 8e236150-1e86-4036-a3bd-d4b3974737be
    volatile.eth0.host_name: veth160350ef
    volatile.eth0.hwaddr: 00:16:3e:94:33:0b
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.power: RUNNING
    volatile.uuid: accd6a64-bb9b-4367-9642-08c22ca4a6ee
  devices: {}
  ephemeral: false
  profiles:
  - default
  stateful: false
  description: ""
  created_at: 2022-05-12T05:39:03.074486106Z
  expanded_config:
    image.architecture: amd64
    image.description: ubuntu 22.04 LTS amd64 (release) (20220506)
    image.label: release
    image.os: ubuntu
    image.release: jammy
    image.serial: "20220506"
    image.type: squashfs
    image.version: "22.04"
    volatile.base_image: dda8ea8622eab3df7f71b274a436ee972610efc85d66e0fee14bdb3b4d492072
    volatile.cloud-init.instance-id: 8e236150-1e86-4036-a3bd-d4b3974737be
    volatile.eth0.host_name: veth160350ef
    volatile.eth0.hwaddr: 00:16:3e:94:33:0b
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.power: RUNNING
    volatile.uuid: accd6a64-bb9b-4367-9642-08c22ca4a6ee
  expanded_devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: master-newt
  status: Running
  status_code: 103
  last_used_at: 2022-05-12T05:39:08.241626008Z
  location: none
  type: container
  project: default
  backups: []
  state:
    status: Running
    status_code: 103
    disk: {}
    memory:
      usage: 173330432
      usage_peak: 0
      swap_usage: 0
      swap_usage_peak: 0
    network:
      eth0:
        addresses:
        - family: inet
          address: 192.168.97.59
          netmask: "24"
          scope: global
        - family: inet6
          address: fd42:ca9f:e5d0:40b7:a0a4:76ff:fe9b:a2c2
          netmask: "64"
          scope: global
        - family: inet6
          address: fe80::a0a4:76ff:fe9b:a2c2
          netmask: "64"
          scope: link
        counters:
          bytes_received: 36742
          bytes_sent: 12546
          packets_received: 138
          packets_sent: 85
          errors_received: 0
          errors_sent: 0
          packets_dropped_outbound: 0
          packets_dropped_inbound: 0
        hwaddr: a2:a4:76:9b:a2:c2
        host_name: veth160350ef
        mtu: 1500
        state: up
        type: broadcast
      lo:
        addresses:
        - family: inet
          address: 127.0.0.1
          netmask: "8"
          scope: local
        - family: inet6
          address: ::1
          netmask: "128"
          scope: local
        counters:
          bytes_received: 1436
          bytes_sent: 1436
          packets_received: 16
          packets_sent: 16
          errors_received: 0
          errors_sent: 0
          packets_dropped_outbound: 0
          packets_dropped_inbound: 0
        hwaddr: ""
        host_name: ""
        mtu: 65536
        state: up
        type: loopback
    pid: 820862
    processes: 52
    cpu:
      usage: 11840020000
  snapshots: []
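
For completeness, JSON mode behaves the same way: the filter is ignored and the same instance comes back, just serialised as JSON (output omitted here, as it mirrors the YAML above):

$ lxc list ipv4=192.168.97.58 -f json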

perlun commented on May 12, 2022

Ah yeah, there are a few things we should be doing here:

  • Filtering should be possible and generally should be applied
  • Columns, however, don't make sense and should cause an error if used with yaml/json (see the sketch below)
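
For the second point, the intended behaviour would presumably look something like the following (a sketch: the flag combination is a real `lxc list` invocation, but the error message is illustrative, not current lxc output):

$ lxc list -f yaml -c ns
Error: Can't specify column layout with yaml format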

stgraber commented on May 12, 2022

Having the same trouble: filtering with --format yaml/json is not working now.

It worked a few days ago, but not now with latest/stable: 5.1-1f6f485 2022-05-12 from snap.

At the same time, `lxc list $filter -f csv` works.
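
Until this is fixed, one possible stopgap is to filter the JSON output client-side, assuming jq is installed (the jq expression is illustrative; it keeps instances that have the given address on any interface):

$ lxc list -f json \
    | jq '[.[] | select([.state.network[]?.addresses[]?.address] | index("192.168.97.58"))]'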

knutov commented on May 12, 2022

Any update on this issue? I can't deploy a new machine with snap on Ubuntu, as 5.0.1 has introduced the filtering issue that was not in 5.0.0 and has broken all my automation.

More than happy to help get a PR through to fix the issue.
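
In the meantime, automation that only needs instance names can stay on the csv path, which does respect filters (both flags are standard `lxc list` options):

$ lxc list ipv4=192.168.97.58 -f csv -c n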

scotty-c commented on Aug 25, 2022

@scotty-c we are intending to look at this ASAP. If you're able to have a go at fixing it, I would be happy to assign it to you.

tomponline commented on Aug 25, 2022

I will have some time next week to have a look at the issue.

scotty-c commented on Aug 26, 2022

@scotty-c how are you getting on with this? Thanks

tomponline commented on Oct 10, 2022

@stgraber happy for me to take this one?

tomponline commented on Oct 11, 2022

Yep!

stgraber commented on Oct 11, 2022

Looking at this now.

tomponline commented on Oct 14, 2022