Quota uses the wrong unit (GiB displayed and parsed as GB)
Description of problem:
1) Reading the quota of a directory whose limit was set with a base-10 byte count, e.g. 10000000000 (necessary, since HDD capacities are always given in base 10, not base 2), displays base-2 results (GiB instead of GB). Powers of 2 make sense for memory, not for HDDs.
2) Creating a quota with "10GB" in fact creates one of 10 GiB (base 2).
This is a real concern: with many users it misleads the admin checking quotas, and it becomes an even bigger problem when quotas were set low and are regularly grown to keep pace with users filling their volumes.
The exact commands to reproduce the issue:
1)
gluster volume quota gv0 limit-usage /NIFF 10000000000
gluster volume quota gv0 list
2)
gluster volume quota gv0 limit-usage /NIFF 10GB
gluster volume quota gv0 list
The full output of the commands that failed:
1)
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/NIFF 9.3GB 80%(7.5GB) 0Bytes 9.3GB No No
(each of these three values is in fact GiB, not GB)
2)
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/NIFF 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
(each of these three values is in fact GiB, not GB)
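For reference, the arithmetic behind output 1, as a minimal Python sketch (illustrative only, not gluster's code):

limit = 10_000_000_000             # bytes, as passed to limit-usage
print(f"{limit / 10**9:.1f} GB")   # 10.0 GB  (base 10, what was asked for)
print(f"{limit / 2**30:.1f} GiB")  # 9.3 GiB  (base 2, what gluster prints, labeled "GB")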
Expected results (for both 1 and 2):
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/NIFF 10.0GB 80%(8.0GB) 0Bytes 10.0GB No No
with gluster's input notation corrected from GB to GiB (or supporting both; if people then want to spend most of their time converting quota figures in a spreadsheet, that is their problem after all).
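As an illustration of honest labeling, here is a hypothetical format_size helper (the name and signature are mine, purely a sketch, not gluster's API) that keeps input and output in the same base:

def format_size(n_bytes, binary=False):
    # Pick one base and label it honestly: 10**3 steps for GB, 2**10 steps for GiB.
    base, units = ((2**10, ["B", "KiB", "MiB", "GiB", "TiB"]) if binary
                   else (10**3, ["B", "KB", "MB", "GB", "TB"]))
    value = float(n_bytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f}{unit}"
        value /= base

print(format_size(10_000_000_000))               # 10.0GB (base 10, matches HDD sizing)
print(format_size(10_000_000_000, binary=True))  # 9.3GiB (base 2, correctly labeled)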
Mandatory info:
- The output of the gluster volume info command:
Volume Name: gv0
Type: Replicate
Volume ID: 52e316a4-61b2-499e-8091-b9434d2b836c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: t60:/DATA/GLUSTER/gv0
Brick2: t61:/DATA/GLUSTER/gv0
Options Reconfigured:
ssl.cipher-list: TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
auth.reject: 192.168.1.1
auth.allow: 192.168.1.*
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
features.trash: on
features.trash-dir: Corbeille
features.trash-max-filesize: 10485760
features.trash-internal-op: on
cluster.self-heal-daemon: enable
cluster.min-free-disk: 20%
server.allow-insecure: on
- The output of the gluster volume status command:
Status of volume: gv0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick t60:/DATA/GLUSTER/gv0 58893 0 Y 1231
Brick t61:/DATA/GLUSTER/gv0 52363 0 Y 1234
Self-heal Daemon on localhost N/A N/A Y 1322
Quota Daemon on localhost N/A N/A Y 46094
Self-heal Daemon on t61 N/A N/A Y 1314
Quota Daemon on t61 N/A N/A Y 4713
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
- The output of the gluster volume heal command:
gluster volume heal => returns: unrecognized word: gv0 (position 0)
gluster volume heal gv0 info summary => returns:
Brick t60:/DATA/GLUSTER/gv0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick t61:/DATA/GLUSTER/gv0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
- Provide logs present on the following locations of client and server nodes (/var/log/glusterfs/):
Irrelevant.
- Is there any crash? Provide the backtrace and coredump:
No, just the use of an inappropriate unit.
Additional info:
- The operating system / glusterfs version:
Debian bullseye (11.11) on ARM64 / glusterfs 10.1