k8s-csi-s3
Disk quota implementation
Description
Quota configuration is currently not supported by the driver; the size limit has to be implemented on your own.
https://github.com/yandex-cloud/k8s-csi-s3/issues/59
I tried to do this with Ceph's resource quotas, but ran into a pod mount that gets stuck; here is my example.
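In short, the Ceph-side workaround tried step by step below comes down to a few radosgw-admin calls (this is only a rough outline; the user and bucket names are the ones used in this report):
# set and enable quotas on the RGW user and its bucket
radosgw-admin quota set --quota-scope=user --uid=hengshi --max-objects=10 --max-size=1024
radosgw-admin quota enable --quota-scope=user --uid=hengshi
radosgw-admin quota set --quota-scope=bucket --uid=hengshi --max-objects=10 --max-size=1024
radosgw-admin quota enable --quota-scope=bucket --uid=hengshi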
BUG REPORT
Versions
kubeadm version (use kubeadm version): v1.23.4
Environment:
- Kubernetes version (use kubectl version): v1.23.4
- Cloud provider or hardware configuration: vsphere
- Ceph version (use ceph -v):
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
- OS (e.g. from /etc/os-release):
# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
- Kernel (e.g. uname -a):
# uname -a
Linux localhost.localdomain 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
1. Create a user
radosgw-admin user create --uid="hengshi" --display-name="hengshi User"
{
    "user_id": "hengshi",
    "display_name": "hengshi User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "hengshi",
            "access_key": "981YUOTAZG8BMUMUNMQ1",
            "secret_key": "QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": 1638400
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
2. Create a bucket for this user
s3cmd --access_key="981YUOTAZG8BMUMUNMQ1" --secret_key="QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv" mb s3://hengshibucket
Bucket 's3://hengshibucket/' created
3. Verify the buckets under this user
# radosgw-admin bucket list --uid=hengshi
[
    "hengshibucket"
]
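Before setting any quota it can also help to look at the bucket's current usage; a quick check, assuming the bucket created above:
# shows the usage counters (objects and bytes) that the quota will be checked against
radosgw-admin bucket stats --bucket=hengshibucket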
4. Set a user quota
radosgw-admin quota set --quota-scope=user --uid=hengshi --max-objects=10 --max-size=1024
5. Enable the user quota
radosgw-admin quota enable --quota-scope=user --uid=hengshi
6. Set a bucket quota
radosgw-admin quota set --uid=hengshi --quota-scope=bucket --max-objects=10 --max-size=1024
7. Enable the bucket quota
radosgw-admin quota enable --quota-scope=bucket --uid=hengshi
8. Verify the quota settings (e.g. radosgw-admin user info --uid=hengshi now shows both quotas enabled)
{
    "user_id": "hengshi",
    "display_name": "hengshi User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "hengshi",
            "access_key": "981YUOTAZG8BMUMUNMQ1",
            "secret_key": "QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 1024,
        "max_size_kb": 1,
        "max_objects": 10
    },
    "user_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 1024,
        "max_size_kb": 1,
        "max_objects": 10
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
9. Update quota statistics
# radosgw-admin user stats --uid=hengshi --sync-stats
{
    "stats": {
        "total_entries": 0,
        "total_bytes": 0,
        "total_bytes_rounded": 0
    },
    "last_stats_sync": "2024-03-07 09:07:15.438609Z",
    "last_stats_update": "2024-03-07 09:07:15.435407Z"
}
10. Test the quota by uploading an object
s3cmd --access_key="981YUOTAZG8BMUMUNMQ1" --secret_key="QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv" put kube-1.23.4-amd64.tar.gz s3://hengshibucket
upload: 'kube-1.23.4-amd64.tar.gz' -> 's3://hengshibucket/kube-1.23.4-amd64.tar.gz' [part 1 of 44, 15MB] [1 of 1]
1507328 of 15728640 9% in 0s 26.65 MB/s failed
1507328 of 15728640 9% in 0s 26.21 MB/s done
ERROR: Upload of 'kube-1.23.4-amd64.tar.gz' part 1 failed. Use
  /usr/bin/s3cmd abortmp s3://hengshibucket/kube-1.23.4-amd64.tar.gz 2~NdLSs4TWdctL60iPAkhM4Vzj8mR0e3r
to abort the upload, or
  /usr/bin/s3cmd --upload-id 2~NdLSs4TWdctL60iPAkhM4Vzj8mR0e3r put ...
to continue the upload.
ERROR: S3 error: 403 (QuotaExceeded)
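One thing worth noting: the parts of the failed multipart upload may keep consuming space against the quota until the upload is aborted. A cleanup sketch, using the upload id printed above:
# list unfinished multipart uploads in the bucket
s3cmd --access_key="981YUOTAZG8BMUMUNMQ1" --secret_key="QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv" multipart s3://hengshibucket
# abort the failed upload so its parts are discarded (this is the command s3cmd itself suggests)
s3cmd --access_key="981YUOTAZG8BMUMUNMQ1" --secret_key="QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv" abortmp s3://hengshibucket/kube-1.23.4-amd64.tar.gz 2~NdLSs4TWdctL60iPAkhM4Vzj8mR0e3r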
11. Try mounting this bucket from a pod to see whether the quota is enforced there. First, create the CSI secret:
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: csi-s3-secret
stringData:
  accessKeyID: 981YUOTAZG8BMUMUNMQ1
  secretAccessKey: QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv
  endpoint: http://10.102.26.11:8080
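Assuming the manifest above is saved as csi-s3-secret.yaml (the file name is just an assumption), it is applied and checked the usual way:
kubectl apply -f csi-s3-secret.yaml
kubectl -n kube-system get secret csi-s3-secret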
Create a PV and PVC that point at the existing bucket (volumeHandle: hengshibucket):
cat pvc-manual.yaml
# Statically provisioned PVC:
# An existing bucket or path inside bucket manually created
# by the administrator beforehand will be bound to the PVC,
# and it won't be removed when you remove the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manualbucket-with-path
spec:
  storageClassName: csi-s3
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: default
    name: csi-s3-manual-pvc
  csi:
    driver: ru.yandex.s3.csi
    controllerPublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodePublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodeStageSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    volumeAttributes:
      capacity: 1Gi
      mounter: geesefs
      options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
    volumeHandle: hengshibucket
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-manual-pvc
spec:
  # Empty storage class disables dynamic provisioning
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
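These two objects are then applied and the binding checked (pvc-manual.yaml is the file shown above):
kubectl apply -f pvc-manual.yaml
# the PV should become Bound to csi-s3-manual-pvc
kubectl get pv manualbucket-with-path
kubectl get pvc csi-s3-manual-pvc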
Create the test pod:
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html/s3
          name: webroot
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-manual-pvc
        readOnly: false
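The pod is created from the file above and, once it is Running, the bucket should show up as a mount inside it:
kubectl apply -f pod.yaml
kubectl get pod csi-s3-test-nginx
# open a shell in the pod to run the df/dd checks below
kubectl exec -it csi-s3-test-nginx -- bash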
The bucket mounts fine, but because the quota limit is in place the oversized write below just gets stuck, and no error is reported:
root@csi-s3-test-nginx:/# df -h
Filesystem               Size  Used Avail Use% Mounted on
overlay                  142G   17G  125G  12% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root  142G   17G  125G  12% /etc/hosts
shm                       64M     0   64M   0% /dev/shm
hengshibucket            1.0P     0  1.0P   0% /usr/share/nginx/html/s3
tmpfs                    5.9G   12K  5.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    3.9G     0  3.9G   0% /proc/acpi
tmpfs                    3.9G     0  3.9G   0% /proc/scsi
tmpfs                    3.9G     0  3.9G   0% /sys/firmware
root@csi-s3-test-nginx:/# dd if=/dev/zero of=/usr/share/nginx/html/s3/1.txt bs=1G count=1
^@^@
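For comparison (only a rough check, assuming the bucket is still effectively empty), a write that stays under the 1024-byte quota should still complete, while the 1 GiB write above hangs:
# tiny write that fits under the quota; conv=fsync forces the file to be flushed to the bucket
dd if=/dev/zero of=/usr/share/nginx/html/s3/small.bin bs=512 count=1 conv=fsync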
Looking for error information in the csi-s3 pod logs:
# kubectl logs csi-s3-fpmnm -n kube-system -c driver-registrar
I0308 01:29:21.149155 1 main.go:110] Version: v1.2.0-0-g6ef000ae
I0308 01:29:21.149202 1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0308 01:29:21.149226 1 connection.go:151] Connecting to unix:///csi/csi.sock
I0308 01:29:22.149892 1 main.go:127] Calling CSI driver to discover driver name
I0308 01:29:22.151135 1 main.go:137] CSI driver name: "ru.yandex.s3.csi"
I0308 01:29:22.151214 1 node_register.go:58] Starting Registration Server at: /registration/ru.yandex.s3.csi-reg.sock
I0308 01:29:22.151420 1 node_register.go:67] Registration Server started at: /registration/ru.yandex.s3.csi-reg.sock
I0308 01:29:23.985943 1 main.go:77] Received GetInfo call: &InfoRequest{}
I0308 01:29:24.004019 1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
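The driver-registrar container only logs registration events; any mount errors would rather show up in the csi-s3 container of the same pod and in the geesefs process on the node. The container name below is taken from the stock DaemonSet and may differ in your deployment:
kubectl logs csi-s3-fpmnm -n kube-system -c csi-s3
# on the node where the test pod runs, the FUSE mount is served by a geesefs process
ps aux | grep [g]eesefs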
I created the PVC, but on the Ceph side nothing seems to appear in the bucket for it:
[root@ceph-1 ~]# s3cmd --access_key="981YUOTAZG8BMUMUNMQ1" --secret_key="QKdZWnXzb36M4fVA8slJnO6aSmb5csFm2DjcJasv" ls s3://hengshibucket
Is there something wrong in the output above?
Hi! What's your question? Are you asking why the write operation hangs when it's over quota? GeeseFS retries most upload errors (including quota errors) infinitely to give a chance to increase the quota, recover the mount and not lose any changes.
Yes. Write operations hang when they exceed the quota.
It's OK, it's the expected behaviour
I would suggest surfacing an error message instead of hanging here.