azurefile-csi-driver
can't use my own storage account which is in another resource group
What happened:
I want to use my own storage account, which is in another resource group, to dynamically create PVs, and I found some parameters here. The following are my YAML files.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-myown
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  resourceGroup: RG-Name
  storageAccount: accountname
  shareName: sharename
  storeAccountKey: "true"
  secretName: azurefile-myown-secret
  secretNamespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict   # https://linux.die.net/man/8/mount.cifs
  - nosharesock    # reduce probability of reconnect race
  - actimeo=30     # reduce latency for metadata-heavy workload
---
apiVersion: v1
kind: Secret
metadata:
  name: azurefile-myown-secret
type: Opaque
data:
  azurestorageaccountname: somebase64accountname
  azurestorageaccountkey: somebase64accountkey
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azurefile-myown
  volumeMode: Filesystem
```
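(Aside: rather than base64-encoding the secret values by hand, the same secret can be created with `kubectl`, which encodes the literals automatically; the account name and key below are placeholders:)

```bash
# Creates the same secret as the manifest above; kubectl base64-encodes the values itself.
kubectl create secret generic azurefile-myown-secret \
  --namespace default \
  --from-literal=azurestorageaccountname=accountname \
  --from-literal=azurestorageaccountkey='<storage-account-key>'
```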
With these YAML files applied, when I run `kubectl describe pvc test-pvc` to check my PVC, the following is shown:
```
Warning  ProvisioningFailed  10s (x3 over 13s)  file.csi.azure.com_csi-azurefile-controller-xxxxxxxxxx-xxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  failed to provision volume with StorageClass "azurefile-myown": rpc error: code = Internal desc = storage.FileSharesClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.Storage/storageAccounts/fileServices/shares/read' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/RG-Name/providers/Microsoft.Storage/storageAccounts/accountname/fileServices/default/shares/sharename' or the scope is invalid. If access was recently granted, please refresh your credentials."
```
I also found that this project provides another way to do what I want: link here. I tried that with the following YAML files.
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-myown
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/provisioner-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict   # https://linux.die.net/man/8/mount.cifs
  - nosharesock    # reduce probability of reconnect race
  - actimeo=30     # reduce latency for metadata-heavy workload
---
apiVersion: v1
kind: Secret
metadata:
  name: azurefile-myown-secret
type: Opaque
data:
  azurestorageaccountname: somebase64accountname
  azurestorageaccountkey: somebase64accountkey
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azurefile-myown
  volumeMode: Filesystem
```
But it also failed, with the following error:
```
Warning  ProvisioningFailed  12s  file.csi.azure.com_csi-azurefile-controller-xxxxxxxxxx-xxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  failed to provision volume with StorageClass "azurefile-myown": rpc error: code = Internal desc = storage: service returned error: StatusCode=403, ErrorCode=403 This request is not authorized to perform this operation., ErrorMessage=no response body was available for error status code, RequestInitiated=Tue, 12 Apr 2022 01:23:45 GMT, RequestId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, API Version=, QueryParameterName=, QueryParameterValue=
```
What you expected to happen:
To use my own storage account, which is in another resource group, to dynamically create PVs. I don't have permission to change IAM.
How to reproduce it:
Create a resource group named RG-Name. Create a storage account named accountname in resource group RG-Name. Create a file share named sharename in storage account accountname. Apply the YAML files that I provided.
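(Roughly, the same setup with the Azure CLI; a sketch using the placeholder names from this issue, with the location and SKU assumed from the details below:)

```bash
# Resource group, storage account, and file share matching the placeholder names above.
az group create --name RG-Name --location chinaeast2
az storage account create --name accountname --resource-group RG-Name --sku Standard_LRS
az storage share-rm create --resource-group RG-Name --storage-account accountname --name sharename --quota 5
```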
Anything else we need to know?: Maybe I am using it incorrectly, but my Azure MVP couldn't provide a resolution, so I am asking here.
Environment:
- CSI Driver version: I don't know how to find it. I just upgraded my AKS cluster to 1.21, and the driver was installed automatically.
- Kubernetes version (use `kubectl version`): v1.21.9
- OS (e.g. from /etc/os-release): based on image AKSUbuntu-1804gen2containerd-2022.03.21
- Kernel (e.g. `uname -a`): based on image AKSUbuntu-1804gen2containerd-2022.03.21
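(One way to check the installed driver version, assuming the controller deployment carries the usual `app=csi-azurefile-controller` label, is to list the controller pod images; the image tag is the driver version:)

```bash
# Prints the container images of the CSI controller pods; the azurefile image tag is the driver version.
kubectl get pods -n kube-system -l app=csi-azurefile-controller \
  -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}'
```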
For the first usage, you should grant the cluster identity permission on that resource group; follow the example guide here: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/install-driver-on-aks.md#option2-enable-csi-driver-on-existing-cluster-with-version--121
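(For illustration, the grant would look roughly like this; the object ID and subscription ID are placeholders, and the exact role should follow the linked guide:)

```bash
# Grant the cluster identity access to the storage account's resource group.
az role assignment create \
  --assignee '<cluster-identity-object-id>' \
  --role 'Contributor' \
  --scope '/subscriptions/<subscription-id>/resourceGroups/RG-Name'
```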
For the second usage, is your storage account Premium? If yes, the minimum share size should be 100GB.
For the first usage, I can't change IAM, so maybe I can't use it anymore. For the second usage, my storage account is Standard LRS with LFS on.
@OP-Kobayashi what is LFS? Could you provide the controller pod logs if it's not on AKS? Follow: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/csi-debug.md#case1-volume-createdelete-issue
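(Per that doc, the controller logs can be collected roughly like this; the pod name is a placeholder:)

```bash
# Find the controller pod, then dump the azurefile container's logs.
kubectl get pods -n kube-system -l app=csi-azurefile-controller
kubectl logs csi-azurefile-controller-xxxxxxxxxx-xxxxx -c azurefile -n kube-system > csi-azurefile-controller.log
```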
I do use an AKS cluster. LFS means large file shares.
Does the original azurefile storage class work on your cluster? I'm not sure whether you have provided the correct account name and key in the secret.
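(One way to double-check, assuming the secret name and namespace used above: decode the stored values and compare them with the portal:)

```bash
# Decode the account name and key stored in the secret to verify them.
kubectl get secret azurefile-myown-secret -n default -o jsonpath='{.data.azurestorageaccountname}' | base64 -d; echo
kubectl get secret azurefile-myown-secret -n default -o jsonpath='{.data.azurestorageaccountkey}' | base64 -d; echo
```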
I tried the same account name and key with another usage here, and it works.
I want to use the storage class method. Could you help me?
@OP-Kobayashi you need to provide the API server address; otherwise it's quite hard to diagnose. I would suggest creating an Azure support ticket.
API address? Do you mean the endpoint?
The AKS API server address; it's better to file an Azure support ticket.
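(For reference, the AKS API server address can be read from the cluster resource; the resource group and cluster name are placeholders:)

```bash
# Prints the FQDN of the AKS API server.
az aks show --resource-group <aks-resource-group> --name <cluster-name> --query fqdn -o tsv
```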
Oh, since you are in Azure China, try adding `storageEndpointSuffix: "core.chinacloudapi.cn"` in parameters, e.g.
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-myown
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/provisioner-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  storageEndpointSuffix: "core.chinacloudapi.cn"
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict   # https://linux.die.net/man/8/mount.cifs
  - nosharesock    # reduce probability of reconnect race
  - actimeo=30     # reduce latency for metadata-heavy workload
```
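(A quick way to confirm the account's file endpoint suffix, assuming the placeholder names from this issue:)

```bash
# Should print something like https://accountname.file.core.chinacloudapi.cn/ for Azure China.
az storage account show --name accountname --resource-group RG-Name --query primaryEndpoints.file -o tsv
```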
It does not work; the following error is shown:
```
Warning  ProvisioningFailed  18s  file.csi.azure.com_csi-azurefile-controller-57fc787986-gknch_xxxxxxxxxx-xxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  failed to provision volume with StorageClass "azurefile-myown": rpc error: code = Internal desc = storage: service returned error: StatusCode=403, ErrorCode=403 This request is not authorized to perform this operation., ErrorMessage=no response body was available for error status code, RequestInitiated=Thu, 14 Apr 2022 09:19:39 GMT, RequestId=xxxxxxxxxx-xxxxx_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, API Version=, QueryParameterName=, QueryParameterValue=
```
I have tried the following options in many combinations, but none of them worked.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-myown
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  #resourceGroup: RG-Name
  #storageAccount: accountname
  #shareName: sharename
  #location: china-east2
  #server: accountname.file.core.chinacloudapi.cn
  storageEndpointSuffix: "core.chinacloudapi.cn"
  #storeAccountKey: "true"
  #secretName: azurefile-myown-secret
  #secretNamespace: default
  csi.storage.k8s.io/provisioner-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: azurefile-myown-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict   # https://linux.die.net/man/8/mount.cifs
  - nosharesock    # reduce probability of reconnect race
  - actimeo=30     # reduce latency for metadata-heavy workload
```
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.