freenas-provisioner
Invalid character 'E' looking for beginning of value
If I set the 'host' value in secret.yml to the base64 encoding of either an IP address or a DNS hostname, I get this error in the logs:
kubectl -n kube-system logs -f freenas-nfs-provisioner-5559b967df-js7sb
I0114 21:55:22.266603 1 controller.go:926] provision "default/freenas-test-pvc" class "freenas-nfs": started
I0114 21:55:22.271765 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"freenas-test-pvc", UID:"a2d0a669-1846-11e9-b420-000c29289e1c", APIVersion:"v1", ResourceVersion:"10976528", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/freenas-test-pvc"
W0114 21:55:22.310721 1 dataset.go:60] invalid character 'E' looking for beginning of value
W0114 21:55:22.310772 1 controller.go:685] Retrying syncing claim "default/freenas-test-pvc" because failures 14 < threshold 15
E0114 21:55:22.310798 1 controller.go:700] error syncing claim "default/freenas-test-pvc": failed to provision volume with StorageClass "freenas-nfs": invalid character 'E' looking for beginning of value
I0114 21:55:22.310998 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"freenas-test-pvc", UID:"a2d0a669-1846-11e9-b420-000c29289e1c", APIVersion:"v1", ResourceVersion:"10976528", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-nfs": invalid character 'E' looking for beginning of value
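For reference, that error string is the generic Go encoding/json message produced when the body being decoded is not JSON at all (for example a plain-text or HTML error page that happens to start with 'E'), which suggests the FreeNAS API returned something other than JSON. A quick way to see the raw response is to hit the same kind of endpoint the provisioner uses; the host, password and pool below are placeholders, and the scheme/port should match what is in the secret:
# placeholders: substitute your FreeNAS host, root password and parent dataset
curl -v -u 'root:<password>' "http://<freenas-host>/api/v1.0/storage/dataset/<datasetParentName>/"
# a JSON body means the endpoint is healthy; an HTML/text error page or a 401
# is what would end up in the provisioner's JSON decoder and trigger this error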
What version are you running and can you send the secret (minus creds of course)?
Kubernetes v1.12.1 FreeNAS-9.10.2.U1
---
apiVersion: v1
kind: Secret
metadata:
  name: freenas-nfs
  namespace: kube-system
type: Opaque
data:
  # all values should be base64 encoded (ie: echo -n 'https' | base64)
  #
  # note that all values are relative to where the provisioner is running
  # ie: if running the provisioner directly on the FreeNAS server 'localhost'
  # would be a valid 'host' value
  # http|https
  # default: http
  # default: localhost (for running the provisioner out of cluster directly on FreeNAS node)
  host: MTkyLjE2OC4wLjIxMQ==
  # default: 80
  port: NDQz
  # default: root (api is only available for root currently)
  #username: root
  password: *redacted*
  # allow for self-signed/untrusted certs if using https
  # true|false
  # default: false
  allowInsecure: dHJ1ZQ==
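As a sanity check, the base64 values in a secret like this can be decoded back to confirm they encode what was intended; for the non-redacted values above:
echo -n 'MTkyLjE2OC4wLjIxMQ==' | base64 --decode   # 192.168.0.211
echo -n 'NDQz' | base64 --decode                   # 443
echo -n 'dHJ1ZQ==' | base64 --decode               # true
# the (redacted) password value can be checked the same way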
Can you send the version of the controller as well?
Hi,
I'm not sure what you mean by controller version; I cloned the repo yesterday and applied the configuration as stated in the README. Where would I find the controller version you need? Thanks
That's enough info actually. The controller version is in the deployment manifest. Thanks!
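For anyone else hitting this: the image tag can also be read off the running deployment; a sketch, assuming the deployment is named like the pod in the log above (adjust namespace/name to your setup):
kubectl -n kube-system get deployment freenas-nfs-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].image}'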
@asc-adean looking at the code this appears to be a response to an API call from FreeNAS, but I'm not certain yet. Did you use the default class.yaml as well? If not, can you send over what you used?
Also, what's the name of your pool in FreeNAS? Specifically, is it tank?
@asc-adean any luck?
I just upgraded the deployment image from 2.3 to 2.4 and now I am having this issue as well. I've modified the two storage classes I have to match the newer class.yaml. I'm now running FreeNAS 11.2U3 (I had this issue on 11.2U2 as well).
I0328 15:31:43.268383 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"testspace", Name:"newtest", UID:"080c2369-516e-11e9-9b2b-a01e0b0bb9d5", APIVersion:"v1", ResourceVersion:"33798872", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-test": EOF
One thing to note: the new class.yaml has mountOptions as an empty map, but kubectl apply complains unless it's an empty list, so I changed it to mountOptions: []
error: error validating "freenas-test.yaml": error validating data: ValidationError(StorageClass.mountOptions): invalid type for io.k8s.api.storage.v1.StorageClass.mountOptions: got "map", expected "array"; if you choose to ignore these errors, turn validation off with --validate=false
Let me know if I can provide anything to help out. -g
Can you share the storage class used?
According to https://godoc.org/k8s.io/api/storage/v1, mountOptions is of type []string, so indeed, it will fail if you use a map instead...
So I just realized that in class.yaml it should be declared as an empty slice ([]) and not a dict ({}):
https://github.com/nmaupu/freenas-provisioner/blob/master/deploy/class.yaml#L11
Sure, here is one. I have tried various configuration changes, including commenting out the mount options and using an empty list []; currently I'm working on the assumption that it wants something in that field, so I added nolock. I have also tried a new secret file, although the old one had been working fine for months with the older 2.3 image.
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-test
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: freenas.org/nfs
allowVolumeExpansion: true
reclaimPolicy: Delete
mountOptions:
  - nolock
parameters:
  # namespace of the secret which contains FreeNAS server connection details
  # default: kube-system
  #serverSecretNamespace:
  # name of the secret which contains FreeNAS server connection details
  # default: freenas-nfs
  #serverSecretName:
  # the name of the parent dataset (or simply pool) where all resources will
  # be created; it *must* exist before the provisioner will work
  # example: tank/k8s/mycluster
  # default: tank
  datasetParentName: Hitachi/test
  # whether to enforce quotas for each dataset
  # if enabled each newly provisioned dataset will set the appropriate quota
  # per the PVC
  # default: true
  #datasetEnableQuotas:
  # whether to reserve space when the dataset is created
  # if enabled each newly provisioned dataset will set the appropriate value
  # per the PVC
  # default: true
  #datasetEnableReservation:
  # if enabled, the provisioner will create parent datasets for each namespace
  # otherwise, all datasets will be provisioned in a flat manner
  # default: true
  #datasetEnableNamespaces:
  # if datasetEnableNamespaces is enabled, sets a per-namespace quota
  # example: 5M | 10G | 1T (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0 (no quota)
  #datasetNamespaceQuota:
  # if datasetEnableNamespaces is enabled, sets a per-namespace reservation
  # this should not be greater than the datasetNamespaceQuota
  # example: 5M | 10G | 1T (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0
  #datasetNamespaceReservation:
  # if enabled, created datasets will adhere to a reliable pattern
  # if datasetNamespaces == true dataset pattern is: <datasetParentName>/<namespace>/<PVC Name>
  # if datasetNamespaces == false dataset pattern is: <datasetParentName>/<namespace>-<PVC Name>
  # if disabled, datasets will be created with a name pvc-<uid> (the name of the provisioned PV)
  # default: true
  #datasetDeterministicNames:
  # if enabled and datasetDeterministicNames is enabled then datasets that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during the deletion/reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #datasetRetainPreExisting:
  # the following parameters determine permissions and ownership of the
  # dataset mount directory (on FreeNAS) immediately upon creation
  # default: 0777, root, wheel
  #datasetPermissionsMode:
  #datasetPermissionsUser:
  #datasetPermissionsGroup:
  # this determines what the 'server' property of the NFS share will be in
  # kubernetes; its purpose is to provide flexibility between the control
  # and data planes of FreeNAS
  # default: uses the 'host' value from the secret
  #shareHost:
  # determines if newly created NFS shares will have the 'All Directories'
  # option checked
  # default: true
  #shareAlldirs:
  # Authorized hosts/networks (space-separated) allowed to access the shares
  # default: ""
  #shareAllowedHosts:
  #shareAllowedNetworks:
  # Determines root mapping
  # cannot be used simultaneously with shareMapAll{User,Group}
  # default: root:wheel
  #shareMaprootUser:
  #shareMaprootGroup:
  # Determines user mapping for all access (not recommended)
  # cannot be used simultaneously with shareMaproot{User,Group}
  # default: ""
  #shareMapallUser:
  #shareMapallGroup:
  # if enabled and datasetDeterministicNames is enabled then shares that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during the deletion/reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #shareRetainPreExisting:
Any thoughts on how I can debug this?
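One thing worth ruling out first, since datasetParentName must already exist: check the parent dataset directly on the FreeNAS box (this assumes shell access to FreeNAS):
# run on the FreeNAS host; confirms the parent dataset named in the storage class exists
zfs list -o name,used,mountpoint Hitachi/test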
@totallyGreg are you running (or can you run) the FreeNAS API over http? If so, we can use tcpdump to observe the traffic.
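A minimal capture along those lines might look like the following; the interface name and host are placeholders, and it can be run on the node where the provisioner pod is scheduled or on the FreeNAS side:
tcpdump -A -s0 -i eth0 host <freenas-host> and tcp port 80
# -A prints the payload as ASCII, enough to see the request path,
# the Authorization header and the status line of the reply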
Weirdly, FreeNAS is responding with 401 Unauthorized. The secret and the credentials therein are the same as what I set up last fall.
I am also able to hit the endpoint manually in my browser using those credentials.
(Edited to fix the weird line breaks that prevented the relevant parts from showing.)
16:37:38.300058 IP minix2.home.<domain>.us.46686 > freenasty.home.<domain>.us.http: Flags [P.], seq 2436945427:2436945617, ack 2470212457, win 229, options [nop,nop,TS val 3551132271 ecr 4189113551], length 190: HTTP: GET /api/v1.0/storage/dataset/Hitachi/test/ HTTP/1.1
E.....@.?.t.
..+
....^.P.@...<si...........
...o....GET /api/v1.0/storage/dataset/Hitachi/test/ HTTP/1.1
Host: freenasty.home.<domain>.us:80
User-Agent: Go-http-client/1.1
Authorization: Basic ####################=
Accept-Encoding: gzip
16:37:38.321268 IP freenasty.home.<domain>.us.http > minix2.home.<domain>.us.46686: Flags [P.],
seq 1:281, ack 190, win 1026, options [nop,nop,TS val 4189113572 ecr 3551132271], length 280: HTTP:
HTTP/1.1 401 Unauthorized
E..L..@.@.#w
...
..+.P.^.<si.@.............
.......oHTTP/1.1 401 Unauthorized
Server: nginx
Date: Thu, 04 Apr 2019 20:37:38 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
WWW-Authenticate: Basic Realm="django-tastypie"
Vary: Accept-Language, Cookie
Content-Language: en
I could be wrong, but it looks like it's sending the base64-encoded password from the secret instead of the actual password, and that's why FreeNAS is responding with 401 Unauthorized.
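If that hypothesis is right, it should be verifiable from the capture: HTTP basic auth is just base64 of user:password, so the expected header can be rebuilt by hand and compared against the Authorization line above (the password value here is a placeholder):
# header the provisioner should send, built from the decoded secret value:
echo -n "root:$(echo -n '<base64-password-from-secret>' | base64 --decode)" | base64
# header it would send if it skipped decoding the secret:
echo -n 'root:<base64-password-from-secret>' | base64
# whichever matches the "Authorization: Basic ..." line in the tcpdump is what's on the wire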
@totallyGreg interesting, what's the kubernetes version/environment you're running?
1.13.4 as deployed by Rancher's RKE
k version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-26T00:04:52Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
I'm trying to figure out how to recompile the Go executable with debug info so I can attach Squash to it and see exactly what's happening, or simply add a few more informative error statements somewhere.
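For what it's worth, the usual way to get a debugger-friendly binary out of the Go toolchain is to disable optimizations and inlining; a sketch, assuming a build from the repo root:
go build -gcflags "all=-N -l" -o freenas-provisioner .
# -N disables optimizations and -l disables inlining, so delve/squash can map
# symbols back to source lines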
@totallyGreg did you end up discovering anything? I also use rke, so our setups are pretty close. I haven't updated to the latest build yet, but I have some additional tweaks to make to the provisioner soon and I'll keep an eye out for this.
No, unfortunately I haven't had a chance to work on it for the last few weeks, but I am motivated to figure it out since I can't add any more storage to my cluster (automatically) until I do. I still don't understand where the EOF is coming from, and my efforts to set up a Go debug environment that could actually read the debug information weren't very successful.
Is there any solution to this problem?
I haven't looked into it for a long time, but I may get a chance. I've actually started work on a CSI driver to replace this project longer term, but that's probably a ways out before it's ready.
If you find something, let me know, or if you can share your config files etc. that would be helpful.
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: freenas.org/nfs
allowVolumeExpansion: true
reclaimPolicy: Delete
mountOptions: []
parameters:
  # namespace of the secret which contains FreeNAS server connection details
  # default: kube-system
  #serverSecretNamespace:
  # name of the secret which contains FreeNAS server connection details
  # default: freenas-nfs
  #serverSecretName:
  # the name of the parent dataset (or simply pool) where all resources will
  # be created; it *must* exist before the provisioner will work
  # example: tank/k8s/mycluster
  # default: tank
  # mnt/vl0
  datasetParentName: vl0/k8s_storage
  # whether to enforce quotas for each dataset
  # if enabled each newly provisioned dataset will set the appropriate quota
  # per the PVC
  # default: true
  #datasetEnableQuotas:
  # whether to reserve space when the dataset is created
  # if enabled each newly provisioned dataset will set the appropriate value
  # per the PVC
  # default: true
  #datasetEnableReservation:
  # if enabled, the provisioner will create parent datasets for each namespace
  # otherwise, all datasets will be provisioned in a flat manner
  # default: true
  #datasetEnableNamespaces:
  # if datasetEnableNamespaces is enabled, sets a per-namespace quota
  # example: 5M | 10G | 1T (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0 (no quota)
  #datasetNamespaceQuota:
  # if datasetEnableNamespaces is enabled, sets a per-namespace reservation
  # this should not be greater than the datasetNamespaceQuota
  # example: 5M | 10G | 1T (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0
  #datasetNamespaceReservation:
  # if enabled, created datasets will adhere to a reliable pattern
  # if datasetNamespaces == true dataset pattern is: <datasetParentName>/<namespace>/<PVC Name>
  # if datasetNamespaces == false dataset pattern is: <datasetParentName>/<namespace>-<PVC Name>
  # if disabled, datasets will be created with a name pvc-<uid> (the name of the provisioned PV)
  # default: true
  #datasetDeterministicNames:
  # if enabled and datasetDeterministicNames is enabled then datasets that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during the deletion/reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #datasetRetainPreExisting:
  # the following parameters determine permissions and ownership of the
  # dataset mount directory (on FreeNAS) immediately upon creation
  # default: 0777, root, wheel
  #datasetPermissionsMode:
  #datasetPermissionsUser:
  #datasetPermissionsGroup:
  # this determines what the 'server' property of the NFS share will be in
  # kubernetes; its purpose is to provide flexibility between the control
  # and data planes of FreeNAS
  # default: uses the 'host' value from the secret
  #shareHost:
  # determines if newly created NFS shares will have the 'All Directories'
  # option checked
  # default: true
  #shareAlldirs:
  # Authorized hosts/networks (space-separated) allowed to access the shares
  # default: ""
  #shareAllowedHosts:
  #shareAllowedNetworks:
  # Determines root mapping
  # cannot be used simultaneously with shareMapAll{User,Group}
  # default: root:wheel
  #shareMaprootUser:
  #shareMaprootGroup:
  # Determines user mapping for all access (not recommended)
  # cannot be used simultaneously with shareMaproot{User,Group}
  # default: ""
  #shareMapallUser:
  #shareMapallGroup:
  # if enabled and datasetDeterministicNames is enabled then shares that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during the deletion/reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #shareRetainPreExisting:
@lmorningstar116 can you send your secrets file over as well (redact the password) and give me some details about the version of FreeNAS you have running? Just spew out as much information about the env as you can, including maybe what you can share of zpool status etc.
I'll see what I can figure out..
@totallyGreg and @lmorningstar116 can you attempt to curl the api directly and see what you get?
Something like:
curl -v -u 'root:password' "http://<host>/api/v1.0/storage/dataset/Hitachi/test/"
curl -v -u 'root:password' "http://<host>/api/v1.0/storage/dataset/vl0/k8s_storage/"
Also, if possible, can both of you share whether your root password has special characters and what they are? Not sure if some weird escaping thing is going on with the Go http client...
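One way to check is to let curl build the header and inspect what it sends; the endpoint and password below are placeholders (same API path as the curl commands above):
curl -sv -u 'root:<password-with-special-chars>' "http://<host>/api/v1.0/storage/dataset/tank/" 2>&1 | grep -i 'authorization'
# comparing this against the header captured from the provisioner shows whether
# the special characters survive intact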
Any update on this? I'm running into the same problem.
Can you send the output of the appropriate curl command above for your setup?
I'm 75% done writing a CSI driver at this point too, which will support resizing/snapshots/etc. It should be another couple of weeks, assuming I have some time, before it's ready for some beta testing.
Cool! Although I'm using Rancher and I don't know if I have the same API; I'm going to assume I need the bare-metal k8s output?
Thanks for the fast response btw.
Nah, this has nothing to do with k8s; it goes directly to your FreeNAS install (you can run the command from your workstation).
Rancher should work fine generally. All my clusters are rke.
Can't seem to log in for some reason. I'm going to guess that the "datasetname" is "mnt/{datasetname}"?
Yeah, send over maybe what you have in the config map for parent and I'll send the exact command...