Unable to load package list: Cannot read property 'includes' of null
After updating to 4.0.4 (with that latest docker build), I'm getting this error on our Kubernetes-hosted installation after logging in:

What info can I provide to help debug this? I'm a little lost as to where to even begin looking...
Is it this line? https://github.com/bufferoverflow/verdaccio-gitlab/blob/da9aba8ec88c2be4565057552d7913d3c9881de4/src/gitlab.js#L162
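For context, here's a minimal sketch (my own illustration with assumed shapes, not the actual verdaccio-gitlab code) of how a null group list would produce exactly that message, and the kind of guard that avoids it:

```ts
// Hypothetical illustration only (assumed shapes, not the code in src/gitlab.js).
// If the group list coming back from GitLab is ever null, calling .includes() on it
// directly throws "Cannot read property 'includes' of null" at runtime.
type RemoteUser = { name: string; groups: string[] | null };

function canAccess(user: RemoteUser, requiredGroup: string): boolean {
  // Without the `?? []` fallback, this line is exactly where such an error would surface.
  return (user.groups ?? []).includes(requiredGroup);
}

console.log(canAccess({ name: "steven", groups: null }, "my-group")); // false, no crash
```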
That looks weird, I'm checking
I have been doing some refactoring for v4 today and also tried running the whole gitlab-verdaccio setup on localhost, both as standalone processes and as docker containers. No problems; I wasn't able to reproduce your issue :-/
You'll have to give us more details about your setup:
- docker container versions for verdaccio and/or gitlab
- running as helm chart?
- configuration details
- ...
We're running it in Kubernetes on GCP, using the overrides below:
image:
  repository: bufferoverflow/verdaccio-gitlab
  tag: latest
  pullPolicy: IfNotPresent
service:
  annotations: {}
  clusterIP: ""
  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  port: 4873
  type: ClusterIP
  # nodePort: 31873
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
podAnnotations: {}
replicaCount: 1
resources:
  limits:
    cpu: 100m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 512Mi
ingress:
  enabled: true
  hosts:
    - [redacted]
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
    certmanager.k8s.io/acme-challenge-type: "dns01"
    certmanager.k8s.io/acme-dns01-provider: "google-clouddns-provider"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.allow-http: "false"
  tls:
    - secretName: [redacted]
      hosts:
        - [redacted]
configMap: |
  # This is the config file used for the docker images.
  # It allows all users to do anything, so don't use it on production systems.
  #
  # Do not configure host and port under `listen` in this file
  # as it will be ignored when using docker.
  # see https://github.com/verdaccio/verdaccio/blob/master/docs/docker.md#docker-and-custom-port-configuration
  #
  # Look here for more config file examples:
  # https://github.com/verdaccio/verdaccio/tree/master/conf
  #
  # path to a directory with all packages
  storage: /verdaccio/storage/data
  plugins: /verdaccio/plugins
  max_body_size: 50mb
  web:
    # WebUI is enabled as default, if you want disable it, just uncomment this line
    # enable: false
    title: Verdaccio
  auth:
    gitlab:
      url: https://gitlab.com
      authCache:
        enabled: true
        ttl: 300
      publish: $developer
  # auth:
  #   htpasswd:
  #     file: /verdaccio/storage/htpasswd
  #     # Maximum amount of users allowed to register, defaults to "+infinity".
  #     # You can set this to -1 to disable registration.
  #     #max_users: 1000
  # a list of other known repositories we can talk to
  uplinks:
    npmjs:
      url: https://registry.npmjs.org/
  packages:
    '@*/*':
      # scoped packages
      access: $authenticated
      publish: $authenticated
      proxy: npmjs
      gitlab: true
    '**':
      access: $all
      publish: $authenticated
      proxy: npmjs
      gitlab: true
  # packages:
  #   '@*/*':
  #     # scoped packages
  #     access: $all
  #     publish: $authenticated
  #     proxy: npmjs
  #   '**':
  #     # allow all users (including non-authenticated users) to read and
  #     # publish all packages
  #     #
  #     # you can specify usernames/groupnames (depending on your auth plugin)
  #     # and three keywords: "$all", "$anonymous", "$authenticated"
  #     access: $all
  #     # allow all known users to publish packages
  #     # (anyone can register by default, remember?)
  #     publish: $authenticated
  #     # if package is not available locally, proxy requests to 'npmjs' registry
  #     proxy: npmjs
  # To use `npm audit` uncomment the following section
  middlewares:
    audit:
      enabled: true
  # log settings
  logs:
    - {type: stdout, format: pretty, level: http}
    #- {type: file, path: verdaccio.log, level: info}
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:
  ## Verdaccio data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 200Gi
volumes:
# - name: nothing
#   emptyDir: {}
mounts:
# - mountPath: /var/nothing
#   name: nothing
#   readOnly: true
securityContext:
  enabled: true
  runAsUser: 100
  fsGroup: 101
Is there any other info I can provide to help?
I am facing the exact same issues on GCP running verdaccio on a Kubernetes cluster.
At first, I was running into net::ERR_SPDY_PROTOCOL_ERROR errors combined with the error mentioned by @StevenLangbroek. I have now disabled http2 on the nginx-ingress-controller, which cleared the SPDY_PROTOCOL_ERROR in favour of a 400 (Bad Request):

My verdaccio deployment was working just fine 1-2 weeks ago.
@dlouzan @bufferoverflow how can we help debug this further?
So I noticed that pipe character after configMap. When I parse this YAML to JSON, the nodes below it, which the indentation suggests should be children of configMap, don't end up below it but at the same (top) level... Could that be the problem?
(we based our config on https://github.com/helm/charts/blob/master/stable/verdaccio/values.yaml btw, so if that's the problem I can submit a fix upstream)
Correction: this leads the entire configMap to be considered a string:
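To illustrate the point, here is a quick hypothetical check using js-yaml (not anything the chart itself runs): the `|` literal block scalar keeps everything indented under configMap as one string.

```ts
// Hypothetical check (not part of the chart): parse a values snippet and look at
// what the `configMap: |` literal block scalar actually becomes.
import * as yaml from "js-yaml";

const values = `
configMap: |
  storage: /verdaccio/storage/data
  auth:
    gitlab:
      url: https://gitlab.com
`;

const parsed = yaml.load(values) as { configMap: unknown };
console.log(typeof parsed.configMap); // "string" -- the whole indented block, verbatim
```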

We have this bug as well. Any chance you can create a tag in docker for the previous version so we can use this in the meantime until the bug in latest is fixed?
@StevenLangbroek I haven't been able to reproduce this locally, so I guess it must have something to do with Kubernetes or the Helm chart, or maybe Verdaccio 4 final on Kubernetes. IIRC what you see in the config above is expected; the configMap is included as a string.
@bufferoverflow I don't think I have rights on the docker hub, could you maybe create the tag for the previous version as @MumblesNZ suggested?
@dlouzan FYI we are using docker CE and docker-compose to spin it up and are getting this issue.
@MumblesNZ any chance you can share the docker-compose file or explain your configuration so that we can reproduce it? I wasn't able to reproduce it locally, and knowing that you can trigger it via raw docker without kubernetes would help.
Hey @dlouzan,
Our Verdaccio config file (named docker.yml) looks like this:
storage: /verdaccio/storage/data
plugins: /verdaccio/plugins
listen:
  - 0.0.0.0:4873
web:
  enable: true
  title: Demo NPM Registry
  logo: "/opt/verdaccio-gitlab/DemoWhite.png"
  primary_color: "#7297bd"
  gravatar: true | false
  scope: "@Demo"
  sort_packages: asc | desc
auth:
  gitlab:
    url: https://gitlab.com
    authCache:
      enabled: true
      ttl: 300
    publish: $maintainer
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
packages:
  '@Demo/*':
    # scoped packages
    access: $authenticated
    publish: $authenticated
    proxy: npmjs
    gitlab: true
  '**':
    access: $all
    publish: $authenticated
    proxy: npmjs
    gitlab: true
logs:
  - { type: stdout, format: pretty, level: info }
and our docker-compose file looks like this:
version: '3'
volumes:
  verdaccio-storage:
services:
  verdaccio:
    image: bufferoverflow/verdaccio-gitlab
    container_name: verdaccio
    ports:
      - "4873:4873"
    volumes:
      - verdaccio-storage:/verdaccio/storage
      - ./docker.yml:/verdaccio/conf/config.yaml
We log in on the command line and push a package to @Demo/test-repo. Our user has owner privileges on the @Demo group and the @Demo/test-repo repository on gitlab. When we log in via the web dashboard, we experience the original error.
@dlouzan @bufferoverflow is there anything we can do to help figure out what's causing this?
@MumblesNZ @StevenLangbroek @bufferoverflow I have just tagged versions 2.0.0 and 2.2.0 in the docker hub, with some caveats:
- Since we were one of the first plugins to start using 4.x verdaccio functionality, we naively depended on the verdaccio version `verdaccio:4.x-next`, which was similar to `latest` but for the 4.x branch
- We have no reproducibility since there was some issue with the docker hub setup and we weren't automatically generating versions on tags, so I had to generate and push them now; currently `verdaccio:4.x-next` points to the `4.1` branch
- `2.1.0` was not pushed because it doesn't even pass all unit & functional tests with the now pulled `4.x-next` (which of course it did at the time)
Long story short, please check if one of the tagged versions solves your issues. I'm sorry for the mess :-/
I'll have to reserve a big chunk of time next week to take a deeper look at this, refactor the whole test suite for 4.x, and track this issue down once and for all.
We might be able to fully regenerate the docker versions if we carefully check out the verdaccio source code from that time and manually build them, but to be honest I'd rather dedicate the effort to fixing the current bug.
@dlouzan thanks for that.
It's working fine for me on version 2.0.0 with my setup as above. The error is still occurring in version 2.2.0.
@MumblesNZ Well, that at least gives us a hint that maybe it wasn't the migration to final 4.x in the verdaccio:4.x-next branch, but something we did in the verdaccio-gitlab codebase.
@StevenLangbroek I really hope that v2.0.0 also works temporarily for you, as it did for @MumblesNZ.
@dlouzan we're fine for now, have a version-pinned one running stable. I'd just like to help figure out what's going on.
@dlouzan @bufferoverflow Any word on this? Can we help by creating a repro?
Hunted down the source of this error, but it may not be the whole story. Essentially verdaccio's UI code doesn't handle errors all that well: https://github.com/verdaccio/ui/pull/112
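For anyone following along, the rough idea (my own sketch, not the code from that PR) is that the web UI should guard the package list before calling array methods on it, instead of assuming the API always returns an array:

```ts
// Rough sketch only (assumed shapes, not the actual verdaccio/ui code): guard the
// response before filtering, so an error payload or null doesn't crash the whole UI.
interface PackageEntry { name: string; }

function filterPackages(packages: PackageEntry[] | null, query: string): PackageEntry[] {
  if (!Array.isArray(packages)) {
    // Render an empty/error state instead of throwing
    // "Cannot read property 'includes' of null".
    return [];
  }
  return packages.filter((pkg) => pkg.name.includes(query));
}

console.log(filterPackages(null, "demo")); // [] -- degrades gracefully
```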