docker-sonarqube
Crash with latest Docker version of SonarQube LTS (8.9)
I'm using SonarQube Community LTS, and since the latest updates it crashes at startup.
I can reproduce the problem just by running docker run --rm -it sonarqube:lts
(see the Docker log and lscpu output below).
I'm running Docker 20.10.17 on Ubuntu 20.04.5 LTS (no VM).
It looks similar to #544.
Workaround found: downgrade to sonarqube:8.9.7-community
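For example (tag taken from the workaround above, run flags from the reproduction command):

docker run --rm -it sonarqube:8.9.7-community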
lsb-release
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
Crash log:
docker run --pull --rm -it sonarqube:lts
2022.09.07 16:28:36 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2022.09.07 16:28:36 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:33643]
2022.09.07 16:28:36 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2022.09.07 16:28:36 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
warning: no-jdk distributions that do not bundle a JDK are deprecated and will be removed in a future release
2022.09.07 16:28:39 INFO es[][o.e.n.Node] version[7.16.2], pid[39], build[default/tar/2b937c44140b6559905130a8650c64dbd0879cfb/2021-12-18T19:42:46.604893745Z], OS[Linux/5.4.0-125-generic/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/11.0.13/11.0.13+8]
2022.09.07 16:28:39 INFO es[][o.e.n.Node] JVM home [/opt/java/openjdk]
2022.09.07 16:28:39 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.09.07 16:28:40 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.09.07 16:28:40 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.09.07 16:28:40 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2022.09.07 16:28:40 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2022.09.07 16:28:40 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.09.07 16:28:40 INFO es[][o.e.p.PluginsService] no plugins loaded
2022.09.07 16:28:40 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (overlay)]], net usable_space [112.5gb], net total_space [196.6gb], types [overlay]
2022.09.07 16:28:40 INFO es[][o.e.e.NodeEnvironment] heap size [512mb], compressed ordinary object pointers [true]
2022.09.07 16:28:40 INFO es[][o.e.n.Node] node name [sonarqube], node ID [4TbdfW-0R7uw8l7Ymd1pxg], cluster name [sonarqube], roles [data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
2022.09.07 16:28:46 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}]
2022.09.07 16:28:47 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2022.09.07 16:28:47 INFO es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2022.09.07 16:28:48 INFO es[][o.e.n.Node] initialized
2022.09.07 16:28:48 INFO es[][o.e.n.Node] starting ...
2022.09.07 16:28:48 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:33643}, bound_addresses {127.0.0.1:33643}
2022.09.07 16:28:48 ERROR es[][o.e.b.ElasticsearchUncaughtExceptionHandler] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to load metadata]; nested: CorruptIndexException[checksum failed (hardware problem?) : expected=2d5ead3e actual=f3e58ddd (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_0.fdt")))];
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:157) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.16.2.jar:7.16.2]
    at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:122) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.16.2.jar:7.16.2]
Caused by: org.elasticsearch.ElasticsearchException: failed to load metadata
    at org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:197) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.node.Node.start(Node.java:1185) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:335) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:443) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166) ~[elasticsearch-7.16.2.jar:7.16.2]
    ... 6 more
Caused by: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=2d5ead3e actual=f3e58ddd (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_0.fdt")))
    at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:419) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:100) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:55]
    at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5313) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:457) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:395) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:476) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:656) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3911) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3886) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3865) ~[lucene-core-8.10.1.jar:8.10.1 2f24e6a49d48a032df1f12e146612f59141727a9 - mayyasharipova - 2021-10-12 15:13:05]
    at org.elasticsearch.gateway.PersistedClusterStateService$MetadataIndexWriter.flush(PersistedClusterStateService.java:611) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.gateway.PersistedClusterStateService$Writer.addMetadata(PersistedClusterStateService.java:884) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.gateway.PersistedClusterStateService$Writer.overwriteMetadata(PersistedClusterStateService.java:858) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.gateway.PersistedClusterStateService$Writer.writeFullStateAndCommit(PersistedClusterStateService.java:705) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.<init>(GatewayMetaState.java:515) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:170) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.node.Node.start(Node.java:1185) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:335) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:443) ~[elasticsearch-7.16.2.jar:7.16.2]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166) ~[elasticsearch-7.16.2.jar:7.16.2]
    ... 6 more
uncaught exception in thread [main]
ElasticsearchException[failed to load metadata]; nested: CorruptIndexException[checksum failed (hardware problem?) : expected=2d5ead3e actual=f3e58ddd (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_0.fdt")))];
Likely root cause: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=2d5ead3e actual=f3e58ddd (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_0.fdt")))
    at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:419)
    at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:100)
    at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5313)
    at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:457)
    at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:395)
    at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:476)
    at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:656)
    at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3911)
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3886)
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3865)
    at org.elasticsearch.gateway.PersistedClusterStateService$MetadataIndexWriter.flush(PersistedClusterStateService.java:611)
    at org.elasticsearch.gateway.PersistedClusterStateService$Writer.addMetadata(PersistedClusterStateService.java:884)
    at org.elasticsearch.gateway.PersistedClusterStateService$Writer.overwriteMetadata(PersistedClusterStateService.java:858)
    at org.elasticsearch.gateway.PersistedClusterStateService$Writer.writeFullStateAndCommit(PersistedClusterStateService.java:705)
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.<init>(GatewayMetaState.java:515)
    at org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:170)
    at org.elasticsearch.node.Node.start(Node.java:1185)
    at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:335)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:443)
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166)
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:157)
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
    at org.elasticsearch.cli.Command.main(Command.java:77)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:122)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
For complete error details, refer to the log at /opt/sonarqube/logs/sonarqube.log
2022.09.07 16:28:48 INFO es[][o.e.n.Node] stopping ...
2022.09.07 16:28:48 INFO es[][o.e.n.Node] stopped
2022.09.07 16:28:48 INFO es[][o.e.n.Node] closing ...
2022.09.07 16:28:48 INFO es[][o.e.n.Node] closed
2022.09.07 16:28:48 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 1
2022.09.07 16:28:48 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2022.09.07 16:28:48 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
Stepping: 5
CPU MHz: 1855.864
BogoMIPS: 3990.14
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm flush_l1d
I have the same issue, and indeed it is related to #544 and the zlib version referenced in the Dockerfile. Replacing 1.2.12-1 with 1.2.12-2 (and updating the checksum) may fix the issue.
I had to extend the official image for this to work.
FROM sonarqube:8.9.9-community
RUN set -eux; \
    apk add --no-cache --virtual .build-deps zstd; \
    rm -f /tmp/libz.tar; \
    curl -LfsS https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.12-2-x86_64.pkg.tar.zst -o /tmp/libz.tar.zst; \
    mkdir /tmp/libz; \
    zstd -d /tmp/libz.tar.zst --output-dir-flat /tmp; \
    tar -xf /tmp/libz.tar -C /tmp/libz; \
    mv /tmp/libz/usr/lib/libz.so* /usr/glibc-compat/lib; \
    apk del --purge .build-deps; \
    rm -rf /tmp/*.apk /tmp/libz* /var/cache/apk/*;
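To try it, build the extended image from a directory containing this Dockerfile and run it in place of the stock image (the tag name here is just an example):

docker build -t sonarqube:8.9.9-community-zlibfix .
docker run --rm -it sonarqube:8.9.9-community-zlibfix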
Hi, is there a plan to fix this issue?
Hi all,
Thank you for sharing this issue with us! We're taking a look at this and we'll get back to this thread with updates.
Hi @mbecca, @metcox,
Unfortunately, we were unable to reproduce the issue. As @metcox mentioned, we suspect that it's related to the zlib version, but it seems to appear only on specific environments and CPUs.
Could you please share some information, similar to the lscpu output shared by @stalb?
Hi @dimitris-kavvathas-sonarsource
sh-4.4$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel Core i7 9xx (Nehalem Class Core i7)
Stepping: 3
CPU MHz: 2494.218
BogoMIPS: 4988.43
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt hypervisor lahf_lm cpuid_fault pti
sh-4.4$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
Hi @dimitris-kavvathas-sonarsource
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 42 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel(R) Xeon(R) CPU E7540 @ 2.00GHz
Stepping: 4
CPU MHz: 1995.000
BogoMIPS: 3990.00
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 18432K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc cpuid pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer hypervisor lahf_lm pti ssbd ibrs ibpb stibp tsc_adjust arat flush_l1d arch_capabilities
Hello @mbecca and @metcox,
Thanks for your patience! We have gathered information from you and a few more users, and indeed the issue lies with zlib and appears only on specific CPUs.
Alpine seems to have backported a fix for zlib. Could you please try running SonarQube with Docker again and let us know if it works for you?
In case you still experience issues, please run the image with an interactive bash shell and send us the version of the zlib library:
docker run --rm -it --entrypoint /bin/bash <sonarqube-image>
apk info zlib
A user in a community thread confirmed that the latest SonarQube image (tag sonarqube:9.8.0-community) fixes the issue for him. That version uses the latest Alpine image.
Could you please try and let us know if it fixes the issue for you as well?
Hello, sorry for the late reply; I will not be able to test this before mid-January. However, I doubt it will be fixed: one of the differences between the 8-community and 9-community Dockerfiles is the sideloading (not via apk) of a faulty zlib version.
https://github.com/SonarSource/docker-sonarqube/blob/f8a5dbf9e4c1929a13395d4403f2c068d380d963/8/community/Dockerfile#L37-L42
We'll see
Hi and Happy New Year,
The issue is still there.
For the output you requested:
With sonarqube:8.9.9-community I get:
$ docker run --rm -it --entrypoint /bin/bash sonarqube:8.9.9-community
bash-5.1# apk info zlib
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: No such file or directory
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: No such file or directory
zlib-1.2.12-r3 description:
A compression/decompression Library
zlib-1.2.12-r3 webpage:
https://zlib.net/
zlib-1.2.12-r3 installed size:
108 KiB
And with sonarqube:8.9.10-community I get:
$ docker run --rm -it --entrypoint /bin/bash sonarqube:8.9.10-community
bash-5.1# apk info zlib
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/main: No such file or directory
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/community: No such file or directory
zlib-1.2.12-r3 description:
A compression/decompression Library
zlib-1.2.12-r3 webpage:
https://zlib.net/
zlib-1.2.12-r3 installed size:
108 KiB
And a bit more info:
bash-5.1# apk info -L zlib
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/main: No such file or directory
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/community: No such file or directory
zlib-1.2.12-r3 contains:
lib/libz.so.1
lib/libz.so.1.2.12
I don't see any difference here, but if I checksum /lib/libz.so* in both images, the results differ. Anyway, that makes no difference, because the zlib library actually used is not the one provided by apk.
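For reference, this is the kind of comparison I mean; a minimal sketch, assuming bash as the entrypoint override and that the soname glob matches in both images:

# compare the apk-provided copies of libz between the two tags
docker run --rm --entrypoint /bin/bash sonarqube:8.9.9-community -c 'md5sum /lib/libz.so.1.*'
docker run --rm --entrypoint /bin/bash sonarqube:8.9.10-community -c 'md5sum /lib/libz.so.1.*'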
From what I understand from the Dockerfile, the Java version used by SonarQube 8 requires glibc. This dependency is resolved by installing glibc from https://github.com/sgerrand/alpine-pkg-glibc, which changes the library linking paths.
bash-5.1# cat /usr/glibc-compat/etc/ld.so.conf
# libc default configuration
/usr/local/lib
/usr/glibc-compat/lib
/usr/lib
/lib
As libz.so is also explicitly added from https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.12-1-x86_64.pkg.tar.zst into /usr/glibc-compat/lib, it is picked up before the one in /lib.
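A quick way to see which copy wins (a sketch, assuming the loader path installed by alpine-pkg-glibc and the JVM home reported in the startup log):

# ask the glibc dynamic loader which libz it resolves for the bundled java binary
docker run --rm --entrypoint /bin/bash sonarqube:8.9.9-community -c \
  '/usr/glibc-compat/lib/ld-linux-x86-64.so.2 --list /opt/java/openjdk/bin/java | grep libz'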
sonarqube:9 stopped relying on this as of https://github.com/SonarSource/docker-sonarqube/commit/403e0e1b8d3045b91924a2dca188678376f1bb61.
As 8-community is an LTS, I understand that such broad changes will not be applied to it. So the minimal change would be to update the zlib version defined in the Dockerfile.
Since my first comment, several newer versions of zlib have become available; I have no idea which one works best.
Please note that the server I'm using for these tests will be taken down in the next few weeks, and I don't know whether the new one will exhibit this kind of problem.
Hello @metcox.
Thanks for your very detailed answer!
I agree that updating the zlib version in the Dockerfile would probably fix the issue for 8.9.x.
However, since this is a problem we haven't managed to reproduce, and taking into account that we'll be rolling out a new LTS release very soon, I'm inclined not to start on changes whose fix we cannot confirm.
I think that the workaround of extending the image is good enough for the few cases that still have this problem.
Taking all this into account, I'll be closing this issue. Thanks again for your help on this!
Since curl is not installed in sonarqube:8.9-community, I had to install it in order to apply the workaround.
# update zlib
RUN set -eux; \
    apk add --no-cache --virtual .build-deps zstd curl; \
    rm -f /tmp/libz.tar; \
    curl -LfsS https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.12-2-x86_64.pkg.tar.zst -o /tmp/libz.tar.zst; \
    mkdir /tmp/libz; \
    zstd -d /tmp/libz.tar.zst --output-dir-flat /tmp; \
    tar -xf /tmp/libz.tar -C /tmp/libz; \
    mv /tmp/libz/usr/lib/libz.so* /usr/glibc-compat/lib; \
    apk del --purge .build-deps; \
    rm -rf /tmp/*.apk /tmp/libz* /var/cache/apk/*;
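To confirm the replacement took effect, you can list the sideloaded library inside the rebuilt image (the tag name here is illustrative):

docker run --rm --entrypoint /bin/bash my-sonarqube:8.9-zlibfix -c 'ls -l /usr/glibc-compat/lib/libz.so*'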