Exception raised when calling java.lang.Runtime.availableProcessors with native image
Describe the issue

Caused by: java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:208)
at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:263)
at java.nio.file.Path.of(Path.java:147)
at java.nio.file.Paths.get(Paths.java:69)
at com.oracle.svm.core.containers.CgroupUtil.lambda$readStringValue$0(CgroupUtil.java:57)
at java.security.AccessController.executePrivileged(AccessController.java:145)
at java.security.AccessController.doPrivileged(AccessController.java:569)
at com.oracle.svm.core.containers.CgroupUtil.readStringValue(CgroupUtil.java:59)
at com.oracle.svm.core.containers.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:66)
at com.oracle.svm.core.containers.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:125)
at com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:269)
at com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.getCpuQuota(CgroupV1Subsystem.java:321)
at com.oracle.svm.core.containers.CgroupMetrics.getCpuQuota(CgroupMetrics.java:71)
at com.oracle.svm.core.ContainerInfo.getCpuQuota(ContainerInfo.java:41)
at com.oracle.svm.core.Containers.activeProcessorCount(Containers.java:127)
at java.lang.Runtime.availableProcessors(Runtime.java:241)
Steps to reproduce the issue
Create a native image which invokes java.lang.Runtime.availableProcessors.
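A minimal reproducer of the kind these steps describe (the same shape as the Test.java shown later in this thread); build it with native-image and run the resulting binary in a containerized environment:

public class Test {
    public static void main(String[] args) {
        // Crashes at run time with the NPE above when the cgroup v1 controller path cannot be resolved.
        System.out.println("cpus = " + Runtime.getRuntime().availableProcessors());
    }
}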
Describe GraalVM and your environment:
- openjdk version "17.0.3" 2022-04-19
- OpenJDK Runtime Environment GraalVM CE 22.1.0 (build 17.0.3+7-jvmci-22.1-b06)
- OpenJDK 64-Bit Server VM GraalVM CE 22.1.0 (build 17.0.3+7-jvmci-22.1-b06, mixed mode, sharing)
- OS: GNU/Linux
- Architecture: x86_64
Hi, thank you for reporting this. Could you please provide a small reproducer?
I get the same error when trying to run a native Docker image of a Quarkus application. I'm not sure why it happens; in my case it may be because I set the compiler version to Java 11. I worked around it by using the Quarkus Java 11 builder image via the command-line flag mvn package -Pnative -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-native-image:22.2-java11. I should try to update the whole project to Java 17.
I ran into the same stack trace. I wasn't able to put together a minimal repro, but I'm only seeing this when running in a containerized environment, and passing -H:-UseContainerSupport to native-image works around the crash.
Running into this too, when trying to deploy a GraalVM (22.3.0) native application (built with Quarkus) via Docker on render.com. IIUC, this is triggered by unconditionally trying to retrieve CPU quotas for the current container, which may or may not be set (and apparently are not set on render.com).
This might be an equivalent to https://bugs.openjdk.org/browse/JDK-8272124 in OpenJDK, which is triggered by cgroup paths containing a colon.
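As an illustration only (a standalone sketch, not the actual GraalVM parsing code), this is the kind of failure a colon in the cgroup path can cause when a /proc/self/cgroup line is split naively:

public class CgroupLineParseDemo {
    public static void main(String[] args) {
        // Each /proc/self/cgroup line has the form "<hierarchy-id>:<controllers>:<path>".
        // If the path itself contains ':' (e.g. "...slice:cri-containerd:<id>"), an
        // unbounded split truncates the path that is later used to locate the cgroup files.
        String line = "9:cpu,cpuacct:/kubepods.slice:cri-containerd:abc123";
        String[] naive = line.split(":");       // 5 tokens; the path is cut off at the first ':'
        String[] bounded = line.split(":", 3);  // 3 tokens; the path survives intact
        System.out.println("naive path   = " + naive[2]);
        System.out.println("correct path = " + bounded[2]);
    }
}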
Yes, it's very likely GraalVM's version of https://bugs.openjdk.org/browse/JDK-8272124. The code in that area is not very up to date and largely duplicates JDK code. To confirm, please paste /proc/self/cgroup on those deployments if you can.
To confirm, please paste /proc/self/cgroup on those deployments if you can.
This is what I see there (sans some identifying UUIDs/SHAs):
13:blkio:/system.slice/containerd.service/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
12:hugetlb:/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
11:misc:/
10:devices:/system.slice/containerd.service/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
9:cpu,cpuacct:/system.slice/containerd.service/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
8:rdma:/
7:pids:/system.slice/containerd.service/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
6:perf_event:/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
5:net_cls,net_prio:/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
4:freezer:/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
3:memory:/system.slice/containerd.service/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
2:cpuset:/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
1:name=systemd:/system.slice/containerd.service/kubepods-burstable-pod<redacted>.slice:cri-containerd:<redacted>
0::/system.slice/containerd.service
Hmm, it might be JDK-8272124 or any number of other bugs. My attempts at reproducing this using the repro of JDK-8272124 still failed, though. That bug is present, as it reports a wrong limit, but it doesn't produce the NPE.
If somebody could paste /proc/cgroups and /proc/self/mountinfo as well, we might be able to reproduce in a specially crafted test.
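For anyone who cannot easily shell into the deployment, a small throwaway helper (assuming a standard Linux /proc filesystem) that prints the requested files from inside the application:

import java.nio.file.Files;
import java.nio.file.Path;

public class DumpCgroupInfo {
    public static void main(String[] args) throws Exception {
        // Print the three files requested above so they can be pasted into the issue.
        for (String file : new String[] {"/proc/self/cgroup", "/proc/cgroups", "/proc/self/mountinfo"}) {
            System.out.println("==== " + file + " ====");
            System.out.println(Files.readString(Path.of(file)));
        }
    }
}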
Some evidence of JDK-8272124:
$ cat Test.java
public class Test {
    public static void main(String[] args) {
        System.out.println("cpus = " + Runtime.getRuntime().availableProcessors());
    }
}
Produce a native image from it and compare the Java and native-image runs (expected: 3 CPU cores; the native image actually reports the host's number of CPU cores):
$ sudo podman run --cpu-period 100000 --cpu-quota=300000 --memory=200m --cgroup-manager=cgroupfs --cgroup-parent="/opt/foo/bar:baz" -v $(pwd):/test:z -v /disk/openjdk/17.0.5/:/opt/jdk:z --rm -ti fedora:37
[root@6f8808bd30e3 /]# /test/npe-colon-test
cpus = 12
[root@6f8808bd30e3 /]# /opt/jdk/bin/java -cp /test Test
cpus = 3
[root@6f8808bd30e3 /]# /opt/jdk/bin/java -XX:-UseContainerSupport -cp /test Test
cpus = 12
I'm currently running into this issue. I'm trying to deploy a native image of a Spring Boot 3 app to render.com (free tier) and the same thing occurs.
@jerboaa /proc/cgroups and /proc/self/mountinfo taken from the runtime environment:

/proc/cgroups:
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	12	416	1
cpu	11	563	1
cpuacct	11	563	1
blkio	5	563	1
memory	4	579	1
devices	13	455	1
freezer	9	417	1
net_cls	6	416	1
perf_event	3	416	1
net_prio	6	416	1
hugetlb	2	416	1
pids	10	566	1
rdma	7	1	1
misc	8	1	1
/proc/self/mountinfo:
20867 12686 0:2443 / / rw,relatime master:4579 - overlay overlay rw,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55827/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55826/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55828/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55828/work
20870 20867 0:2447 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
20871 20867 0:2449 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755,inode64
20872 20871 0:2492 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
20873 20871 0:1999 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw
20902 20867 0:2009 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro
20903 20902 0:2493 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755,inode64
20904 20903 0:30 /system.slice/containerd.service/kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime master:9 - cgroup cgroup rw,xattr,name=systemd
20905 20903 0:33 /kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,hugetlb
20906 20903 0:34 /kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,perf_event
20907 20903 0:35 /system.slice/containerd.service/kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,memory
20913 20903 0:36 /system.slice/containerd.service/kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,blkio
20914 20903 0:37 /kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,net_cls,net_prio
20915 20903 0:38 / /sys/fs/cgroup/rdma ro,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,rdma
20916 20903 0:39 / /sys/fs/cgroup/misc ro,nosuid,nodev,noexec,relatime master:21 - cgroup cgroup rw,misc
20917 20903 0:40 /kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime master:22 - cgroup cgroup rw,freezer
20918 20903 0:41 /system.slice/containerd.service/kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime master:23 - cgroup cgroup rw,pids
20919 20903 0:42 /system.slice/containerd.service/kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime master:24 - cgroup cgroup rw,cpu,cpuacct
20920 20903 0:43 /kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime master:25 - cgroup cgroup rw,cpuset
20921 20903 0:44 /system.slice/containerd.service/kubepods-burstable-pod2e4e1e71_09eb_46cd_9098_34bfc47dcb93.slice:cri-containerd:dd6a50875c727a5ec810ba865ffc21ee90d64fbab789d75e6e9e02445be0048a /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime master:26 - cgroup cgroup rw,devices
20922 20867 0:1923 / /etc/secrets ro,relatime - tmpfs tmpfs rw,size=589824k,inode64
20923 20867 259:1 /var/lib/kubelet/pods/2e4e1e71-09eb-46cd-9098-34bfc47dcb93/etc-hosts /etc/hosts rw,relatime - ext4 /dev/root rw,discard
20924 20871 259:1 /var/lib/kubelet/pods/2e4e1e71-09eb-46cd-9098-34bfc47dcb93/containers/user-container/aaa750d6 /dev/termination-log rw,relatime - ext4 /dev/root rw,discard
20925 20867 259:1 /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/a26acbfa1b90eddb8ece7fa2374462699ec6110ca3c8f79efd8d2ca002294c47/hostname /etc/hostname rw,relatime - ext4 /dev/root rw,discard
20964 20867 259:1 /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/a26acbfa1b90eddb8ece7fa2374462699ec6110ca3c8f79efd8d2ca002294c47/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/root rw,discard
20967 20871 0:1987 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k,inode64
12688 20870 0:2447 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw
12689 20870 0:2447 /fs /proc/fs ro,nosuid,nodev,noexec,relatime - proc proc rw
12690 20870 0:2447 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw
12691 20870 0:2447 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw
12692 20870 0:2447 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw
12693 20870 0:2494 / /proc/acpi ro,relatime - tmpfs tmpfs ro,inode64
12694 20870 0:2449 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755,inode64
12695 20870 0:2449 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755,inode64
12696 20870 0:2449 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755,inode64
12707 20870 0:2495 / /proc/scsi ro,relatime - tmpfs tmpfs ro,inode64
12708 20902 0:2496 / /sys/firmware ro,relatime - tmpfs tmpfs ro,inode64
I have the same problem. My /proc/self/cgroup does not contain 'cpuacct,cpu' or 'cpu,cpuacct', only 'cpuacct,cpu,cpuset'. Can GraalVM be made compatible with this environment?
https://github.com/oracle/graal/blob/release/graal-vm/22.3/substratevm/src/com.oracle.svm.core.containers/src/com/oracle/svm/core/containers/cgroupv1/CgroupV1Subsystem.java
private static void setSubSystemControllerPath(CgroupV1Subsystem subsystem, String[] entry) {
    String controllerName;
    String base;
    CgroupV1SubsystemController controller = null;
    CgroupV1SubsystemController controller2 = null;
    controllerName = entry[1];
    base = entry[2];
    if (controllerName != null && base != null) {
        switch (controllerName) {
            case "memory":
                controller = subsystem.memoryController();
                break;
            // please add this case:
            case "cpuacct,cpu,cpuset":
                controller = subsystem.cpuController();
                controller2 = subsystem.cpuAcctController();
                break;
            case "cpuset":
                controller = subsystem.cpuSetController();
                break;
            case "cpu,cpuacct":
            case "cpuacct,cpu":
                controller = subsystem.cpuController();
                controller2 = subsystem.cpuAcctController();
                break;
            case "cpuacct":
                controller = subsystem.cpuAcctController();
                break;
            case "cpu":
                controller = subsystem.cpuController();
                break;
            case "blkio":
                controller = subsystem.blkIOController();
                break;
            // Ignore subsystems that we don't support
            default:
                break;
        }
    }
My /proc/self/cgroup
10:hugetlb:/
9:memory:/
8:net_cls:/
7:perf_event:/
6:blkio:/
5:freezer:/
4:pids:/
3:devices:/
2:cpuacct,cpu,cpuset:/ <====== PLEASE PAY ATTENTION TO THIS LINE
1:name=systemd:/
Not sure which version of native-image you are using, but this line: https://github.com/oracle/graal/blob/release/graal-vm/22.3/substratevm/src/com.oracle.svm.core.containers/src/com/oracle/svm/core/containers/cgroupv1/CgroupV1Subsystem.java#L134 should split your joint controller combo into cpuacct, cpu, cpuset and use that path. Are there no symlinks from cpuacct to cpuacct,cpu,cpuset in /sys/fs/cgroup?
$ ll /sys/fs/cgroup/cpuacct
lrwxrwxrwx. 1 root root 11 Mar 30 09:44 /sys/fs/cgroup/cpuacct -> cpu,cpuacct
createSubSystemController only creates the cpu controller but doesn't set its path, which causes the NPE.
My GraalVM version is graalvm-ce-java19-linux-amd64-22.3.1.
Architecture: x86_64
OS: Linux
The NPE occurs when executing bin/gu -v or any other gu command, which blocks the installation of native-image (I was going to install it using gu install native-image).
I find that the NPE occurs because the CgroupV1SubsystemController path is null (com.oracle.svm.core.containers.cgroupv1.CgroupV1SubsystemController#path). The path is set when com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem#setSubSystemControllerPath executes, but my Linux /proc/self/cgroup does not contain cpuacct, cpuacct,cpu, cpu,cpuacct, or cpu, only cpuacct,cpu,cpuset.
If setSubSystemControllerPath supported cpuacct,cpu,cpuset, that might resolve this problem.
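A self-contained sketch of that idea (illustrative only, not the actual GraalVM API): split a joint controller entry such as cpuacct,cpu,cpuset and record the same path for each individual controller, so none of them is left with a null path.

import java.util.HashMap;
import java.util.Map;

public class JointControllerSplitDemo {
    public static void main(String[] args) {
        // Fields as they appear in a /proc/self/cgroup line like "2:cpuacct,cpu,cpuset:/"
        String controllerField = "cpuacct,cpu,cpuset";
        String base = "/";

        Map<String, String> controllerPaths = new HashMap<>();
        for (String name : controllerField.split(",")) {
            controllerPaths.put(name, base); // every controller in the joint entry shares the same path
        }
        System.out.println(controllerPaths); // contains cpu=/, cpuacct=/, cpuset=/
    }
}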
$ cat /proc/self/mountinfo | grep cgroup
...
28 25 0:23 / /sys/fs/cgroup/cpuset,cpu,cpuacct rw,n ... xxxx ... - cgroup cgroup rw,cpuacct,cpu,cpuset
...
$ ls -al /sys/fs/cgroup/cpuacct
lrwxrwxrwx 1 root root 18 Feb 28 2020 /sys/fs/cgroup/cpuacct -> cpuset,cpu,cpuacct
$ java -version
openjdk version "19.0.2" 2023-01-17
OpenJDK Runtime Environment GraalVM CE 22.3.1 (build 19.0.2+7-jvmci-22.3-b12)
OpenJDK 64-Bit Server VM GraalVM CE 22.3.1 (......)
This is the same as #6382.
This is a fix. Resolves https://github.com/oracle/graal/pull/6381
I've got a very similar issue reported here. In my case I get the same NullPointerException when calling java.lang.Runtime.maxMemory (Oracle GraalVM JDK) or java.lang.Runtime.availableProcessors (GraalVM CE JDK) in a container environment. Locally everything works fine.
I have the same issue with the setup below; I keep going in loops between related issues and cannot find any workaround.
Kubernetes with a Docker container, graalvm-jdk-17.0.8+9.1, Linux 5.4.0-156-generic #173-Ubuntu x86_64.
Exception in thread "main" java.lang.IllegalArgumentException: Unable to instantiate factory class [org.springframework.boot.autoconfigure.BackgroundPreinitializer] for factory type [org.springframework.context.ApplicationListener]
at org.springframework.core.io.support.SpringFactoriesLoader$FailureHandler.lambda$throwing$0(SpringFactoriesLoader.java:651)
at org.springframework.core.io.support.SpringFactoriesLoader$FailureHandler.lambda$handleMessage$3(SpringFactoriesLoader.java:675)
at org.springframework.core.io.support.SpringFactoriesLoader.instantiateFactory(SpringFactoriesLoader.java:231)
at org.springframework.core.io.support.SpringFactoriesLoader.load(SpringFactoriesLoader.java:206)
at org.springframework.core.io.support.SpringFactoriesLoader.load(SpringFactoriesLoader.java:160)
at org.springframework.boot.SpringApplication.getSpringFactoriesInstances(SpringApplication.java:463)
at org.springframework.boot.SpringApplication.getSpringFactoriesInstances(SpringApplication.java:459)
at org.springframework.boot.SpringApplication.<init>(SpringApplication.java:276)
at org.springframework.boot.SpringApplication.<init>(SpringApplication.java:254)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1295)
at my.Application.main(Application.java:46)
Caused by: java.lang.InternalError: java.lang.reflect.InvocationTargetException
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.Metrics.systemMetrics(Metrics.java:67)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.Container.metrics(Container.java:44)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.ContainerInfo.<init>(ContainerInfo.java:34)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.Containers.activeProcessorCount(Containers.java:88)
at [email protected]/java.lang.Runtime.availableProcessors(Runtime.java:337)
at org.springframework.boot.autoconfigure.BackgroundPreinitializer.<clinit>(BackgroundPreinitializer.java:68)
at [email protected]/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
at [email protected]/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
at org.springframework.core.io.support.SpringFactoriesLoader$FactoryInstantiator.instantiate(SpringFactoriesLoader.java:382)
at org.springframework.core.io.support.SpringFactoriesLoader.instantiateFactory(SpringFactoriesLoader.java:228)
... 9 more
Caused by: java.lang.reflect.InvocationTargetException
at [email protected]/java.lang.reflect.Method.invoke(Method.java:568)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.Metrics.systemMetrics(Metrics.java:63)
... 18 more
Caused by: java.lang.ExceptionInInitializerError
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.CgroupSubsystemFactory.create(CgroupSubsystemFactory.java:78)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.CgroupMetrics.getInstance(CgroupMetrics.java:164)
... 20 more
Caused by: java.lang.NullPointerException
at [email protected]/java.util.Objects.requireNonNull(Objects.java:208)
at [email protected]/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:263)
at [email protected]/java.nio.file.Path.of(Path.java:147)
at [email protected]/java.nio.file.Paths.get(Paths.java:69)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.CgroupUtil.lambda$readStringValue$0(CgroupUtil.java:57)
at [email protected]/java.security.AccessController.executePrivileged(AccessController.java:147)
at [email protected]/java.security.AccessController.doPrivileged(AccessController.java:569)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.CgroupUtil.readStringValue(CgroupUtil.java:59)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:66)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:125)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:269)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.getHierarchical(CgroupV1Subsystem.java:215)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.setSubSystemControllerPath(CgroupV1Subsystem.java:203)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.initSubSystem(CgroupV1Subsystem.java:111)
at org.graalvm.nativeimage.builder/com.oracle.svm.core.containers.cgroupv1.CgroupV1Subsystem.<clinit>(CgroupV1Subsystem.java:47)
... 22 more
The only workaround I know of is to build the image with -H:-UseContainerSupport (which disables container memory limit detection entirely for the image). This should be fixed properly in GraalVM for JDK 22. See #7246.
I still get the same exception with the -H:-UseContainerSupport parameter. Am I using it wrong? I can't find it in the GraalVM documentation.
native-image -jar target/myapp-1.0-SNAPSHOT-jar-with-dependencies.jar -H:-UseContainerSupport -H:ConfigurationFileDirectories=graalvm/tracing-agent -o target/app --no-fallback -H:+AddAllCharsets --enable-preview --native-image-info -H:+ReportExceptionStackTraces -H:+StaticExecutableWithDynamicLibC