OracleDatabase: Data persistence does not work.
Here are the steps to reproduce the issue:

```shell
docker run --name oracle19c -p 1521:1521 -p 5500:5500 \
  -e ORACLE_PDB=orcl -e ORACLE_PWD=PssW0rd -e ORACLE_MEM=4000 \
  -v /mnt/d/Docker/volumes/oracle:/opt/oracle/oradata \
  -d oracle/database:19.3.0-ee
```
Wait until the database is fully created, then run a test connection from the host machine:
```shell
sqlplus SYS/PssW0rd@localhost:1521/ORCLCDB as SYSDBA
```

```sql
alter session set container = "ORCL";
```

Everything works as expected.
Then shut the container down and remove it:

```shell
docker container stop oracle19c
docker rm oracle19c
```

Now run the container again:

```shell
docker run --name oracle19c -p 1521:1521 -p 5500:5500 \
  -e ORACLE_PDB=orcl -e ORACLE_PWD=PssW0rd -e ORACLE_MEM=4000 \
  -v /mnt/d/Docker/volumes/oracle:/opt/oracle/oradata \
  -d oracle/database:19.3.0-ee
```
Now try to run a test from the host machine again:

```shell
sqlplus SYS/PssW0rd@localhost:1521/ORCLCDB as SYSDBA
```

```
ERROR: ORA-01017: invalid username/password; logon denied
```
Why is the sys password not persisted after the first run?
First time using this and I accidentally closed the issue.
I believe your issue is that you removed the Docker container rather than just stopping it. To maintain persistence you would need to use:

```shell
docker stop oracle19c
```
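To illustrate the distinction (a minimal sketch, using the container name and paths from the commands above): `docker stop`/`docker start` keeps the container and its writable layer, while `docker rm` discards everything the image wrote outside the mounted volume.

```shell
# Stop/start: the same container survives, writable layer and all.
docker stop oracle19c
docker start oracle19c        # same container, all state intact

# Stop/remove/re-run: only data written under the -v mount survives;
# anything written outside /opt/oracle/oradata in the container is lost.
docker stop oracle19c
docker rm oracle19c
docker run --name oracle19c \
  -v /mnt/d/Docker/volumes/oracle:/opt/oracle/oradata \
  -d oracle/database:19.3.0-ee
```

This is why the thread keeps coming back to which files actually land under `/opt/oracle/oradata` on the first run.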
Same issue here. After I created some users and databases in an Oracle 18c XE database running in a container, I stopped and removed the container (I wasn't going to need it for a while, and I needed the disk space). Then, when I re-created the container, all of my pluggable databases were gone, including the XEPDB1 pluggable database that came with the image.
I can see XEPDB1 in SQL Developer, but it's CLOSED, and I cannot OPEN it.
Moreover, I stored the database files for my own pluggable database in the volume share /opt/oracle/oradata. Why does Oracle Database 18c XE now claim they were stored at /opt/oracle/product/18c/dbhomeXE/dbs/...?
The Oracle base remains unchanged with value /opt/oracle
#####################################
########### E R R O R ###############
DATABASE SETUP WAS NOT SUCCESSFUL!
Please check output for further info!
########### E R R O R ###############
#####################################
The following output is now a tail of the alert.log:
Crash Recovery excluding pdb 2 which was cleanly closed.
2021-05-12T09:20:43.043896+00:00
Errors in file /opt/oracle/diag/rdbms/xe/XE/trace/XE_ora_111.trc:
ORA-01157: cannot identify/lock data file 26 - see DBWR trace file
ORA-01110: data file 26: '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_DATA.DBF'
2021-05-12T09:20:43.074219+00:00
Errors in file /opt/oracle/diag/rdbms/xe/XE/trace/XE_ora_111.trc:
ORA-01157: cannot identify/lock data file 26 - see DBWR trace file
ORA-01110: data file 26: '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_DATA.DBF'
ORA-1157 signalled during: ALTER DATABASE OPEN...
2021-05-12T09:20:44.352713+00:00
Errors in file /opt/oracle/diag/rdbms/xe/XE/trace/XE_mz00_134.trc:
ORA-01110: data file 26: '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_DATA.DBF'
ORA-01565: error in identifying file '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_DATA.DBF'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2021-05-12T09:20:44.566950+00:00
Errors in file /opt/oracle/diag/rdbms/xe/XE/trace/XE_mz00_134.trc:
ORA-01110: data file 27: '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_INDEX.DBF'
ORA-01565: error in identifying file '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_INDEX.DBF'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2021-05-12T09:20:44.733097+00:00
Errors in file /opt/oracle/diag/rdbms/xe/XE/trace/XE_mz00_134.trc:
ORA-01110: data file 28: '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_TRACE.DBF'
ORA-01565: error in identifying file '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_TRACE.DBF'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2021-05-12T09:20:44.908035+00:00
Errors in file /opt/oracle/diag/rdbms/xe/XE/trace/XE_mz00_134.trc:
ORA-01110: data file 29: '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_TRACE_INDEX.DBF'
ORA-01565: error in identifying file '/opt/oracle/product/18c/dbhomeXE/dbs/MCL_TRACE_INDEX.DBF'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
Checker run found 4 new persistent data failures
2021-05-12T09:24:07.973357+00:00
XEPDB1(3):ALTER PLUGGABLE DATABASE XEPDB1 REFRESH
XEPDB1(3):ORA-65261 signalled during: ALTER PLUGGABLE DATABASE XEPDB1 REFRESH...
2021-05-12T09:24:19.951737+00:00
XEPDB1(3):ALTER PLUGGABLE DATABASE XEPDB1 OPEN READ WRITE
XEPDB1(3):ORA-65054 signalled during: ALTER PLUGGABLE DATABASE XEPDB1 OPEN READ WRITE...
From the readme.md file:
-v /opt/oracle/oradata
The data volume to use for the database.
Has to be writable by the Unix "oracle" (uid: 54321) user inside the container!
If omitted the database will not be persisted over container recreation.
Apparently, this statement invites a false inference: "If omitted the database will not be persisted over container recreation" leads people to conclude the converse, "if used, the database will be persisted over container recreation."
I would say that this limitation is rather annoying with docker-compose: if you forget to pass --no-recreate, it may decide to recreate the container by default, and then you have no choice but to create your local database anew, with all your data lost.
Is this just a matter of file hierarchy? I.e. could this be fixed by making some other folders in /opt/oracle persistent?
Unfortunately there's no progress here. My Docker Desktop installed an update today, and afterwards the Oracle container won't come up. So I have to drop the volume and create everything from scratch.
I have the same issue with image container-registry.oracle.com/database/enterprise:19.3.0.0.
I use the following command:

```shell
docker run -d -p 1521:1521 \
  -e DB_PASSWD=password123456% -e ENABLE_ARCHIVELOG=true \
  -v /c/oracleDb:/opt/oracle/oradata \
  --name OracleDb container-registry.oracle.com/database/enterprise:19.3.0.0
```
When I create a new database instance, it shows up in the specified folder C:\OracleDB, and I can connect to it. However, after the container is restarted, I am no longer able to connect to the created database instance. I then manually delete the database instance from C:\OracleDB and start from scratch.
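One quick sanity check, assuming the host path from the run command above: after the first successful startup, the mounted directory should contain a `dbconfig/` folder and a directory named after the SID. If it is empty, the database files were written inside the container instead of onto the volume.

```shell
# Inspect what landed in the mounted directory after first startup
ls -la /c/oracleDb
# expect something like: dbconfig/  ORCLCDB/  .ORCLCDB.created
```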
@hschink DB_PASSWD is not the correct env var. It is ORACLE_PWD
@yunus-qureshi This is currently working on my machine. As written above, the setup works but loses the connection to the database instance as soon as the container is restarted.
Is there any fix for this? I currently have a volume filled with data that I can't use, because the recreated container thinks my schemas are in its local /opt/oracle/product/19c/dbhome_1/dbs. Would really like to get around setting it all up again.
Docker Log if anyone is interested:
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
The Oracle base remains unchanged with value /opt/oracle
#####################################
########### E R R O R ###############
DATABASE SETUP WAS NOT SUCCESSFUL!
Please check output for further info!
########### E R R O R ###############
#####################################
The following output is now a tail of the alert.log:
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2022-06-15T12:14:27.350981+00:00
Errors in file /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_mz00_114.trc:
ORA-01110: data file 15: '/opt/oracle/product/19c/dbhome_1/dbs/CMND_DOCKER'
ORA-01565: error in identifying file '/opt/oracle/product/19c/dbhome_1/dbs/CMND_DOCKER'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
@TheUltragon paste the full docker cmd line and the entire docker log output
Docker Log: https://gist.github.com/TheUltragon/316beb76bbaa47edd07c2bfa3d7639b9
Docker Command:

```shell
docker run --name OracleDbContainer -p 1521:1521 -p 5500:5500 \
  -e ORACLE_SID=ORCLCDB -e ORACLE_PDB=ORCLPDB1 \
  -v OracleDbVolume:/opt/oracle/oradata \
  oracle/database:19.3.0-ee
```
I created the schemas using this script: https://gist.github.com/TheUltragon/78c72c65ddd985803705fa8f65d1440e
For some reason, there is a file in my volume at _data\dbconfig\ORCLCDB\oratab referencing the Oracle home directory with ORCLCDB:/opt/oracle/product/19c/dbhome_1:N. Most of my database seems to be stored in the referenced folder (it is already 6 GB in size without any data inserted yet).
Now the big question: what is the point of mounting a volume to /opt/oracle/oradata, as suggested in pretty much every guide, if my database gets stored somewhere else?
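One way to see where the running instance thinks its datafiles live is the standard `v$datafile` dictionary view (not specific to this image). A sketch, assuming the container name from the command above and that the container's default user can connect `/ as sysdba`; files under /opt/oracle/oradata are on the volume, files under the Oracle home's dbs/ directory are not:

```shell
# Hypothetical check inside the running container
docker exec -i OracleDbContainer bash -c '
sqlplus -s / as sysdba <<SQL
select name from v\$datafile;
SQL'
```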
@yunus-qureshi
Same log for me. I'm using docker-compose like this:

```yaml
version: '2.4'
services:
  devdb:
    image: oracle/database:19.3.0-ee
    mem_limit: 2G
    environment:
      - ORACLE_SID=ORCL
      - ORACLE_PDB=MYPDB
      - ORACLE_PWD=123_myPass
      - ORACLE_EDITION=enterprise
    volumes:
      - oradata:/opt/oracle/oradata
      - d:\docker\oracle19\dumps:/ORCLDMP
    ports:
      - 11521:1521
      - 15500:5500
volumes:
  oradata:
    external: false
```
Any updates on this?
It might be the ownership problem, so look at https://github.com/oracle/docker-images/blob/main/OracleDatabase/SingleInstance/FAQ.md#cannot-create-directory-error-when-using-volumes and https://medium.com/oracledevs/making-oracle-database-persistent-with-docker-bb186a08e965
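Following the FAQ linked above, the usual fix is to make the host directory writable by the in-container "oracle" user (uid 54321, per the readme) before the first run. A minimal sketch under that assumption; the path here is a stand-in, and the chown needs root:

```shell
set -e
# Stand-in path; replace with your real host mount point (e.g. /mnt/d/Docker/volumes/oracle)
ORADATA="${ORADATA:-/tmp/oradata-demo}"
mkdir -p "$ORADATA"
# Preferred: hand the directory to the in-container "oracle" user (uid/gid 54321);
# fall back to world-writable if we cannot chown (not running as root).
chown 54321:54321 "$ORADATA" 2>/dev/null || chmod 777 "$ORADATA"
ls -ld "$ORADATA"
```

If the directory is not writable by uid 54321 on the first run, the image fails to write its datafiles to the volume, which matches the "DATABASE SETUP WAS NOT SUCCESSFUL!" output seen above.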
For me the problem was that I built a non-CDB database (with changed configuration in the dbca.rsp.tmpl file: createAsContainerDatabase=false and numberOfPDBs=0, built using the buildContainerImage.sh script). Apart from the directory ownership problem, the Oracle prebuilt (CDB) image seems to be working fine.
The problem still exists with 23c FREE when you remove the container. Stopping/starting works fine. As soon as the container is removed and recreated, it runs indefinitely as unhealthy and Oracle fails to start properly.
Run command:

```shell
podman run --name oradb -p 1521:1521 \
  -v /var/lib/oracledb/test:/opt/oracle/oradata:Z \
  --user 54321:54321 \
  -e ORACLE_PWD=Testpa$$word \
  -e ORACLE_CHARACTERSET=AL32UTF8 \
  container-registry.oracle.com/database/free:23.3.0.0
```
Host perms for mounted dir:
# ls -al /var/lib/oracledb/test
total 0
drwxr-xr-x. 2 54321 54321 6 Sep 21 10:58 .
drwxr-xr-x. 4 root root 30 Sep 21 10:58 ..
Error in container after removing (podman stop; podman rm) and recreating the container with the same run command:
The Oracle base remains unchanged with value /opt/oracle
#####################################
########### E R R O R ###############
DATABASE SETUP WAS NOT SUCCESSFUL!
Please check output for further info!
########### E R R O R ###############
#####################################
The following output is now a tail of the alert.log:
XDB initialized.
ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
2023-08-02T07:04:05.971617+00:00
ALTER SYSTEM SET control_files='/opt/oracle/oradata/FREE/control01.ctl' SCOPE=SPFILE;
2023-08-02T07:04:06.004362+00:00
ALTER SYSTEM SET local_listener='' SCOPE=BOTH;
ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
Error from client:
ORA-12514: Cannot connect to database. Service FREE is not registered with the
listener at host REDACTED port 1521
Deleting the contents of the host mount (/var/lib/oracledb/test) allows the container to start properly after creating the DB for the first time again, and files are populated in /var/lib/oracledb/test.
My initial thought was use of the :Z option was uniquely labeling the volume and preventing subsequent containers from accessing it, but I'm unable to reproduce this on OEL8. Perhaps try it with the :z option instead and see if that works? Maybe an SELinux setting?
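For reference, a sketch of the three mount-labeling variants being compared here (paths as in the run command above; the `...` stands for the remaining options):

```shell
# :Z relabels the content with a private SELinux label (usable by this container only)
podman run -v /oradata/test:/opt/oracle/oradata:Z ...
# :z relabels with a shared label (multiple containers may use the mount)
podman run -v /oradata/test:/opt/oracle/oradata:z ...
# disable SELinux label separation entirely for this container
podman run --security-opt label=disable -v /oradata/test:/opt/oracle/oradata ...
```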
Create the data directory:
sudo mkdir -p /oradata/test
sudo chown -R opc:154320 /oradata/test
sudo chmod 774 /oradata/test
The group ID assigned above is the subgid used by the container process:
# cat /etc/subuid
opc:100000:65536
# cat /etc/subgid
opc:100000:65536
Create the container (run as a detached process and without the port mapping or environment variables for simplicity):
# podman run -d --name oradb \
> --user 54321:54321 \
> -v /oradata/test:/opt/oracle/oradata:Z \
> container-registry.oracle.com/database/free:23.3.0.0
# podman logs -f oradb
Last lines of the log show the database created successfully:
SQL> SQL> Disconnected from Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.3.0.23.09
The Oracle base remains unchanged with value /opt/oracle
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
FREEPDB1(3):ALTER DATABASE DEFAULT TABLESPACE "USERS"
FREEPDB1(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
2023-09-25T21:02:08.974224+00:00
ALTER SYSTEM SET control_files='/opt/oracle/oradata/FREE/control01.ctl' SCOPE=SPFILE;
2023-09-25T21:02:08.999944+00:00
ALTER SYSTEM SET local_listener='' SCOPE=BOTH;
ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE FREEPDB1 SAVE STATE
All files are present:
# sudo ls -la
total 8
drwxrwxr--. 4 opc 154320 55 Sep 25 21:02 .
drwxrwxr--. 3 opc opc 18 Sep 25 20:42 ..
drwxr-xr-x. 3 154320 154320 18 Sep 25 21:02 dbconfig
drwxr-x---. 4 154320 154320 4096 Sep 25 20:57 FREE
-rw-r--r--. 1 154320 154320 26 Sep 25 21:02 .FREE.created
# sudo ls -la dbconfig/
total 0
drwxr-xr-x. 3 154320 154320 18 Sep 25 21:02 .
drwxrwxr--. 4 opc 154320 55 Sep 25 21:02 ..
drwxr-xr-x. 2 154320 154320 117 Sep 25 21:02 FREE
# sudo ls -la dbconfig/FREE/
total 24
drwxr-xr-x. 2 154320 154320 117 Sep 25 21:02 .
drwxr-xr-x. 3 154320 154320 18 Sep 25 21:02 ..
-rw-r-----. 1 154320 154320 448 Sep 25 21:02 listener.ora
-rw-r-----. 1 154320 154320 2048 Sep 25 20:57 orapwFREE
-rw-r--r--. 1 154320 154320 779 Sep 25 21:02 oratab
-rw-r-----. 1 154320 154320 2560 Sep 25 21:02 spfileFREE.ora
-rw-r-----. 1 154320 154320 69 Sep 25 21:02 sqlnet.ora
-rw-r-----. 1 154320 154320 690 Sep 25 21:02 tnsnames.ora
# sudo ls -la FREE
total 2362192
drwxr-x---. 4 154320 154320 4096 Sep 25 20:57 .
drwxrwxr--. 4 opc 154320 55 Sep 25 21:02 ..
-rw-r-----. 1 154320 154320 18759680 Sep 25 21:04 control01.ctl
-rw-r-----. 1 154320 154320 18759680 Sep 25 21:04 control02.ctl
drwxr-x---. 2 154320 154320 104 Sep 25 21:02 FREEPDB1
drwxr-x---. 2 154320 154320 85 Sep 25 20:57 pdbseed
-rw-r-----. 1 154320 154320 209715712 Sep 25 21:04 redo01.log
-rw-r-----. 1 154320 154320 209715712 Sep 25 21:01 redo02.log
-rw-r-----. 1 154320 154320 209715712 Sep 25 21:01 redo03.log
-rw-r-----. 1 154320 154320 587210752 Sep 25 21:01 sysaux01.dbf
-rw-r-----. 1 154320 154320 1111498752 Sep 25 21:01 system01.dbf
-rw-r-----. 1 154320 154320 20979712 Sep 25 21:01 temp01.dbf
-rw-r-----. 1 154320 154320 47194112 Sep 25 21:01 undotbs01.dbf
-rw-r-----. 1 154320 154320 5251072 Sep 25 21:01 users01.dbf
Remove the container and recreate it:
# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1279eb7e38d9 container-registry.oracle.com/database/free:23.3.0.0 /bin/bash -c $ORA... 10 minutes ago Exited (130) 5 seconds ago (healthy) oradb
# podman rm -f oradb
oradb
# podman run -d --name oradb \
> --user 54321:54321 \
> -v /oradata/test:/opt/oracle/oradata:Z \
> container-registry.oracle.com/database/free:23.3.0.0
8d7c382dda151301b794855790293a2266f10385067d24c8c34cde5d265e4952
# podman logs -f oradb
Starting Oracle Net Listener.
Oracle Net Listener started.
Starting Oracle Database instance FREE.
Oracle Database instance FREE started.
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
QPI: qopiprep.bat file present
2023-09-25T21:13:35.127227+00:00
PDB$SEED(2):Opening pdb with Resource Manager plan: DEFAULT_PLAN
FREEPDB1(3):Autotune of undo retention is turned on.
2023-09-25T21:13:36.069822+00:00
Using default pga_aggregate_limit of 2048 MB
2023-09-25T21:13:36.158657+00:00
FREEPDB1(3):Opening pdb with Resource Manager plan: DEFAULT_PLAN
Completed: Pluggable database FREEPDB1 opened read write
Completed: ALTER DATABASE OPEN
2023-09-25T21:13:37.882422+00:00
===========================================================
Dumping current patch information
===========================================================
No patches have been applied
===========================================================
The database starts as expected.
podman info etc. from the system:
$ podman info
host:
arch: amd64
buildahVersion: 1.29.0
cgroupControllers: []
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.1.6-1.module+el8.8.0+21045+adcb6a64.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.6, commit: 31a72124adb6095b6be85b27e3e481313a1cea96'
cpuUtilization:
idlePercent: 69.12
systemPercent: 3.74
userPercent: 27.14
cpus: 2
distribution:
distribution: '"ol"'
variant: server
version: "8.8"
eventLogger: file
hostname: podman
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.15.0-104.119.4.2.el8uek.x86_64
linkmode: dynamic
logDriver: k8s-file
memFree: 3523620864
memTotal: 16449220608
networkBackend: cni
ociRuntime:
name: runc
package: runc-1.1.4-1.0.1.module+el8.8.0+21119+51f68ed8.x86_64
path: /usr/bin/runc
version: |-
runc version 1.1.4
spec: 1.0.2-dev
go: go1.19.10
libseccomp: 2.5.2
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_SYS_CHROOT,CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.0-2.module+el8.8.0+21045+adcb6a64.x86_64
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.2
swapFree: 4284923904
swapTotal: 4294963200
uptime: 0h 28m 26.00s
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- container-registry.oracle.com
- docker.io
store:
configFile: /home/opc/.config/containers/storage.conf
containerStore:
number: 1
paused: 0
running: 0
stopped: 1
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/opc/.local/share/containers/storage
graphRootAllocated: 38069878784
graphRootUsed: 23567138816
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 1
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/opc/.local/share/containers/storage/volumes
version:
APIVersion: 4.4.1
Built: 1695376079
BuiltTime: Fri Sep 22 09:47:59 2023
GitCommit: ""
GoVersion: go1.19.10
Os: linux
OsArch: linux/amd64
Version: 4.4.1
# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f006d2ea6011 container-registry.oracle.com/database/free:23.3.0.0 /bin/bash -c $ORA... 48 seconds ago Up 48 seconds (starting) oradb
# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
container-registry.oracle.com/database/free 23.3.0.0 39cabc8e6db0 3 weeks ago 9.19 GB
# uname -a
Linux podman 5.15.0-104.119.4.2.el8uek.x86_64 #2 SMP Fri Aug 18 20:16:10 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux
No luck with :z, but that led me to running it without relabeling (podman run --security-opt label=disable ...), so there's something with SELinux on Fedora CoreOS to dig into:
# podman info
host:
arch: amd64
buildahVersion: 1.31.2
cgroupControllers:
- cpuset
- cpu
- io
- memory
- hugetlb
- pids
- rdma
- misc
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.7-2.fc38.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.7, commit: '
cpuUtilization:
idlePercent: 83.57
systemPercent: 4.13
userPercent: 12.3
cpus: 2
databaseBackend: boltdb
distribution:
distribution: fedora
variant: coreos
version: "38"
eventLogger: journald
freeLocks: 2047
hostname: hostname
idMappings:
gidmap: null
uidmap: null
kernel: 6.4.15-200.fc38.x86_64
linkmode: dynamic
logDriver: journald
memFree: 482484224
memTotal: 4090777600
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.7.0-1.fc38.x86_64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.7.0
package: netavark-1.7.0-1.fc38.x86_64
path: /usr/libexec/podman/netavark
version: netavark 1.7.0
ociRuntime:
name: crun
package: crun-1.8.7-1.fc38.x86_64
path: /usr/bin/crun
version: |-
crun version 1.8.7
commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20230823.ga7e4bfb-1.fc38.x86_64
version: |
pasta 0^20230823.ga7e4bfb-1.fc38.x86_64
Copyright Red Hat
GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
path: /run/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.1-1.fc38.x86_64
version: |-
slirp4netns version 1.2.1
commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.3
swapFree: 2039476224
swapTotal: 2045243392
uptime: 0h 4m 50.00s
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
- quay.io
store:
configFile: /usr/share/containers/storage.conf
containerStore:
number: 1
paused: 0
running: 1
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphRootAllocated: 21406658560
graphRootUsed: 9439293440
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "true"
imageCopyTmpDir: /var/tmp
imageStore:
number: 1
runRoot: /run/containers/storage
transientStore: false
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 4.6.2
Built: 1693251588
BuiltTime: Mon Aug 28 12:39:48 2023
GitCommit: ""
GoVersion: go1.20.7
Os: linux
OsArch: linux/amd64
Version: 4.6.2
# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30600ccd8aa7 container-registry.oracle.com/database/free:latest /bin/bash -c $ORA... 5 minutes ago Up 5 minutes (healthy) oracledb
Thanks @oraclesean