Questions about migration due to minio updates
I've been using opencti for about 2 years now and my current minio version is "2022-05-19T18-20-59Z".
However, the minio version specified in the OpenCTI docker-compose.yml cannot be updated properly just by changing it in docker-compose.yml.
After changing the version number in docker-compose.yml, the following error occurs when running the container:
ERROR Unable to use the drive /data: Drive /data: found backend type fs, expected xl or xl-single - to migrate to a supported backend visit https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html: Invalid arguments specified
So I went to that page, and it says to create a new SNSD (Single-Node Single-Drive) deployment and then copy the existing data to it using mc mirror: https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html
The problem is that the mc mirror function is ridiculously slow.
The picture below shows the mc mirror execution screen
It starts off fast, but as it goes on, the speed drops to a few tens of KB/s.
I only have 9GB of data, yet the move takes over 2 days to complete.
And if you look at the screenshot above, you can see that there are errors in the middle of the process, so some data was not transferred properly.
In the end I was only able to copy 7GB before the PuTTY connection dropped, so I ended up staying on the old minio version.
I assume the OpenCTI developers must have hit this issue when they bumped the minio version in docker-compose.
What should I do? Should I just keep using the existing minio version (2022)?
Any clear guidelines would be appreciated.
We currently use the minio client for our backups. I don't know if there have been any problems with this migration in the past.
To perform the mirroring, we currently configure both source and target aliases:
mc alias set source <source_url> <source_user> <source_password>
mc alias set target <target_url> <target_user> <target_password>
create the bucket on the target:
mc mb target/<opencti_bucket>
copy the data:
mc mirror --overwrite --remove --retry source/<opencti_bucket>/opencti target/<opencti_bucket>/opencti
We don't use the --watch option in our case, but we have had problems with some files that were resolved by adding --retry.
@pierremahot
Thank you for the detailed instructions! I have some questions. When I typed "mc alias ls", an alias named "local" already existed (presumably pointing to the existing minio data),
and I created a new target alias named "local_new" with the "mc alias set" command.
If I enter all the commands you mentioned, will minio switch over to the newly created "local_new" by itself?
Or do I need to modify something related to minio in docker-compose?
Currently, the docker-compose on my system uses the default settings from the link below (without changing the version): https://github.com/OpenCTI-Platform/docker/blob/master/docker-compose.yml
Your answer would be appreciated :)
@misohouse The way I see it, you need both minios (the old one and the new one) running at the same time so that you can connect to each of them and run the mirror command. The mc command can be run from anywhere that can reach both minios (the old and the new). You may need to add the old minio back to the docker-compose file, pointing to the volume with the old data, and make sure the new version uses a newly created volume.
@pierremahot
I'm not sure whether the commands I ran are correct.
First, I checked the bucket path against the local_new alias I had created and then ran mc mirror.
(Oddly enough, this took less than a second.)
After that, I changed the minio section of docker-compose as follows and restarted, but I still got the same error.
(Before change)
  minio:
    image: minio/minio:RELEASE.2022-05-19T18-20-59Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
(After change)
  minio:
    image: minio/minio:RELEASE.2024-01-16T16-07-38Z # change here
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
So I started over with mc alias set.
But I still get the same error.
My guess is that I need to change something in docker-compose, but I have no idea what.
I don't know whether the data in the bucket was actually moved, or whether there was nothing to move because I pointed the alias at https://localhost:9000 when I ran mc alias set in the first place.
Also, why was the data I was trying to move so large when I ran the command from my original question? Was it the difference between specifying a bucket and specifying an actual local filesystem path?
The figure below shows the size of the files in the actual local system path (which I initially thought was the bucket).
Too many things are questionable :(
I would appreciate a solution.
I don't think it's a good idea to copy data at the filesystem level, because it includes metadata that minio's filesystem backend needs. The goal is to use the S3 protocol for the mirroring, so only the relevant data is mirrored and not minio's filesystem metadata. You can try adding a new minio alongside the old one by adding a new service to docker-compose:
  newminio:
    image: minio/minio:RELEASE.2024-01-16T16-07-38Z
    volumes:
      - news3data:/data
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  ...
  ...
volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:
  news3data:
Then, from the new minio container, you can set the aliases and run the mirror:
docker compose up -d
source .env
docker compose exec newminio mc alias set newminio http://localhost:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}
docker compose exec newminio mc alias set oldminio http://minio:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}
docker compose exec newminio mc mb newminio/opencti
docker compose exec newminio mc mirror --overwrite --remove --retry oldminio/opencti/opencti newminio/opencti/opencti
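Before switching over, it can be worth verifying the copy; a diff between the two aliases (a sketch, using the same bucket names as above) should come back empty if everything was mirrored:
docker compose exec newminio mc diff oldminio/opencti newminio/opencti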
Then you need to disable the old minio and switch to the new one:
#  minio:
#    image: minio/minio:RELEASE.2022-05-19T18-20-59Z
#    volumes:
#      - s3data:/data
#    ports:
#      - "9000:9000"
#    environment:
#      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
#      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
#    command: server /data
#    healthcheck:
#      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
#      interval: 30s
#      timeout: 20s
#      retries: 3
#    restart: always
  minio: # this was newminio before
    image: minio/minio:RELEASE.2024-01-16T16-07-38Z
    volumes:
      - news3data:/data
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  ...
  ...
volumes:
  esdata:
  #s3data:
  redisdata:
  amqpdata:
  news3data:
docker compose up -d
If you want, and once all the data is confirmed good, you can delete the old s3 volume.
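For example (the exact volume name depends on your compose project, so check with the first command before removing anything):
docker volume ls | grep s3data
docker volume rm <project_name>_s3data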
@pierremahot
Thanks to your advice, I was able to solve the problem.
I'm managing my docker through portainer, so I solved the problem a little differently than you suggested.
First, I created a new minio service in docker-compose in Portainer (note: specifying the port caused a conflict with the existing minio, so I omitted it):
  minio_new:
    image: minio/minio:RELEASE.2024-01-16T16-07-38Z
    volumes:
      - news3data:/data
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  ...
volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:
  news3data:
Then I moved the files that existed in the existing bucket with the command below.
(Note: "local" is the existing minio alias name; you can see it with the "mc alias ls" command.)
mc mirror --remove --retry --overwrite local/opencti-bucket /var/lib/docker/volumes/opencti_news3data/_data/opencti-bucket/
The difference from your method is that I didn't specify a target alias when executing mc mirror; instead, I ran mc mirror directly against the path where the newly created minio stores its data. The reasons are as follows:
- If I create a new alias pointing at localhost:9000, it ends up linked to the existing minio data, whatever name I give it.
- Commands using minio:9000 do not work due to connectivity issues.
- An opencti-bucket already exists under the alias named local, so there is no need to create an additional alias.
After executing the above commands, I pressed the "Stop this stack" button in Portainer, changed the version and volume path of the existing minio entry, and commented out the newly created minio entry.
  minio: ########## existing minio
    image: minio/minio:RELEASE.2024-01-16T16-07-38Z ########## must change!!!
    volumes:
      - news3data:/data ########## must change!!!
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  ###### It must be commented out.
  #minio_new:
  #  image: minio/minio:RELEASE.2024-01-16T16-07-38Z
  #  volumes:
  #    - news3data:/data
  #  ports:
  #    - "9000:9000"
  #  environment:
  #    MINIO_ROOT_USER: ${MINIO_ROOT_USER}
  #    MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
  #  command: server /data
  #  healthcheck:
  #    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
  #    interval: 30s
  #    timeout: 20s
  #    retries: 3
  #  restart: always
  ...
volumes:
  esdata:
  #s3data: ###### It must be commented out.
  redisdata:
  amqpdata:
  news3data:
I did it this way because if I don't keep the existing minio service entry, the opencti container won't recognize minio.
After that, I clicked the "Start this stack" button in Portainer, and the latest version of minio runs without any errors.
However, the container shows as unhealthy, maybe because it's not the very latest version (the log doesn't say much):
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2024-01-16T16-07-38Z (go1.21.6 linux/amd64)
Status: 1 Online, 0 Offline.
S3-API: http://172.25.0.2:9000 http://127.0.0.1:9000
Console: http://172.25.0.2:41377 http://127.0.0.1:41377
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
You are running an older version of MinIO released 3 weeks before the latest release
Update: Run `mc admin update`
For now, minio is up and running, and the opencti containers are running fine, but I'll keep checking to see if there are any issues I haven't yet discovered, such as data loss.
Thank you for your detailed help.
If there is anything I've done that might be problematic, I'd appreciate any feedback.
@misohouse To explain my earlier points:
- I intentionally didn't publish a port for the new minio at first startup, to avoid a conflict on the port. An alternative would be to publish a different host port to avoid the conflict, e.g. 9001:9000. The aim was to run the commands inside the container, so there was no need to reach the port from outside the machine.
- Since the new minio service is on the same default docker compose network as the other services, docker compose's internal DNS resolution should have worked to reach minio:9000.
- The opencti bucket is normally created at startup by the opencti-platform container, not by minio, so with a fresh start of the new minio there shouldn't be an opencti bucket on it yet, since opencti points to minio and not newminio (as we can see here: https://github.com/OpenCTI-Platform/docker/blob/6020e70a14ddf53236d3e2327638ae4b6cefac1f/docker-compose.yml#L61). The relevant settings are shown in the snippet below.
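For reference, the platform's S3 configuration in that compose file looks roughly like this (exact values may differ in your deployment), which is why opencti keeps talking to the service named minio unless MINIO__ENDPOINT is changed:
  opencti:
    environment:
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}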
The thing that bothers me about the way it was done is that you wrote the files into the new minio's storage location via the OS. I've already seen differences in behaviour when using the OS instead of an S3 endpoint. For example, the OS doesn't support filenames longer than 512 bytes, whereas S3 allows 1024 bytes, which can break mirroring. Also, the files are not actually written by minio itself but by the minio client to a local path on the OS, which means minio can't do what it normally does on its own, potentially affecting metadata or other minio storage optimisations. That's why I recommended running the two services in parallel and relying solely on the S3 protocol between the two minios.
I don't really know whether your way of doing things is problematic. In the end, if all the files are accessible over S3 or via the minio console, it might be OK.
The healthcheck problem may be linked to minio needing at least 3 nodes to ensure data redundancy.
For information about health, minio may give you hints on what's going on at :9000/minio/health/live on your server.
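For example (replace <server> with your host; an HTTP 200 means the liveness probe passes):
curl -I http://<server>:9000/minio/health/live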
For more information on minio you can access the minio console. To do this you need to expose an additional port, 41377, as indicated in your logs, like below:
    ports:
      - "9000:9000"
      - "41377:41377"
Then redeploy.
The console will then be available on port 41377 of your server.
To connect to the console, use the configured credentials:
- MINIO_ROOT_USER
- MINIO_ROOT_PASSWORD
@pierremahot Thank you for your fast feedback! :)
I wanted to do what you said. Creating the new minio via docker-compose in Portainer went fine, but I ran into problems in the steps after that and couldn't finish.
Here are the errors for each step:
1. docker compose exec newminio mc alias set newminio http://localhost:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD} -> In the SSH console window I typed "mc alias set newminio http://localhost:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}", which created a new alias named newminio. -> (Problem) Maybe because I pointed it at http://localhost:9000, it seems to have the same contents as the existing alias local (opencti-bucket already exists in it).
2. docker compose exec newminio mc alias set oldminio http://minio:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD} -> I entered the corresponding command in the console to create an alias named oldminio. -> (Problem) Creating it with http://minio:9000 initially failed with an error at this point. -> Since I already had an alias named local anyway, I decided I could move on.
3. docker compose exec newminio mc mb newminio/opencti -> I executed "mc mb newminio/opencti" in the console.
4. docker compose exec newminio mc mirror --overwrite --remove --retry oldminio/opencti/opencti newminio/opencti/opencti -> I executed "mc mirror --overwrite --remove --retry local/opencti/opencti newminio/opencti/opencti" in the console. -> (Problem) An error occurred because a bucket/path named opencti/opencti does not exist under the local alias.
- (Conclusion) Since an alias named local exists, and a directory with the bucket's name exists in the new minio, I thought: why not just mc mirror from local's bucket to the new minio's bucket path?
I'm not sure if I misunderstood or if my environment is different.
I agree with you that there could be errors due to the differences between the OS and S3 that you mentioned.
I'm just going to try it as it is now and see what happens.
Is there any command I did incorrectly in 1-4 above?
Hello @misohouse
I've made some diagrams to clarify.
For newminio, when setting the alias, we can also use the service name newminio, which docker compose resolves:
docker compose exec newminio mc alias set newminio http://newminio:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}
The mc command needs to be executed inside the newminio container so that both service names, minio and newminio, can be resolved.
localhost can be confusing: if the command is executed on the server, it works because of the exposed port.
localhost on the server and localhost inside the container don't mean the same thing, except when the port is exposed.
Keep in mind that the variables are substituted by the server's shell: the source .env command temporarily adds MINIO_ROOT_USER and MINIO_ROOT_PASSWORD to the environment, and the docker compose exec commands need those variables. You can also replace them with the real values.
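To illustrate the difference (a sketch, assuming the old minio still publishes 9000:9000, the new one publishes no port, and mc is installed on the host):
# On the host, localhost:9000 reaches the OLD minio, because only its port is published:
source .env   # loads MINIO_ROOT_USER / MINIO_ROOT_PASSWORD into the current shell
mc alias set oldminio http://localhost:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
# Inside the newminio container, localhost:9000 is the NEW minio itself:
docker compose exec newminio mc alias set newminio http://localhost:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"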
The configuration of the two endpoints (newminio and oldminio) will be practically the same, but they will not target the same minio.
Also, the local alias may already be configured by default, but to be sure we can create another alias so the final commands are more explicit.
I don't know Portainer well, so it may introduce some differences.
@pierremahot In my environment, mc alias set only works if the service at the URL passed as an argument is actually running.
Therefore, commands using the URL http://minio.com:9000 could not be executed.
To make that work, I would need a service named minio.com listening on port 9000, but I'm not sure whether that should be done in Portainer.
So I treated the existing minio as oldminio and used the URL of the existing alias named local.
As I mentioned last week, the service works fine for now, but when I download an exported/imported file, I get the following error :'(
{"category":"APP","errors":[{"attributes":{"genre":"TECHNICAL","http_status":500,"referer":"http://192.168.200.121:8080/dashboard/analyses/reports/626861de-bb99-4267-b892-a5831615b632/files"},"message":"Http call interceptor fail","name":"UNKNOWN_ERROR","stack":"UNKNOWN_ERROR: Http call interceptor fail\n at error (/opt/opencti/build/src/config/errors.js:8:10)\n at UnknownError (/opt/opencti/build/src/config/errors.js:76:47)\n at fn (/opt/opencti/build/src/http/httpPlatform.js:455:18)\n at eie.handle_error (/opt/opencti/build/node_modules/express/lib/router/layer.js:71:5)\n at trim_prefix (/opt/opencti/build/node_modules/express/lib/router/index.js:326:13)\n at done (/opt/opencti/build/node_modules/express/lib/router/index.js:286:9)\n at Function.process_params (/opt/opencti/build/node_modules/express/lib/router/index.js:346:12)\n at done (/opt/opencti/build/node_modules/express/lib/router/index.js:280:10)\n at next (/opt/opencti/build/node_modules/express/lib/router/route.js:136:14)\n at /opt/opencti/build/src/http/httpPlatform.js:211:7\n at processTicksAndRejections (node:internal/process/task_queues:95:5)"},{"attributes":{"filename":"import/Report/626861de-bb99-4267-b892-a5831615b632/2023-12-22T09:13:01+09:00_626861de-bb99-4267-b892-a5831615b632.txt","genre":"BUSINESS","http_status":500,"user_id":"74d07ecc-fd2c-47eb-b13c-f3ea7e6b2768"},"message":"Load file from storage fail","name":"UNSUPPORTED_ERROR","stack":"UNSUPPORTED_ERROR: Load file from storage fail\n at error (/opt/opencti/build/src/config/errors.js:8:10)\n at UnsupportedError (/opt/opencti/build/src/config/errors.js:83:51)\n at loadFile (/opt/opencti/build/src/database/file-storage.js:205:11)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at /opt/opencti/build/src/http/httpPlatform.js:194:20"},{"message":"UnknownError","name":"NotFound","stack":"NotFound: UnknownError\n at de_NotFoundRes (/opt/opencti/build/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:6103:21)\n at de_HeadObjectCommandError (/opt/opencti/build/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:4742:19)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at /opt/opencti/build/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-signing/dist-cjs/index.js:184:18\n at /opt/opencti/build/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:97:20\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:120:14\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:33:22\n at loadFile (/opt/opencti/build/src/database/file-storage.js:169:20)\n at /opt/opencti/build/src/http/httpPlatform.js:194:20"}],"level":"error","message":"Http call interceptor fail","timestamp":"2024-02-13T01:21:52.464Z","version":"5.12.29"}
I'll try to find a solution as best I can.
I'll also take a look at the minio migration method you mentioned and try it again for my system.
An alternative solution is to mirror the files locally from the old minio. On your PC, download the minio client, then:
mc alias set minio https://<server>:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}
mc mirror --remove --retry --overwrite minio/opencti ./miniofiles
Then stop the old minio, start the new minio without data, and mirror the files back. If your new minio has the same credentials, you don't have to set up a new alias:
mc mb minio/opencti
mc mirror --remove --retry --overwrite ./miniofiles minio/opencti
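To sanity-check the copy afterwards, something like this can help (bucket name as above; adjust to yours):
du -sh ./miniofiles                     # size of the local copy
mc du minio/opencti                     # should be roughly the same after the mirror back
mc ls --recursive minio/opencti | head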
@pierremahot
I tried the method you suggested, but I'm still getting the error.
Currently I can access opencti, but when I download a file I exported, I get the error below and am sent back to the opencti dashboard page.
{"category":"APP","errors":[{"attributes":{"genre":"TECHNICAL","http_status":500,"referer":"http://192.168.200.121:8080/dashboard/analyses/reports/626861de-bb99-4267-b892-a5831615b632/files"},"message":"Http call interceptor fail","name":"UNKNOWN_ERROR","stack":"UNKNOWN_ERROR: Http call interceptor fail\n at error (/opt/opencti/build/src/config/errors.js:8:10)\n at UnknownError (/opt/opencti/build/src/config/errors.js:76:47)\n at fn (/opt/opencti/build/src/http/httpPlatform.js:455:18)\n at eie.handle_error (/opt/opencti/build/node_modules/express/lib/router/layer.js:71:5)\n at trim_prefix (/opt/opencti/build/node_modules/express/lib/router/index.js:326:13)\n at done (/opt/opencti/build/node_modules/express/lib/router/index.js:286:9)\n at Function.process_params (/opt/opencti/build/node_modules/express/lib/router/index.js:346:12)\n at done (/opt/opencti/build/node_modules/express/lib/router/index.js:280:10)\n at next (/opt/opencti/build/node_modules/express/lib/router/route.js:136:14)\n at /opt/opencti/build/src/http/httpPlatform.js:211:7\n at processTicksAndRejections (node:internal/process/task_queues:95:5)"},{"attributes":{"filename":"export/Report/626861de-bb99-4267-b892-a5831615b632/2023-12-26T05:37:46.175Z_TLP:ALL_(ExportTTPsFileNavigator)_Report-220414_Lazarus_simple.false","genre":"BUSINESS","http_status":500,"user_id":"74d07ecc-fd2c-47eb-b13c-f3ea7e6b2768"},"message":"Load file from storage fail","name":"UNSUPPORTED_ERROR","stack":"UNSUPPORTED_ERROR: Load file from storage fail\n at error (/opt/opencti/build/src/config/errors.js:8:10)\n at UnsupportedError (/opt/opencti/build/src/config/errors.js:83:51)\n at loadFile (/opt/opencti/build/src/database/file-storage.js:205:11)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at /opt/opencti/build/src/http/httpPlatform.js:194:20"},{"message":"UnknownError","name":"NotFound","stack":"NotFound: UnknownError\n at de_NotFoundRes (/opt/opencti/build/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:6103:21)\n at de_HeadObjectCommandError (/opt/opencti/build/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:4742:19)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at /opt/opencti/build/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-signing/dist-cjs/index.js:184:18\n at /opt/opencti/build/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:97:20\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:120:14\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:33:22\n at loadFile (/opt/opencti/build/src/database/file-storage.js:169:20)\n at /opt/opencti/build/src/http/httpPlatform.js:194:20"}],"level":"error","message":"Http call interceptor fail","timestamp":"2024-02-21T02:06:18.750Z","version":"5.12.29"}
I keep trying different methods, but this is the best I can do for now...
It would have been nice if the opencti manual had actual instructions on how to upgrade minio for those who have been using it for a long time.
It's too tough... :'(
Do you have access to the minio console? This could help you see what's wrong and check that the files are correct after mirroring. You could also try getting the file with the mc client to check if it's working correctly.
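For example, something along these lines (the bucket and object path are placeholders; use the filename from the error message):
mc stat minio/<bucket>/import/Report/<report-id>/<filename>     # does the object exist?
mc cp minio/<bucket>/import/Report/<report-id>/<filename> /tmp/check.txt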
I think we are close to the solution. On our side, our backup and restore process works the same way: the data comes from a backup stored in a bucket on another S3 technology, and we restore it to the minio bucket without any problem. The S3 store just needs to have the files in the right paths.
The minio migration is documented on the min.io side; we depend on them for that part. But this is purely S3 technology: you can also use any other S3 solution, opencti just needs a bucket.
@pierremahot
Oh, I solved the problem today!
I found the fix just before I left work, so I'm sorry for the late response.
Here's how I did it:
1. Run two containers in Portainer: the existing minio and a new minio.
- The new minio's port can't be published (it would conflict with port 9000 of the old minio).
(Important)
2. Give the new minio a user/password different from the old minio's. - This was my original mistake.
- I had set the credentials the same as the existing minio, so when I ran "mc alias set" I couldn't create a separate alias for the new minio.
3. Mirror the existing minio to a local folder with the mc mirror command.
- This is the method from your reply above.
- The reason I couldn't mc mirror the existing minio directly into the new minio is that Portainer can't publish port 9000 for two minios at the same time.
4. Stop the whole stack in Portainer.
5. Start the whole stack again in Portainer, but for minio only run the one new minio container.
6. Mirror the data from the local folder into the new minio with the "mc mirror" command.
I wrote it off the top of my head at home, so it's a little rambling.
I'll improve it when I get back to work next week.
Your answers have really helped me a lot. Thank you so much :)
==========================================
But there's one thing that's bothering me.
The problem is that minio is running in an "unhealthy" state.
I'm not getting any errors on the OpenCTI platform, but it's still bothering me.
When I checked the minio container logs, I saw a warning that some value (I can't remember which) is set to 0 and data loss is possible; I think it means "Erasure Coding" is not applied.
Could this be the problem?
@misohouse You can obtain more information by performing a curl on :9000/minio/health/live, which will give you information on the health state and the source of the problem. The minio console can also help you get information from its health menu. For the console to work in a predictable way, you need to change the command to set the console port and expose it as follows:
  minio:
    image: minio/minio:RELEASE.2024-01-16T16-07-38Z
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
On my local opencti stack, the health status responds 200 OK, but I got the same problem.
I have looked into it, and the issue is that the docker image no longer has curl installed (maybe due to security concerns or to reduce the image size), so the command used to get the health status fails every time.
https://github.com/minio/minio/issues/18373
harshavardhana commented on Nov 2, 2023: "Yes we moved to UBI micro that does not ship curl, you can use this instead."
I have tested this and it's working on my side:
    healthcheck:
      test: timeout 5s bash -c ':> /dev/tcp/127.0.0.1/9000' || exit 1
      interval: 5s
      retries: 1
      start_period: 5s
      timeout: 5s
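Once redeployed, you can confirm the check passes (the container name depends on your stack):
docker compose ps                                                  # STATUS should show (healthy)
docker inspect --format '{{.State.Health.Status}}' <minio_container_name>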
@pierremahot
I'm currently building a new Red Hat server, running OpenCTI on it, and using it without any issues (see the screenshot below).
Would it be okay if I don't add the healthcheck part you mentioned to the docker-compose.yml?
@misohouse Should be OK; just keep in mind that the healthcheck is what ensures the container is automatically restarted if the check fails.