OpenCTI API is not reachable.
My problem is this: I'm trying to send data to my OpenCTI server from an external server. The data is in STIX format. But when my script tries to connect to my OpenCTI server, I get this error:
from pycti import OpenCTIApiClient
# -----MAIN------
if __name__ == "__main__":
# Variables
api_url = "https://xxx.xxxx.xxx.xxx:80/" #IP
api_token = "72327164-0b35-482b-b5d6-a5a3f76b845f" #connector_import_file_stix_id token /opencti-docker/.env
# OpenCTI initialization
opencti_api_client = OpenCTIApiClient(api_url, api_token)
Error:
INFO:root:Listing Threat-Actors with filters null.
Traceback (most recent call last):
File "Main.py", line 14, in <module>
opencti_api_client = OpenCTIApiClient(api_url, api_token)
File "/usr/local/lib/python3.8/dist-packages/pycti/api/opencti_api_client.py", line 187, in __init__
raise ValueError(
ValueError: OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...
Killed
Also, when I run sudo docker ps, I see that the taxii container is ALWAYS restarting itself. Is that normal? How can I fix it?
macia.salva@macia:/opencti-docker$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fda6c2cb6569 opencti/worker:5.3.7 "python3 worker.py" 5 minutes ago Up 2 minutes opencti-docker_worker_1
50093c606ec1 opencti/connector-import-file-stix:5.3.7 "/entrypoint.sh" 5 minutes ago Up 3 minutes opencti-docker_connector-import-file-stix_1
3b37883968b4 opencti/connector-taxii2:5.3.10 "/entrypoint.sh" 5 minutes ago Restarting (137) 4 seconds ago opencti-docker_connector-taxii2_1
1384c6b093b4 opencti/connector-export-file-csv:5.3.7 "/entrypoint.sh" 5 minutes ago Up 3 minutes opencti-docker_connector-export-file-csv_1
dd925fd8985f opencti/connector-import-document:5.3.7 "/entrypoint.sh" 5 minutes ago Up 3 minutes opencti-docker_connector-import-document_1
0e500a0a2ada opencti/connector-export-file-txt:5.3.7 "/entrypoint.sh" 5 minutes ago Up 3 minutes opencti-docker_connector-export-file-txt_1
5e47c400283b opencti/connector-export-file-stix:5.3.7 "/entrypoint.sh" 5 minutes ago Up 3 minutes opencti-docker_connector-export-file-stix_1
819b356e635a opencti/platform:5.3.7 "/sbin/tini -- node …" 5 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp opencti-docker_opencti_1
a50a31c72817 rabbitmq:3.10-management "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 15691-15692/tcp, 25672/tcp opencti-docker_rabbitmq_1
3753db773f4c docker.elastic.co/elasticsearch/elasticsearch:7.17.4 "/bin/tini -- /usr/l…" 5 minutes ago Up 5 minutes 9200/tcp, 9300/tcp opencti-docker_elasticsearch_1
18051af5bffe redis:7.0.0 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 6379/tcp opencti-docker_redis_1
b6c4d9f092a3 minio/minio:RELEASE.2022-05-19T18-20-59Z "/usr/bin/docker-ent…" 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:9000->9000/tcp opencti-docker_minio_1
Also, when I try to view logs of that container, I receive the same error:
macia.salva@macia:/opencti-docker$ sudo docker-compose logs 3b37883968b4
WARNING: Some services (worker) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
ERROR: No such service: 3b37883968b4
macia.salva@macia:/opencti-docker$ sudo docker logs 3b37883968b4
INFO:root:Listing Threat-Actors with filters null.
Traceback (most recent call last):
File "/opt/opencti-taxii2/taxii2.py", line 318, in <module>
raise e
File "/opt/opencti-taxii2/taxii2.py", line 315, in <module>
taxii2Connector = Taxii2Connector()
File "/opt/opencti-taxii2/taxii2.py", line 31, in __init__
self.helper = OpenCTIConnectorHelper(config)
File "/usr/local/lib/python3.10/site-packages/pycti/connector/opencti_connector_helper.py", line 605, in __init__
self.api = OpenCTIApiClient(
File "/usr/local/lib/python3.10/site-packages/pycti/api/opencti_api_client.py", line 187, in __init__
raise ValueError(
ValueError: OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...
Killed
I have seen this on issue 49:
restart: always
networks:
- opencti-default
networks:
opencti-default:
external:
name: opencti-default
Do you put this code in docker-compose.yml, or do you add this configuration in another .yml file?
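(Presumably it is merged into the same docker-compose.yml, as a per-service networks: entry plus a top-level networks: block; a rough sketch, not a verified fix:)

services:
  opencti:
    # ... existing configuration ...
    restart: always
    networks:
      - opencti-default            # attach the service to the named network

networks:
  opencti-default:
    external:
      name: opencti-default        # the network must already exist (docker network create opencti-default)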
When I do a sudo docker network ls I see this:
NETWORK ID NAME DRIVER SCOPE
96e44fc20fbc bridge bridge local
7d143d7f3fb4 docker_gwbridge bridge local
179fc1a9c349 host host local
khb0xa5hueuq ingress overlay swarm
9867ff3e693a none null local
b92c20916768 opencti-docker_default bridge local
My docker-compose.yml:
version: '3'
services:
redis:
image: redis:7.0.0
restart: always
volumes:
- redisdata:/data
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
volumes:
- esdata:/usr/share/elasticsearch/data
environment:
# Comment out the line below for single-node
- discovery.type=single-node
- xpack.security.enabled=false
# Uncomment line below below for a cluster of multiple nodes
#- cluster.name=docker-cluster
#- xpack.ml.enabled=false
#- "ES_JAVA_OPTS=-Xms${ELASTIC_MEMORY_SIZE} -Xmx${ELASTIC_MEMORY_SIZE}"
restart: always
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
minio:
image: minio/minio:RELEASE.2022-05-19T18-20-59Z
volumes:
- s3data:/data
ports:
- "9000:9000"
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
command: server /data
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
restart: always
rabbitmq:
image: rabbitmq:3.10-management
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
volumes:
- amqpdata:/var/lib/rabbitmq
restart: always
opencti:
image: opencti/platform:5.3.7
environment:
- NODE_OPTIONS=--max-old-space-size=8096
- APP__PORT=80
- APP__BASE_URL=${OPENCTI_BASE_URL}
- APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
- APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
- APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
- APP__APP_LOGS__LOGS_LEVEL=error
- REDIS__HOSTNAME=redis
- REDIS__PORT=6379
- ELASTICSEARCH__URL=http://elasticsearch:9200
- MINIO__ENDPOINT=minio
- MINIO__PORT=9000
- MINIO__USE_SSL=false
- MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
- MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
- RABBITMQ__HOSTNAME=rabbitmq
- RABBITMQ__PORT=5672
- RABBITMQ__PORT_MANAGEMENT=15672
- RABBITMQ__MANAGEMENT_SSL=false
- RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
- RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
- SMTP__HOSTNAME=${SMTP_HOSTNAME}
- SMTP__PORT=25
- PROVIDERS__LOCAL__STRATEGY=LocalStrategy
ports:
- "80:80"
depends_on:
- redis
- elasticsearch
- minio
- rabbitmq
restart: always
worker:
image: opencti/worker:5.3.7
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- WORKER_LOG_LEVEL=info
depends_on:
- opencti
deploy:
mode: replicated
replicas: 3
restart: always
connector-export-file-stix:
image: opencti/connector-export-file-stix:5.3.7
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID} # Valid UUIDv4
- CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
- CONNECTOR_NAME=ExportFileStix2
- CONNECTOR_SCOPE=application/json
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_LOG_LEVEL=info
restart: always
depends_on:
- opencti
connector-export-file-csv:
image: opencti/connector-export-file-csv:5.3.7
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID} # Valid UUIDv4
- CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
- CONNECTOR_NAME=ExportFileCsv
- CONNECTOR_SCOPE=text/csv
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_LOG_LEVEL=info
restart: always
depends_on:
- opencti
connector-export-file-txt:
image: opencti/connector-export-file-txt:5.3.7
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_TXT_ID} # Valid UUIDv4
- CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
- CONNECTOR_NAME=ExportFileTxt
- CONNECTOR_SCOPE=text/plain
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_LOG_LEVEL=info
restart: always
depends_on:
- opencti
connector-import-file-stix:
image: opencti/connector-import-file-stix:5.3.7
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID} # Valid UUIDv4
- CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
- CONNECTOR_NAME=ImportFileStix
- CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
- CONNECTOR_SCOPE=application/json,text/xml
- CONNECTOR_AUTO=true # Enable/disable auto-import of file
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_LOG_LEVEL=info
restart: always
depends_on:
- opencti
connector-import-document:
image: opencti/connector-import-document:5.3.7
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- CONNECTOR_ID=${CONNECTOR_IMPORT_DOCUMENT_ID} # Valid UUIDv4
- CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
- CONNECTOR_NAME=ImportDocument
- CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
- CONNECTOR_SCOPE=application/pdf,text/plain,text/html
- CONNECTOR_AUTO=true # Enable/disable auto-import of file
- CONNECTOR_ONLY_CONTEXTUAL=false # Only extract data related to an entity (a report, a threat actor, etc.)
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_LOG_LEVEL=info
- IMPORT_DOCUMENT_CREATE_INDICATOR=true
restart: always
depends_on:
- opencti
connector-taxii2:
image: opencti/connector-taxii2:5.3.10
environment:
- OPENCTI_URL=http://opencti:80
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- CONNECTOR_ID=e32fbdbe-5a84-4da3-956b-b72522b6c2bf
- CONNECTOR_TYPE=EXTERNAL_IMPORT
- CONNECTOR_NAME=TAXII2
- CONNECTOR_SCOPE=ipv4-addr,ipv6-addr,vulnerability,domain,url,file-sha256,file-md5,file-sha1
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_UPDATE_EXISTING_DATA=true
- CONNECTOR_LOG_LEVEL=debug
- TAXII2_DISCOVERY_URL=http://opencti:8080/taxii2/api-bases/ # Required
#- TAXII2_CERT_PATH=ChangeMe # Optional (.pem)
- TAXII2_USERNAME=prueba # Required
- TAXII2_PASSWORD=prueba
- TAXII2_V21=true # Is TAXII v2.1
- TAXII2_COLLECTIONS=*.* # Required
- TAXII2_INITIAL_HISTORY=24 # Required, in hours
- TAXII2_INTERVAL=100 # Required, in hours
- TAXII2_VERIFY_SSL=true
- TAXII2_CREATE_INDICATORS=true # Generate indicators for ingested observables
- TAXII2_CREATE_OBSERVABLES=true # Generate observables for ingested indicators
restart: always
depends_on:
- opencti
volumes:
esdata:
s3data:
redisdata:
amqpdata:
Regards!
I have the same issue too... I'm using Portainer to manage the OpenCTI containers and, except for connector-taxii2, my docker-compose.yml is the same as yours. Also, I'm using the elasticsearch:8.4.2 image, the redis:7.0.5 image, and version 5.3.16 of the opencti/platform, opencti/worker and connector images.
hey @MaciaKing and @salvodt97
- TAXII2_DISCOVERY_URL=http://opencti:8080/taxii2/api-bases/ # Required
Is opencti reachable on port 8080? This seems more like a networking issue, since the other connectors seem to work.
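A minimal sketch of that alignment, assuming the platform keeps APP__PORT=80 as in the compose file above (the /taxii2/api-bases/ path is copied from the original and not verified here):

      # connector-taxii2 environment, sketch only
      - OPENCTI_URL=http://opencti:80                              # already matches APP__PORT=80
      - TAXII2_DISCOVERY_URL=http://opencti:80/taxii2/api-bases/   # was :8080; align with APP__PORT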
Regards
same issue here
Figured it out: your connector version numbers have to match that of opencti. In this case your taxii2 connector should be v5.3.7:
image: opencti/connector-taxii2:5.3.10 should be image: opencti/connector-taxii2:5.3.7
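One way to keep them in lockstep (just a sketch; the OPENCTI_VERSION variable is not part of the stock .env) is to define the version once and reference it everywhere:

  # Hypothetical: .env contains OPENCTI_VERSION=5.3.7
  opencti:
    image: opencti/platform:${OPENCTI_VERSION}
  worker:
    image: opencti/worker:${OPENCTI_VERSION}
  connector-taxii2:
    image: opencti/connector-taxii2:${OPENCTI_VERSION}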
same issue here
Check your version numbers so they all match
@bigverm23 all 5.3.11, also tried 5.4.1
share your compose. That was my exact issue.
---
version: '3'
services:
redis:
image: redis:7.0.5
restart: always
volumes:
- redisdata:/data
networks:
- opencti-default
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.4.1
volumes:
- esdata:/usr/share/elasticsearch/data
environment:
- discovery.type=single-node
- xpack.ml.enabled=false
- xpack.security.enabled=true
- "ES_JAVA_OPTS=-Xms${ELASTIC_MEMORY_SIZE} -Xmx${ELASTIC_MEMORY_SIZE}"
- ELASTIC_USERNAME=${ELASTIC_USERNAME}
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
ports:
- 9200:9200
- 9300:9300
restart: always
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
networks:
- opencti-default
minio:
image: minio/minio:RELEASE.2022-08-26T19-53-15Z
volumes:
- s3data:/data
ports:
- "9000:9000"
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
command: server /data
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
restart: always
networks:
- opencti-default
rabbitmq:
image: rabbitmq:3.10-management
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
volumes:
- amqpdata:/var/lib/rabbitmq
ports:
- 5673:5672
- 15673:15672
restart: always
networks:
- opencti-default
test_debug:
image: opencti/platform:5.4.1
environment:
- NODE_OPTIONS=--max-old-space-size=8096
- APP__PORT=8080
- APP__BASE_URL=${OPENCTI_BASE_URL}
- APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
- APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
- APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
- APP__APP_LOGS__LOGS_LEVEL=error
- REDIS__HOSTNAME=redis
- REDIS__PORT=6379
- ELASTICSEARCH__URL=http://${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}@elasticsearch:9200
entrypoint: /bin/sh
tty: true
networks:
- opencti-default
opencti:
image: opencti/platform:5.4.1
environment:
- NODE_OPTIONS=--max-old-space-size=8096
- APP__PORT=8080
- APP__BASE_URL=${OPENCTI_BASE_URL}
- APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
- APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
- APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
- APP__APP_LOGS__LOGS_LEVEL=error
- REDIS__HOSTNAME=redis
- REDIS__PORT=6379
- ELASTICSEARCH__URL=http://${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}@elasticsearch:9200
- MINIO__ENDPOINT=minio
- MINIO__PORT=9000
- MINIO__USE_SSL=false
- MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
- MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
- RABBITMQ__HOSTNAME=rabbitmq
- RABBITMQ__PORT=5672
- RABBITMQ__PORT_MANAGEMENT=15672
- RABBITMQ__MANAGEMENT_SSL=false
- RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
- RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
- SMTP__HOSTNAME=${SMTP_HOSTNAME}
- SMTP__PORT=25
- PROVIDERS__LOCAL__STRATEGY=LocalStrategy
volumes:
- amqpdata:/var/lib/rabbitmq
ports:
- "9080:9080"
depends_on:
- redis
- elasticsearch
- minio
networks:
- opencti-default
restart: always
worker:
image: opencti/worker:5.4.1
environment:
- OPENCTI_URL=${OPENCTI_URL}
- OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
- WORKER_LOG_LEVEL=info
depends_on:
- opencti
deploy:
mode: replicated
replicas: 1
restart: always
networks:
- opencti-default
volumes:
esdata:
s3data:
redisdata:
amqpdata:
networks:
opencti-default:
name: opencti-default
I created an additional service: test_debug
In that container, I am able to curl http://elasticsearch:9200
But opencti still gets "OpenCTI API is not reachable".
Change

  ports:
    - "9080:9080"

to

  ports:
    - "9080:8080"

You don't need the networks declarations either, since all the services are on the default network anyway. Also, I don't believe you need any ports exposed for rabbitmq.
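So the opencti service would look roughly like this (a sketch based on your compose above, other variables unchanged):

  opencti:
    image: opencti/platform:5.4.1
    environment:
      - APP__PORT=8080              # port the platform listens on inside the container
      # ... rest of the environment unchanged ...
    ports:
      - "9080:8080"                 # host 9080 -> container 8080; right-hand side must match APP__PORT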
You are viewing logs of other containers (connector-taxii2) complaining about OpenCTI being unavailable. How I debugged:
- Start by viewing its logs:
docker compose logs opencti
{
"category": "APP",
"errors": [
{
"attributes": { "genre": "TECHNICAL", "http_status": 500 },
"message": "Search engine seems down",
"name": "CONFIGURATION_ERROR",
"stack": "CONFIGURATION_ERROR: Search engine seems down\n at error (/opt/opencti/build/src/config/errors.js:8:10)\n at ConfigurationError (/opt/opencti/build/src/config/errors.js:64:53)\n at /opt/opencti/build/src/database/engine.js:211:15\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at searchEngineVersion (/opt/opencti/build/src/database/engine.js:207:22)\n at searchEngineInit (/opt/opencti/build/src/database/engine.js:254:27)\n at checkSystemDependencies (/opt/opencti/build/src/initialization.js:127:3)\n at platformStart (/opt/opencti/build/src/boot.js:228:5)"
},
{
"message": "getaddrinfo ENOTFOUND elasticsearch",
"name": "ConnectionError",
"stack": "ConnectionError: getaddrinfo ENOTFOUND elasticsearch\n at ClientRequest.onError (/opt/opencti/build/node_modules/@opensearch-project/opensearch/lib/Connection.js:129:16)\n at ClientRequest.emit (node:events:518:28)\n at Socket.socketErrorListener (node:_http_client:495:9)\n at Socket.emit (node:events:518:28)\n at emitErrorNT (node:internal/streams/destroy:169:8)\n at emitErrorCloseNT (node:internal/streams/destroy:128:3)\n at processTicksAndRejections (node:internal/process/task_queues:82:21)"
}
],
"level": "error",
"message": "Search engine seems down",
"timestamp": "2024-01-20T17:20:24.238Z",
"version": "5.12.20"
}
- Our docker setup automagically creates DNS entries for containers (used with http://elasticsearch:9200). getaddrinfo ENOTFOUND says the DNS entry does not exist. Possible causes:
  - Service is not defined. This is not the case: copy-pasting (ensuring no typos), we can see the elasticsearch service is in fact defined. Additionally, docker compose ps shows it existing (under SERVICE).
  - The service needs to be up for the DNS name to exist. docker compose ps showed us that elasticsearch is "Restarting (1) 13 seconds ago", and docker compose logs elasticsearch makes it blatantly clear.
  - Docker itself is malfunctioning or misconfigured.
- 4294967296 bytes can't be reserved in a virtual machine with 4G of RAM. This was the root cause of my problem.
- The memory requirement can be changed from the elastic environment. This setup uses an abstraction, ELASTIC_MEMORY_SIZE, inside .env. Adjusting the claim helps, e.g. as sketched below.
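For example, on a 4 GB VM the Elasticsearch heap claim could be lowered through that variable (a sketch; the exact value depends on what else runs on the host):

  # Hypothetical .env value: ELASTIC_MEMORY_SIZE=2g
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.1
    environment:
      - discovery.type=single-node
      # With 2g the JVM asks for 2 GiB instead of the 4 GiB (4294967296 bytes) it could not reserve,
      # leaving headroom for the OS and the other containers.
      - "ES_JAVA_OPTS=-Xms${ELASTIC_MEMORY_SIZE} -Xmx${ELASTIC_MEMORY_SIZE}"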