Orion: Running flows in Docker results in an error
Description
I ran into an error when I tried to run the example flow in Docker. The flow:
from prefect import flow
from prefect.deployments import DeploymentSpec
from prefect.flow_runners import DockerFlowRunner

@flow
def my_flow():
    print("Hello from Docker!")

DeploymentSpec(
    name="example",
    flow=my_flow,
    flow_runner=DockerFlowRunner(),
)
The error I received after creating the deployment, work queue, and agent:
prefect agent start '5d76cc30-165e-4086-af9e-5fec6b4d7614'
/home/ubuntu/.local/lib/python3.8/site-packages/prefect/context.py:360: UserWarning: Temporary environment is overriding key(s): PREFECT_API_URL
with temporary_environ(
Starting agent connected to http://127.0.0.1:4200/api...
___ ___ ___ ___ ___ ___ _____ _ ___ ___ _ _ _____
| _ \ _ \ __| __| __/ __|_ _| /_\ / __| __| \| |_ _|
| _/ / _|| _|| _| (__ | | / _ \ (_ | _|| .` | | |
|_| |_|_\___|_| |___\___| |_| /_/ \_\___|___|_|\_| |_|
Agent started!
12:28:38.236 | INFO | prefect.agent - Submitting flow run '1be26278-ed55-4ed5-9dab-5be873047c5e'
12:28:38.287 | INFO | prefect.flow_runner.docker - Flow run 'crystal-waxbill' has container settings = {'image': 'prefecthq/prefect:2.0a13-python3.8', 'network': None, 'command': ['python', '-m', 'prefect.engine', '1be26278-ed55-4ed5-9dab-5be873047c5e'], 'environment': {'PREFECT_API_URL': 'http://host.docker.internal:4200/api'}, 'auto_remove': False, 'labels': {'io.prefect.flow-run-id': '1be26278-ed55-4ed5-9dab-5be873047c5e'}, 'extra_hosts': {'host.docker.internal': 'host-gateway'}, 'name': 'crystal-waxbill', 'volumes': []}
12:28:38.588 | INFO | prefect.agent - Completed submission of flow run '1be26278-ed55-4ed5-9dab-5be873047c5e'
12:28:38.599 | INFO | prefect.flow_runner.docker - Flow run container 'crystal-waxbill' has status 'running'
12:28:39.913 | ERROR | prefect.engine - Engine execution of flow run '1be26278-ed55-4ed5-9dab-5be873047c5e' exited with unexpected exception
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/anyio/_core/_sockets.py", line 127, in try_connect
stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 1518, in connect_tcp
await get_running_loop().create_connection(StreamProtocol, host, port,
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 1025, in create_connection
raise exceptions[0]
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 1010, in create_connection
sock = await self._connect_sock(
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 924, in _connect_sock
await self.sock_connect(sock, address)
File "/usr/local/lib/python3.8/asyncio/selector_events.py", line 496, in sock_connect
return await fut
File "/usr/local/lib/python3.8/asyncio/selector_events.py", line 528, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('172.17.0.1', 4200)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/httpcore/_exceptions.py", line 8, in map_exceptions
yield
File "/usr/local/lib/python3.8/site-packages/httpcore/backends/asyncio.py", line 101, in connect_tcp
stream: anyio.abc.ByteStream = await anyio.connect_tcp(
File "/usr/local/lib/python3.8/site-packages/anyio/_core/_sockets.py", line 184, in connect_tcp
raise OSError('All connection attempts failed') from cause
OSError: All connection attempts failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "/usr/local/lib/python3.8/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/usr/local/lib/python3.8/site-packages/httpcore/_async/connection_pool.py", line 253, in handle_async_request
raise exc
File "/usr/local/lib/python3.8/site-packages/httpcore/_async/connection_pool.py", line 237, in handle_async_request
response = await connection.handle_async_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_async/connection.py", line 86, in handle_async_request
raise exc
File "/usr/local/lib/python3.8/site-packages/httpcore/_async/connection.py", line 63, in handle_async_request
stream = await self._connect(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_async/connection.py", line 111, in _connect
stream = await self._network_backend.connect_tcp(**kwargs)
File "/usr/local/lib/python3.8/site-packages/httpcore/backends/auto.py", line 23, in connect_tcp
return await self._backend.connect_tcp(
File "/usr/local/lib/python3.8/site-packages/httpcore/backends/asyncio.py", line 101, in connect_tcp
stream: anyio.abc.ByteStream = await anyio.connect_tcp(
File "/usr/local/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.8/site-packages/httpcore/_exceptions.py", line 12, in map_exceptions
raise to_exc(exc)
httpcore.ConnectError: All connection attempts failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/prefect/engine.py", line 951, in <module>
enter_flow_run_engine_from_subprocess(flow_run_id)
File "/usr/local/lib/python3.8/site-packages/prefect/engine.py", line 134, in enter_flow_run_engine_from_subprocess
return anyio.run(retrieve_flow_then_begin_flow_run, flow_run_id)
File "/usr/local/lib/python3.8/site-packages/anyio/_core/_eventloop.py", line 56, in run
return asynclib.run(func, *args, **backend_options)
File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 233, in run
return native_run(wrapper(), debug=debug)
File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 228, in wrapper
return await func(*args)
File "/usr/local/lib/python3.8/site-packages/prefect/client.py", line 81, in with_injected_client
return await fn(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/prefect/engine.py", line 194, in retrieve_flow_then_begin_flow_run
flow_run = await client.read_flow_run(flow_run_id)
File "/usr/local/lib/python3.8/site-packages/prefect/client.py", line 1132, in read_flow_run
response = await self.get(f"/flow_runs/{flow_run_id}")
File "/usr/local/lib/python3.8/site-packages/prefect/client.py", line 333, in get
response = await self._client.get(route, **kwargs)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1729, in get
return await self.request(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1506, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1593, in send
response = await self._send_handling_auth(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1621, in _send_handling_auth
response = await self._send_handling_redirects(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1658, in _send_handling_redirects
response = await self._send_single_request(request)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1695, in _send_single_request
response = await transport.handle_async_request(request)
File "/usr/local/lib/python3.8/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/usr/local/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.8/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: All connection attempts failed
12:28:40.307 | INFO | prefect.flow_runner.docker - Flow run container 'crystal-waxbill' has status 'exited'
The status of the flow run after the error changes to Pending.
Reproduction / Example
prefect orion start
prefect deployment create ./example-deployment.py
prefect deployment run my-flow/example
prefect deployment inspect my-flow/example
prefect work-queue create -d <DEPLOYMENT-ID> test
prefect agent start <WORK-QUEUE-ID>
Hi! Thanks for the well-written issue.
We're tracking an update to the tutorial internally at https://github.com/PrefectHQ/orion/issues/1123. This is similar to https://github.com/PrefectHQ/prefect/issues/4963 and https://github.com/PrefectHQ/prefect/pull/5182.
The issue here is that requests from the container are being refused because the API is bound to 127.0.0.1 instead of 0.0.0.0. If you run prefect orion start --host 0.0.0.0, the container will be able to reach the API. This is a difference in container networking on Linux (it is not an issue on macOS).
We solved this in V1 of Prefect by placing the container in a shared network with the server, but the server is not running in a container here, so we cannot apply the same fix. I'm not sure what the long-term solution is; it would be nice to be able to use this without exposing your API via 0.0.0.0.
I believe you can also bind to the Docker host IP 172.17.0.1, which will allow connections from containers but not from other sources.
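For illustration, here is a minimal sketch of that second option applied to the example flow above. It assumes the alpha-era DockerFlowRunner accepts an env mapping and that 172.17.0.1 is your default bridge gateway; both are assumptions to verify against your installation.

from prefect import flow
from prefect.deployments import DeploymentSpec
from prefect.flow_runners import DockerFlowRunner

@flow
def my_flow():
    print("Hello from Docker!")

# Sketch only: start the server with `prefect orion start --host 172.17.0.1`
# (or 0.0.0.0) and point the containerized run at the bridge gateway.
DeploymentSpec(
    name="example",
    flow=my_flow,
    flow_runner=DockerFlowRunner(
        # Assumed parameter: env vars passed through to the flow run container.
        env={"PREFECT_API_URL": "http://172.17.0.1:4200/api"},
    ),
)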
@madkinsz I'm following the tutorial and running the Orion server as you mentioned with prefect orion start --host 0.0.0.0, but I'm still getting the error:
14:18:15.232 | ERROR | prefect.engine - Engine execution of flow run 'b6dbd47e-07ea-4207-bdb6-fad3dbc8cfb3' exited with unexpected exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/anyio/_core/_sockets.py", line 164, in try_connect
stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 1691, in connect_tcp
await get_running_loop().create_connection(
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 1065, in create_connection
raise exceptions[0]
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 1050, in create_connection
sock = await self._connect_sock(
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 961, in _connect_sock
await self.sock_connect(sock, address)
File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 500, in sock_connect
return await fut
File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 535, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 4200)
Are there any updates on this issue? I'm testing Prefect at my company, but if this simple tutorial doesn't work, it makes me doubt the stability of the project. Also, the issue you referenced does not exist (or it is a private repo).
@manugarri Can you please share the output of prefect version? It'd also be great to see a docker inspect of the failed container.
Of course! The version is 2.6.1; here is the verbose output:
$ prefect version
Version: 2.6.1
API version: 0.8.2
Python version: 3.9.12
Git commit: d9dd4443
Built: Fri, Oct 14, 2022 2:01 PM
OS/Arch: linux/x86_64
Profile: local
Server type: hosted
Great, thanks! @tpdorsey, has the tutorial been tested on Linux or just macOS?
@manugarri If you can include the details about the container, that'd be great. Specifically, I'm interested in whether or not it is using a "host" network mode.
@madkinsz Not sure how to check that; I'm just following the Prefect tutorial and set up the infrastructure block as defined there.
I'm running this on my local machine, which is running Ubuntu under WSL2, with Docker Desktop.
I don't think the container even gets built before the agent throws the exception. Running docker ps shows no Prefect container running. I do see the Prefect image, so at least the build process started somehow.
@madkinsz I have not tested the tutorial in a Linux environment, nor has it been tested under Ubuntu running in WSL2, which is a different case.
@manugarri Do you see the container with docker container ls --all? I do not believe ps will show stopped containers.
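If it helps, here is a small sketch for pulling just the network mode out of an exited container, assuming the docker Python SDK (docker-py) is available; the container name is a placeholder to replace with the one shown by docker container ls --all.

import docker

# Connect to the local Docker daemon using the environment's settings.
client = docker.from_env()
# Placeholder name: use the flow run container name from the agent logs.
container = client.containers.get("my-flow-run-container")
# attrs holds the same data as `docker inspect`; print only the network mode.
print(container.attrs["HostConfig"]["NetworkMode"])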
@madkinsz You are right! I see the container. Here is the output of docker inspect:
[
{
"Id": "9fad2ea5d8b4312f727db97b4380e2920fa08183f5b7c4ba99bb8b5fa5482776",
"Created": "2022-10-19T14:18:09.67233962Z",
"Path": "/usr/bin/tini",
"Args": [
"-g",
"--",
"/opt/prefect/entrypoint.sh",
"python",
"-m",
"prefect.engine"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "",
"StartedAt": "2022-10-19T14:18:09.83037895Z",
"FinishedAt": "2022-10-19T14:18:15.3644034Z"
},
"Image": "sha256:4c52ba801ea01a497765756c9f0b8374eb44d7836e8efc21403a6646db01bc35",
"ResolvConfPath": "/var/lib/docker/containers/9fad2ea5d8b4312f727db97b4380e2920fa08183f5b7c4ba99bb8b5fa5482776/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9fad2ea5d8b4312f727db97b4380e2920fa08183f5b7c4ba99bb8b5fa5482776/hostname",
"HostsPath": "/var/lib/docker/containers/9fad2ea5d8b4312f727db97b4380e2920fa08183f5b7c4ba99bb8b5fa5482776/hosts",
"LogPath": "/var/lib/docker/containers/9fad2ea5d8b4312f727db97b4380e2920fa08183f5b7c4ba99bb8b5fa5482776/9fad2ea5d8b4312f727db97b4380e2920fa08183f5b7c4ba99bb8b5fa5482776-json.log",
"Name": "/delectable-sambar",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": null,
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": [
"host.docker.internal:host-gateway"
],
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5fee4621d57fdd273831768ab9c5979e1bf03a694e6e361f29685c27d90619a2-init/diff:/var/lib/docker/overlay2/d07ce534552c3e5a93c79ca3d928263a401779f1c14b91400d08429753a0437b/diff:/var/lib/docker/overlay2/e86e31dd974505033277f4bd40b9aa1c48cf190144a969c6d34292d394666967/diff:/var/lib/docker/overlay2/e18aa2c10cc55a8ef09345923f09aa0cbb7c1651bec9a40b9b206ed1f79c21e7/diff:/var/lib/docker/overlay2/fb9439c9a6ba6e0797d2c214b0e4c19a9efa87fd1c82aa601ded028dd8821a4a/diff:/var/lib/docker/overlay2/b1dbd01e775d0be26f46f37cf73f8c03f11fa48eab0e308b2055175455dce8a3/diff:/var/lib/docker/overlay2/846e23022ae84e6f438a82cad523da7dc72dd73d6c19e01f2b925889a0407486/diff:/var/lib/docker/overlay2/f44b1aa8c0a30baee02d3260c88c743372e094c4ba78c0d0acbf5fc1e6c1d8b4/diff:/var/lib/docker/overlay2/97cbe5afa044dc2e013ba0301f153c9f2981c7eef6eb516c0492640580a8adf8/diff:/var/lib/docker/overlay2/08f8e48bb9b36898a4ada76186bd19d52e8227191a43ebb884e23ce04eca500e/diff:/var/lib/docker/overlay2/ab93d3e58b927a43f94ba734931915222c3359f00779b03020608ef4d4856126/diff:/var/lib/docker/overlay2/e0e725e2eba6a692327c0c75108bb24e6dc3082fcd67f31a4930c766599a5f5a/diff:/var/lib/docker/overlay2/86f2b9e3fe58c9e4781e5e80b1e8702fa3a92cb428d9e84ed8f05fe09474ed09/diff:/var/lib/docker/overlay2/ccdd715d825e529afe1f699522c57242680bc7eae4efcd69171d5e6b38858d67/diff",
"MergedDir": "/var/lib/docker/overlay2/5fee4621d57fdd273831768ab9c5979e1bf03a694e6e361f29685c27d90619a2/merged",
"UpperDir": "/var/lib/docker/overlay2/5fee4621d57fdd273831768ab9c5979e1bf03a694e6e361f29685c27d90619a2/diff",
"WorkDir": "/var/lib/docker/overlay2/5fee4621d57fdd273831768ab9c5979e1bf03a694e6e361f29685c27d90619a2/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "docker-desktop",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PREFECT_API_URL=http://127.0.0.1:4200/api",
"PREFECT__FLOW_RUN_ID=b6dbd47e07ea4207bdb6fad3dbc8cfb3",
"EXTRA_PIP_PACKAGES=s3fs",
"PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
"PYTHON_VERSION=3.9.15",
"PYTHON_PIP_VERSION=22.0.4",
"PYTHON_SETUPTOOLS_VERSION=58.1.0",
"PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/5eaac1050023df1f5c98b173b248c260023f2278/public/get-pip.py",
"PYTHON_GET_PIP_SHA256=5aefe6ade911d997af080b315ebcb7f882212d070465df544e1175ac2be519b4",
"LC_ALL=C.UTF-8"
],
"Cmd": [
"python",
"-m",
"prefect.engine"
],
"Image": "prefecthq/prefect:2.6.1-python3.9",
"Volumes": null,
"WorkingDir": "/opt/prefect",
"Entrypoint": [
"/usr/bin/tini",
"-g",
"--",
"/opt/prefect/entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"desktop.docker.io/wsl-distro": "Ubuntu-20.04",
"io.prefect.python-version": "3.9.15",
"io.prefect.version": "2.6.1",
"maintainer": "[email protected]",
"org.label-schema.name": "prefect",
"org.label-schema.schema-version": "= 1.0",
"org.label-schema.url": "https://www.prefect.io/",
"org.opencontainers.image.created": "2022-10-14T19:03:23.069Z",
"org.opencontainers.image.description": "The easiest way to coordinate your dataflow",
"org.opencontainers.image.licenses": "Apache-2.0",
"org.opencontainers.image.revision": "d9dd4443777f7f01a9e16c662feeade1c6092b5a",
"org.opencontainers.image.source": "https://github.com/PrefectHQ/prefect",
"org.opencontainers.image.title": "prefect",
"org.opencontainers.image.url": "https://github.com/PrefectHQ/prefect",
"org.opencontainers.image.version": "2.6.1-python3.9",
"prefect.io/flow-run-id": "b6dbd47e-07ea-4207-bdb6-fad3dbc8cfb3",
"prefect.io/flow-run-name": "delectable-sambar",
"prefect.io/version": "2.6.1"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "764885f8fee77d7ec64db61f3ae281bd247f2d845dbdedc93a7b645e8086150b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "dab0a4e28d335a96a021c9a598097b17f79debd160eae6cd8394100d34688542",
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
]
Great, thanks! Here it looks like we are correctly setting the network mode and host gateway:
"NetworkMode": "host",
...
"ExtraHosts": [
"host.docker.internal:host-gateway"
],
so it should be able to communicate with your API on localhost. Can you also share the output of docker version? Perhaps the requirements for networking on Linux have changed, or you are on a version that does not support the "host" network.
Sure!
$ docker version
Client: Docker Engine - Community
Cloud integration: v1.0.23
Version: 20.10.14
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 24 01:48:21 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Desktop
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 01:46:14 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
That version looks good; I'm pretty confused, as we test containers on Linux in CI.
Would you try setting network_mode to "bridge" in your Docker infrastructure block?
Thanks for your patience!
That worked, thanks a ton! Why is that, if I may ask?
By default, we use the "host" network mode on Linux when we detect that you're running an API on localhost, because it makes it easier for users to get started. In this mode, your container can access anything on the network outside the container. Because of this, we do not change your API URL; it should just work. It looks like something has gone wrong with that, but we haven't gotten other reports of it and it's working fine in our tests, so it will take some investigation to determine what.
If you use the "bridge" network mode, the container is isolated and cannot talk to localhost. When we detect use of "bridge" with a local API, we need to convert your API URL to use "host.docker.internal", which lets it communicate outside of the container. This is a little more complicated, so we don't do it by default when the "host" mode is available.
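For reference, a minimal sketch of that workaround in code for 2.6.x; the block name "docker-bridge" is arbitrary, and the network_mode field should be checked against your installed version.

from prefect.infrastructure import DockerContainer

# Save a Docker infrastructure block that uses the bridge network; with a
# local API, Prefect then rewrites PREFECT_API_URL to host.docker.internal.
docker_block = DockerContainer(network_mode="bridge")
docker_block.save("docker-bridge", overwrite=True)

Deployments that reference this saved block would then run their containers in bridge mode.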
This issue is stale because it has been open 30 days with no activity. To keep this issue open remove stale label or comment.
This issue was closed because it has been stale for 14 days with no activity. If this issue is important or you have more to add feel free to re-open it.