cloud-sdk-docker
Attempting to re-establish stream for both 470 and 471
I am running this emulator using testcontainers. When a couple of tests run in parallel, I run into the following obscure error:
--- Logging error ---
Traceback (most recent call last):
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/logging/__init__.py", line 1086, in emit
    stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/threading.py", line 937, in _bootstrap
    self._bootstrap_inner()
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/grpc/_channel.py", line 1731, in channel_spin
    call_completed = event.tag(event)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/grpc/_channel.py", line 244, in handle_event
    callback()
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/bidi.py", line 459, in _on_call_done
    self._reopen()
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/bidi.py", line 494, in _reopen
    _LOGGER.info("Re-established stream")
Message: 'Re-established stream'
Arguments: ()
Exception in thread Thread-LeaseMaintainer:
Traceback (most recent call last):
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 76, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/grpc/_channel.py", line 1176, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/grpc/_channel.py", line 1005, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:55801: Failed to connect to remote host: Connection refused"
    debug_error_string = "UNKNOWN:Error received from peer {created_time:"2024-04-09T12:03:52.857744+02:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:55801: Failed to connect to remote host: Connection refused"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/retry/retry_unary.py", line 144, in retry_target
    result = target()
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/timeout.py", line 120, in func_with_timeout
    return func(*args, **kwargs)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 78, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.ServiceUnavailable: 503 failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:55801: Failed to connect to remote host: Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/leaser.py", line 201, in maintain_leases
    expired_ack_ids = self._manager._send_lease_modacks(
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 1047, in _send_lease_modacks
    self._dispatcher.modify_ack_deadline(items, ack_deadline)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/dispatcher.py", line 358, in modify_ack_deadline
    _, requests_to_retry = self._manager.send_unary_modack(
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 726, in send_unary_modack
    self._client.modify_ack_deadline(
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/pubsub_v1/services/subscriber/client.py", line 1487, in modify_ack_deadline
    rpc(
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py", line 131, in __call__
    return wrapped_func(*args, **kwargs)
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/retry/retry_unary.py", line 293, in retry_wrapped_func
    return retry_target(
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/retry/retry_unary.py", line 153, in retry_target
    _retry_error_helper(
  File "/Users/johndoe/miniconda3/envs/my-service/lib/python3.9/site-packages/google/api_core/retry/retry_base.py", line 221, in _retry_error_helper
    raise final_exc from source_exc
google.api_core.exceptions.RetryError: Timeout of 60.0s exceeded, last exception: 503 failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:55801: Failed to connect to remote host: Connection refused
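The final RetryError comes from api_core's unary retry wrapper: it keeps re-invoking the modack RPC while the error is considered transient, and once an overall deadline (60 s here) elapses, it re-raises wrapped around the last underlying error. A simplified, self-contained model of that shape (not the library's actual code; names and the 60 s default are illustrative):

```python
import time


class RetryDeadlineExceeded(Exception):
    """Stands in for google.api_core.exceptions.RetryError in this sketch."""


def retry_until_deadline(target, deadline=60.0, delay=0.1):
    """Keep calling `target` until it succeeds or `deadline` seconds pass.

    Transient errors (modeled here as ConnectionError) are swallowed and
    retried; once the deadline is exceeded, the last error is re-raised
    wrapped in a deadline exception, mirroring the RetryError above.
    """
    start = time.monotonic()
    last_exc = None
    while True:
        try:
            return target()
        except ConnectionError as exc:  # "transient" in this sketch
            last_exc = exc
            if time.monotonic() - start >= deadline:
                raise RetryDeadlineExceeded(
                    f"Timeout of {deadline}s exceeded, last exception: {last_exc}"
                ) from last_exc
            time.sleep(delay)
```

With the emulator's port refusing every connection, no attempt can ever succeed, so the lease-maintainer thread inevitably dies with this error after the deadline.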
It keeps attempting to re-establish the stream, and I have to manually abort the test run. I am experiencing this issue on versions 470 and 471; when reverting to 469, the issue disappears.
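The "Connection refused" on 127.0.0.1:55801 means nothing is listening on the emulator's mapped port any more at that point. A quick, hypothetical diagnostic (not part of the report) is to probe the port directly from the test process, mimicking the single TCP connect the gRPC channel is failing on without any Pub/Sub machinery:

```python
import socket


def emulator_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Purely diagnostic: attempts one connect, the same step that fails
    with "Connection refused" in the traceback above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, etc.
        return False
```

If this returns False while tests are still running, the container backing that mapped port has gone away, which would explain the endless stream re-establish attempts.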
Hello, do you still have the issue on the latest gcloud v490?
Resolving this as we have not heard back from the issue creator. Feel free to reopen or create a new issue if the problem reoccurs.