No fallback to IPv4 when IPv6 unreachable
Describe the bug
awscli does not fall back to IPv4 when it is unable to connect to an IPv6 API endpoint.
Regression Issue
- [ ] Select this option if this issue appears to be a regression.
Expected Behavior
Upon executing aws s3 ls, awscli should fall back to IPv4 (the legacy IP protocol) if the AWS S3 endpoint is unreachable over IPv6.
Current Behavior
aws s3 ls hangs indefinitely.
[ec2-user@ip-10-250-0-166 ~]$ aws --version
aws-cli/2.23.11 Python/3.9.21 Linux/6.1.131-143.221.amzn2023.x86_64 source/x86_64.amzn.2023
[ec2-user@ip-10-250-0-166 ~]$
[ec2-user@ip-10-250-0-166 ~]$ aws s3 ls --debug
2025-04-13 17:18:25,875 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.23.11 Python/3.9.21 Linux/6.1.131-143.221.amzn2023.x86_64 source/x86_64.amzn.2023
2025-04-13 17:18:25,875 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3', 'ls', '--debug']
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_s3 at 0x7fa0b4e03550>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_ddb at 0x7fa0b4fb7e50>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.configure.configure.ConfigureCommand'>>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x7fa0b509f3a0>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x7fa0b50a85e0>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function alias_opsworks_cm at 0x7fa0b4e1a790>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_history_commands at 0x7fa0b4f83c10>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.devcommands.CLIDevCommand'>>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_waiters at 0x7fa0b4e12940>
2025-04-13 17:18:25,884 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x7fa0b4d4c520>>
2025-04-13 17:18:25,884 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.9/site-packages/awscli/data/cli.json
2025-04-13 17:18:25,886 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_types at 0x7fa0b4ec8700>
2025-04-13 17:18:25,886 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function no_sign_request at 0x7fa0b4ec89d0>
2025-04-13 17:18:25,886 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_verify_ssl at 0x7fa0b4ec8940>
2025-04-13 17:18:25,886 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_read_timeout at 0x7fa0b4ec8af0>
2025-04-13 17:18:25,886 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_connect_timeout at 0x7fa0b4ec8a60>
2025-04-13 17:18:25,886 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <built-in method update of dict object at 0x7fa0b4d59f00>
2025-04-13 17:18:25,887 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.23.11 Python/3.9.21 Linux/6.1.131-143.221.amzn2023.x86_64 source/x86_64.amzn.2023
2025-04-13 17:18:25,887 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3', 'ls', '--debug']
2025-04-13 17:18:25,887 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_timestamp_parser at 0x7fa0b4e03c10>
2025-04-13 17:18:25,887 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x7fa0b6d16d30>
2025-04-13 17:18:25,888 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_binary_formatter at 0x7fa0b4d94b80>
2025-04-13 17:18:25,888 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function no_pager_handler at 0x7fa0b557cb80>
2025-04-13 17:18:25,888 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x7fa0b52794c0>
2025-04-13 17:18:25,890 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/
2025-04-13 17:18:25,891 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x7fa0b4f83af0>
2025-04-13 17:18:25,892 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_json_file_cache at 0x7fa0b5024b80>
2025-04-13 17:18:25,892 - MainThread - botocore.hooks - DEBUG - Event building-command-table.s3: calling handler <function add_waiters at 0x7fa0b4e12940>
2025-04-13 17:18:25,892 - MainThread - botocore.hooks - DEBUG - Event building-command-table.s3: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x7fa0b4d4c520>>
2025-04-13 17:18:25,893 - MainThread - botocore.hooks - DEBUG - Event building-command-table.s3_ls: calling handler <function add_waiters at 0x7fa0b4e12940>
2025-04-13 17:18:25,893 - MainThread - botocore.hooks - DEBUG - Event building-command-table.s3_ls: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x7fa0b4d4c520>>
2025-04-13 17:18:25,893 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.paths: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,893 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.anonymous: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,893 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.page-size: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.human-readable: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event process-cli-arg.custom.ls: calling handler <awscli.argprocess.ParamShorthandParser object at 0x7fa0b7703880>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.summarize: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event process-cli-arg.custom.ls: calling handler <awscli.argprocess.ParamShorthandParser object at 0x7fa0b7703880>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.request-payer: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.bucket-name-prefix: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,894 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.ls.bucket-region: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7fa0b42b1070>
2025-04-13 17:18:25,894 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/
2025-04-13 17:18:25,896 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): 169.254.169.254:80
2025-04-13 17:18:25,897 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "PUT /latest/api/token HTTP/1.1" 200 56
2025-04-13 17:18:25,898 - MainThread - urllib3.connectionpool - DEBUG - Resetting dropped connection: 169.254.169.254
2025-04-13 17:18:25,900 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "GET /latest/meta-data/placement/availability-zone/ HTTP/1.1" 200 10
2025-04-13 17:18:25,900 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role-with-web-identity
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: sso
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: custom-process
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: config-file
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: ec2-credentials-file
2025-04-13 17:18:25,901 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: boto-config
2025-04-13 17:18:25,902 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: container-role
2025-04-13 17:18:25,902 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: iam-role
2025-04-13 17:18:25,902 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): 169.254.169.254:80
2025-04-13 17:18:25,903 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "PUT /latest/api/token HTTP/1.1" 200 56
2025-04-13 17:18:25,903 - MainThread - urllib3.connectionpool - DEBUG - Resetting dropped connection: 169.254.169.254
2025-04-13 17:18:25,905 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/ HTTP/1.1" 200 11
2025-04-13 17:18:25,905 - MainThread - urllib3.connectionpool - DEBUG - Resetting dropped connection: 169.254.169.254
2025-04-13 17:18:25,906 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/konekti-ssm HTTP/1.1" 200 1582
2025-04-13 17:18:25,908 - MainThread - botocore.credentials - DEBUG - Found credentials from IAM Role:
host:s3.us-east-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20250413T171825Z
x-amz-security-token
Reproduction Steps
- Break IPv6 connectivity to an AWS API endpoint such as s3.us-east-1.amazonaws.com (a quick reachability probe is sketched below)
- Execute aws s3 ls --debug

The command hangs indefinitely.
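For anyone reproducing this, here is a minimal Python probe (not part of my original test, just an illustrative sketch using the standard socket module) to confirm which address families can currently reach the endpoint:

```python
import socket

def probe(host, port=443, timeout=3):
    # Try one resolved address per family and report reachability.
    for family, label in ((socket.AF_INET6, "IPv6"), (socket.AF_INET, "IPv4")):
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
            addr = infos[0][4]  # (address, port[, flowinfo, scope_id])
            with socket.create_connection(addr[:2], timeout=timeout):
                print(f"{label}: reachable via {addr[0]}")
        except OSError as err:
            print(f"{label}: unreachable ({err})")

probe("s3.dualstack.us-east-1.amazonaws.com")
```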
Possible Solution
No response
Additional Information/Context
No response
CLI version used
aws-cli/2.23.11
Environment details (OS name and version, etc.)
Amazon Linux 2023, ami-089146b56f5af20cf (us-east-1 AMI)
I am a little bit confused by the error description.
s3.us-east-1.amazonaws.com has no IPv6 support; one has to use s3.dualstack.us-east-1.amazonaws.com to get IPv6 in this case 😕.
Could it be that you simply have no IPv4 connectivity in your VPC?
I cannot rule out that I am misinterpreting what I am observing.
Here is what I did today to reproduce: I created a security group that blocks all outbound IPv6. I am not using gateway or interface S3 endpoints (which I recognize don't support IPv6).
[ssm-user@ip-10-0-13-93 ~]$ host s3.dualstack.us-east-1.amazonaws.com
s3.dualstack.us-east-1.amazonaws.com has address 52.217.203.232
s3.dualstack.us-east-1.amazonaws.com has address 3.5.20.48
s3.dualstack.us-east-1.amazonaws.com has address 52.217.173.16
s3.dualstack.us-east-1.amazonaws.com has address 16.182.69.232
s3.dualstack.us-east-1.amazonaws.com has address 16.15.193.189
s3.dualstack.us-east-1.amazonaws.com has address 3.5.21.44
s3.dualstack.us-east-1.amazonaws.com has address 52.217.235.240
s3.dualstack.us-east-1.amazonaws.com has address 52.216.32.208
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:8157:b2f0:10b6:4ad0::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:81cb:94b0:36e7:e9c0::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:81cb:9660:36e7:ecc0::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:811b:a820:34d8:3c80::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:80ec:c7c0:34d9:31d6::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:81af:8da8:34d8:2170::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:80fb:e540:34d9:c9b0::
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:81cf:8869:36e7:e288::
[ssm-user@ip-10-0-13-93 ~]$ aws s3 ls --endpoint-url https://s3.dualstack.us-east-1.amazonaws.com
This hangs indefinitely. I believe this should fall back to IPv4. This EC2 instance can reach https://s3.us-east-1.amazonaws.com over IPv4.
@jeffbrl I think it would be great if you could update the debug log in your issue with the correct output using the dualstack endpoint. It currently just reflects the "normal" output, without the dualstack usage you are having a problem with.
Hello @jeffbrl, thanks for reaching out. I have replicated the issue: with dualstack enabled via aws configure set use_dualstack_endpoint true and an EC2 instance with no outgoing IPv6, I ran aws s3 ls --debug. It did not hang, but the ls command consistently took 8 minutes to complete.
Looking at the --debug logs, as in the example below:
2025-05-13 21:15:15,937 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): s3.dualstack.us-east-1.amazonaws.com:443
2025-05-13 21:23:16,364 - MainThread - urllib3.connectionpool - DEBUG - https://s3.dualstack.us-east-1.amazonaws.com:443 "GET / HTTP/1.1" 200 None
The connection attempt starts at 21:15 and is only established at 21:23.
Every time I ran aws s3 ls --debug, it always connected and worked after about 8 minutes. From the logs, you can see that the CLI uses urllib3 for the connection (https://github.com/urllib3/urllib3/blob/main/src/urllib3/connection.py#L82), and urllib3 in turn uses the CPython standard library (https://github.com/python/cpython/blob/main/Lib/socket.py#L828-L865), where CPython lists all of the IP addresses the endpoint resolves to. Since the endpoint we are using, s3.dualstack.us-west-2.amazonaws.com (https://docs.aws.amazon.com/AmazonS3/latest/API/ipv6-access.html#ipv6-access-test-compatabilty), resolves to 16 IPs (8 IPv6 and 8 IPv4), CPython lists all of the IPv6 addresses first, then the IPv4 addresses, and tries them one by one.
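To illustrate the behavior, here is a minimal Python sketch (not CPython's actual code, just the shape of what socket.create_connection does) showing how the resolved addresses are tried serially, each one blocking for the full connect timeout before the next is attempted:

```python
import socket

# Illustrative only: socket.create_connection behaves roughly like this loop.
def connect_first_reachable(host, port, timeout=60):
    last_err = None
    # getaddrinfo() returns (family, type, proto, canonname, sockaddr) tuples;
    # for a dualstack S3 endpoint this is ~8 IPv6 entries followed by ~8 IPv4.
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)  # botocore's default connect timeout is 60 s
        try:
            sock.connect(sockaddr)
            return sock  # first address that answers wins
        except OSError as err:
            last_err = err
            sock.close()  # each dead IPv6 address burns the full timeout
    raise last_err or OSError("no addresses resolved")
```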
Looking at the default connection timeout in the AWS CLI (https://github.com/boto/botocore/blob/develop/botocore/httpsession.py#L80), the default is 60 seconds. For testing, I tried time aws s3 ls --cli-connect-timeout 5 --debug (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/index.html), so each connection attempt times out after only 5 seconds, and the command succeeds after about 40 seconds.
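For completeness, the same workaround expressed with boto3/botocore rather than the CLI flag; this is only an illustration, and it assumes a recent botocore whose Config accepts use_dualstack_endpoint:

```python
import boto3
from botocore.config import Config

# Lower the per-address connect timeout so unreachable IPv6 addresses are
# abandoned quickly; mirrors `aws s3 ls --cli-connect-timeout 5` on the CLI.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    config=Config(
        connect_timeout=5,            # default is 60 seconds per address
        retries={"max_attempts": 2},
        use_dualstack_endpoint=True,  # use s3.dualstack.us-east-1.amazonaws.com
    ),
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```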
@adev-code Thank you for the detailed write-up. Waiting 8 mins seems very undesirable. No user will wait 8 mins. I didn't. Should awscli not rely on the default timeout?
Thanks for the response. If the default timeout is too long, it can be shortened and set to a custom value, as mentioned above.
@adev-code You are asking users to anticipate bad network behavior and change a default timeout to work around it? I am being sincere in asking this. I don't do snark.
For DualStack endpoints, the CLI does fall back to IPv4; it just takes a while because of the default connection timeout of 60 seconds per IP. With 8 unreachable IPv6 addresses, that is roughly 8 × 60 s ≈ 8 minutes before the first IPv4 address is tried, which matches the delay observed above. To decrease the amount of time the CLI spends on each IPv6 address, the connection timeout can be lowered by the user.
I'm not understanding the argument that behavior that would be unacceptable in a browser is OK for a utility like awscli. But I get that happy eyeballs-like behavior may be a non-trivial feature to implement. If AWS is not interested in investing engineering resources in such a feature, please close this comment thread.
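For reference, here is a rough sketch of the Happy Eyeballs idea (RFC 8305), written as a crude race of all resolved addresses rather than the staggered sequential attempts the RFC actually specifies; the function names are hypothetical and this is not a proposal for awscli's internals:

```python
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def _try_connect(family, socktype, proto, sockaddr, timeout):
    """Attempt one TCP connect; return the socket on success, else None."""
    sock = socket.socket(family, socktype, proto)
    sock.settimeout(timeout)
    try:
        sock.connect(sockaddr)
        return sock
    except OSError:
        sock.close()
        return None

def happy_eyeballs_connect(host, port, timeout=5):
    """Race every resolved address (IPv6 and IPv4) and keep the first winner."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    pool = ThreadPoolExecutor(max_workers=len(infos))
    futures = [pool.submit(_try_connect, *info[:3], info[4], timeout)
               for info in infos]
    try:
        for fut in as_completed(futures):
            sock = fut.result()
            if sock is not None:
                return sock  # losing attempts simply time out and close
        raise OSError(f"no reachable address for {host}:{port}")
    finally:
        pool.shutdown(wait=False)
```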
Closing thread.
This issue is now closed. Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one.