Fatal error: Needed to prompt for a connection or sudo password (host: cnode1), but input would be ambiguous in parallel mode
I'm trying to set up cstar_perf on EC2. My environment has only one Cassandra node, cnode1. Although all of the following combinations work without a prompt:
ssh cnode1 hostname
ssh root@cnode1 hostname
ssh ec2-user@cnode1 hostname
I'm still getting the "Needed to prompt..." error when trying to run cstar_perf_bootstrap apache/cassandra-2.1
Any idea?
Thanks!
Full trace below.
$ cstar_perf_bootstrap apache/cassandra-2.1
INFO:bootstrap:Bringing up apache/cassandra-2.1 cluster...
INFO:benchmark:### Config: ###
{'ant_tarball': 'http://www.apache.org/dist/ant/binaries/apache-ant-1.8.4-bin.tar.bz2',
'block_devices': [u'/dev/xvdb', u'/dev/xvdc', u'/dev/xvdd', u'/dev/xvde'],
'blockdev_readahead': u'256',
'cluster_name': 'cstar_perf Y56VVL9VHQ',
'commitlog_directory': u'/mnt/d1/commitlog',
'data_file_directories': [u'/mnt/d2/data', u'/mnt/d3/data', u'/mnt/d4/data'],
'env': '',
'flush_directory': '/var/lib/cassandra/flush',
'git_repo': 'git://github.com/apache/cassandra.git',
'hosts': {u'cnode1': {u'hostname': u'cnode1',
u'internal_ip': u'172.31.14.24',
u'seed': True}},
'log_dir': '~/fab/cassandra/logs',
u'name': u'example1',
'num_tokens': 256,
'override_version': None,
'partitioner': 'murmur3',
'revision': 'apache/cassandra-2.1',
'saved_caches_directory': u'/mnt/d2/saved_caches',
'seeds': [u'172.31.14.24'],
'use_jna': True,
'use_vnodes': True,
'user': u'ec2_user'}
[cnode1] Executing task 'set_device_read_ahead'
[cnode1] run: blockdev --setra 256 /dev/xvdb
[cnode1] run: blockdev --setra 256 /dev/xvdc
[cnode1] run: blockdev --setra 256 /dev/xvdd
[cnode1] run: blockdev --setra 256 /dev/xvde
[cnode1] Executing task 'destroy'
Fatal error: Needed to prompt for a connection or sudo password (host: cnode1), but input would be ambiguous in parallel mode
Aborting.
Needed to prompt for a connection or sudo password (host: cnode1), but input would be ambiguous in parallel mode
Fatal error: One or more hosts failed while executing task 'destroy'
Aborting.
One or more hosts failed while executing task 'destroy'
Are you running an ssh-agent, or do you just have unencrypted keys in ~/.ssh? Can you post the ~/.cstar_perf/cluster_config.json?
You can try adding this to the top of tool/cstar_perf/tool/fab_cassandra.py:
import logging
logging.basicConfig(level=logging.DEBUG)
And then try running from that same directory:
fab --show=debug -f fab_cassandra.py whoami
That should give us some more info on what's going wrong.
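If you want to take cstar_perf and Fabric out of the picture entirely, a bare paramiko connection attempt should succeed or fail the same way. This is only a sketch; the hostname and username below are placeholders, so substitute the host and the exact user string from your cluster_config.json:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# allow_agent/look_for_keys mirror Fabric's defaults: offer the forwarded
# agent keys first, then fall back to ~/.ssh/id_rsa and friends.
client.connect('cnode1', username='ec2-user',   # replace with the user from your cluster_config.json
               allow_agent=True, look_for_keys=True)
stdin, stdout, stderr = client.exec_command('whoami')
print(stdout.read().strip())
client.close()

If that raises an AuthenticationException for the user in your config but works for another user, it's an SSH/key problem rather than anything cstar_perf is doing.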
Thanks for the prompt response. It does look like an SSH-related issue.
Are you running an ssh-agent, or do you just have unencrypted keys in ~/.ssh?
Yes, I'm using ssh-agent. I connected to the stress node with ssh -A
Can you post the ~/.cstar_perf/cluster_config.json?
cat ~/.cstar_perf/cluster_config.json
{
    "commitlog_directory": "/mnt/d1/commitlog",
    "data_file_directories": [
        "/mnt/d2/data",
        "/mnt/d3/data",
        "/mnt/d4/data"
    ],
    "block_devices": [
        "/dev/xvdb",
        "/dev/xvdc",
        "/dev/xvdd",
        "/dev/xvde"
    ],
    "blockdev_readahead": "256",
    "hosts": {
        "cnode1": {
            "internal_ip": "172.x.x.x",
            "hostname": "cnode1",
            "seed": true
        }
    },
    "user": "ec2_user",
    "name": "example1",
    "saved_caches_directory": "/mnt/d2/saved_caches"
}
[real internal EC2 IP is masked]
You can try adding this to the top of tool/cstar_perf/tool/fab_cassandra.py:
I have this file at /usr/lib/python2.7/site-packages/cstar_perf/tool/fab_cassandra.py
And then try running from that same directory: fab --show=debug -f fab_cassandra.py whoami
fab --show=debug -f /usr/lib/python2.7/site-packages/cstar_perf/tool/fab_cassandra.py whoami
Using fabfile '/usr/lib/python2.7/site-packages/cstar_perf/tool/fab_cassandra.py'
Commands to run: whoami
Parallel tasks now using pool size of 1
[cnode1] Executing task 'whoami'
job queue appended cnode1.
job queue closed.
Job queue starting.
Popping 'cnode1' off the queue and starting it
[cnode1] run: /bin/bash -l -c "whoami"
DEBUG:paramiko.transport:starting thread (client mode): 0x280cc90L
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_6.4)
DEBUG:paramiko.transport:kex algos:[u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ecdsa-sha2-nistp256'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac-ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac-md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac-ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac-md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Adding ssh-rsa host key for ip-172-x-x-x.us-west-2.compute.internal: 41da3de58bc4a2d71f332a55468d5b42
DEBUG:paramiko.transport:Trying SSH agent key 24cxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying SSH agent key 1a0xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying SSH agent key 112xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying discovered key c21xxxxxxxxxxxxxxxxxxxxxxxxxxxxx in /home/ec2-user/.ssh/id_rsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Fatal error: Needed to prompt for a connection or sudo password (host: cnode1), but input would be ambiguous in parallel mode
Aborting.
Needed to prompt for a connection or sudo password (host: cnode1), but input would be ambiguous in parallel mode
DEBUG:paramiko.transport:EOF in transport thread
Job queue found finished proc: cnode1.
Job queue has 0 running.
Job queue finished.
Fatal error: One or more hosts failed while executing task 'whoami'
None
Aborting.
One or more hosts failed while executing task 'whoami'
None
[real internal EC2 IP and keys are masked]
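If I'm reading the trace right, paramiko is trying each forwarded agent key and then ~/.ssh/id_rsa against the configured user and getting a publickey rejection every time; roughly the equivalent of this loop (host and user below are placeholders for the values in my cluster_config.json):

import binascii
import paramiko

host = 'cnode1'      # placeholder
user = 'ec2_user'    # the "user" value from cluster_config.json

for key in paramiko.Agent().get_keys():
    transport = paramiko.Transport((host, 22))
    transport.connect()   # key exchange only, no authentication yet
    try:
        transport.auth_publickey(user, key)
        print('accepted: %s' % binascii.hexlify(key.get_fingerprint()))
    except paramiko.AuthenticationException:
        print('rejected: %s' % binascii.hexlify(key.get_fingerprint()))
    finally:
        transport.close()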