aws-ethereum-miner

Successful stack creation but no worker showing up

Open · hermitico opened this issue 3 years ago · 9 comments

Hi all,

Most of the time when I create a stack, I won't be able to see the worker in the dashboard, even after waiting for hours. Why is this happening? Is the worker allocation limited?

hermitico avatar Nov 27 '21 03:11 hermitico

What does the Activity tab in the Auto Scaling Group details say? What is the reason given for the failed allocation? If it's a resource limit, you may need to request a service limit increase.
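
If you want to check it from the API instead of the console, here is a rough boto3 sketch (untested; the ASG name is a placeholder - use the one created by your stack):

```python
# Rough sketch: print the most recent Auto Scaling activities and their status,
# so you can see why Spot capacity wasn't allocated.
# "eth-miner-asg" is a placeholder ASG name, not the real resource name.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")
resp = asg.describe_scaling_activities(AutoScalingGroupName="eth-miner-asg", MaxRecords=10)
for act in resp["Activities"]:
    print(act["StartTime"], act["StatusCode"])
    # StatusMessage usually contains the actual failure reason,
    # e.g. a Spot capacity error or an exceeded vCPU limit.
    print("  ", act.get("StatusMessage", act.get("Cause", "")))
```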

Also, some instance types are not currently available at Spot pricing in the cheapest regions. A couple of options (a quick availability check is sketched after this list):

  1. Use one of the slightly more expensive regions - us-west-1, eu-*, etc.
  2. Select a different instance type - e.g. p3.2xlarge, g5.xlarge, etc.
  3. Use on-demand instances (although they are a lot more expensive)
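
To see which combinations actually have Spot on offer, something like this works as a rough boto3 sketch (untested; the instance types and regions below are just examples):

```python
# Rough sketch: an instance type with no recent Linux Spot price history
# in a region is effectively not available there as Spot.
import boto3
from datetime import datetime, timedelta, timezone
from botocore.exceptions import ClientError

def spot_offered(instance_type, region):
    ec2 = boto3.client("ec2", region_name=region)
    try:
        resp = ec2.describe_spot_price_history(
            InstanceTypes=[instance_type],
            ProductDescriptions=["Linux/UNIX"],
            StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
        )
    except ClientError:
        return False  # e.g. the instance type is not recognised in this region
    return bool(resp["SpotPriceHistory"])

for itype in ("g4dn.xlarge", "g5.xlarge", "p3.2xlarge"):
    for region in ("us-east-1", "us-west-1", "eu-west-1"):
        print(f"{itype} in {region}: spot offered = {spot_offered(itype, region)}")
```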

Hope some of it helps :)

mludvig avatar Nov 27 '21 05:11 mludvig

The allocation is successful, but often I don't see them on the dashboard as a worker associated with my ETH address. Let's say I see the worker one time out of five executions.

hermitico avatar Nov 27 '21 12:11 hermitico

Yes, sorry, it's resource-limit related. What is the next most cost-effective instance type?

hermitico avatar Nov 27 '21 13:11 hermitico

If it's a resource / service limit, raise a support ticket to increase the service limit allowance. You'll need to raise it for one or more of:

  • All G and VT Spot Instance Requests
  • All P Spot Instance Requests
  • Running On-Demand G and VT instances
  • Running On-Demand P instances

Note that the limits are in terms of vCPUs - for example a g5.xlarge has 4 vCPUs, a p3.2xlarge has 8 vCPUs, the fastest p4d.24xlarge has 96 vCPUs, etc. The limits are per-region, so you'll have to increase them in every region where you're planning to run your instances.
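
If you prefer the API over the console, a rough boto3 sketch (untested) that looks the quota up by name, so no quota codes need to be hard-coded, and requests the increase - remember the value is a vCPU count, not an instance count:

```python
# Rough sketch: find the "All G and VT Spot Instance Requests" EC2 quota by name
# and request an increase to 64 vCPUs in one region. Repeat per region as needed.
import boto3

def request_increase(quota_name, desired_vcpus, region):
    sq = boto3.client("service-quotas", region_name=region)
    for page in sq.get_paginator("list_service_quotas").paginate(ServiceCode="ec2"):
        for quota in page["Quotas"]:
            if quota["QuotaName"] == quota_name:
                resp = sq.request_service_quota_increase(
                    ServiceCode="ec2",
                    QuotaCode=quota["QuotaCode"],
                    DesiredValue=desired_vcpus,  # value is in vCPUs, not instances
                )
                return resp["RequestedQuota"]["Status"]
    raise ValueError(f"Quota not found: {quota_name}")

print(request_increase("All G and VT Spot Instance Requests", 64, "us-east-1"))
```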

mludvig avatar Nov 27 '21 22:11 mludvig

You can now run tools/increase-quotas.sh g4dn (or p3) and it will raise the support requests for a service limit increase in all the regions where g4dn (or p3) is available.

An optional second parameter is the desired vCPU count, e.g. 192. The default is 64.

You can run it from the AWS CloudShell with all the privileges of your account.
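
For reference, roughly what the script does can be sketched in boto3 as well (untested, and not a drop-in replacement for tools/increase-quotas.sh): find the regions where the instance family is offered, then file the quota request in each of them using the quota-request sketch above.

```python
# Sketch: list the regions where a given instance type is offered at all;
# pair this with the earlier quota-request sketch to file an increase in each.
import boto3

def regions_offering(instance_type):
    ec2 = boto3.client("ec2", region_name="us-east-1")
    for region in ec2.describe_regions()["Regions"]:
        name = region["RegionName"]
        regional = boto3.client("ec2", region_name=name)
        offerings = regional.describe_instance_type_offerings(
            LocationType="region",
            Filters=[{"Name": "instance-type", "Values": [instance_type]}],
        )
        if offerings["InstanceTypeOfferings"]:
            yield name

print(list(regions_offering("g4dn.xlarge")))
```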

mludvig avatar Dec 10 '21 07:12 mludvig

Hi Michael,

I tried running the script; it works, but AWS is not willing to increase the limits for me. Is there a straightforward way to mine using your system on another cloud computing service?

hermitico avatar Dec 31 '21 12:12 hermitico

They’ve got capacity issues with GPU instances; I had a limit increase declined in one of my accounts as well. Sometimes they approve a small increase in less popular regions (outside the US), but there's no guarantee.

I’m planning to do some research on Azure and GCP, stay tuned :)

mludvig avatar Dec 31 '21 21:12 mludvig

Cool! That would be great. Thanks

hermitico avatar Jan 01 '22 03:01 hermitico

I have had limit increase requests for 32 vCPUs denied because apparently I did not have enough usage. I am trying again with 8 vCPUs in regions where g4ad.xlarge is available, and if that doesn't work I will try for 4 vCPUs.
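
If requests keep getting denied, it can be useful to confirm the current quota value and the recorded status of past requests - a rough boto3 sketch (untested; quota name taken from the list earlier in the thread):

```python
# Sketch: show the current value of the G/VT Spot vCPU quota and the history
# of increase requests for it in one region.
import boto3

sq = boto3.client("service-quotas", region_name="us-east-1")

# Find the quota by name to avoid hard-coding the quota code.
quota = next(
    q
    for page in sq.get_paginator("list_service_quotas").paginate(ServiceCode="ec2")
    for q in page["Quotas"]
    if q["QuotaName"] == "All G and VT Spot Instance Requests"
)
print("Current limit (vCPUs):", quota["Value"])

history = sq.list_requested_service_quota_change_history_by_quota(
    ServiceCode="ec2", QuotaCode=quota["QuotaCode"]
)
for req in history["RequestedQuotas"]:
    print(req["Created"], req["DesiredValue"], req["Status"])
```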

jxu avatar Jan 09 '22 06:01 jxu