ec2-github-runner
Add storage as an option.
By default it takes the instance type's default storage capacity. Can we add an option to define the storage capacity?
@nocodehoarder, do you have some special use case where it's not enough to have the default storage capacity? May I ask you to describe it in more detail so I understand your use case better?
AFAIK the default root volume size is inherited from the AMI. So if you need a larger root volume, you need to provide EBS block device mappings at launch, see https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html.
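For reference, a root-volume override at launch could look like the following sketch of RunInstances parameters (the AMI ID is a placeholder, and the `/dev/sda1` device name and 100 GiB size are assumptions — the actual root device name comes from the AMI, e.g. `/dev/xvda` on some Amazon Linux AMIs):

```typescript
// Sketch of RunInstances parameters that override the root volume size.
// NOTE: the AMI ID is a hypothetical placeholder, and the device name
// must match the AMI's RootDeviceName.
const runInstancesParams = {
  ImageId: "ami-xxxxxxxx", // hypothetical placeholder
  InstanceType: "t3.large",
  MinCount: 1,
  MaxCount: 1,
  BlockDeviceMappings: [
    {
      DeviceName: "/dev/sda1", // must match the AMI's root device name
      Ebs: {
        VolumeSize: 100, // GiB; must be >= the size of the AMI's snapshot
        VolumeType: "gp3",
        DeleteOnTermination: true,
      },
    },
  ],
};

// These params would then go to the EC2 API, e.g.
// await ec2.runInstances(runInstancesParams).promise();
console.log(runInstancesParams.BlockDeviceMappings[0].Ebs.VolumeSize);
```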
Sorry for the delayed response. @machulav - As the services in the company have grown, the number of images we publish, run, and test has grown, and we need more storage to run a variety of tests. Hence the ask. We use your action to run both integration and end-to-end tests, and it has been a great help integrating with GHA workflows. @jpalomaki - thx for the details, will go through them.
@machulav @nocodehoarder Given that there are lots of EBS block device options:
BlockDeviceMappings: [
  {
    DeviceName: 'STRING_VALUE',
    Ebs: {
      DeleteOnTermination: true || false,
      Encrypted: true || false,
      Iops: 'NUMBER_VALUE',
      KmsKeyId: 'STRING_VALUE',
      OutpostArn: 'STRING_VALUE',
      SnapshotId: 'STRING_VALUE',
      Throughput: 'NUMBER_VALUE',
      VolumeSize: 'NUMBER_VALUE',
      VolumeType: standard | io1 | io2 | gp2 | sc1 | st1 | gp3
    },
    NoDevice: 'STRING_VALUE',
    VirtualName: 'STRING_VALUE'
  },
  /* more items */
]
would it make sense to allow the user to pass these in as an optional JSON array input? Minimal workflow usage example could be:
block-device-mappings: >-
  [
    {
      "DeviceName": "/dev/sda1",
      "Ebs": {
        "VolumeSize": 100
      }
    }
  ]
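If such an input were added, the action would need to parse and sanity-check the JSON string before handing it to the EC2 API. A minimal sketch, assuming a hypothetical `block-device-mappings` input and a hypothetical `parseBlockDeviceMappings` helper:

```typescript
// Parse a hypothetical `block-device-mappings` action input (a JSON string)
// and do a few basic sanity checks before passing it to the EC2 API.
function parseBlockDeviceMappings(raw: string): Array<Record<string, unknown>> {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed)) {
    throw new Error("block-device-mappings must be a JSON array");
  }
  for (const mapping of parsed) {
    if (typeof mapping.DeviceName !== "string") {
      throw new Error("each mapping needs a DeviceName string");
    }
  }
  return parsed;
}

// Example: the workflow input shown above.
const input = `[{"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 100}}]`;
const mappings = parseBlockDeviceMappings(input);
console.log(mappings.length); // 1
```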
@machulav Another option could be to leverage EC2 launch templates (this action could just reference a template using a single optional input, as opposed to adding several inputs for various instance settings, including this one)?
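With the launch-template approach, the action would only need a single optional input, and the RunInstances call would reference the template instead of spelling out each setting. A sketch, where the template ID is a hypothetical placeholder:

```typescript
// Sketch: referencing a pre-made EC2 launch template instead of adding
// a separate action input for every instance setting.
const launchParams = {
  MinCount: 1,
  MaxCount: 1,
  LaunchTemplate: {
    LaunchTemplateId: "lt-0abc123def456789", // hypothetical placeholder
    Version: "$Latest",
  },
  // Any parameter set here alongside LaunchTemplate overrides the
  // corresponding value from the template.
};

console.log(launchParams.LaunchTemplate.LaunchTemplateId);
```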
A workaround I found to be easier is to just introduce a step in your workflow that expands the root volume size. The first step in my job that runs on the EC2 instance is:
- name: Expand root volume
  run: |
    # Get this instance's ID via IMDSv2
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
    # Look up the EBS volume attached to this instance
    VOLUME_ID=$(aws ec2 describe-volumes --filters "Name=attachment.instance-id,Values=$INSTANCE_ID" | jq -r '.Volumes[0].Attachments[0].VolumeId')
    echo "$INSTANCE_ID"
    echo "$VOLUME_ID"
    # Grow the EBS volume, then the partition, then the file system
    aws ec2 modify-volume --volume-id "$VOLUME_ID" --size 256
    sleep 15 # give the volume modification time to take effect
    growpart /dev/nvme0n1 1 # device/partition names assume an NVMe root volume
    lsblk
    xfs_growfs -d / # for an ext4 root file system, use resize2fs instead
    df -hT
This successfully resizes the EBS volume to 256 GB, expands the partition, and extends the logical file system to use the new space.