vagrant-aws
vagrant-aws does not attach the instance storage (for m3.xlarge)
I create an m3.xlarge instance (which comes with 2×40 GB SSD instance storage), but when I run fdisk -l I do NOT see the 40 GB devices at all; I only see one device with 10 GB. This issue does not exist for m3.large and only happens when I create an m3.xlarge.
Here is the fdisk -l output:
[root@ip-172-31-25-194 ~]# fdisk -l
Disk /dev/sda1: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sda1 doesn't contain a valid partition table
[root@ip-172-31-25-194 ~]#
But if I create the instance through the Amazon console instead of Vagrant, fdisk shows me this:
Disk /dev/sda1: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sda1 doesn't contain a valid partition table
Disk /dev/sdb: 40.2 GB, 40256929792 bytes
255 heads, 63 sectors/track, 4894 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 40.2 GB, 40256929792 bytes
255 heads, 63 sectors/track, 4894 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
The AMI I am using is: ami-09d43a60
here is my Vagrantfile:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "dummy"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
  config.omnibus.chef_version = "11.16.0"
  config.ssh.pty = true
  _NODE_NAME = "wl2"
  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = "BLAHBLAH"
    aws.secret_access_key = "BLAHBLAH"
    aws.keypair_name      = "BLAHBLAH_keypair"
    aws.ami               = "ami-09d43a60"
    aws.security_groups   = "BLAHBBLAH"
    aws.instance_type     = "m3.xlarge"
    aws.region            = "us-east-1"
    aws.tags["Name"]      = _NODE_NAME
    override.ssh.username = "root"
    override.ssh.private_key_path = "~/.chef/aws-keys/BLAHBLAH_keypair.pem"
  end
end
Your Vagrantfile doesn't include any block_device_mapping sections. These are required for vagrant-aws to request your instance storage. Just like the AWS console, if you don't request the ephemeral storage, it won't be available to you when the instance starts up.
@davidski As pointed out in the issue, it acts differently for m3.large and m3.xlarge. Why does it attach the SSD instance storage to m3.large but not to m3.xlarge? In other words, why do I NOT need to use block_device_mapping for m3.large, but I do need it for m3.xlarge?
Similarly, when working with hi1.4xlarge instances, I experienced the confusing and infuriating behavior of only one instance store volume being allocated (as opposed to the two it was supposed to have), whether or not I included an incomplete block_device_mapping (to increase the size of the root volume).
The default AWS VMs start with 8 GB even if the instance type allows more. I have found you need to provide block device mapping instructions if you want a larger disk.
For reference the AWS instructions for device mapping are here:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-mapping.html
Here is an excerpt from my Vagrantfile showing how to set up a 32 GB device on the VM. Note that the 'DeviceName' can differ depending on your AMI. You sometimes need to manually launch an instance and then view its details in the EC2 console to find the correct 'DeviceName'.
aws.block_device_mapping = [
  {
    'DeviceName'              => "/dev/sda",
    'VirtualName'             => "root",
    'Ebs.VolumeSize'          => 32,
    'Ebs.DeleteOnTermination' => true
  }
]
Once you've specified your mapping you need to resize the disk using
sudo resize2fs /dev/xvda1
This can be done in a chef recipe. I created a local aws-init cookbook to help during EC2 VM provisioning.
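If you are not using Chef, the same resize step can also be wired into Vagrant's built-in shell provisioner. A minimal sketch, assuming an ext filesystem whose root device is /dev/xvda1 (the device name varies by AMI, so check lsblk first):

```ruby
# Hypothetical Vagrantfile fragment: grow the root filesystem after boot.
# /dev/xvda1 is an assumption -- verify the device name on your AMI.
config.vm.provision "shell", inline: "resize2fs /dev/xvda1"
```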
What's needed here is a way to represent the device mapping for ephemeral storage. EC2 supports it.
@ghasolutions describes how to add EBS devices. That is well understood. This issue is about ephemeral storage.
I am successfully "attaching" ephemeral storage as follows:
aws.block_device_mapping = [
  {
    'DeviceName'  => '/dev/sdb',
    'VirtualName' => 'ephemeral0',
  },
  {
    'DeviceName'  => '/dev/sdc',
    'VirtualName' => 'ephemeral1',
  }
]
Apparently the specific VirtualName of ephemeralX is required, X = 0-3.
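Since the ephemeralX names have to be exact, the mapping list can also be generated instead of typed by hand. A small sketch in plain Ruby; the ephemeral_mappings helper and the /dev/sdb, /dev/sdc, ... device naming are assumptions based on this thread, not part of vagrant-aws:

```ruby
# Hypothetical helper: build block_device_mapping entries for the first n
# instance-store volumes. EC2 only recognizes VirtualName values ephemeral0-3.
def ephemeral_mappings(n)
  raise ArgumentError, "EC2 supports at most 4 ephemeral volumes" if n > 4
  (0...n).map do |i|
    {
      'DeviceName'  => "/dev/sd#{('b'.ord + i).chr}",  # /dev/sdb, /dev/sdc, ...
      'VirtualName' => "ephemeral#{i}",
    }
  end
end

# In the provider block:
#   aws.block_device_mapping = ephemeral_mappings(2)
```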
Whether this should be implicit in vagrant-aws is debatable.
AWS noob here. I think this issue could be documented a bit better in the plugin's main README - it took me a couple of hours of reading the AWS docs just to get to the point where I grok the difference between 'root device volume' and 'instance store', and the fact that instance types with SSD instance storage still need a root volume on EBS.
Maybe add a section with a couple of examples of different EBS configs? Also, the ebs_optimized config option is not mentioned in the docs...
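For anyone landing here, the two cases discussed above can be combined into one mapping: an EBS root volume grown to 32 GB plus both m3.xlarge instance-store volumes. This is a sketch, not official vagrant-aws documentation; the device names (/dev/sda1, /dev/sdb, /dev/sdc) are AMI-dependent assumptions that should be verified against a manually launched instance:

```ruby
# Hypothetical combined mapping: larger EBS root + both ephemeral SSDs.
BLOCK_DEVICE_MAPPING = [
  {
    'DeviceName'              => '/dev/sda1',  # root device name varies by AMI
    'Ebs.VolumeSize'          => 32,           # GB
    'Ebs.DeleteOnTermination' => true,
  },
  { 'DeviceName' => '/dev/sdb', 'VirtualName' => 'ephemeral0' },
  { 'DeviceName' => '/dev/sdc', 'VirtualName' => 'ephemeral1' },
]

# In the provider block:
#   aws.block_device_mapping = BLOCK_DEVICE_MAPPING
```

Remember that after boot the root filesystem still has to be resized (e.g. resize2fs) and the ephemeral volumes formatted and mounted.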