netbox
Add Virtual Machine Types
Environment
- Python version: 3.6.11
- NetBox version: 2.10.4
Proposed Functionality
Would be handy to be able to track different types of Virtual Machines like Device Types.
Already using Platform to track the Virtual Machine OS, but would also like to track the vm/instance type without having to use custom fields.
Use Case
Tracking a mix of on-prem devices and AWS EC2 instances, and would like to track the EC2 instance type as can be done for make/model of on-prem devices.
Database Changes
New Virtual Machine Type model, or an abstract model that can be shared between Devices and Virtual Machines for Types
External Dependencies
N/A
This is where you use clusters and cluster types.
Is there something you can't accomplish using these two?
Cluster Types are a possibility, but this exponentially increases the number of Clusters needed to track Site + Tenant + Group and now Type of Virtual Machine.
This will also be an issue with https://github.com/netbox-community/netbox/issues/5303 when Virtual Machines can be created without designating a Cluster.
But your use case

> and would like to track the EC2 instance type

specifically entails clusters. I think it's fair to require users who need to track VM types to define clusters for this purpose.
Besides instance specs (fixed CPU/memory assignments are great examples of pre-filled VM Types, IMO), there are the interface definitions as well. I'd like to dictate/prefill what users should have as their interface name(s) for a VM.
So a VM Type would contain an interface named 'vNIC0', a tag, 2 GB RAM, and 1 vCPU, as an example. This could be applied to cluster types ESX, Hyper-V, and AWS to administer the underlying hypervisor cluster.
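A minimal sketch of what such a pre-filled VM Type could look like (all names here, such as VMType and create_vm_from_type, are hypothetical illustrations, not NetBox code):

```python
from dataclasses import dataclass, field

@dataclass
class VMType:
    """Hypothetical template: resources plus interface names to pre-fill."""
    name: str
    vcpus: int
    memory_mb: int
    interface_names: list = field(default_factory=list)
    tags: list = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_mb: int
    interfaces: list
    tags: list

def create_vm_from_type(name: str, vm_type: VMType) -> VirtualMachine:
    """Pre-fill a new VM's resources and interfaces from its type."""
    return VirtualMachine(
        name=name,
        vcpus=vm_type.vcpus,
        memory_mb=vm_type.memory_mb,
        interfaces=list(vm_type.interface_names),
        tags=list(vm_type.tags),
    )

small = VMType("small", vcpus=1, memory_mb=2048, interface_names=["vNIC0"])
vm = create_vm_from_type("app01", small)
print(vm.interfaces)  # ['vNIC0']
```

The same template could then be attached to cluster types (ESX, Hyper-V, AWS) so every VM created there starts from the same baseline.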
> But your use case "and would like to track the EC2 instance type" specifically entails clusters. I think it's fair to require users who need to track VM types to define clusters for this purpose.
Hmm maybe there's a better way to organize things, but I'm not sure tracking instance type with separate clusters makes much sense.
I.e. with clusters for app1-frontend, app1-backend, etc., I would then need to fragment existing clusters further to track instance type, into say app1-frontend-ec2-t3-small or app1-ec2-t3-small-frontend.
I'm also not sure how to easily track this for clusters with mixed instance types (i.e. while in the process of migrating a cluster between instance types, or when scaled-up instance types are added to a cluster for load balancing). It also seems like this would make it difficult to track instance type usage across clusters, e.g. for cumulative counts per instance type.
The more I think about this, the more it seems like it would be useful for other VM providers too. E.g. if I'm running a VMware cluster, I may have pre-defined templates with certain resources or configuration types for small/medium/large VMs, and I wouldn't necessarily have separate clusters for each of those.
In my case, I have a number of virtual appliances (e.g., DNS appliances) that all have the same basic interface structure, vCPU, mem, etc. I'd like to have a VM template (like a device type is today) that I can use to initially drive the object's creation. For now, we use tags and roles for some of this, but it does nothing when it comes to inheriting interfaces and VM properties.
From the comments here, I think we're conflating two different functions of device types:
- Indicating the device manufacturer and model.
- Automatically populating components on newly-created devices.
Re-purposed for VMs, the first use case would store the VM instance classification (e.g. t2.small or t3.large). The second function would automatically create interfaces on a new VM. I don't think it makes sense to combine these two functions, because you would likely not restrict every VM of a certain classification to have the same exact interfaces.
Re-reading @jrbeilke's stated use case above it sounds like this FR is just focusing on the classification of VMs. Is that accurate?
If so, I think it could make sense to introduce a VirtualMachineType class to define these classifications. A new VM could inherit its CPU, memory, and disk attributes automatically upon creation, and VMs could be filtered by their assigned type. However, I'm not sure that it makes sense to templatize VM interfaces as well.
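To illustrate the classification-only idea, here is a sketch (the VirtualMachineType and VM classes are hypothetical, not actual NetBox models) in which the type carries only resource defaults, inherited once at creation and overridable, while still allowing VMs to be filtered by type:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class VirtualMachineType:
    """Hypothetical classification: resource defaults only, no interfaces."""
    name: str       # e.g. "t3.large"
    vcpus: int
    memory_mb: int
    disk_gb: int

@dataclass
class VM:
    name: str
    vm_type: VirtualMachineType
    vcpus: Optional[int] = None
    memory_mb: Optional[int] = None
    disk_gb: Optional[int] = None

    def __post_init__(self):
        # Inherit resource attributes from the type only when not set explicitly
        if self.vcpus is None:
            self.vcpus = self.vm_type.vcpus
        if self.memory_mb is None:
            self.memory_mb = self.vm_type.memory_mb
        if self.disk_gb is None:
            self.disk_gb = self.vm_type.disk_gb

t3_large = VirtualMachineType("t3.large", vcpus=2, memory_mb=8192, disk_gb=20)
fleet = [VM("web1", t3_large), VM("db1", t3_large, memory_mb=16384)]

# Filtering by assigned type, as proposed above
by_type = [vm.name for vm in fleet if vm.vm_type.name == "t3.large"]
```

Note that interface creation is deliberately absent from the type here, matching the point that two t3.large VMs need not share the same interfaces.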
In my case, I have virtual appliances that I spin up. They will always have the same interfaces, and I'd like to have them pre-populated based on the type template. So I do have, what I think, is a valid reason to have VMs of a certain type to have the same interfaces (and other "hardware" parameters like you state here).
@xorrkaz What I'm getting at is that e.g. the t3.large type is decoupled from the "VM with interfaces X, Y, and Z" definition: you'll very likely have different iterations of t3.large with different interfaces defined on them. Thus, I don't think it makes sense to overload a VirtualMachineType model to inform interface creation in the way we do DeviceType.
Yes, in that case, I agree. But the ability to create VMs from a template/type still has some value. I guess you're suggesting another issue for that?
I'd suggest defining a custom script to provision VMs in that manner. That way, you're free to automatically provision interfaces, IP addresses, VLANs, and whatever else you may need.
Yeah, we're working on a custom plugin to do device/VM adds. This would just be another layer of convenience.
Thought 1: as of Netbox v2.11, custom fields can now be enabled in listing columns. This means that a custom field may now be a much more usable way to store attributes such as instance type (e.g. "t2.small").
Thought 2: I don't like the idea of auto-inheriting initial values for vCPUs and memory from a "VirtualMachineType". It means that if the type is changed (e.g. "t2.small" to "t2.large"), the CPU and memory values reported will be inconsistent with it. And there's no sensible default for disk space anyway.
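The drift concern can be sketched as follows (type_mismatches is a hypothetical helper, not NetBox code): if resources are copied from the type at creation, changing the type later leaves stale copies that some consistency check would have to surface.

```python
def type_mismatches(vm_attrs: dict, type_attrs: dict) -> list:
    """Return attribute names where the VM no longer matches its type."""
    return [k for k in ("vcpus", "memory_mb")
            if vm_attrs.get(k) != type_attrs.get(k)]

vm = {"name": "web1", "vcpus": 1, "memory_mb": 2048}   # copied from "t2.small"
t2_large = {"vcpus": 2, "memory_mb": 8192}             # type later changed
print(type_mismatches(vm, t2_large))  # ['vcpus', 'memory_mb']
```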
Thought 3: Templating of VM interfaces would be useful - or at least, creating one initial interface automatically, since a VM with no interfaces is not very useful. Maybe just an initial interface in the /virtualization/virtual-machines/add/ page.
For creating VMs via a custom script, there is a sample here.
NetBox version 3.1.0
We are using VMs with Ceph storage or local storage in our own OpenNebula clouds. For now we have to use a custom field to separate storage types. We would like to use VM types or inventory items for VMs, as can be done for Devices.
Virtual Machine Types that would be pretty handy, at least for my environment, are:
- Qemu
- LXC
- Docker
About the Docker and LXC types, I think we could have Containers views, like virtualization/containers/. On my Proxmox Plugin, It was always weird to me having to register LXC and Docker containers as Virtual Machines on Netbox.
I feel like tracking containers is more of a separate FR on its own. I do agree that tracking containers as a separate sort of thing would make sense, because containers don't really have number of vCPUs to care about (I guess they might have a RAM value if you can limit the consumption of a container?), so putting them in as "VMs" doesn't really seem logical.
I can definitely see the value of tracking a "Type" of VM when using Cloud systems like AWS / Azure which have clearly defined names for each "size and flavor" of VM. But, this field needs to be optional, and ideally there'd be a way to hide it, because it serves no real purpose for most people with an "on-prem" cluster like Proxmox/Nutanix/Hyper-V... what would we even put there?
> containers don't really have number of vCPUs to care about
Sure they do: you can limit the number of vCPUs in a cgroup - see e.g. here (as well as RAM, as you observe).
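For instance, with the Docker CLI (a sketch; my-image is a placeholder), both caps map directly onto cgroup limits:

```shell
# Cap a container at 1.5 CPUs and 512 MiB of RAM (my-image is a placeholder)
docker run --cpus="1.5" --memory="512m" my-image

# Under cgroup v2 these correspond to the controller files:
#   cpu.max    = "150000 100000"   (150 ms of CPU per 100 ms period = 1.5 CPUs)
#   memory.max = 536870912         (512 MiB in bytes)
```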
But I agree that instance types are not relevant in non-cloud environments.
> containers don't really have number of vCPUs to care about
>
> Sure they do: you can limit the number of vCPUs in a cgroup - see e.g. here (as well as RAM, as you observe).
🤐 My almost total inexperience with containers is showing... I support one Docker app but I do nothing other than install it and let it run 😛 Much more of a "full-on" VM person thus far. I appreciate the education!
I'd like to spend some more time digging into the two related but distinct use cases I cited above, to ensure that whatever approach we take here doesn't preclude a natural implementation of the remaining functionality in the future.
A VM type would also be useful for virtual appliances like load balancers, firewalls, etc. All these appliances have multiple interfaces, IP addresses, etc.
We're going to punt on this for v3.4 as it doesn't seem quite fully baked yet. It might help to open a complementary FR to separate out the two discrete use cases I call out above.
With virtual machines (or rather containers) I have another problem. Let's say we have a cluster C1, and in it we have devices D1 and D2. Now I would like to be able to add V1 (Active) on C1/D1 and V1 (Offline) on C1/D2. Currently NetBox does not allow this because the VM name must be unique for the entire cluster.
This option would be useful, as it allows you to quickly determine where each disabled container lives when containers are migrated between devices.
In fact, for me it would be sufficient to be able to assign a container directly to a device (unique container name per device) without having to create a cluster in NetBox and add devices to it. Containers like LXC or Docker can actually be run and moved directly between devices without creating clusters, and it seems like a good idea to me to allow this in NetBox.
I don't know if this is the right place to report this, if there is a better place please point it out.
[Aside: it would be much more polite to open a new discussion, rather than hijack an existing ticket]
> In fact, for me it would be sufficient to be able to assign a container directly to a device, (unique container name per device) without having to create a cluster in Netbox and add devices to it.
That feature already exists in current versions of Netbox: it was added in v3.3.0. You can associate a VM directly with a site and optionally an individual device, without specifying a cluster. What version are you using?
I'm using the latest version, 3.4.6, and I can't assign a VM to a device without indicating a cluster. I have also checked this on https://netbox.tld.pl/ and it is not possible there either (I attach a screenshot of the error). Is it possible for you? Perhaps it worked on some earlier version; which version do you have? It is possible to assign a VM to a site, but then it is not possible to point the assignment at a device, and I need that very much (it allows a quick search for which device the container is currently on).
@tomasz-c please open a separate issue/discussion, as your comments are unrelated to this FR.
Thank you, I have reported it here: https://github.com/netbox-community/netbox/issues/12024
This could be a good candidate for 3.6
> Would be handy to be able to track different types of Virtual Machines like Device Types.
+1 for this exact feature.
Reading through the comments above, here's my two cents.
- Using an item for virtual machines like device types makes sense, as it enables a few things mentioned above. Templating is the main one that comes to mind, which will aid in automating creation.
- As mentioned above, "that's what clusters are for, re instance types": I use the clusters feature for managing cluster software, i.e. Kubernetes, and would have for Proxmox (however I am going with KubeVirt). Using the cluster feature like this has allowed config management (Ansible/AWX, Foreman) to pull in the config for the deployed component. This works well as I can assign VMs and devices to a cluster.
- I also use the virtual machine feature for Docker containers, which are assigned to a Kubernetes cluster. Yes, there's an argument to be made for a container not being a virtual machine; however, it can still have software installed in it and must still be tracked as having installed software, like a physical machine or a KVM VM.
> Sure they do: you can limit the number of vCPUs in a cgroup - see e.g. here (as well as RAM, as you observe).
>
> But I agree that instance types are not relevant in non-cloud environments.
- When I create a Docker container as a VM, the assigned vCPU and memory become its "limit". Having a "VM Type" like "Device Type" would let it act just like an "instance type", as the creation of the "instance" will in my case be automated from the "VM Type".
Personally, the "device type" and the proposed "VM type", if ever created, are the same thing. Extending "Device Type" to cover VMs would be a solution, as the only difference between physical devices (specifically physical machines, i.e. computers) and virtual machines is the first word (physical/virtual). Yes, there are additional fields required for a virtual machine; these extra fields could be hidden behind a checkbox.