k8s-vagrant-multi-node
Network selection
Is this a bug report or feature request?
- Feature Request
How to reproduce it (minimal and precise):
- If you want containers to access an external network, it is quite hard to accommodate that after the Kubernetes cluster has been set up.
Environment:
- Ubuntu 18.04 (Bionic)
- Kernel 4.15.0-54
Output of `make versions`:

```
=== BEGIN Version Info ===
Repo state: 62d403b37d433db7d3eed6d8a98136837441aadb (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===
```
Feature Request
Consider either allowing additional NICs, or allowing the type of the second network (eth1) to be chosen, to make it easier to reach other networks from the VMs (and the Pod network).
Currently `MASTER_IP` uses eth1, and `NODE_IP_NW` must not "collide" with the host network (although it wouldn't necessarily collide if it were a bridged network), so one can't have Pods on the default network.
Are there any similar features already existing:
Manual tinkering with Vagrant files.
What should the feature do:
One of the following, with the option to use `NODE_IP_NW` on that network:
- Allow the selection of the eth1 NIC type (intnet or bridged, for example)
- Add a third NIC (eth2) using a bridged adapter (and perhaps make it the VM's default route)
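As a sketch of the first option, the second NIC's type could be driven by an environment variable in the Vagrantfile. The variable names below (`NODE_NETWORK_TYPE`, `BRIDGE_IF`) and the example IP are hypothetical, not existing options of this project:

```ruby
# Hedged sketch: make the type of the second NIC (eth1) selectable.
# NODE_NETWORK_TYPE and BRIDGE_IF are assumed/hypothetical variables.
Vagrant.configure("2") do |config|
  config.vm.define "master" do |node|
    case ENV.fetch("NODE_NETWORK_TYPE", "private")
    when "bridged"
      # eth1 becomes a bridged adapter; its IP comes from the external
      # network, so NODE_IP_NW no longer has to avoid the host network.
      node.vm.network "public_network", bridge: ENV["BRIDGE_IF"]
    else
      # Current-style behavior: host-only/internal network on eth1.
      node.vm.network "private_network", ip: "192.168.26.10"
    end
  end
end
```

With `NODE_NETWORK_TYPE=bridged BRIDGE_IF=eth0 vagrant up`, eth1 would attach to the host's LAN instead of a vboxnet.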
What would be solved through this feature:
Access to and from other networks, both on the hypervisor and external. Currently, if one has existing services on intnet, or on vboxnet15 and vbox37, and this project has to pick one vboxnet, it becomes necessary to install multiple clusters or to edit Vagrant networks, either in this VM or in existing VMs.
Does this have an impact on existing features:
I can't think of anything that stands out. If the Pod network were bridged, we'd have to ask the user for a range of unallocated IPs (`NODE_IP_NW`, documentation) and maybe ping-probe the range for availability before deployment.
@scaleoutsean I'll look into adding option 2 you mentioned soon.