Question: Is it possible for more than one OpenMPTCProuter instance to use the same VPS?
Is it possible for more than one OpenMPTCProuter instance to use the same VPS? I'm envisioning a scenario where I have more than one Raspberry Pi running OpenMPTCProuter for separate networks, but they are all configured with the same VPS.
This is not possible with the current VPS script configuration.
It may be possible to implement this using containers (i.e., the Docker host would have an MPTCP-enabled kernel); OVH seems to be doing something similar for their OverTheBox solution. To start testing this it would be nice to have an official containerized VPS. It could also be useful for testing updates of the various components (apart from the kernel).
There is already a shadowsocks image, so it shouldn't really be difficult to create an OpenMPTCProuter VPS image.
Indeed, but it's based on Alpine, so it may change lots of things compared to the current Debian-based VPS setup. I was first thinking of something like starting from a Debian Docker image and either running your script or replicating it in the Dockerfile. No idea which would be the easiest / cleanest.
There are 2 choices:
- Create a Dockerfile based on Debian 9 that executes the script (after some changes to it)
- Create a Dockerfile for each application used, then tie them together with docker-compose
I will go with choice 2: a container for each application. Choice 1 is really dirty, and running systemd in a container is more than dirty :) For now I have made Dockerfiles for glorytun TCP, glorytun UDP, and soon MLVPN. I need to modify the shadowsocks Dockerfile to add nocrypto support. Then I will need to make everything work together in docker-compose.
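For reference, a minimal sketch of what such a compose file could look like; the image names and ports are hypothetical placeholders, not the actual published images:

```yaml
version: "3"
services:
  shadowsocks:
    image: openmptcprouter/shadowsocks     # hypothetical image name
    restart: always
    ports:
      - "65101:65101"                      # placeholder port
  glorytun-tcp:
    image: openmptcprouter/glorytun-tcp    # hypothetical image name
    restart: always
    cap_add:
      - NET_ADMIN                          # glorytun needs to create a tun interface
    devices:
      - /dev/net/tun
  glorytun-udp:
    image: openmptcprouter/glorytun-udp    # hypothetical image name
    restart: always
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
```

A single `docker-compose up -d` would then start the whole stack, recreating only the containers whose definition changed.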
Yes, one for all is very dirty, no doubt! And with docker-compose it's very easy to run complex container deployments with a single command. Eager to test this!
Hi,
I was reading this and have a few ideas/tips on packaging (sorry if you already know them all!). Dockerizing the app is a great choice; in addition to docker-compose, someone could then make Kubernetes definitions for it (plain .yml or even a Helm package), which makes it super easy to run on existing clusters.
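For example, a minimal Kubernetes Deployment for one of the services could look like the sketch below; the image name and port are placeholders, not the real ones:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: omr-shadowsocks
spec:
  replicas: 1
  selector:
    matchLabels:
      app: omr-shadowsocks
  template:
    metadata:
      labels:
        app: omr-shadowsocks
    spec:
      containers:
        - name: shadowsocks
          image: openmptcprouter/shadowsocks   # hypothetical image name
          ports:
            - containerPort: 65101             # placeholder port
```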
Or, another good cross-distro solution outside Docker would be to rewrite the shell install scripts in Ansible, which is higher-level and declarative: you don't explicitly give it commands, you tell it a desired state like "this line needs to be commented out in this file" or "this package is installed", and it does all the work for you and knows about the differences between distributions. It could easily reduce the size/complexity of the install script by two thirds.
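As a hedged illustration of that style, a minimal playbook; the package name and sysctl key are example desired states, not necessarily what the real install script configures:

```yaml
- hosts: vps
  become: yes
  tasks:
    - name: Ensure shadowsocks-libev is installed (a desired state, not a command)
      apt:
        name: shadowsocks-libev
        state: present

    - name: Ensure IPv4 forwarding is enabled (applied now and persisted)
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
```

Rerunning the playbook reports "ok" for tasks whose state is already correct instead of redoing the work.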
Then Packer can be used to automatically generate images for various targets (a VirtualBox image, Docker, an Amazon AMI, many cloud/VPS providers): it takes care of creating a blank image/server, running your install script on it, and then taking a snapshot. It makes releasing new versions for all platforms basically a one-line command.
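As a rough sketch, a legacy-JSON Packer template using the Docker builder could look like this; the install script path and repository name are assumptions, and the other targets would be covered by adding more builders (amazon-ebs, virtualbox-iso, ...) to the same template:

```json
{
  "builders": [
    { "type": "docker", "image": "debian:9", "commit": true }
  ],
  "provisioners": [
    { "type": "shell", "script": "./debian9-x86_64.sh" }
  ],
  "post-processors": [
    { "type": "docker-tag", "repository": "openmptcprouter/vps", "tag": "latest" }
  ]
}
```

`packer build template.json` would then run the install script in a fresh debian:9 container and tag the resulting snapshot as an image.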
Work on the Docker version is here: https://github.com/Ysurac/docker-openmptcprouter-vps. The VPS admin still needs to be adapted to simplify configuration of the containers.
I don't think Ansible would really make the script smaller. I would also need to make a script to install Ansible on each distribution, or describe how to install it. But this can be done later.
I'll accept a pull request if you make it in Ansible :)
OK thanks, I'll check out the Docker work in progress.
Ansible is a client-side install only, that's the cool thing! The only server-side dependency is SSH + Python (actually you can even use it to install Python first if needed: https://docs.ansible.com/ansible/2.5/installation_guide/intro_installation.html#managed-node-requirements).
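A sketch of that bootstrap trick using the raw module, which needs only plain SSH; the host group name is arbitrary and a Debian-family target is assumed:

```yaml
- hosts: vps
  gather_facts: no      # fact gathering itself needs Python, so skip it here
  tasks:
    - name: Bootstrap Python on a bare host (raw runs over SSH without Python)
      raw: test -e /usr/bin/python || (apt-get update && apt-get install -y python-minimal)
```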
I thought about it because of all the sed commands you have in the install script. Ansible has lineinfile for that (see https://docs.ansible.com/ansible/2.5/modules/list_of_files_modules.html): it's not shorter, but it's more readable and, best of all, idempotent (debugging is so much faster than with a shell script, where you have to recreate everything from scratch to test it).
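Concretely, a typical sed call next to a hedged lineinfile equivalent; the file and setting are illustrative, not taken from the actual install script:

```yaml
- hosts: vps
  become: yes
  tasks:
    # Shell equivalent (blindly rewrites the file on every run):
    #   sed -i 's/^#\?DNSSEC=.*/DNSSEC=no/' /etc/systemd/resolved.conf
    - name: Ensure DNSSEC is disabled (reports "ok" and touches nothing if already set)
      lineinfile:
        path: /etc/systemd/resolved.conf
        regexp: '^#?DNSSEC='
        line: 'DNSSEC=no'
```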
Some Ansible roles seem to already exist for some of the tools used by the script. I will look at this when I have a working Docker version.