
Better way of avoiding port conflicts

Open rysiekpl opened this issue 2 years ago • 8 comments

Continuing discussion from #3.

It would be good to have a better way of avoiding port conflicts when running multiple instances of Lemmy. The current approach (randomizing ports on every deploy) has several downsides.

Quick brainstorm about options here:

  1. Include nginx in the docker-compose.yml, allowing us to proxy_pass directly to specific containers; name the relevant containers using the {{domain}}: lemmy-{{domain}}, lemmy-ui-{{domain}}, pictrs-{{domain}}, etc. A host-installed nginx would be unnecessary, or, if already present, would only need to hit a single port: the one exposed by the nginx container.

  2. Have some faith in the admins running instances, and make it possible to explicitly set ports per deployment, such that an admin deploying 3 different Lemmy instances would simply define three sets of ports. A variant of this would be a configurable "starting port" per instance, with each actual service port offset from it by a well-defined value (say, lemmy_port would be starting_port, lemmy_ui_port would be starting_port + 1, etc.); a sketch of this follows below.
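
To illustrate option 2, a per-instance vars file might look something like this (a rough sketch; starting_port and the derived names are placeholders, not existing lemmy-ansible variables):

# Hypothetical per-instance vars; names are illustrative only
starting_port: 8000
lemmy_port: "{{ starting_port }}"          # 8000
lemmy_ui_port: "{{ starting_port + 1 }}"   # 8001
pictrs_port: "{{ starting_port + 2 }}"     # 8002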

Option 1. is the cleanest: it keeps each Lemmy deployment packaged within the docker-compose.yml, all managed from a single lemmy-ansible checkout with minimal side effects on the host system. From a sysadmin's perspective this might be the most preferable, and it is also the most in line with the "docker way of doing things", so to speak.
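
As a sketch of option 1 (the image references, the lemmy_port variable, and the mounted nginx config below are assumptions for illustration, not the repo's actual templates), the rendered docker-compose.yml could bundle an nginx service as the only host-facing entry point:

# docker-compose.yml.j2 excerpt (illustrative)
services:
  nginx-{{domain}}:
    image: nginx:1-alpine
    ports:
      # the single host-facing port for this whole instance
      - "127.0.0.1:{{ lemmy_port }}:80"
    volumes:
      # proxy_pass targets lemmy-{{domain}}, lemmy-ui-{{domain}} and pictrs-{{domain}} by container name
      - ./nginx-{{domain}}.conf:/etc/nginx/conf.d/default.conf:ro
  lemmy-{{domain}}:
    image: dessalines/lemmy:{{ lemmy_version }}       # version variable is illustrative
  lemmy-ui-{{domain}}:
    image: dessalines/lemmy-ui:{{ lemmy_version }}    # version variable is illustrative
  pictrs-{{domain}}:
    image: asonix/pictrs:{{ pictrs_version }}         # version variable is illustrative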

rysiekpl, Mar 28 '22 20:03

I definitely prefer option 2 for performance reasons... why have multiple nginx layers when you only need one.

dessalines, Mar 28 '22 20:03

> I definitely prefer option 2 for performance reasons... why have multiple nginx layers when you only need one.

Option 1. could also be used with a single nginx layer; the difference is that nginx would live inside the docker-compose.yml. The only reason for an additional nginx on the host system would be if the system administrator wants it that way for whatever reason (other services running on the system, etc.).

The important benefit of option 1. is encapsulation: all services needed to run Lemmy are encapsulated in and managed by the docker-compose.yml, with no side effects on the host system.

The performance hit should be negligible. I've run infrastructure with two nginx layers serving hundreds or thousands of requests per second; nginx was never the bottleneck.

rysiekpl, Mar 28 '22 21:03

We have an internal repo that we use to deploy lemmy.ml and associated test instances; it uses the method 2 you describe. Just specify the starting port for each instance, and the other ports are immediately above it.

I prefer to use native nginx, because that makes it much easier to run other services besides Lemmy (it's so lightweight that there is little reason to dedicate a whole server to it). Also, system packages get updated more frequently (especially in case of security vulnerabilities), e.g. using unattended-upgrades. If we used a Docker image for nginx, it would have to be updated manually in this repo all the time.

Nutomic, Mar 29 '22 15:03

> We have an internal repo that we use to deploy lemmy.ml and associated test instances; it uses the method 2 you describe. Just specify the starting port for each instance, and the other ports are immediately above it.

Makes sense.

> Also, system packages get updated more frequently (especially in case of security vulnerabilities), e.g. using unattended-upgrades. If we used a Docker image for nginx, it would have to be updated manually in this repo all the time.

That's a valid consideration. It also makes it easier to upgrade from the current setup. Option 2 it is, then!

rysiekpl, Mar 29 '22 19:03

> We have an internal repo that we use to deploy lemmy.ml and associated test instances; it uses the method 2 you describe. Just specify the starting port for each instance, and the other ports are immediately above it.

So, since I might have time to work on this soon, a question: how do you specify the port in that setup? I could come up with a scheme, but maybe there's no need to reinvent the wheel?

rysiekpl, Apr 20 '22 16:04

It's like this:

# List of instance domains that are deployed and managed.
domains:
  - lemmy.ml
  - slrpnk.net
  - lemmy.perthchat.org
  - community.xmpp.net
  - jeremmy.ml
# Internal ports that are used for each instance. These should be in steps of 10
# because we need ports for different services
ports:
  "lemmy.ml": 8000
  "slrpnk.net": 8020
  "lemmy.perthchat.org": 8030
  "community.xmpp.net": 8040
  "jeremmy.ml": 8050

Nutomic, Apr 21 '22 11:04

@rysiekpl Did you find a solution to your problem?

As we now have a vars.yml file per domain, we'll look at setting up a new variable called lemmy_web_port (or something like that): we'd check whether it exists in the vars file, and fall back to a random port if it doesn't. Would that be sufficient?

Edit: The main reason I want to fix this is that it forces nginx to be reloaded on every deploy, when it doesn't need to be.
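
For what it's worth, a sketch of how that could look (lemmy_web_port is the name from this comment; the default/random fallback is only one possible implementation, not an existing variable in this repo). Seeding the random filter with the domain would also keep the fallback port stable across deploys, which avoids the unnecessary nginx reloads mentioned above:

# per-domain vars.yml: optional explicit override (value is just an example)
lemmy_web_port: 10633

# in the docker-compose / nginx templates: fall back to a random but per-domain-stable port
#   - "127.0.0.1:{{ lemmy_web_port | default(20000 | random(start=10000, seed=domain)) }}:8536"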

ticoombs, Oct 04 '23 01:10

I have not; I haven't had the time to dive into it.

rysiekpl, Oct 04 '23 12:10