Feasibility of Dockerizing OpenFreeMap
Hi,
Thanks for all the work on OpenFreeMap — it's an awesome project and really cool to see something so open and self-hostable.
I had a quick question that I hope is okay to ask here:
Would it be feasible to port the current setup to a Docker-based implementation?
I totally understand and respect the decision to keep this project Docker-free by design — especially considering its focus on clean server deployments and direct system-level optimizations. That said, I was wondering if you think it could be technically possible (even if not recommended for production) to wrap the core functionality in a Docker container — maybe just for experimentation, local testing, or for folks more familiar with containerized environments.
Not a request to implement it or anything — just curious if it’s something that could work in theory, or if there are specific architectural blockers that would make a Docker version inherently impractical.
Thanks again for building and sharing OpenFreeMap with the community
Hi - not a contributor, but I've just started looking into openfreemap for a project and I absolutely will be containerising it myself for a test environment - I don't see why it wouldn't be possible!
Thanks for the response, @gareth-johnstone
To be honest, I wasn’t expecting a reply. Much appreciated. I’m also primarily interested in running OpenFreeMap in a containerized environment. However, looking at how the project is currently structured and deployed, I’ve been a bit hesitant about attempting it myself, which is why I initially asked whether it would even be feasible.
That said, if you think it’s doable, I might give it a shot as well. If I manage to get a working setup, I’ll contribute back here with some changes or a Docker-based deployment approach so others in the community can benefit too.
Nice, likewise! I'll be attempting it tomorrow, so if I make decent progress I'm happy to share it.
I have it running in Docker (at least the skip-planet part of it).
It's pretty janky and I've had to fork and make modifications to the openfreemap source on GitHub.
certbot has been nuked completely (but you'd normally run it behind a reverse proxy anyway with it being in Docker).
I'll tidy it up a bit over the next few days and hopefully publish something. As for the source changes, they won't survive an update if the dev of openfreemap changes anything, so I might need to suggest some changes in a PR so it's "docker friendly".
Thanks for the update! Great to hear that it’s working, at least for specific geographical zones.
Even if it doesn't yet support the entire planet, I think it’s definitely worth a try. I now have more time to experiment, so I’ll attempt to scale it up to global coverage. It might end up being a hard fork from the current implementation, but at least this could offer an alternative approach that the community is free to choose from.
Of course, such a fork would need to be actively maintained and kept in sync with the upstream OpenFreeMap project to stay relevant and functional.
I’ll publish my results soon and share details of this "experiment" with the community. Hopefully it can be useful to others as well.
I'm delighted to see the community progress on this! I fully support Docker for OpenFreeMap, and as I said before, I'll totally link to a Dockerized OFM if someone makes one. I just want to keep this repo Docker-free and concentrate on the basics here.
One idea I had: if Docker is able to mount Btrfs images from the local drive, then you can forgo the whole mounting/unmounting procedure and make two Docker images (or one Docker image which can be started in two different ways):
- Download the latest Btrfs image into a local folder.
- Mount that image and start the nginx process.
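The two modes above could be sketched as a single entrypoint script. Note this is only a rough illustration under assumptions: `BASE_URL`, the `index.txt` listing, and the file names are hypothetical placeholders, not the real OpenFreeMap endpoints, and the loop mount needs a privileged container.

```sh
#!/bin/sh
# Sketch of a two-mode entrypoint: "download" fetches the latest Btrfs
# image, "serve" loop-mounts it read-only and starts nginx.
# BASE_URL and the index.txt listing format are hypothetical placeholders.
set -eu

BASE_URL="${BASE_URL:-https://example.com/ofm}"   # hypothetical endpoint
DATA_DIR="${DATA_DIR:-/data}"
MOUNT_DIR="${MOUNT_DIR:-/mnt/ofm}"

# Pick the newest image from a newline-separated list of filenames.
latest_image() {
  sort -V | tail -n 1
}

download() {
  img=$(wget -qO- "$BASE_URL/index.txt" | latest_image)
  wget -cO "$DATA_DIR/$img" "$BASE_URL/$img"
}

serve() {
  img=$(ls "$DATA_DIR" | latest_image)
  mkdir -p "$MOUNT_DIR"
  # Read-only loop mount; requires a privileged container.
  mount -o loop,ro "$DATA_DIR/$img" "$MOUNT_DIR"
  nginx -g 'daemon off;'
}

case "${1:-}" in
  download) download ;;
  serve)    serve ;;
  *)        echo "usage: $0 {download|serve}" >&2 ;;
esac
```

Keeping both modes in one image means the "download" run can be a one-shot container writing to a shared volume, and "serve" can restart freely without re-fetching anything.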
If you have any questions I'll try to help.
@hyperknot You're absolutely right. The approach should vary depending on the size and scope of the data being served. I’ve been experimenting with containerizing OpenFreeMap using FastAPI, and here's a general strategy I think could scale well depending on the deployment context:
For global map coverage, a more robust setup would involve a container (based on FastAPI, for example, or something else if anyone prefers another approach) that downloads the latest Btrfs image into a local volume and mounts it properly inside the container. This ensures that large datasets don’t get re-downloaded or extracted every time a container spins up, especially if we leverage proper volume persistence.
For regional maps or smaller datasets (e.g. by country), it might be feasible to compile the required data directly into the image if the size is under ~1GB. This would allow quicker boot times and more flexible deployment as lightweight, stateless microservices, especially useful when autoscaling based on regional request patterns.
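For the regional case, baking the data in at build time could look roughly like this. This is only a sketch: the download URL, archive layout, and `nginx.conf` are all hypothetical placeholders.

```dockerfile
# Sketch: bake a small regional dataset (<~1GB) directly into the image.
# The tile archive URL and paths are hypothetical placeholders.
FROM nginx:alpine

# Fetch a pre-extracted regional tileset at build time.
ADD https://example.com/tiles/monaco.tar.gz /tmp/monaco.tar.gz
RUN mkdir -p /usr/share/nginx/html/tiles \
 && tar -xzf /tmp/monaco.tar.gz -C /usr/share/nginx/html/tiles \
 && rm /tmp/monaco.tar.gz

# A config that serves the tiles with the right headers would go here.
COPY nginx.conf /etc/nginx/conf.d/default.conf
```

The resulting image is fully self-contained and stateless, which is what makes quick boot and autoscaling straightforward for the regional case.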
As for orchestration:
- In Kubernetes, something like Longhorn would be ideal to provide a distributed, persistent volume across nodes. This way, pods can spin up anywhere in the cluster and still access the map data reliably, while also enabling autoscaling based on demand.
- For Docker Swarm, a lighter-weight option like SeaweedFS with FUSE could be used to replicate the data across nodes. This setup provides redundancy and high availability without needing heavyweight storage systems, and it maintains fault tolerance even when nodes go offline.
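On the Kubernetes side, the shared map data could live on a Longhorn-backed PersistentVolumeClaim. A minimal sketch, where the claim name and size are illustrative only:

```yaml
# Illustrative PVC using Longhorn as the storage class; pods mounting
# this claim read-only can be scheduled on any node in the cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ofm-tiles          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany        # shared across pods; Longhorn provides RWX via NFS
  storageClassName: longhorn
  resources:
    requests:
      storage: 400Gi       # the full planet needs roughly 300GB
```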
So depending on the deployment scale and geographic data granularity, the container strategy can shift between persistent Btrfs-backed volumes and lightweight, region-specific builds. This would help meet SLAs and provide high availability for those who need reliable uptime.
Thanks again for keeping the main repo clean. I totally agree with that direction. I'll start committing to the initial approach branch from @gareth-johnstone if he's okay with it, and share my implementation there.
For now, I’ve completed the global approach and have it running successfully. Next, I’ll be working on a region-based setup, with a separate Dockerfile and supporting scripts for per-country deployments. This should make it easier to serve smaller datasets in a more scalable way. Here’s an image showing the map served from a Docker container running inside an OrbStack ARM VM, successfully rendering the entire global map:
[Screenshot: global map rendered from a Docker container inside an OrbStack ARM VM]
You appear to be making excellent progress with this!
My attention is unfortunately on other projects at the moment that are taking up waaaaaaaaay too much time!
Hopefully I'll get a chance to chip in again soon.
@hyperknot I think what would help quite a lot at this stage is an option in the main repo to skip certbot completely. Even when running it on my own (blank) servers, I tend to ignore whatever certbot is doing and just use my own nginx proxy anyway (and I can apply my own cert from there). Great to have options, AND it would help with the Docker project(s).
@BulzN, nice work! What I was thinking is to save the Btrfs image outside the container, and mount it like a read-only Docker volume. But I don't know if Docker supports mounting volumes from read-only Btrfs images or not.
Instead of FastAPI, I was thinking of using pure scripts for steps 1–3 (see below). For step 4, a stock nginx image, or nginx installed into a "mixed" image (containing Python, nginx, etc.), would work. Which step do you use FastAPI for?
Basically the server part of OpenFreeMap is only 4 steps:
- Find the latest image, download and extract it.
- Download styles, fonts, etc from the assets bucket.
- Create the correct nginx config.
- Run nginx as a service.
In Docker, you could run steps 1–3 as one-time scripts and step 4 as a service in detached mode.
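A rough docker-compose sketch of that split, with steps 1–3 as a one-shot init service and step 4 as the long-running nginx service. The image name, script names, and volume layout are all assumptions, not the actual project layout:

```yaml
# Illustrative compose file: "init" runs the one-time download/extract/
# config scripts, "web" serves the result with stock nginx.
services:
  init:
    image: ofm-init:local        # hypothetical image containing the scripts
    command: ["sh", "-c", "./download.sh && ./assets.sh && ./nginx-config.sh"]
    volumes:
      - tiles:/data
      - nginx-conf:/etc/nginx/conf.d
  web:
    image: nginx:alpine
    depends_on:
      init:
        condition: service_completed_successfully
    ports:
      - "8080:80"
    volumes:
      - tiles:/data:ro
      - nginx-conf:/etc/nginx/conf.d:ro
volumes:
  tiles:
  nginx-conf:
```

The `service_completed_successfully` condition makes nginx wait until the one-time scripts have finished, so the two halves stay cleanly separated.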
@gareth-johnstone these lines are about disabling certbot. Have you tried it?
https://github.com/hyperknot/openfreemap/blob/dd97e1fdcbd2f7d9c90eee9b52c96bc2ac9009b0/config/.env.sample#L15-L19
Hello,
It’s been a while since I last contributed here. I ultimately stepped back from working on a Docker implementation because of time constraints with work and personal projects.
In the meantime, I’ve been using versatiles-docker quite a lot, and I think their approach is very much in line with the kind of containerized setup that could complement OpenFreeMap. Their implementation feels close to the possible direction you also described in the README for this project.
If I find the time in the future, I’d still like to experiment further with a dedicated Docker setup for OpenFreeMap. Until then, I think versatiles-docker is a good alternative for anyone looking for a region-based containerized deployment.
Thanks again for all the work on this project.
@BulzN Would you mind sharing the solution you had working with the global setup? Or did I miss the link somewhere?
Hi everyone,
It's been a while since I last contributed here. After our previous discussion, I stepped back from working on a Docker implementation due to time constraints with work and personal projects.
In the meantime, I've been using versatiles-docker quite a bit, and their approach really aligned with what could work for OpenFreeMap.
That said, I recently found some time to put together a working proof-of-concept specifically for OpenFreeMap. Following @hyperknot's guidance, I've created a Docker/Kubernetes implementation in my fork: https://github.com/BulzN/openfreemap
It includes:
- A Docker Compose setup for single-host deployments
- Kubernetes manifests for local testing (tested on OrbStack)
- Scripts that handle download, extraction, and serving of tiles
- Support for both the Monaco test dataset and the full planet
- Extraction from Btrfs to a regular filesystem for container compatibility
Important notes:
- The init container requires privileged mode for loop mounting Btrfs images
- Storage: ~1GB for Monaco, ~300GB for planet
- This is optimized for local development/testing
- Production would need distributed storage (Longhorn, Ceph, NFS, etc.)
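The privileged init-container step could look roughly like this. The image names, paths, and volume names are illustrative, not taken from the fork:

```yaml
# Illustrative pod spec fragment: a privileged init container loop-mounts
# the Btrfs image and copies the tiles out to a regular volume, then an
# unprivileged nginx container serves them.
initContainers:
  - name: extract-tiles
    image: alpine:3.19
    securityContext:
      privileged: true           # required for the loop mount
    command: ["sh", "-c"]
    args:
      - |
        mkdir -p /mnt/btrfs &&
        mount -o loop,ro /images/planet.btrfs /mnt/btrfs &&
        cp -a /mnt/btrfs/. /tiles/ &&
        umount /mnt/btrfs
    volumeMounts:
      - { name: images, mountPath: /images }
      - { name: tiles,  mountPath: /tiles }
containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
      - { name: tiles, mountPath: /usr/share/nginx/html, readOnly: true }
```

Only the short-lived init container needs privileges; the serving container can stay unprivileged, which keeps the attack surface small.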
I think versatiles-docker is still a solid alternative for anyone looking for a more production-ready containerized deployment. But this OpenFreeMap-specific implementation might be useful for understanding how the pieces fit together, or as a starting point.
Thank you all for your feedback and consideration.