osrm-backend
Error launching osrm - problem reading from file /data/map.osrm.mldgr
I'm building OSRM using the latest osrm-backend image, merging .pbf extracts of North America, Europe and Asia from Geofabrik. This worked fine originally, but the build now appears to be broken even though the process itself has not changed. Luckily I still have a working copy that was built about 3 months ago.
I merge the extracts into a single map file using Osmium, build, then run as below. Note the max-size variables are passed in during the build and don't affect the error.
sudo docker run --rm -p 8080:5000 -d -v "/osrm:/data" --name osrm-runner osrm/osrm-backend osrm-routed --max-viaroute-size {{ max_viaroute_size }} --max-matching-size {{ max_matching_size }} --max-table-size {{ max_table_size }} --algorithm mld /data/map.osrm
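For reference, the merge and MLD pre-processing steps look roughly like this (a simplified sketch; the car profile and exact paths are placeholders rather than my literal scripts):

# merge the regional extracts into one file
osmium merge north-america-latest.osm.pbf europe-latest.osm.pbf asia-latest.osm.pbf -o map.osm.pbf
# MLD pipeline: extract, partition, customize
sudo docker run --rm -v "/osrm:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/map.osm.pbf
sudo docker run --rm -v "/osrm:/data" osrm/osrm-backend osrm-partition /data/map.osrm
sudo docker run --rm -v "/osrm:/data" osrm/osrm-backend osrm-customize /data/map.osrm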
This is the error I see:
[info] starting up engines, v5.26.0
[info] Threads: 32
[info] IP address: 0.0.0.0
[info] IP port: 5000
[error] Problem reading from file: /data/map.osrm.mldgr : (possible cause: "Inappropriate ioctl for device") (at include/storage/tar.hpp:43)
After the build, the map.osrm.mldgr file is empty (0 bytes), as shown in the listing below, which is very odd. I can't explain this, since in the version built around 3 months ago this file is around 20G. There are no visible errors from OSRM during the build.
The build environment is Ubuntu 20.04.3 LTS on AWS, using the same Docker version as the working build. I can't see any environment differences between the existing working copy and the new one I tried to build, so I'm lost.
Can anyone provide some guidance please? This is quite critical at the moment as I need a fresh set of data.
-rw-r--r-- 1 root root 11G Feb 27 23:44 asia-latest.osm.pbf
-rw-r--r-- 1 root root 25G Feb 27 23:44 europe-latest.osm.pbf
drwx------ 2 root root 16K Feb 28 16:31 lost+found
-rw-r--r-- 1 root root 47G Feb 28 17:38 map.osm.pbf
-rw-r--r-- 1 root root 48G Feb 28 18:17 map.osrm
-rw-r--r-- 1 root root 10G Feb 28 23:36 map.osrm.cell_metrics
-rw-r--r-- 1 root root 335M Feb 28 23:17 map.osrm.cells
-rw-r--r-- 1 root root 3.5G Feb 28 19:41 map.osrm.cnbg
-rw-r--r-- 1 root root 3.5G Feb 28 23:04 map.osrm.cnbg_to_ebg
-rw-r--r-- 1 root root 68K Feb 28 23:21 map.osrm.datasource_names
-rw-r--r-- 1 root root 20G Feb 28 23:18 map.osrm.ebg
-rw-r--r-- 1 root root 5.1G Feb 28 23:06 map.osrm.ebg_nodes
-rw-r--r-- 1 root root 5.6G Feb 28 20:25 map.osrm.edges
-rw-r--r-- 1 root root 5.0G Feb 28 23:13 map.osrm.enw
-rwx------ 1 root root 20G Feb 28 23:05 map.osrm.fileIndex
-rw-r--r-- 1 root root 22G Feb 28 20:26 map.osrm.geometry
-rw-r--r-- 1 root root 3.8G Feb 28 20:25 map.osrm.icd
-rw-r--r-- 1 root root 7.5K Feb 28 23:13 map.osrm.maneuver_overrides
-rw-r--r-- 1 root root 0 Feb 28 23:36 map.osrm.mldgr
-rw-r--r-- 1 root root 165M Feb 28 18:17 map.osrm.names
-rw-r--r-- 1 root root 12G Feb 28 19:41 map.osrm.nbg_nodes
-rw-r--r-- 1 root root 3.4G Feb 28 23:17 map.osrm.partition
-rw-r--r-- 1 root root 6.0K Feb 28 18:17 map.osrm.properties
-rw-r--r-- 1 root root 81M Feb 28 20:32 map.osrm.ramIndex
-rw-r--r-- 1 root root 4.0K Feb 28 20:01 map.osrm.restrictions
-rw-r--r-- 1 root root 3.5K Feb 28 17:38 map.osrm.timestamp
-rw-r--r-- 1 root root 24K Feb 28 20:25 map.osrm.tld
-rw-r--r-- 1 root root 56K Feb 28 20:25 map.osrm.tls
-rw-r--r-- 1 root root 1.6G Feb 28 20:01 map.osrm.turn_duration_penalties
-rw-r--r-- 1 root root 9.6G Feb 28 20:01 map.osrm.turn_penalties_index
-rw-r--r-- 1 root root 1.6G Feb 28 20:01 map.osrm.turn_weight_penalties
-rw-r--r-- 1 root root 12G Feb 27 23:04 north-america-latest.osm.pbf
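As a sanity check on the output: in recent OSRM releases the .osrm.* files are tar archives (which is why the error points at tar.hpp), so something along these lines shows whether any file came out empty or unreadable:

find /osrm -maxdepth 1 -name 'map.osrm*' -size 0   # list any zero-byte outputs
tar tvf /osrm/map.osrm.mldgr                       # a healthy file lists its archive entries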
@gibso221 One common issue is running out of memory inside the Docker containers when pre-processing the data (osrm-extract, etc.). Unfortunately, the only indication Docker gives you that it has killed a process for exceeding its memory allowance is the return code - there is no error message logged, just a silent exit and a non-zero return code.
When you end up with a zero-sized file like that, my first guess would be failure of the pre-processing steps, and the most likely candidate is an out-of-memory error that you didn't notice.
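Something like this after each pre-processing step will usually reveal it (a sketch; the osrm-extract invocation is just an example):

sudo docker run --rm -v "/osrm:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/map.osm.pbf
echo $?                                               # 137 typically means the process was SIGKILLed (e.g. by the OOM killer)
sudo dmesg | grep -iE 'killed process|out of memory'  # host-side evidence of OOM kills
docker inspect --format '{{.State.OOMKilled}}' <container>   # only works if the container wasn't started with --rm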
@danpat Thanks for the response. I'm running this on AWS on an r5a.16xlarge instance, which has 512 GB of memory, so I doubt it's memory-limited in this scenario.
Something is definitely failing silently as you said, though. Could it be some hard limit set in Docker that I need to override?
Possibly - check out the docs at https://docs.docker.com/config/containers/resource_constraints/. I don't know whether a running Docker container has access to the host machine's full amount of memory by default. I'm on macOS, and there it does not.
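On Linux you can check how much memory the daemon actually sees, watch usage while a step runs, and set the limit explicitly if you want to rule this out (the 480g figure is just an example below your instance size):

docker info | grep -i 'total memory'   # memory visible to the Docker daemon
docker stats                           # live per-container memory usage while a step runs
sudo docker run --rm -m 480g -v "/osrm:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/map.osm.pbf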
Closing as stale.