Hi,
I followed this wiki page for shared memory configuration and made the configuration changes in the server files.
I ran the Docker container with the following parameters:
docker run --restart=always --ulimit memlock=68719476736:68719476736 --name osrm-backend -d -p 5000:5000 -v /opt/osrm:/data osrm/osrm-backend:v5.18.0 sh -c "osrm-datastore /data/france-latest.osrm && osrm-routed --shared-memory=yes --algorithm mld"
The startup output is:
[info] Data layout has a size of 1894 bytes
[info] Allocating shared memory of 1096107224 bytes
[info] Data layout has a size of 1661 bytes
[info] Allocating shared memory of 2630081913 bytes
[info] All data loaded. Notify all client about new data in:
[info] /static 0
[info] /updatable 1
[info] All clients switched.
[info] starting up engines, v5.18.0
[info] Loading from shared memory
[info] Threads: 1
[info] IP address: 0.0.0.0
[info] IP port: 5000
[info] http 1.1 compression handled by zlib version 1.2.11
[info] Listening on: 0.0.0.0:5000
[info] running and waiting for requests
Why does the data layout have a size of only 1894 bytes and 1661 bytes?
Also, I ran performance tests with and without shared memory, and the results are the same.
So my question is: does the shared memory parameter increase performance, and is it compatible with the OSRM Docker container, or does it only apply to standalone startup?
The actual sizes of the shared memory allocations are:
[info] Allocating shared memory of 1096107224 bytes
[info] Allocating shared memory of 2630081913 bytes
The other lines are just the sizes of the data structures that store the mappings.
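For a sense of scale, those two allocations come out to roughly 1 GiB and 2.5 GiB for the France extract. A quick conversion of the logged values (a plain shell sketch, nothing OSRM-specific):

```shell
# Convert the allocation sizes from the log above into GiB.
# awk is used only for the floating-point division.
for bytes in 1096107224 2630081913; do
  awk -v b="$bytes" 'BEGIN { printf "%d bytes = %.2f GiB\n", b, b / (1024 ^ 3) }'
done
# → 1096107224 bytes = 1.02 GiB
# → 2630081913 bytes = 2.45 GiB
```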
So my question is: does the shared memory parameter increase performance, and is it compatible with the OSRM Docker container, or does it only apply to standalone startup?
It is used to enable use-cases where you need either several processes sharing the same data (e.g. when using OSRM through the node bindings, which spawns several Node.js processes), or updating the data in-place while a process is running (important for traffic updates). It is generally not relevant for performance.
I'm not sure how you could set up Docker to share memory between containers, though.
You can share memory between containers. By default each container has its own IPC namespace (which covers shared memory), but with the --ipc flag of the docker run command you can choose a different IPC namespace and therefore share memory between containers. I experimented only with --ipc="host" and it worked just fine for OSRM.
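One way to convince yourself the namespace is actually shared (my own sanity check, not something the OSRM docs describe): System V shared memory segments can be listed with ipcs, and with --ipc="host" the segments created by osrm-datastore inside a container should also be visible on the host:

```shell
# List System V shared memory segments visible in the current IPC
# namespace. Run this on the host after osrm-datastore has finished
# inside a container started with --ipc="host"; OSRM's segments
# should appear in the list.
ipcs -m
```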
Be aware, however, that osrm-datastore additionally creates lock files under the system's temporary directory, and these are required when starting osrm-routed with shared memory. osrm-routed also needs access to the files loaded by osrm-datastore. The full commands then look as follows:
# Load data into shared memory.
# --ipc="host" uses the host's IPC namespace;
# --ulimit memlock raises the locked-memory limit;
# /tmp is mounted so the lock files persist.
# (Comments must not follow a trailing backslash, or the
# line continuation breaks.)
docker run \
  --ipc="host" \
  --ulimit memlock=137438953472 \
  -v /tmp/osrm:/tmp:rw \
  -v /dir-with-osrm-file:/data \
  osrm/osrm-backend:v5.18.0 \
  osrm-datastore /data/some-dataset.osrm

# Use data from shared memory.
# Even though the data is loaded into shared memory, the .osrm files
# must still be accessible, along with the lock files under /tmp.
docker run \
  --ipc="host" \
  -v /tmp/osrm:/tmp:rw \
  -v /dir-with-osrm-file:/data \
  osrm/osrm-backend:v5.18.0 \
  osrm-routed \
    --algorithm mld \
    --shared-memory=1
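For completeness, the same two-step setup could also be expressed with Docker Compose. A hypothetical sketch (the service names and paths are mine; ipc: host is Compose's equivalent of --ipc="host"):

```yaml
# Hypothetical docker-compose sketch of the two docker run commands above
services:
  datastore:
    image: osrm/osrm-backend:v5.18.0
    ipc: host                      # share the host's IPC namespace
    ulimits:
      memlock: 137438953472        # raise the locked-memory limit
    volumes:
      - /tmp/osrm:/tmp:rw          # persist the lock files
      - /dir-with-osrm-file:/data
    command: osrm-datastore /data/some-dataset.osrm
  routed:
    image: osrm/osrm-backend:v5.18.0
    ipc: host
    ports:
      - "5000:5000"
    volumes:
      - /tmp/osrm:/tmp:rw
      - /dir-with-osrm-file:/data  # the files must still be readable
    command: osrm-routed --algorithm mld --shared-memory=1
    depends_on:
      - datastore
```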
Thank you for your answer; this should be in the wiki page for shared memory configuration.
It would definitely be easier to find there. @TheMarex, I assume I can't just contribute to the wiki, as it doesn't support pull requests. Could you, as maintainer, put it there if you also think it could be helpful for others?