[OSRM-talk] Separate Build and Serve instances for OSRM

Nikhil VJ nikhil.js at gmail.com
Fri Jul 16 08:37:13 UTC 2021

There's a big difference in RAM requirements between building a graph in
OSRM and deploying it. I've seen that for one country's OSM PBF, the build
can easily cross 16 GB, whereas serving the built graph occupies a little
under 7 GB of RAM.

If I commission a 32 GB RAM server for this, three quarters of the
resources are wasted on something I would need to do for just, say, half an
hour a day, or 1/48th of the time.
So can we split this?

In GCP (Google Cloud Platform), I want to use a more exotic service like
Dataproc for the building (if you know something better, please tell me),
and then a VM or a Kubernetes deployment for the server.
I'm getting stuck at this question: how do I make the data files generated
by OSRM's build process available to the server deployment?
The usual GCP tutorials I see cover storing data in databases, not
multi-gigabyte binary files.

Is there a way I can mount the same storage bucket/folder/drive to both
instances? Or is there a way to FTP everything from one place to the other?
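One common pattern (a sketch only, assuming a Cloud Storage bucket you control, here called `gs://my-osrm-graphs`, and service accounts on both sides with access to it; the bucket name and file paths are illustrative, not from this thread) is to use the bucket as the hand-off point: the build job uploads the `.osrm*` files, and the serving VM pulls them to local disk before starting osrm-routed. Nothing needs to be exposed over the open internet.

```
# On the build instance, after osrm-extract / osrm-contract finish
# (bucket name and file names are assumptions for illustration):
gsutil -m cp /build/germany-latest.osrm* gs://my-osrm-graphs/2021-07-16/

# On the serving VM, before (re)starting osrm-routed:
gsutil -m cp "gs://my-osrm-graphs/2021-07-16/*" /srv/osrm/data/

# Alternatively, mount the bucket read-only as a directory with gcsfuse:
# gcsfuse --implicit-dirs -o ro my-osrm-graphs /mnt/osrm
```

Copying to local disk is usually preferable to a gcsfuse mount here, since osrm-routed memory-maps its data files and local SSD reads are much faster than FUSE-backed object storage.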

Resizing the server's RAM from 8 GB to 32 GB and back to 8 GB is something
I've tried out: I have to switch the whole thing off for quite a long time,
twice, and it doesn't look elegant. Ideally I'd want an arrangement where
the server keeps running, the build happens elsewhere, the graph data gets
replaced with the latest, and then we quickly restart the server. A
"hot swap", so to speak.
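The "hot swap" above can be sketched as an atomic symlink flip: the server always starts osrm-routed against a `current` symlink, each new graph lands in a versioned directory beside it, and a rename switches versions in one step before the quick restart. Everything below (the directory layout, and a throwaway temp dir standing in for the real data path) is an illustrative assumption, not an OSRM feature.

```shell
# Sketch: atomic graph swap via symlink rename (GNU coreutils `mv -T`).
# A throwaway temp dir stands in for the server's real data path.
BASE=$(mktemp -d)
mkdir -p "$BASE/releases/v1" "$BASE/releases/v2"
ln -s "$BASE/releases/v1" "$BASE/current"   # osrm-routed would read via this

# A new build (v2) has been copied down; switch over atomically:
ln -sfn "$BASE/releases/v2" "$BASE/current.tmp"
mv -T "$BASE/current.tmp" "$BASE/current"   # rename is atomic on one filesystem

readlink "$BASE/current"
# then: quickly restart the server, e.g. `systemctl restart osrm-routed`
```

Note that osrm-routed maps its files into memory at startup, so the restart is still needed after the flip; the symlink only makes the changeover instantaneous and easy to roll back.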

This all has to meet org security requirements, of course. For example,
it's not an option to make the generated files downloadable over the
open internet.
Does anybody else have experience with this kind of setup? What would you
recommend?

Nikhil VJ
