[OSM-dev] Questions/Ideas/Plans on OSM Infrastructure

Dominik Bay eimann at etherkiller.de
Wed Sep 16 17:16:33 BST 2009

Hi all,

I'm coming up with a topic for discussion on how to store and
serve OSM data for Slippy Maps and Mobile Devices.


	Creating a Data-Delivery Infrastructure
		for OSM Data and Tiles

Part One of Many, Dominik Bay, 16th Sept. 2009


1. What is it?
2. Why do we need it?
3. A short description of what can be done

1. What is it?

This text and the attached image describe an infrastructure
that can serve the needs coming up in the next weeks and months.

2. Why do we need it?
Due to the growing amount of software based on OpenStreetMap and
OpenRouteService we need more realtime-ish behaviour of our data,
to serve mobile users with up-to-date tiles and fast route calculation.
We also need to get tiles *fast* to our users; this is why they are
called *Slippy* Maps ;-)
(They are already fast at the moment, thanks for your work on mod_tile!)

3. A short description of what can be done
We can make use of things like Anycast and geo-aware caching.
This means a user connects to a tile proxy which is near their
ISP (routing-wise) and holds all tiles which are relevant for
the user's location (Europe, for example), plus the tiles which
are requested often (big cities).
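As a rough illustration, the proxy selection could look like the sketch
below. The proxy hostnames and the region lookup are purely hypothetical,
just to show the idea, not anything that exists today:

```python
# Hypothetical mapping of user regions to tile-proxy hostnames.
# In practice Anycast/DNS would do this, not application code.
PROXIES = {
    "europe": "tiles-eu.example.org",
    "north_america": "tiles-na.example.org",
}
DEFAULT_PROXY = "tiles.example.org"

def pick_proxy(user_region: str) -> str:
    """Return the proxy nearest (routing-wise) to the user's region,
    falling back to a default proxy for regions we don't cover yet."""
    return PROXIES.get(user_region, DEFAULT_PROXY)
```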

To get a better understanding of the next lines, feel free to open this
picture: <http://eimann.etherkiller.de/nmz/osm.png>

-> Database
To achieve this we need a database which holds all the current data,
split by continent and maybe also by country, and which can be
accessed by the rendering tools and ORS.
The per-continent/country split lets us easily spot high load
and move those databases to less-used servers, to better handle updates
*to* the database (Potlatch, etc.) and requests *from* the database
(renderers, ORS).
We should also differentiate between "Diff Data", which is created when
mappers use Potlatch or other tools, and "Persistent Data", which is for
example the current data we have from a database dump.
The "Persistent Data" gets updated every week from "Diff Data" which
hasn't changed in the last two days.
Once those datasets are merged, we can dump the data for other purposes.
Routing decisions should be made on this data too, to minimize the
difference between tiles and routing data.
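The weekly merge of two-day-old "Diff Data" into "Persistent Data" could
be sketched like this. The record layout and field names are my own
assumptions, just to make the two-day rule concrete:

```python
import time

TWO_DAYS = 2 * 24 * 3600  # seconds

def select_stable_diffs(diffs, now=None):
    """Pick diff records whose last change is at least two days old,
    so they are considered settled and can be folded into the
    persistent dataset."""
    now = time.time() if now is None else now
    return [d for d in diffs if now - d["last_changed"] >= TWO_DAYS]

def merge_into_persistent(persistent, diffs, now=None):
    """Merge stable diffs into the persistent dataset (a dict keyed
    by object id) and return the dataset plus the still-fresh diffs."""
    stable = select_stable_diffs(diffs, now)
    for d in stable:
        persistent[d["id"]] = d["data"]
    stable_ids = {d["id"] for d in stable}
    remaining = [d for d in diffs if d["id"] not in stable_ids]
    return persistent, remaining
```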

-> Rendering
The goal is to get nearly-realtime rendering of all map types and the
option to easily do customized rendering to support events,
Wikipedia, etc.
This would also enable us to render 3D layers with very low delay.

-> Shared Storage
Rendered tiles are stored for the Webservers to serve them to the
Proxies on request.
The Webservers can only *read* from the Storage, as there is no need for
writing on it.
Same for the rendering farm. A renderer fetches a dataset, renders it
and saves it on the shared storage, together with a file which holds
meta-data like country, city, data-time, render-time, etc.
This is done every five minutes; the default value for expiring tiles is
30 minutes, and on request it should be 5-10 minutes. We need to check
how this behaves and how much load we get, but it should be somewhere in
this range.
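A renderer's save step might look roughly like this. The file layout and
the meta-data fields are just an assumption to illustrate the sidecar
idea, not a spec:

```python
import json
import os
import time

def save_tile(storage_dir, tile_id, png_bytes, meta):
    """Write a rendered tile to the shared storage, plus a JSON
    sidecar holding meta-data (country, city, data-time, ...) and
    the render-time, which the webservers can read later."""
    path = os.path.join(storage_dir, f"{tile_id}.png")
    with open(path, "wb") as f:
        f.write(png_bytes)
    sidecar = dict(meta, render_time=time.time())
    with open(path + ".meta.json", "w") as f:
        json.dump(sidecar, f)
    return path
```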

-> Webservers
The webservers answer proxy requests, expire tiles on the proxies,
serve tiles, and can read the additional meta-data to make decisions
on expiring and serving old or new data.
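The expiry decision based on a tile's meta-data could be as simple as
the sketch below. The 30-minute default is the one mentioned above; for
the on-request case I just picked the upper end of the 5-10 minute range:

```python
import time

DEFAULT_EXPIRE = 30 * 60     # 30 minutes, default for untouched tiles
ON_REQUEST_EXPIRE = 10 * 60  # upper end of the 5-10 minute range

def expire_seconds(meta, requested=False, now=None):
    """How long a proxy may still cache this tile, given the
    render_time from its meta-data sidecar. Returns 0 when the
    tile should be expired immediately."""
    now = time.time() if now is None else now
    budget = ON_REQUEST_EXPIRE if requested else DEFAULT_EXPIRE
    age = now - meta["render_time"]
    return max(0, int(budget - age))
```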

-> Proxies
The proxy servers are located near the user, to *only* serve tiles and
to help spread the load.
Imagine 5000 users doing routing with AndNav2, travelling at
100 km/h with different zoom levels on their map; this generates a lot
of requests.
Proxies auto-expire their content but also honour expire times served
by the webservers, to push new content to the users.
The proxy servers are located at various ISPs, all served out of the
same /24, so we also get automatic failover plus data served nearly
locally (routing-wise).
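The proxy-side auto-expire could be a small TTL cache along these lines;
a toy sketch, not a real proxy, with the TTL being whatever expire time
the webserver served:

```python
import time

class TileCache:
    """Tiny TTL cache: entries auto-expire, and the per-tile TTL
    comes from the expire times pushed by the webservers."""

    def __init__(self):
        self._store = {}  # tile_id -> (data, expires_at)

    def put(self, tile_id, data, ttl, now=None):
        now = time.time() if now is None else now
        self._store[tile_id] = (data, now + ttl)

    def get(self, tile_id, now=None):
        """Return cached tile data, or None if missing or expired."""
        now = time.time() if now is None else now
        entry = self._store.get(tile_id)
        if entry is None or now >= entry[1]:
            self._store.pop(tile_id, None)
            return None
        return entry[0]
```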


So, this is what I've done so far. Hardware specs and other stuff are
currently under evaluation, and I'm happy to get more details,
specifically on the rendering and database parts.
(Still reading the Wiki on that anyway.)
My specific question for rendering is: how does it scale?
The more cores the better, or fewer cores but more speed? ;-)

I'm curious about your input, so we can deliver a better user experience
in the end.

Kind regards,

Dominik
