[OSM-dev] mapnik image export failures (and fixes?)

Jon Burgess jburgess777 at gmail.com
Mon Jul 19 22:30:44 BST 2010


On Mon, 2010-07-19 at 08:54 +0100, Tom Hughes wrote:
> On 19/07/10 08:41, Frederik Ramm wrote:
> 
> > Mikel Maron wrote:
> >> Since the export function is linked from the main site, imo we should
> >> consider it a core service that should be available at some service
> >> level. Personally, we use this feature often through Map Kibera.
> >
> > The Mapnik image export works by custom rendering your image, whereas
> > the Osmarender image export, and also anything you do via
> > openstreetmap.gryph.de/bigmap.cgi, works by stitching together
> > individual tiles. If any of these work for you then they will be more
> > reliable than the image export (the bigmap service will let you create a
> > Perl script which you can then run at any time to update the image).
> 
> The critical point is that the mapnik export, like the export tab in 
> general, was designed for one-off manual use, but people seem to have 
> decided to script access to it and scrape maps from it.
> 
> Sure we could try and scale it by putting massive resources behind it, 
> but as Frederik says, it's not really a very efficient way to do things.
> 
> For the record I did ask Jon where the load was falling when you asked 
> about this on IRC precisely because if he had said it was CPU load then 
> we might have been able to arrange for a dedicated server to run the export.
> 
> Unfortunately he said it's as much database as anything, and I don't 
> really want to get into having to run a separate rendering database just 
> for the export if we can avoid it.
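
For reference, the "stitching together individual tiles" approach Frederik
describes boils down to something like the sketch below: fetch the z/x/y
tiles covering the area and paste them into one image. The tile URL, the
Pillow dependency and the function names are illustrative assumptions, not
bigmap's actual code.

import math
from io import BytesIO
from urllib.request import urlopen

from PIL import Image

TILE_URL = "https://tile.openstreetmap.org/%d/%d/%d.png"   # standard z/x/y scheme

def deg2tile(lat, lon, zoom):
    """Lat/lon to slippy-map tile indices at the given zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def stitch(min_lat, min_lon, max_lat, max_lon, zoom):
    x0, y0 = deg2tile(max_lat, min_lon, zoom)   # north-west corner tile
    x1, y1 = deg2tile(min_lat, max_lon, zoom)   # south-east corner tile
    out = Image.new("RGB", ((x1 - x0 + 1) * 256, (y1 - y0 + 1) * 256))
    for x in range(x0, x1 + 1):
        for y in range(y0, y1 + 1):
            tile = Image.open(BytesIO(urlopen(TILE_URL % (zoom, x, y)).read()))
            out.paste(tile, ((x - x0) * 256, (y - y0) * 256))
    return out

# e.g. stitch(51.45, -0.2, 51.55, 0.0, 12).save("london.png")

Because the tiles it touches have usually been rendered and cached already,
this puts far less load on the database than a fresh Mapnik render of the
same area.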

As Tom mentions, the problem recently has been a combination of factors:
- More people viewing map tiles
- More data changing and needing to be re-rendered. Large changesets which
"add an apostrophe to every 'St Johns Street' in the northern hemisphere"
cause a lot of tiles to be invalidated.
- The total size of the tiles and database is growing and now exceeds what
can easily be cached in RAM, which causes more disk thrashing to serve the
data.
- The profile of the data has changed, breaking some assumptions made when
optimizing the style & database (e.g. we now have nearly 15 million
building polygons from various imports; that is a lot of data to filter
when processing the water-areas at low zooms).
- Additions to the map style (e.g. the query fetching route=ferry often
causes problems because it fetches from the large planet_osm_line table at
fairly low zooms). A rough sketch of these queries is below.
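
These are only illustrative: the table and column names follow the standard
osm2pgsql schema, but the exact filters, the area threshold and the bounding
box are assumptions rather than copies of the production osm.xml style.

import psycopg2

# Rough shape of two of the layer queries discussed above ("dbname=gis" and
# the WHERE clauses are illustrative assumptions).
conn = psycopg2.connect("dbname=gis")
cur = conn.cursor()

# Whole-world bounding box in spherical mercator (SRID 900913), roughly what
# a low-zoom render has to cover.
bbox = (-20037508.34, -20037508.34, 20037508.34, 20037508.34)

# Water areas at low zoom: the filter runs over planet_osm_polygon, the same
# table that now also holds ~15 million imported building polygons, so even
# a selective WHERE clause has a lot of rows to wade through.
cur.execute("""
    SELECT ST_AsBinary(way)
      FROM planet_osm_polygon
     WHERE ("natural" IN ('water', 'lake') OR waterway = 'riverbank')
       AND way_area > 100000
       AND way && ST_MakeEnvelope(%s, %s, %s, %s, 900913)
""", bbox)

# Ferry routes: one of the newer style additions, fetched from the large
# planet_osm_line table even at fairly low zooms.
cur.execute("""
    SELECT ST_AsBinary(way)
      FROM planet_osm_line
     WHERE route = 'ferry'
       AND way && ST_MakeEnvelope(%s, %s, %s, %s, 900913)
""", bbox)

When the bounding box covers most of the planet, the spatial index on "way"
barely narrows the scan, which is presumably a big part of why these queries
can run for minutes.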

Taken together, these factors mean that some of the queries generated by
large /export requests can take over 10 minutes to fetch from the DB. In
that time people often retry the request, which makes things worse.

It is not just /export that is hit by this. A few months ago it was rare
for the render queue to fill up and discard requests; now this happens
almost every day.

I suspect we will need to attack the problem on several fronts:
- More hardware (RAM, Disks)
- Optimizing the data layout in the various tables (and updating the
styles to use the new layout)
- Adding an asynchronous job queue for larger requests, something like
the method used by http://maposmatic.org/ (a rough sketch follows after
this list)
- Pushing more requests to http://openstreetmap.gryph.de/bigmap.cgi or
another system which works by stitching together existing tiles.
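
To make the job-queue idea a bit more concrete, here is a minimal sketch of
the shape it could take. Everything in it (submit, render_export, the
osm.xml path, the de-duplication rule) is hypothetical; it shows the general
maposmatic-style approach rather than anything that exists in the /export
code today.

import queue
import threading

import mapnik  # Python bindings for the Mapnik renderer

jobs = queue.Queue()
pending = set()                 # keys already queued, to swallow duplicate retries
pending_lock = threading.Lock()

def submit(bbox, width, height):
    """Accept an export request; duplicates of an already-queued request are dropped."""
    key = (bbox, width, height)
    with pending_lock:
        if key in pending:
            return False        # identical request already waiting
        pending.add(key)
    jobs.put(key)
    return True

def render_export(bbox, width, height, out_path):
    # Rendering happens here, one job at a time, so the database sees a
    # bounded number of expensive queries instead of one per impatient retry.
    m = mapnik.Map(width, height)
    mapnik.load_map(m, "osm.xml")               # assumed path to the stylesheet
    m.zoom_to_box(mapnik.Box2d(*bbox))          # called Envelope in older Mapnik releases
    mapnik.render_to_file(m, out_path)

def worker():
    while True:
        bbox, width, height = jobs.get()
        key = (bbox, width, height)
        out_path = "/tmp/export-%s.png" % abs(hash(key))
        try:
            render_export(bbox, width, height, out_path)
        finally:
            with pending_lock:
                pending.discard(key)
            jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

The front end would hand back a job identifier straight away and let the
user collect the finished image later, so a ten-minute render no longer
ties up an interactive request and duplicate retries of the same area
collapse into a single job.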

	Jon





