[OSM-dev] Ways on the new renderer... maybe usable after all!

Christopher Schmidt crschmidt at crschmidt.net
Fri Jun 30 15:42:03 BST 2006


On Fri, Jun 30, 2006 at 04:15:50PM +0200, Lars Aronsson wrote:
> nick at hogweed.org wrote:
> 
> > While there is a slight slowdown, it's not too bad. For 
> > instance, the renderer will render the way-rich tile with bbox 
> > -0.75,51.02,-0.7,51.07 in 2.6 seconds if ways aren't fetched 
> > from the database, against 3.7 if ways are fetched and rendered. 
> > Other bboxes give similar time ratios, typically 3:4 no 
> > ways:ways.
> 
> The 3:4 slowdown isn't too bad, but the times are too bad to begin 
> with.  For a slippy map to work alright, tiles must perhaps be 
> generated a hundred times faster: 26 and 37 milliseconds, rather 
> than 2.6 and 3.7 seconds.  Today we draw roads only at zoom 11 and 
> deeper, but it would be really nice to see roads at zoom 7.
> 
> Adding more hardware is not going to solve this.  We'd have to 
> wait ten years for Moore's law to have that effect.  So how are we 
> going to solve it?

A lot of how to solve it depends on the reason that things are slow.

Is the problem:

  A) Getting data from the database is slow?
  B) Determining how the lines are to be drawn is slow?
  C) Drawing the lines is slow?

Based on the difference in speed between the dev server and the live
server, I'd say that it's likely A. So then we need to figure out how
to give the renderer a way to get data that is optimized specifically
for rendering. One way to do this is to migrate away from using the
API/database for drawing, and instead to draw against a static dataset
which is regularly regenerated.
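As a very rough sketch of that in Python (the extract format, class
name, and file layout here are all hypothetical, and a real version
would want a spatial index rather than a linear scan):

    import pickle

    class StaticDataSource:
        """Hypothetical renderer data source: reads ways from a
        pre-generated extract on disk instead of querying the live
        API/database for every tile."""

        def __init__(self, extract_path):
            # extract_path points at a file regenerated on a schedule,
            # assumed to hold a list of (tags, [(lon, lat), ...]) tuples.
            with open(extract_path, "rb") as f:
                self.ways = pickle.load(f)

        def ways_in_bbox(self, min_lon, min_lat, max_lon, max_lat):
            # Linear scan for clarity only.
            for tags, nodes in self.ways:
                if any(min_lon <= lon <= max_lon and
                       min_lat <= lat <= max_lat
                       for lon, lat in nodes):
                    yield tags, nodes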

It might also be worthwhile rendering the more zoomed-out levels
against the planet.osm dataset (and making this clear to users!). This
would let that rendering happen more quickly, since it runs against an
already existing, seldom-changing dataset, and if people want better
accuracy, they can zoom in further. If this path were followed, then
the cache lifetime for those tiles could be set to 30 days instead of
48 hours, and whenever a new planet.osm was created and processed, the
tiles rendered from the older dataset would be cleared from the squid
cache.
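The split could be as simple as picking the Cache-Control lifetime by
zoom level. A sketch, assuming zooms up to 10 are the ones rendered
from planet.osm (that cutoff is a guess; the 30-day and 48-hour values
are the ones above):

    def tile_cache_headers(zoom):
        """Pick a Cache-Control max-age for a tile by zoom level."""
        if zoom <= 10:
            max_age = 30 * 24 * 3600   # 30 days: planet.osm-derived tiles
        else:
            max_age = 48 * 3600        # 48 hours: tiles from live data
        return {"Cache-Control": f"max-age={max_age}"}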

A lot of this would be improved by better handling of dataset changes
inside the cache -- then you could set the squid cache lifetime to
essentially be 'forever', and expire tiles as changes happen.
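The purge side of that is straightforward with a fixed tile scheme:
squid will drop a cached URL on an HTTP PURGE request, provided it's
configured to allow the method. A sketch (host, port, and path are
hypothetical):

    import http.client

    def purge_tile(proxy_host, tile_path, proxy_port=3128):
        """Ask squid to drop one cached tile URL via HTTP PURGE."""
        conn = http.client.HTTPConnection(proxy_host, proxy_port)
        conn.request("PURGE", tile_path)
        status = conn.getresponse().status   # 200 purged, 404 not cached
        conn.close()
        return status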

But knowing which URLs to expire is a hard problem, especially when
you're using squid as your backing cache: since OSM serves its data up
as a WMS, people can request arbitrary bounding boxes, leaving the
cache full of URLs that are hard to enumerate, let alone expire.

An answer to this is to create a file-based cache, so that more
complete control over the tile storage is available. I'm currently
building a cache-expiry tool that, given a geometry, expires entries
in a ka-Map based filestore: it calculates the extent of the shape,
then finds the tile files stored on disk which need to be removed.
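Roughly, the core of that is just bbox-to-tile math. A minimal sketch
in Python, assuming a slippy-map style z/x/y.png file layout (ka-Map's
actual on-disk scheme differs, so treat the path construction as
hypothetical):

    import math
    import os

    def bbox_to_tiles(min_lon, min_lat, max_lon, max_lat, zoom):
        """Yield the (x, y) tiles covering a bbox at one zoom level,
        using standard spherical-mercator tile numbering."""
        def tile_xy(lon, lat):
            n = 2 ** zoom
            x = int((lon + 180.0) / 360.0 * n)
            lat_r = math.radians(lat)
            y = int((1.0 - math.log(math.tan(lat_r) + 1 / math.cos(lat_r))
                     / math.pi) / 2.0 * n)
            return x, y

        x_min, y_max = tile_xy(min_lon, min_lat)   # tile y grows southward
        x_max, y_min = tile_xy(max_lon, max_lat)
        for x in range(x_min, x_max + 1):
            for y in range(y_min, y_max + 1):
                yield x, y

    def expire_bbox(cache_root, bbox, zooms):
        """Delete every cached tile file intersecting the bbox."""
        for z in zooms:
            for x, y in bbox_to_tiles(*bbox, zoom=z):
                path = os.path.join(cache_root, str(z), str(x), f"{y}.png")
                if os.path.exists(path):
                    os.unlink(path)

With the bbox from the timing example above, something like
expire_bbox("/var/cache/tiles", (-0.75, 51.02, -0.7, 51.07),
range(7, 18)) would then clear everything touching that area.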

Of course, none of this applies to OSM at the moment, but it could: 
http://openlayers.org/dev/examples/kamap.html shows an example of OSM
data running under ka-Map. (Look at the London area to get more of a
feel for what I'm talking about.)

Just some thoughts.

-- 
Christopher Schmidt
Web Developer



