[OSM-dev] Ways on the new renderer... maybe usable after all!

Christopher Schmidt crschmidt at crschmidt.net
Fri Jun 30 17:46:24 BST 2006


On Fri, Jun 30, 2006 at 06:30:02PM +0200, Lars Aronsson wrote:
> David Sheldon wrote:
> 
> > On Fri, Jun 30, 2006 at 04:15:50PM +0200, Lars Aronsson wrote:
> > > Adding more hardware is not going to solve this.  We'd have to 
> > > wait ten years for Moore's law to have that effect.  So how are we 
> > > going to solve it?
> > 
> > As far as I can tell, the only way to improve it is to change the
> > datastructures in the database. Maybe denormalise a bit. 
> 
> How does traditional GIS software solve this, e.g. MapServer?  
> Perhaps OSGEO could help here, if they're committed to improving 
> software.

Rendering the whole world -- all 500,000 segments -- from an optimized
geographic file takes about 20 seconds in MapServer. I really don't think
that *rendering* the data is the slow part of OSM: the techniques OSM uses
may be slower, but it doesn't make sense for them to be hundreds of times
slower than that.

This leads back to the conclusion that OpenStreetMap may be better off
using static files for data which does not need to be as up to date --
as you point out, knowing whether London has 500,000 or 500,001 segments
is not important when you're looking at the whole world.

So: take the planet.osm data, convert it to GML
(http://wiki.openstreetmap.org/index.php/Converting_OSM_to_GML), and display
the resulting data in MapServer
(http://wiki.openstreetmap.org/index.php/Displaying_OSM_Shapefiles_In_Mapserver).
Use scale-dependent rendering so that only the more important roads are drawn
when zoomed out (minscale/maxscale in MapServer; see the sketch below). If
that's still too slow, then start caching the output, using either squid or
something like ka-Map, which is designed to fix issues like label rendering
across tile boundaries.
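For the scale-dependent part, a mapfile fragment along these lines would do
it -- the layer names, data files, colours and scale thresholds below are
just placeholders, not a tested configuration:

    LAYER
      NAME "osm_minor_roads"
      STATUS DEFAULT
      TYPE LINE
      DATA "osm_minor_roads"    # shapefile derived from planet.osm via GML
      MAXSCALE 500000           # drawn only at 1:500,000 or closer
      CLASS
        STYLE
          COLOR 128 128 128
        END
      END
    END

    LAYER
      NAME "osm_motorways"
      STATUS DEFAULT
      TYPE LINE
      DATA "osm_motorways"
      CLASS
        STYLE
          COLOR 139 0 0         # dark red, drawn at every scale
        END
      END
    END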

At zoom > 12, keep the current rendering; replace it at the lower,
zoomed-out levels.

> > The SQL at the moment requires many queries for a single tile. You will
> > not get 40ms response times when you are calling that many queries.
> 
> But even if we could, retrieving every line segment in Greater 
> London for the "map of England" scale doesn't make sense.  The 
> amount of data we need to retrieve has to be reduced far more.
> 
> Every tile is 256 x 128 pixels = 32768 pixels.  On the map of 
> England scale (zoom 6, or so) every pixel would cover hundreds of 
> line segments.  There is little point in retrieving hundreds of 
> line segments from the database only to determine whether that 
> pixel should be pink indicating an urban area, dark red indicating 
> a motorway or transparent representing the countryside.

Simplification of the polylines in OpenStreetMap is one solution to this,
though it needs some empirical testing to see what works and what doesn't.
I made a wiki page about simplifying the OSM data in GRASS, which let me
reduce the 500,000 segments in OSM to 20,000 chains at a worldwide scale
without losing anything in the way the data 'looks'. (At the highest zoom
level, the simplified polylines/chains came to 90,000 segments instead of
the 500,000 in OSM.)

http://wiki.openstreetmap.org/index.php/Simplifying_OSM_Shapefiles

is the page.
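
For anyone who'd rather see the idea than the GRASS commands, here is a
minimal sketch of Douglas-Peucker-style line simplification in Python --
purely illustrative, and not the recipe from that page:

    def perpendicular_distance(p, a, b):
        # Distance from point p to the infinite line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    def simplify(points, tolerance):
        # Keep the endpoints; recurse around the vertex furthest from the chord.
        if len(points) < 3:
            return list(points)
        dists = [perpendicular_distance(p, points[0], points[-1])
                 for p in points[1:-1]]
        index, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
        if dmax <= tolerance:
            return [points[0], points[-1]]
        left = simplify(points[:index + 1], tolerance)
        right = simplify(points[index:], tolerance)
        return left[:-1] + right

Run something like that over every way, with the tolerance picked per zoom
level, and that's roughly how 500,000 segments collapse to a few tens of
thousands for the world view.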

This leads back (again) to rendering the higher level data against a
regularly processed data file. 

> > Removing the historical data from the tables will be a good 
> > start (and we keep on coming back to this).
> 
> I agree that this would be a good idea anyway, but it's not a 
> solution because the historic data is perhaps only 50% of the 
> database.  And we need to remove 99% of our current time lag.

The amount of historic data is not the only reason that selecting from a
combined table is a problem: even with *no* historical rows, the query is
about four times as slow as a simpler query that doesn't need to check
whether each row is historical.
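
To make that concrete with a purely hypothetical schema (not the actual OSM
tables): a versioned table forces something like the first query below,
while a current-only table gets away with the second:

    -- versioned table: pick the latest visible version of each segment
    SELECT s.id, s.node_a, s.node_b
      FROM segments s
      JOIN (SELECT id, MAX(version) AS version
              FROM segments
             GROUP BY id) latest
        ON latest.id = s.id AND latest.version = s.version
     WHERE s.visible = 1;

    -- current-only table: a plain scan, no version bookkeeping
    SELECT id, node_a, node_b
      FROM current_segments;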

Just some thoughts.

-- 
Christopher Schmidt
Web Developer



