[OSM-dev] New, faster, planet dump tool

Jon Burgess jburgess777 at googlemail.com
Fri Sep 28 21:52:56 BST 2007


On Wed, 2007-09-26 at 22:46 +0200, Joerg Ostertag wrote:
> 
> > Another possibility is to use the planet.c code to stream a DB dump
> > into the PostgreSQL mapnik database. Avoiding the bzip2 compression
> > should allow this to be done quite rapidly. We could then update the
> > Mapnik layer more frequently than the formal weekly planet dump.
> 
> If you do this, it would be great to also supply the planet-postgis dump on 
> the website. This dump can be used to display the maps for gpsdrive. If the 
> main server already creates the SQL statements for this, why not also dump 
> them, so anyone else can use the same dump to create the PostGIS equivalent 
> on their home machine.

osm2pgsql does not create normal SQL output; it pushes the data into
the database through a C interface. I can dump the data into an SQL
format, but there are numerous options about whether to include the
table definitions, etc.

I think the answer is to dump just the data from the tables. 

Dumping the entire database is no good because it contains PostGIS
definitions and metadata which are probably specific to a particular
installation.
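
Something like pg_dump's data-only mode would probably do it; the
invocation below is just a sketch (the output file name is made up, and
several --table switches on one command line need a reasonably recent
pg_dump):

$ pg_dump --data-only \
      --table=planet_osm_point \
      --table=planet_osm_line \
      --table=planet_osm_roads \
      --table=planet_osm_polygon \
      Database-Name | bzip2 > planet-tables.pgsql.bz2
# --data-only skips the table and PostGIS definitions and dumps only the rows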

I've made a trivial (200kB) example dump available at:
http://tile.openstreetmap.org/example.pgsql.bz2

To run the import, first set up the database as per the Mapnik wiki
page.
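
Roughly, that setup amounts to creating a PostGIS-enabled database;
something along these lines, though the database name and the location
of the PostGIS SQL scripts vary by distribution and PostGIS version
(see the wiki page for the full steps):

$ createdb -E UTF8 Database-Name
$ createlang plpgsql Database-Name
# the paths below are only an example; your PostGIS install may differ
$ psql -d Database-Name -f /usr/share/postgresql-8.2/contrib/lwpostgis.sql
$ psql -d Database-Name -f /usr/share/postgresql-8.2/contrib/spatial_ref_sys.sql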

Create the table structure:

$ osm2pgsql --create --database Database-Name

Import the data:

$ bzip2 -dc example.pgsql.bz2 | psql Database-Name
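
To check that the rows actually arrived, a quick count on one of the
tables should do; it should return a non-zero number if the import
worked:

$ psql Database-Name -c "SELECT count(*) FROM planet_osm_point;"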


If the above example works for you, then I can make a dump of the
current data available. It will probably be a few hundred MB of
.gz-compressed output, assuming you want all 4 tables (planet_osm_point,
planet_osm_line, planet_osm_roads, planet_osm_polygon).
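
Loading that larger dump would work the same way as the example above,
just with gzip instead of bzip2 (the file name here is only a
placeholder):

$ gzip -dc planet-tables.pgsql.gz | psql Database-Name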

	Jon






