[OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

Akos Maroy akos at maroy.hu
Thu Jun 27 08:20:01 UTC 2013


Paul,

>
> There are a couple of ways to reduce the disk space. The first is to use
> flat-nodes. This turns what was a 80GB+ nodes table for a full planet into a
> 17GB flat file.
>
> One other optimization is if you're not planning on doing updates and don't
> need the slim tables you can use --drop to get rid of them.
>
> By default osm2pgsql does indexing and clustering of the tables in parallel.
> This is fastest, but it results in a big spike of disk usage while
> rearranging and indexing as it happens on all the tables at once. I believe
> --disable-parallel-indexing will fix this.
Thanks, trying --flat-nodes flat-nodes --hstore --hstore-match-only
--disable-parallel-indexing now.
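
For anyone following along, the full command line ends up looking
something like this (database name, cache size and file paths are
placeholders, adjust as needed; --slim is assumed since flat-nodes
replaces the slim nodes table):

  osm2pgsql --slim \
            --flat-nodes /ssd/flat-nodes.bin \
            --hstore --hstore-match-only \
            --disable-parallel-indexing \
            --cache 8000 \
            -d gis \
            planet-latest.osm.pbf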
>
> I am quite surprised you ran into problems on a 512GB SSD. I've imported
> recent planets on smaller volumes. On the other hand, I don't know anyone
> who's done a full planet import without --flat-nodes lately and that
> probably helps lots.
>
> Two other tips for the next time you try are to use the .osm.pbf file
> instead of .osm.bz2, and that there was a new planet file generated about
> two hours ago.
The .bz2 file seems to work fine when piped through bzcat.
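
That said, the .pbf planet is quite a bit smaller and osm2pgsql can
read it directly, so next time something like this (URL from memory,
worth double-checking) would skip the decompression step entirely:

  wget https://planet.openstreetmap.org/pbf/planet-latest.osm.pbf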
>
> Another general tip is that if your planet file is more than a day or so
> old, use osmupdate (https://wiki.openstreetmap.org/wiki/Osmupdate) to update
> your planet file before importing. It only takes about an hour even if it's
> a week old, and that's way faster than importing diffs after.
Will look into it, thanks.
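
From a quick look at the wiki page, the basic usage seems to be to
point osmupdate at the old planet file and give it a name for the
updated one, e.g. (file names are placeholders):

  osmupdate planet-130619.osm.pbf planet-updated.osm.pbf

and then import the updated file as usual.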


Akos



