[OSM-dev] Memory error while converting osm to gml

Matt Amos zerebubuth at gmail.com
Thu Apr 2 23:11:54 BST 2009

2009/4/2 Stefan de Konink <stefan at konink.de>:
> Matt Amos wrote:
>> so you'd need to either store
>> all the node locations in ram or some external indexed format... which
>> is exactly what osm2pgsql does :-)
> mmap the node table, in array-index form, to a file on disk; that works
> because of sparse ("hole") allocation, so you end up with a temporary file
> of around 5GB of allocated space. Then for any way you encounter you can
> look up { node[2*nd], node[2*nd + 1] } and build the line string on any nd event.

"or some external indexed format"
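as a rough sketch of the idea, here's what that mmap'd node table could
look like in python (filename, the maximum node id, and the scaled-int32
encoding are all illustrative assumptions, not anything osm2pgsql actually
does):

```python
import mmap
import os
import struct

MAX_NODE_ID = 600_000_000  # hypothetical upper bound on node ids
SLOT = 8                   # 2 x int32 (lat, lon scaled by 1e7) per node

# create a sparse file: unwritten regions take no disk space on most
# filesystems, so allocated space stays near the ~5GB mentioned above
fd = os.open("nodes.idx", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, MAX_NODE_ID * SLOT)
buf = mmap.mmap(fd, 0)  # map the whole file

def put_node(nid, lat, lon):
    # store scaled integer coordinates at a fixed offset derived from the id
    struct.pack_into("<ii", buf, nid * SLOT, int(lat * 1e7), int(lon * 1e7))

def get_node(nid):
    # random access by node id -- the node[2*nd], node[2*nd + 1] lookup
    lat, lon = struct.unpack_from("<ii", buf, nid * SLOT)
    return lat / 1e7, lon / 1e7
```

the point being that way processing then only needs O(1) lookups into the
mapped file, and the OS page cache decides what actually stays in ram.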

>> i second frederik's recommendation - import into postgres using
>> osm2pgsql then export the GML from that. it's a little convoluted, but
>> the toolchain is well tested.
> Does anyone have any timings on importing the current planet.osm into
> PostgreSQL with any tool available? Using, let's say, 8GB of ram and
> 'typical' disk?

we do these in about 3 hours in-memory from a bz2 file, but it uses
about 4.5GB. machine is a dual quad-core xeon with 16GB ram, disks are
5x sata raid 5. during import the max disk read was 15k blocks/s.
osm2pgsql seems to spend most of its time at >80% cpu, so it looks
like most of the effort is going into decompressing and xml parsing.
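for illustration, the decompress-and-parse part of such a pipeline can be
streamed so memory stays flat regardless of file size (this is just a
sketch of the pattern, not osm2pgsql's actual C implementation; the
counting is a stand-in for real per-element processing):

```python
import bz2
import xml.etree.ElementTree as ET

def count_osm_elements(path):
    """Stream-decompress a .osm.bz2 file and count nodes/ways/relations
    without ever holding the whole document tree in memory."""
    counts = {"node": 0, "way": 0, "relation": 0}
    with bz2.open(path, "rb") as f:  # incremental bz2 decompression
        for _event, elem in ET.iterparse(f, events=("end",)):
            if elem.tag in counts:
                counts[elem.tag] += 1
            elem.clear()  # free the finished subtree to keep memory flat
    return counts
```

both the decompression and the parse run on one thread here, which is
consistent with the single core sitting at >80% cpu during the import.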


