[OSM-dev] Reducing osm2pgsql memory usage using a database method

Styno styno at hotmail.com
Mon Mar 12 12:33:32 GMT 2007

Frederik Ramm wrote:
> As for osm2pgsql, I don't even know what it does, I only know that it  
> is a C program that used to keep most of the planet file in memory  
> making it unusable on low-end machines, and that there's another  
> version that holds temporary data in a database table which is much  
> slower but consumes hardly any memory. From that, I derived my  
> suggestion that the planet file could probably be processed in chunks.  
>  From your rather cynical reaction to that, I gather it must have  
> been a stupid idea. Ok, no problem, sorry to have intervened, I'll  
> stick to my own toys in the future.
> Bye
> Frederik
Splitting the data into chunks (in a smart way) is the better solution for 
the future. Loading everything into memory may still work now, but as the 
dataset grows the memory requirements will grow with it. That could mean 
one needs e.g. 2GB now, 4GB in a year and 8GB the year after (as more 
people join the project). True, memory is getting cheaper, but who is 
willing to upgrade their system just because a non-optimized app needs it?

To settle the current discussion and head off future problems, osm2pgsql 
and other tools working on the world.osm dataset should be able to 
process it in chunks. IMHO.
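To illustrate the idea: instead of parsing the whole planet file into one 
in-memory tree, a tool can stream it element by element and discard each 
element once handled, so memory use stays roughly flat no matter how large 
the file grows. The sketch below is illustrative only (it is not 
osm2pgsql's actual code); it uses Python's iterparse on a tiny inline 
sample in OSM XML format.

```python
# Hedged sketch: memory-bounded streaming over OSM-style XML.
# Not osm2pgsql's implementation -- just the chunked-processing idea.
import xml.etree.ElementTree as ET
from io import StringIO

# Tiny stand-in for a planet file (hypothetical sample data).
OSM_SAMPLE = """<?xml version="1.0"?>
<osm version="0.6">
  <node id="1" lat="51.5" lon="-0.1"/>
  <node id="2" lat="51.6" lon="-0.2"/>
  <way id="10"><nd ref="1"/><nd ref="2"/></way>
</osm>"""

def count_elements(source):
    """Stream the file and tally nodes/ways without building the full tree."""
    counts = {"node": 0, "way": 0}
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag in counts:
            counts[elem.tag] += 1
        elem.clear()  # drop the element's content so memory stays bounded
    return counts

print(count_elements(StringIO(OSM_SAMPLE)))  # {'node': 2, 'way': 1}
```

A real importer would, at each `end` event, write the node or way into the 
database instead of counting it; the key point is that only one element 
(plus a small amount of bookkeeping) is held in memory at a time.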
