[OSM-dev] production site disk use question

Frederik Ramm frederik at remote.org
Tue Jan 1 11:42:40 GMT 2013


On 31.12.2012 18:25, Jeff Meyer wrote:
> For example, I just tried importing a 1.7GB planet-reduced.pbf into my
> rails port osm db and it failed after ~30 hrs because I ran out of disk
> space after it had eaten up 50GB of disk. Bad planning on my part, but
> how should I budget for this?

In addition to Sly's data:

A typical "apidb" setup has two sets of tables - "current" tables that 
have only the last version of each object, and "history" tables that 
contain every version (they don't have "history" in the name - the 
current nodes table is called current_nodes, and the history nodes table 
is called just nodes).
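
In concrete terms - assuming a stock apidb/rails port schema and a 
database called "openstreetmap", both of which you should substitute 
for your own setup - the difference looks like this:

```shell
# "current" table: at most one row per node, the latest version only
psql -d openstreetmap -c \
  "SELECT id, version, visible FROM current_nodes WHERE id = 123;"

# "history" table: one row per version; note the plain name "nodes"
# (and that the id column is called node_id there)
psql -d openstreetmap -c \
  "SELECT node_id, version, visible FROM nodes
   WHERE node_id = 123 ORDER BY version;"
```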

This means that if you import data from a non-history planet into an 
apidb database, you'll have everything twice.

Depending on what you want to do with the data, you might really need 
that - or you might not. For example, if you wanted to run a read-only 
API that gives you data for a given bbox, only the "current" tables are 
required. For some other types of queries, only the history tables are 
required.
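
For the read-only bbox case, a query against the current tables alone 
is enough. A rough sketch (apidb stores coordinates as integers scaled 
by 10^7; the database name and the bbox values here are made up):

```shell
psql -d openstreetmap -c \
  "SELECT id, latitude, longitude
   FROM current_nodes
   WHERE visible
     AND latitude  BETWEEN 490000000 AND 490100000  -- 49.00 .. 49.01
     AND longitude BETWEEN  84000000 AND  84100000; --  8.40 ..  8.41"
```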

So it might be possible for you to take a shortcut by importing things 
only once. Osmosis has an option called "populateCurrentTables" which is 
on by default; switch it off and Osmosis will only create the history 
tables. The reverse - a use case that only needs the current tables - is 
not something Osmosis offers, but you could achieve it by creating views 
on the history tables instead of copies. This will save time and space; 
of course, if you do that, you can't apply updates to your database 
without breaking the views.
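
For instance, a history-only import followed by the view trick could 
look like this. populateCurrentTables is a real --write-apidb option; 
the connection parameters are placeholders, and the CREATE VIEW is a 
hypothetical sketch - check the column list against your actual schema 
before running it:

```shell
# import history tables only; skip the duplicate "current" copy
osmosis --read-pbf planet-reduced.pbf \
        --write-apidb host="localhost" database="openstreetmap" \
        user="osm" password="secret" populateCurrentTables=no

# sketch: replace current_nodes with a view that picks the newest
# version of each node out of the history table
psql -d openstreetmap <<'SQL'
DROP TABLE current_nodes;
CREATE VIEW current_nodes AS
  SELECT node_id AS id, latitude, longitude, changeset_id,
         visible, timestamp, tile, version
  FROM nodes n
  WHERE version = (SELECT MAX(version)
                   FROM nodes WHERE node_id = n.node_id);
SQL
```

The same idea would have to be repeated for ways and relations (and 
their tags/members tables) if you need those as "current" as well.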


Frederik Ramm  ##  eMail frederik at remote.org  ##  N49°00'09" E008°23'33"
