[OSM-dev] speeding up loading an OSM dump into PostGIS?
fatzopilot at gmx.net
Fri Dec 9 01:11:00 GMT 2011
Well, I wonder why there isn't a feasible way to do this with Postgres
replication. There was a talk about it at FOSS4G 2011, but I did not attend;
the slides are not very detailed, and the videos are not available so far. I
have only theoretical knowledge about replication so far, but I think it could
be a very convenient way to import a planet DB and to keep it updated.
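As a rough sketch of what such an import could look like: pg_basebackup (shipped since PostgreSQL 9.1) can clone a running master into a local data directory. The host name, role, and paths below are purely illustrative assumptions, not a real service:

```shell
# Hypothetical sketch, not a real service: clone an up-to-date planet DB
# from a public replication master using pg_basebackup (PostgreSQL >= 9.1).
# Host, user, and data directory are assumptions for illustration only.
pg_basebackup \
    -h replication.example.org \
    -U osm_replica \
    -D /var/lib/postgresql/9.1/main \
    --xlog --progress
```

After the base backup, the local cluster could follow the master via streaming replication instead of re-importing a fresh dump every time.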
Is this already "officially" considered (i.e. by those known to push OSM)?
Of course, there are some hurdles. AFAIK:
* Efficient (binary) Postgres replication works only if server and client run
the same Postgres version
* No selective replication is possible, i.e. there must be a separate master DB
for every subset (country) and function (rendering, geocoding, routing, GIS...)
* The server hardware requirements are probably much higher (who would provide
/ pay for it?) compared to dumping the DB once to a PBF file and having it
served by a webserver
* It is easier and cheaper to distribute a file than a replicated db.
* Some "authority" would need to host an up-to-date master replication server,
to which clients could connect (read-only) and from which they would be updated
asynchronously. I believe the current 9.1.2 could be a good starting point.
* To relieve that master server, several stages of clients could act as
masters for other clients in turn (I hope this is possible). A client would
only be allowed to join that network if it also acts as a master, and/or
there would have to be an intelligent load-distribution scheme. At least, this
should apply to servers that are permanently connected to the internet.
Hopefully, though, this would also work on a voluntary basis.
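A read-only client in such a network might be configured roughly as follows. All values are illustrative assumptions; note also that, as far as I know, a 9.1 standby cannot itself stream to further standbys, so the multi-stage idea above would need cascading replication from a later release:

```ini
# Hypothetical sketch of 9.1 streaming replication; values are assumptions.

# --- postgresql.conf on the master ---
wal_level = hot_standby       # emit enough WAL for standbys
max_wal_senders = 10          # how many clients may stream concurrently

# --- postgresql.conf on each read-only client ---
hot_standby = on              # allow read-only queries while replaying WAL

# --- recovery.conf on each read-only client ---
standby_mode = 'on'
primary_conninfo = 'host=replication.example.org user=osm_replica'
```

With this in place, a client stays continuously up to date instead of re-importing periodic dumps, at the cost of the version-matching and hardware constraints listed above.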