[OSM-dev] New, faster, planet dump tool
Brett Henderson
brett at bretth.com
Thu Sep 27 00:01:59 BST 2007
Robert (Jamie) Munro wrote:
> Presumably if you run the changeset that covers the time over which the
> planet dump was made, it will be able to apply all the changes that the
> dump missed, and ignore all the ones it got. That way, you will have a
> consistent planet replica going forward.
>
That should work. One thing to note with all of this is that osmosis
won't create identical history on the destination database. When
extracting changes it summarises all the changes within a time
interval; in other words, it keeps only the most recent change
within the interval for each entity. This was a decision I made when
writing the --mysql-read-change task, to reduce the amount of data for
large time intervals and to make it easier to apply changes to xml
files. Most of osmosis could work with complete history, though, so
this can be changed in the future if required.
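A minimal sketch of that summarisation, assuming changes arrive as
(entity_id, timestamp, action) tuples (the tuple shape and function name
are illustrative, not osmosis's actual data model):

```python
def summarise_changes(changes):
    """Keep only the most recent change per entity within the interval."""
    latest = {}
    for entity_id, timestamp, action in changes:
        # A later timestamp for the same entity replaces the earlier change.
        if entity_id not in latest or timestamp > latest[entity_id][0]:
            latest[entity_id] = (timestamp, action)
    return {eid: action for eid, (ts, action) in latest.items()}

changes = [
    (1, "2007-09-01T10:00", "modify"),
    (1, "2007-09-01T11:00", "delete"),  # supersedes the 10:00 modify
    (2, "2007-09-01T10:30", "create"),
]
print(summarise_changes(changes))  # {1: 'delete', 2: 'create'}
```

So the intermediate modify to entity 1 never reaches the destination,
which is why the replayed history differs from the original.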
If you create overlapping changesets as described above, the current
tables in the destination database should end up correct, so long as you
apply the changes in the correct order.
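To see why overlap is harmless when changesets are applied in order, here
is a toy model of applying changes to the current tables (function names
and the dict-based "table" are my own illustration, not osmosis code):

```python
def apply_changeset(current, changes):
    """Apply (entity_id, action, value) changes, in order, to a current table."""
    for entity_id, action, value in changes:
        if action == "delete":
            current.pop(entity_id, None)
        else:  # create or modify both just set the current value
            current[entity_id] = value
    return current

# Changeset a covers the interval around the dump; changeset b overlaps it,
# so the change to entity 1 that both contain is applied twice.
a = [(1, "create", "v1"), (2, "create", "v1")]
b = [(1, "modify", "v2"), (3, "create", "v1")]

state = apply_changeset({}, a)
state = apply_changeset(state, b)
print(state)  # {1: 'v2', 2: 'v1', 3: 'v1'}
```

Re-applying a change the dump already contained just rewrites the same
value, so the final state of the current tables is still correct.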
One issue with the current osmosis code (which I intend to fix) is that
it doesn't create user ids on the destination database. It creates a
single osmosis user and assigns all changes to that user. I only
recently added user attribute support to most tasks but haven't added it
to the mysql writing tasks yet. It's not a big job to add.