[OSM-talk] Live Data - all new Data in OSM
ian.dees at gmail.com
Wed May 13 19:05:52 BST 2009
On Wed, May 13, 2009 at 12:18 PM, Matt Amos <zerebubuth at gmail.com> wrote:
> On Wed, May 13, 2009 at 5:15 PM, Tom Hughes <tom at compton.nu> wrote:
> > Ian Dees wrote:
> >> The whole argument I'm making is that after the initial
> >> implementation, streaming the data is a lot less resource-intensive
> >> than what we are currently doing. Perhaps I don't have the whole picture
> >> of what goes on in the backend, but at some point the changeset XML
> >> files are applied to the database. At this point, we already have the
> >> XML changeset that was created by the client. The stream would simply be
> >> mirroring that out to anyone listening over a compressed HTTP channel.
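The mirroring idea quoted above could be sketched roughly as follows. This is a hypothetical illustration, not the actual OSM server code: it shows how changeset XML documents could be incrementally gzip-compressed and yielded as chunks for a streaming HTTP response, so listeners decompress as data arrives. The payloads and function names are made up.

```python
import zlib

def compress_stream(changesets):
    """Incrementally gzip-compress changeset XML documents for a
    chunked HTTP response (hypothetical sketch, not the real OSM API)."""
    # wbits=31 selects the gzip container format
    compressor = zlib.compressobj(wbits=31)
    for xml in changesets:
        chunk = compressor.compress(xml.encode("utf-8"))
        if chunk:
            yield chunk
    yield compressor.flush()

# Example with two made-up changeset payloads:
payloads = [
    "<osmChange><create><node id='-1' lat='0' lon='0'/></create></osmChange>",
    "<osmChange><modify><node id='42' lat='1' lon='1'/></modify></osmChange>",
]
compressed = b"".join(compress_stream(payloads))
# A listening client decompresses the stream as it arrives:
assert zlib.decompress(compressed, wbits=31).decode("utf-8") == "".join(payloads)
```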
> > You don't want Potlatch's changes then? Or changes made by editing
> > individual objects rather than uploading diffs?
> Or even the diffs? Any diff where someone creates an element has
> negative placeholder IDs, so extra work would have to be done altering
> the XML to match the IDs returned by the database.
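The placeholder-ID rewriting Matt describes could be sketched like this. It is an illustration only, assuming a `{placeholder: assigned_id}` mapping is already available (for example, built from the server's diff upload response); the helper name is hypothetical.

```python
import xml.etree.ElementTree as ET

def remap_placeholder_ids(osm_change_xml, id_map):
    """Rewrite negative placeholder IDs in an osmChange document to the
    database-assigned IDs. id_map is a hypothetical {old_id: new_id} dict."""
    root = ET.fromstring(osm_change_xml)
    for element in root.iter():
        old = element.get("id")
        if old is not None and int(old) < 0:
            element.set("id", str(id_map[int(old)]))
        # way/relation members referring to new elements also use placeholders
        ref = element.get("ref")
        if ref is not None and int(ref) < 0:
            element.set("ref", str(id_map[int(ref)]))
    return ET.tostring(root, encoding="unicode")

diff = ("<osmChange><create>"
        "<node id='-1' lat='0' lon='0'/>"
        "<node id='-2' lat='1' lon='1'/>"
        "<way id='-3'><nd ref='-1'/><nd ref='-2'/></way>"
        "</create></osmChange>")
fixed = remap_placeholder_ids(diff, {-1: 1001, -2: 1002, -3: 2001})
assert 'id="1001"' in fixed and 'ref="1002"' in fixed and 'id="2001"' in fixed
```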
These are implementation details that would have to be hammered out after we
talk about design.
You're right: I would prefer to have the database itself (via triggers) dump
the data being written to it out to a file or network handle. This way, it
would be able to capture everything (including Potlatch edits and diffs) as it was
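The trigger-based dump suggested here can be sketched with an in-memory SQLite stand-in. This is only an illustration of the shape of the idea (a real deployment would presumably use PostgreSQL triggers against the main database); the table, trigger, and `publish` function are all invented for the example. The point is that because the database announces every write itself, edits from any client path are captured.

```python
import sqlite3

# In-memory stand-in for the main database: a SQL trigger hands every
# inserted row to a Python function, which appends it to a "stream" that
# a listener could mirror out over the network.
conn = sqlite3.connect(":memory:")
stream = []  # stand-in for the file/network handle that listeners read from

conn.create_function("publish", 3,
                     lambda node_id, lat, lon: stream.append((node_id, lat, lon)))
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")
conn.execute("""
    CREATE TRIGGER nodes_insert AFTER INSERT ON nodes
    BEGIN
        SELECT publish(NEW.id, NEW.lat, NEW.lon);
    END
""")

# Any write, regardless of which client produced it, ends up on the stream:
conn.execute("INSERT INTO nodes VALUES (1, 51.5, -0.1)")
assert stream == [(1, 51.5, -0.1)]
```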