[OSM-talk] Live Data - all new Data in OSM

Matt Amos zerebubuth at gmail.com
Wed May 13 19:09:01 BST 2009


On Wed, May 13, 2009 at 7:05 PM, Ian Dees <ian.dees at gmail.com> wrote:
> On Wed, May 13, 2009 at 12:18 PM, Matt Amos <zerebubuth at gmail.com> wrote:
>> On Wed, May 13, 2009 at 5:15 PM, Tom Hughes <tom at compton.nu> wrote:
>> > Ian Dees wrote:
>> >> The whole argument I'm making is that after the initial
>> >> implementation, streaming the data is a lot less resource intensive
>> >> than what we are currently doing. Perhaps I don't have the whole
>> >> picture of what goes on in the backend, but at some point the
>> >> changeset XML files are applied to the database. At this point, we
>> >> already have the XML changeset that was created by the client. The
>> >> stream would simply be mirroring that out to anyone listening over a
>> >> compressed HTTP channel.
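>> >>
>> >> As a rough sketch of the consumer side of such a stream, assuming a
>> >> hypothetical gzip-compressed endpoint (the URL, the path, and the
>> >> one-document-per-line framing below are all assumptions, not an
>> >> existing OSM API):
>> >>
>> >>     import requests  # third-party HTTP library
>> >>
>> >>     def follow_changes(url="https://example.org/api/changes/stream"):
>> >>         # stream=True keeps the connection open; requests transparently
>> >>         # decompresses the gzip Content-Encoding as chunks arrive
>> >>         with requests.get(url, stream=True, timeout=None) as resp:
>> >>             resp.raise_for_status()
>> >>             for line in resp.iter_lines(decode_unicode=True):
>> >>                 if line:  # skip keep-alive blank lines
>> >>                     yield line  # one serialized change document per line
>> >>
>> >>     if __name__ == "__main__":
>> >>         for change in follow_changes():
>> >>             print(change[:80])  # show the start of each document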
>> >
>> > You don't want Potlatch's changes, then? Or changes made by editing
>> > individual objects rather than uploading diffs?
>>
>> +1
>>
>> Or even the diffs? Any diff where someone creates an element has
>> negative placeholder IDs, so extra work would have to be done to alter
>> the XML to match the IDs returned by the database.
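>>
>> A minimal sketch of that alteration step, assuming we have the uploaded
>> osmChange XML and a placeholder-to-assigned ID mapping like the one the
>> server reports back in its diffResult (the id_map structure here is an
>> illustrative assumption):
>>
>>     import xml.etree.ElementTree as ET
>>
>>     def remap_ids(osmchange_xml, id_map):
>>         # id_map: {("node", -1): 123456, ...}, placeholder -> assigned ID
>>         root = ET.fromstring(osmchange_xml)
>>         for elem in root.iter():
>>             if elem.tag in ("node", "way", "relation"):
>>                 old = int(elem.get("id"))
>>                 if old < 0:
>>                     elem.set("id", str(id_map[(elem.tag, old)]))
>>             elif elem.tag == "nd":  # way's reference to a node
>>                 ref = int(elem.get("ref"))
>>                 if ref < 0:
>>                     elem.set("ref", str(id_map[("node", ref)]))
>>             elif elem.tag == "member":  # relation member reference
>>                 ref = int(elem.get("ref"))
>>                 if ref < 0:
>>                     elem.set("ref", str(id_map[(elem.get("type"), ref)]))
>>         return ET.tostring(root, encoding="unicode")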
>
> These are implementation details that would have to be hammered out after we
> talk about design.
>
> You're right: I would prefer to have the database itself (via triggers)
> dump the data that's being written to it to a file or network handle.
> That way, it would be able to capture everything (including Potlatch
> edits and diff uploads) as it was created.
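>
> One way to read the trigger idea, sketched with PostgreSQL's
> LISTEN/NOTIFY: a trigger on the element tables would call pg_notify()
> with the changed row, and a relay process like the one below would pick
> the notifications up. The channel name, connection string, and trigger
> setup are assumptions for illustration, not how the OSM database is
> actually configured.
>
>     import select
>     import psycopg2  # PostgreSQL driver
>
>     conn = psycopg2.connect("dbname=osm")
>     conn.set_isolation_level(
>         psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
>     cur = conn.cursor()
>     cur.execute("LISTEN osm_changes;")  # channel the trigger notifies on
>
>     while True:
>         # block until the server has a notification for us
>         if select.select([conn], [], [], 60) == ([], [], []):
>             continue  # timed out; loop and wait again
>         conn.poll()
>         while conn.notifies:
>             note = conn.notifies.pop(0)
>             print(note.payload)  # payload set by the trigger's pg_notify()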

why via triggers?

cheers,

matt
