On Wed, May 13, 2009 at 12:18 PM, Matt Amos <zerebubuth@gmail.com> wrote:
> On Wed, May 13, 2009 at 5:15 PM, Tom Hughes <tom@compton.nu> wrote:
>> Ian Dees wrote:
>>> The whole argument I'm making is that after the initial
>>> implementation**, streaming the data is a lot less resource intensive
>>> than what we are currently doing. Perhaps I don't have the whole picture
>>> of what goes on in the backend, but at some point the changeset XML
>>> files are applied to the database. At this point, we already have the
>>> XML changeset that was created by the client. The stream would simply be
>>> mirroring that out to anyone listening over a compressed HTTP channel.
>>
>> You don't want Potlatch's changes then? Or changes made by editing
>> individual objects rather than uploading diffs?
>
> +1
>
> Or even the diffs? Any diff where someone creates an element has
> negative placeholder IDs, so extra work would have to be done altering
> the XML to match the IDs returned by the database.

These are implementation details that would have to be hammered out after
we talk about design.

You're right: ideally the database itself (via triggers) would dump the
data being written to it out to a file or network handle. That way the
stream would pick up everything (including Potlatch edits and diff
uploads) as it was created.
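To make the placeholder-ID point concrete, here's the kind of rewriting
the streamer would otherwise have to do on uploaded diffs. This is a
rough, untested sketch of mine, not anything in the rails_port: it assumes
we have both the uploaded osmChange document and the diffResult the 0.6
API already returns for it, and the function name is invented.

# Rough sketch (untested): rewrite negative placeholder IDs in an
# uploaded osmChange document using the old_id -> new_id pairs from the
# API's diffResult, so the mirrored stream carries real database IDs.
import xml.etree.ElementTree as ET

def remap_placeholder_ids(osmchange_xml, diffresult_xml):
    """Return the osmChange XML with placeholder IDs replaced."""
    mapping = {}
    for elem in ET.fromstring(diffresult_xml):
        # Each child is a <node/way/relation old_id new_id new_version>;
        # deletions carry no new_id, so skip them.
        if "new_id" in elem.attrib:
            mapping[(elem.tag, elem.get("old_id"))] = elem.get("new_id")

    root = ET.fromstring(osmchange_xml)
    for action in root:                  # <create>, <modify>, <delete>
        for elem in action:              # <node>, <way>, <relation>
            new_id = mapping.get((elem.tag, elem.get("id")))
            if new_id is not None:
                elem.set("id", new_id)
            # Way nodes and relation members can reference placeholders
            # too, so those refs need remapping as well.
            for nd in elem.findall("nd"):
                ref = mapping.get(("node", nd.get("ref")))
                if ref is not None:
                    nd.set("ref", ref)
            for member in elem.findall("member"):
                ref = mapping.get((member.get("type"), member.get("ref")))
                if ref is not None:
                    member.set("ref", ref)
    return ET.tostring(root, encoding="unicode")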
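And on the trigger side, the consumer end could be fairly small. Again a
rough sketch, untested: it assumes a PostgreSQL trigger that calls
pg_notify('osm_changes', <serialised row>) on every write, and both the
channel name and the connection string are invented.

# Rough sketch (untested): drain a hypothetical 'osm_changes' NOTIFY
# channel that database triggers would fire on every write.
import select

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=openstreetmap")  # invented DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("LISTEN osm_changes;")

while True:
    # Wait (up to 60s) for the backend to signal, then drain the queue.
    if select.select([conn], [], [], 60) == ([], [], []):
        continue  # timed out; loop and wait again
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        # A real streamer would re-serialise the payload as XML and fan
        # it out to every listener over a compressed HTTP channel;
        # printing stands in for that here.
        print(notify.payload)

The nice property is that this sits behind whatever wrote the row, so
Potlatch edits, single-object PUTs and diff uploads would all look the
same to it.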