[OSM-dev] Minute Diffs Broken
ian.dees at gmail.com
Wed May 6 01:23:06 BST 2009
On Tue, May 5, 2009 at 6:16 PM, Brett Henderson <brett at bretth.com> wrote:
> Ian Dees wrote:
>> Forgive me for injecting into this conversation part-way through, but
>> would it make sense to offer an HTTP stream of the complete contents of all
>> changesets as they are closed and applied to the database?
> How do you mean exactly? For it to be reliable it needs to be persisted
> somewhere. And that presumably means using the existing database in some
> way. And that is the problem we're trying to solve :-)
Originally, my suggestion was to run an HTTP server on an alternative port
on the API server. When a client connected to this HTTP server, the server
would write out each changeset as XML as soon as it had finished being
written to the database and been closed by the uploading client. The
changesets would be streamed as multipart MIME messages.
One complication: I later realized that there are several API servers, which
would definitely cause concurrency problems.
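A minimal sketch of what a consumer of such a stream might look like, assuming each closed changeset arrives as one part of a multipart MIME body separated by a boundary line (the boundary string and framing here are my own invention; this API was never built):

```python
# Sketch of a client for the proposed changeset stream. Assumes each
# closed changeset is one multipart MIME part, delimited by a boundary
# line. A real client would also handle boundaries split across reads.
import io

BOUNDARY = b"--changeset-boundary"  # hypothetical boundary string

def iter_changesets(stream):
    """Yield the XML body of each MIME part as it arrives on the stream."""
    buf = b""
    for chunk in iter(lambda: stream.read(4096), b""):
        buf += chunk
        while BOUNDARY in buf:
            part, _, buf = buf.partition(BOUNDARY)
            body = part.strip()
            if body:
                yield body.decode("utf-8")
    if buf.strip():
        yield buf.strip().decode("utf-8")

# Example with two fake changesets in an in-memory stream:
fake = io.BytesIO(
    b"<osmChange>...changeset 1...</osmChange>\n--changeset-boundary\n"
    b"<osmChange>...changeset 2...</osmChange>\n"
)
for xml in iter_changesets(fake):
    print(xml[:30])
```

In real use the stream would be the response body of a long-lived HTTP connection rather than an in-memory buffer.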
>> To reduce load on the server, the stream could be proxied or mirrored to
>> other machines.
> In a sense that's what osmosis is doing. Granted, it's not a stream as
> such, but a stream approach implies a queue per client, which isn't
> somewhere I want to go just yet -- at least not until I get the current
> system working.
A client of the aforementioned stream could do many things:
0. Show some cool realtime stats on a rotating globe.
1. Filter the stream so that only changes with nodes in a particular bbox
are passed through to the next client.
2. Automatically apply the exact same changesets to its copy of the database
3. Slice up the stream into minute-sized chunks and save them off into
files.
You're right -- it sounds an awful lot like what Osmosis currently does :).
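Item 1 above can be sketched with a simple bbox test over the osmChange document's nodes. The element and attribute names follow the osmChange format; the function name and the streaming plumbing around it are assumptions:

```python
# Sketch of the bbox filter from item 1: keep only the nodes of an
# osmChange document that fall inside a bounding box. Ways and relations
# would need member/node lookups and are omitted here.
import xml.etree.ElementTree as ET

def nodes_in_bbox(osmchange_xml, min_lon, min_lat, max_lon, max_lat):
    """Return the node elements from an osmChange document inside the bbox."""
    root = ET.fromstring(osmchange_xml)
    hits = []
    for action in root:  # <create>, <modify>, <delete> blocks
        for node in action.findall("node"):
            if node.get("lat") is None or node.get("lon") is None:
                continue
            lon, lat = float(node.get("lon")), float(node.get("lat"))
            if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
                hits.append(node)
    return hits

doc = """<osmChange version="0.6">
  <create>
    <node id="1" lat="51.5" lon="-0.1"/>
    <node id="2" lat="48.8" lon="2.35"/>
  </create>
</osmChange>"""
london = nodes_in_bbox(doc, -0.5, 51.0, 0.5, 52.0)
print([n.get("id") for n in london])  # prints ['1']
```

A filtering proxy would apply this test to each changeset off the stream and forward only the matching ones to its downstream client.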