[OSM-dev] TIGER data show OSM getting slower??
osm at sspaeth.de
Mon Sep 17 09:00:53 BST 2007
On Mon, Sep 17, 2007 at 12:41:42AM +0100, Dermot McNally wrote:
> I don't know if we can consider this a cause for the slowdown, but it
> might explain a further phenomenon - over the past week or so, I've
> noticed that my render requests for areas I've recently altered just
> don't get processed, while the tiles at home pending queue just gets
> longer and longer. I assumed the cause was either slow API (tiles
> clients giving up and the render request going around the loop again),
> teething problems with osmarender5 or some combination of the two.
> My instinct is that we shouldn't trigger any more tile re-renders than
> we have to until the rendering queue returns to a sane state. (Sanity
> here is defined as "not growing").
Let me describe the nature of the pending queue; things are not as bad as they seem from that graph. Prio 1 means high, prio 2 means low:
- Active: prio 1, 34; prio 2, 190 (some of these being low-zoom renders, there won't be >100 regular requests out there at any one time)
- Pending: prio 2, ca. 6900
So you can see that while there are 34 active high-priority requests out there being rendered just now, there is no waiting queue for those. That means that if you press re-render on IFW now, your request will be dispatched as the next one to a client. Any IP address can have up to 10 unfinished render requests before further ones are auto-lowered in priority (to avoid high-priority bulk requesters).
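The priority handling described above can be sketched roughly like this in Python (class and method names are my own invention; only the two priorities and the 10-requests-per-IP limit come from the post):

```python
from collections import defaultdict, deque

HIGH, LOW = 1, 2
MAX_UNFINISHED_PER_IP = 10  # figure from the post

class RenderQueue:
    """Toy model of the t@h dispatcher; not the real implementation."""

    def __init__(self):
        self.queues = {HIGH: deque(), LOW: deque()}
        self.unfinished = defaultdict(int)  # IP -> outstanding requests

    def submit(self, ip, tile, prio=HIGH):
        # Bulk requesters are auto-lowered: once an IP has 10 unfinished
        # requests, further ones go in at low priority.
        if prio == HIGH and self.unfinished[ip] >= MAX_UNFINISHED_PER_IP:
            prio = LOW
        self.unfinished[ip] += 1
        self.queues[prio].append((ip, tile))

    def dispatch(self):
        # High-priority requests always go out first, so a long
        # low-priority backlog never delays them.
        for prio in (HIGH, LOW):
            if self.queues[prio]:
                return self.queues[prio].popleft()
        return None

    def finished(self, ip):
        self.unfinished[ip] -= 1
```

This is why the 6900-entry pending queue doesn't matter for an IFW re-render: `dispatch()` drains the high-priority queue before ever looking at the backlog.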
So while our queue is incredibly long, high-priority requests are not affected by it. What remains is the concern of whether it overloads the API server too much.
As t at h clients get only three slots on which they are served at the API, the load on the server should be constant, regardless of whether the pending queue holds 100 or 10,000,000 entries. We also never hand out more than 100 requests to clients, so a longer queue doesn't mean any more renderers clogging up the API.
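A minimal sketch of why the API load stays bounded: model the three slots as a semaphore, and note that peak concurrency never exceeds the slot count no matter how many requests are queued behind it (the slot count is from the post; everything else here is illustrative):

```python
import threading
import time

API_SLOTS = 3  # t@h clients share three slots at the API (figure from the post)
slots = threading.BoundedSemaphore(API_SLOTS)
active = 0
peak = 0
lock = threading.Lock()

def fetch(bbox):
    """Stand-in for one client's API download."""
    global active, peak
    with slots:               # blocks while all three slots are busy
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)      # simulated download time
        with lock:
            active -= 1

# Twenty queued requests, but at most API_SLOTS of them ever run at once.
threads = [threading.Thread(target=fetch, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the run, `peak` is at most `API_SLOTS`: queue length changes how long clients wait, not how hard they hit the server.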
I'm not saying the situation is fine and dandy; all I'm saying is that a longer queue in t at h doesn't necessarily result in worse API performance than before.
That said, while rendering old tiles is a worthwhile goal, I find blindly re-rendering a large part of a country each day rather wasteful and impolite towards others (who will have to wait much longer for their render requests to be finished).
I think one part of the solution on the t at h side could be to parse the planet diff each week, mark all 'dirty' z12 tilesets as needing a re-render, and insert them automatically into the queue. If that were done on the server side, people wouldn't have to request their whole country each day any more.
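The dirty-tileset step could work along these lines, using the standard slippy-map tile numbering at zoom 12 (the input format is an assumption; a real implementation would read node coordinates out of the weekly diff file):

```python
import math

Z = 12  # t@h works in z12 tilesets

def latlon_to_tile(lat, lon, zoom=Z):
    """Standard slippy-map (Web Mercator) tile coordinates for a point."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad))
             / math.pi) / 2.0 * n)
    return x, y

def dirty_tilesets(changed_nodes):
    """Collect the set of z12 tilesets touched by a diff.

    `changed_nodes` is assumed to be an iterable of (lat, lon) pairs
    extracted from the planet diff; the set dedups nearby edits that
    fall in the same tileset.
    """
    return {latlon_to_tile(lat, lon) for lat, lon in changed_nodes}
```

Each tile in the resulting set would then be queued server-side at low priority, so stale areas get refreshed without anyone bulk-requesting a whole country.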