[Tilesathome] Cutover to new server? When?

Kai Krueger kakrueger at gmail.com
Sun Jul 13 00:04:02 BST 2008


spaetz wrote:
>> Presumably that will also mean that re-rendering all of the tiles under 
>> the new server will take at least as long. So there will be an overlap 
>> of old and new system for quite a while. I think that is perfectly fine, 
>> but it does mean that the legacy system has to work well. I.e. the 
>> blankness lookup must be implemented correctly.
> 
> I don't do database lookups for blankness in the legacy code; I just look for a real tile. Going through the "new tile search" -> "old tile file search" -> "recursive database search" would be quite a stretch for an operation that should be blazingly fast.

The efficiency of serving a new tile shouldn't really be affected by 
the fallback mechanisms further down the line, and does the third 
fallback option really have to be blazingly fast? Unless perhaps lots 
of people look at the Pacific Ocean at the same time... ;-) But then I 
don't know how things would scale.
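
Just so we are talking about the same chain: this is roughly how I 
picture the three steps, as a quick Python sketch (the function names 
are made up, not the real ones in Tile.py):

    # Sketch of the three-stage fallback; all three lookups are
    # stand-ins, not the actual functions in Tile.py.
    def lookup_new_tileset(x, y, z, layer):
        return None        # stand-in: look in the new tileset store

    def lookup_legacy_tile(x, y, z, layer):
        return None        # stand-in: look for an old per-tile file

    def oceantiles_blankness(x, y, z, layer):
        return 'sea'       # stand-in: derive blankness from oceantiles (z >= 12)

    def serve_tile(x, y, z, layer):
        for lookup in (lookup_new_tileset, lookup_legacy_tile,
                       oceantiles_blankness):
            result = lookup(x, y, z, layer)
            if result is not None:
                return result
        return 'unknown'

Only the first step is on the hot path for well-mapped areas; the third 
one is only reached when both real tile sources come up empty.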

> 
>> I see three possible options for the legacy blankness lookup.
>> 1) Just like the LegacyTileset, call the old blankness db. From your 
>> numbers above this seems quite inefficient, although I could imagine 
>> it could be optimised quite a bit, e.g. by not spawning a new 
>> database connection on each lookup.
>> 2) Implement a new blankness database in some form. Not sure how this 
>> would be populated though.
>> 3) Use the ocean tiles db as a fallback of the fallback. From my 
>> previous tests, this was actually quite efficient and easy to code at 
>> least for z12 and above.
> 
> I'd appreciate a patch for solution 3, if you can make it efficient enough. The code would be called from the serve_legacy_tile function in Tile.py in the directory tah_intern.
> I am still not 100% sure that this will be necessary or efficient enough, but we should try it.

I do think a solution to the blankness lookup is necessary, as it would 
presumably take a long time to fill all of the blank areas with real 
tilesets. Until then, the lowzoom map would continue to look less than 
ideal, as it does at the moment, since most z12 tiles seem to be blank. 
If you zoom in to e.g. z17, a good portion of the tiles even in well 
mapped areas are blank and currently return "unknown", as they don't 
have fallback tiles.

Anyway, I have attached a patch that is based on the oceantiles 
approach. For efficiency reasons it only returns blankness information 
for z12 and above; lower zooms will have to have real tilesets. I think 
this should be pretty efficient. At least when testing it locally I 
could not measure a difference between the three cases: serving from a 
tileset, falling back to a legacy tile, or falling all the way through 
to the ocean tiles. In all cases wget took about 0.01 seconds to fetch 
a single tile, but we will have to see how it would hold up under load.
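
For anyone who wants to review the approach without digging through the 
patch, the lookup essentially boils down to something like the sketch 
below. This is a simplification rather than the patch verbatim, and the 
2-bits-per-z12-tile layout of oceantiles_12.dat, the bit order within a 
byte and the value meanings are my assumptions here:

    # Sketch of an oceantiles lookup (assumed layout: 4096x4096 z12
    # tiles, 2 bits each, row-major).
    def oceantile_type(x, y, z, datfile='oceantiles_12.dat'):
        if z < 12:
            return None                  # below z12 a tile can be mixed, so give up
        x12 = x >> (z - 12)              # enclosing z12 tile
        y12 = y >> (z - 12)
        index = y12 * 4096 + x12         # tile number in the z12 grid
        with open(datfile, 'rb') as f:
            f.seek(index // 4)           # 4 tiles per byte (2 bits each)
            byte = ord(f.read(1))
        bits = (byte >> (2 * (3 - index % 4))) & 0x3
        return {0: 'unknown', 1: 'land', 2: 'sea', 3: 'coast'}[bits]

Because the request tile is simply mapped down onto its enclosing z12 
tile, anything below z12 has to be handled by real tilesets, which is 
why the patch stops there.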

There is one thing that could probably still be optimised. I used the 
django database model to determine whether a layer is transparent or 
not. That probably adds extra overhead that could be hard-coded away if 
necessary.
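
If that overhead turns out to matter, a module-level cache would 
probably already get rid of most of it, along these lines ("Layer" and 
its "transparent" field are just placeholder names for whatever the 
model actually looks like):

    # Sketch: cache the per-layer transparency flag so the django ORM is
    # only hit once per layer name instead of on every tile request.
    _transparency_cache = {}

    def layer_is_transparent(layer_name):
        if layer_name not in _transparency_cache:
            from models import Layer                 # hypothetical model import
            layer = Layer.objects.get(name=layer_name)
            _transparency_cache[layer_name] = layer.transparent
        return _transparency_cache[layer_name]

That keeps the configuration in the database rather than hard-coding 
it, while only paying for the query once per server process.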

Any feedback on the patch is appreciated.

Kai

> 
> spaetz
> 
> _______________________________________________
> Tilesathome mailing list
> Tilesathome at openstreetmap.org
> http://lists.openstreetmap.org/cgi-bin/mailman/listinfo/tilesathome

-------------- next part --------------
A non-text attachment was scrubbed...
Name: blankness.patch
Type: text/x-diff
Size: 3186 bytes
Desc: not available
URL: <http://lists.openstreetmap.org/pipermail/tilesathome/attachments/20080713/4299ad6b/attachment.patch>

