[OSM-talk] Bounding box
Frederik Ramm
frederik at remote.org
Mon Feb 5 12:35:50 GMT 2007
Hi,
quoting Nick Hill:
> bbox for GPX or OSM data queries: 0.3 degrees squared
> Maximum nodes for OSM queries: 50,000.
>
> Previously, there was no bounding box limitation for GPX queries and
> there was no node number limitation. However, the bbox for OSM
> queries was 0.06 degrees sq, so that has been relaxed substantially.
> These changes shouldn't affect most OSM mappers, but will prevent
> newbies from tying up the database with damaging queries they
> probably didn't mean to send.
The change adversely affects my way of working with the database; I
used to be able to fetch the data for my city in one go but cannot do
so anymore (bounding box
48.83502516137676,8.161324938209406,49.15643593760289,8.531501487380181).
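For reference, a quick check shows that this bbox exceeds the new limit
in both dimensions (a sketch; the server's actual size check may be
implemented differently):

```python
# Check whether a bbox (min_lat, min_lon, max_lat, max_lon) fits within
# a per-axis size limit in degrees. The 0.3-degree default mirrors the
# new limit quoted above; the exact server-side rule may differ.

def bbox_within_limit(min_lat, min_lon, max_lat, max_lon, limit=0.3):
    """Return True if both the latitude and longitude extents are <= limit."""
    return (max_lat - min_lat) <= limit and (max_lon - min_lon) <= limit

# The bbox from the text: lat span ~0.321 deg, lon span ~0.370 deg.
bbox = (48.83502516137676, 8.161324938209406,
        49.15643593760289, 8.531501487380181)

print(bbox_within_limit(*bbox))  # False - rejected on both axes
```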
I now have to download data in consecutive requests, which causes
some hiccups in JOSM (it seems to think certain segments have changed
and insists on uploading them although I haven't touched them), plus
there seems to be an issue with ways that lie between the two
downloaded areas (the "incomplete" flag is not properly cleared even
if the second download completes the way, or something like that).
Granted, these things can be avoided by changing the way I work or by
fixing JOSM, but I don't like the outlook. 0.3 degrees is
devastatingly small if you want to work on things like large rivers,
and the 50,000-node limit kills your request if you accidentally have
a city in your bbox even if you were only interested in the waterway.
I would much prefer an approach where I can filter my request, for
example by last modification time. I could then keep a local copy of
the data and just fetch the changes since my last update. And for
working with rivers, motorways and the like, I could request a large
area with only the features of interest (typical application: drawing
woodland areas from Landsat images - in that case I am not interested
in downloading highways...).
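To illustrate the kind of feature filtering I mean, here is a sketch in
Python of keeping only ways carrying a tag of interest (e.g. waterway)
from a downloaded OSM XML file and discarding the rest; the element and
attribute names follow the OSM XML format, and the inline snippet is
made-up test data:

```python
# Sketch: client-side filtering of an OSM XML download, keeping only
# ways tagged with a given key (optionally a specific value).
import xml.etree.ElementTree as ET

def filter_ways(osm_xml, key, value=None):
    """Return the ids of ways carrying tag 'key' (and 'value', if given)."""
    root = ET.fromstring(osm_xml)
    keep = []
    for way in root.iter("way"):
        for tag in way.iter("tag"):
            if tag.get("k") == key and (value is None or tag.get("v") == value):
                keep.append(way.get("id"))
                break
    return keep

# Minimal illustrative data: one river, one residential street.
sample = """<osm version="0.4">
  <way id="1"><tag k="waterway" v="river"/></way>
  <way id="2"><tag k="highway" v="residential"/></way>
</osm>"""

print(filter_ways(sample, "waterway"))  # ['1'] - the highway is dropped
```

Doing this server-side would mean the highways never cross the wire at
all, which is the whole point.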
Please, let us investigate ways to make our data MORE accessible,
instead of cutting and clipping access to keep up server performance.
Extrapolating this into the future, we'll have ever-smaller bounding
boxes and ever-smaller maximum numbers of points... and we will not be
able to solve this (at least not permanently) by throwing expensive
hardware at it.
One possible solution I could imagine would be splitting download and
upload servers. Uploads - which will always be only a small percentage
of downloads - must go to the central server, but downloads can be
served from any number of servers running a copy. These copies could
be kept current by built-in database replication techniques, or simply
by batch-updating them with an hourly diff file (this would also
enable us to build mirrors distributed all over the world). Just one
idea, but *something* has to be done...
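As a rough sketch of what batch-updating a read-only mirror from such
a diff could look like (the diff structure here - lists of created,
modified and deleted elements - is entirely hypothetical; no such
format exists in the API today):

```python
# Sketch: applying an hourly diff to a local mirror of the data.
# 'mirror' maps element ids to element data; the diff format is made up.

def apply_diff(mirror, diff):
    """Update 'mirror' (dict: id -> data) in place from a diff and return it."""
    for elem_id, data in diff.get("create", {}).items():
        mirror[elem_id] = data
    for elem_id, data in diff.get("modify", {}).items():
        mirror[elem_id] = data
    for elem_id in diff.get("delete", []):
        mirror.pop(elem_id, None)  # tolerate deletes of unknown elements
    return mirror

mirror = {1: {"lat": 49.0, "lon": 8.4}}
diff = {"create": {2: {"lat": 48.9, "lon": 8.2}},
        "modify": {1: {"lat": 49.001, "lon": 8.4}},
        "delete": []}
apply_diff(mirror, diff)
print(sorted(mirror))  # [1, 2]
```

Each mirror only ever needs the hourly diff, so the load on the central
server stays constant no matter how many read-only copies exist.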
Bye
Frederik
--
Frederik Ramm ## eMail frederik at remote.org ## N49°00.09' E008°23.33'