[OSM-dev] ESRI article sent by a friend
Alan_Mintz+OSM at Earthlink.Net
Mon Jul 12 06:25:43 BST 2010
At 2010-07-11 22:11, Stefan de Konink wrote:
>Op 12-07-10 06:57, Alan Mintz schreef:
> > At 2010-07-10 14:55, John Smith wrote:
> >> On 11 July 2010 07:32, Thomas Emge <temge at esri.com> wrote:
> >> > ...
> >> Some like to think there are two types of importing; "implopping" is
> >> where people blindly upload data without checking what they are
> >> importing or what already exists,
> > ...which is unacceptable in any area or type of data for which there may
> > be existing data in OSM. A good example was the import of the EPA
> > superfund data, which was poorly geo-referenced, not discussed with
> > other users, woefully incomplete (for some unknown reason), and
> > duplicative of existing data. It was eventually reverted.
>So why is this /so/ unacceptable? The renderer decides what to render.
>If we end up rendering only a specific subset of data (for example,
>from a list of users we trust), then any import is just ignored
>until someone thinks it's good enough to be used.
1. It's not ignored if users have to stumble over duplicate data when
editing and must either reconcile it (which should have been done by the
importing person) or ignore it.
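To make the reconciliation point concrete, here is a minimal sketch of how a careful importer might flag likely duplicates before uploading: pair each imported node with any existing node that is both nearby and shares a name tag. The 100 m threshold, the dict layout, and the reliance on the name tag are my own assumptions for illustration, not anything from the thread.

```python
# Hypothetical duplicate check for an import: proximity + matching name tag.
# The 100 m threshold and the name-tag heuristic are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def find_duplicates(imported, existing, max_dist_m=100.0):
    """Return (imported, existing) pairs that are close together and carry
    the same non-empty, case-folded name tag."""
    pairs = []
    for imp in imported:
        for ex in existing:
            name = imp["tags"].get("name", "").casefold()
            if not name or name != ex["tags"].get("name", "").casefold():
                continue
            if haversine_m(imp["lat"], imp["lon"], ex["lat"], ex["lon"]) <= max_dist_m:
                pairs.append((imp, ex))
    return pairs
```

Anything this flags would need a human decision (merge tags, drop the imported copy, or keep both), which is exactly the work that gets pushed onto later editors when it is skipped.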
2. If the import uses existing tags (as did my example), it _does_ get
rendered.
3. Adding lots of junk data expands the database, costing performance and
real $$. Everything is slower with more data: downloads, editing, backups,
etc. It also increases the number and complexity of the chunks you have to
chop areas into because of the object limit in the API. All with no
benefit.
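The "chopping" in point 3 can be sketched as a recursive bounding-box split: keep quartering a box until each piece is expected to stay under the API's per-request cap (API 0.6 limits a /map call to 50,000 nodes). The `count_objects` callback below is a stand-in for a real density estimate; the uniform-density example in the usage is mine, not from the thread.

```python
# Hypothetical sketch: quarter a bbox until each chunk fits the API object
# limit. count_objects(bbox) is an assumed density estimator, not a real API.
def split_bbox(bbox, count_objects, limit=50000):
    """Return (min_lon, min_lat, max_lon, max_lat) boxes, each estimated
    to contain at most `limit` objects."""
    min_lon, min_lat, max_lon, max_lat = bbox
    if count_objects(bbox) <= limit:
        return [bbox]
    mid_lon = (min_lon + max_lon) / 2
    mid_lat = (min_lat + max_lat) / 2
    quads = [
        (min_lon, min_lat, mid_lon, mid_lat),
        (mid_lon, min_lat, max_lon, mid_lat),
        (min_lon, mid_lat, mid_lon, max_lat),
        (mid_lon, mid_lat, max_lon, max_lat),
    ]
    out = []
    for quad in quads:
        out.extend(split_bbox(quad, count_objects, limit))
    return out
```

The point stands either way: every extra object imported raises the density, so denser junk means more, smaller chunks and more round trips for everyone downloading or editing the area.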
All, of course, in my own experience, which is admittedly somewhat less (in
OSM) than that of others.
Alan Mintz <Alan_Mintz+OSM at Earthlink.net>