[OSM-dev] Recapping wishlist + hello world + PDX Bof
Paul Norman
penorman at mac.com
Fri Oct 19 01:00:40 BST 2012
I'm just replying to a few points in the limited areas of my expertise. This
isn't to say that other improvements aren't valuable, just that I'm not in a
position to comment on them.
> From: Alex Barth [mailto:alex at mapbox.com]
> Subject: [OSM-dev] Recapping wishlist + hello world + PDX Bof
>
> ## API
>
> - deleted item map call
> - Way level history calls
> - Diffs on changesets
Right now our history viewing tools suck. I'm not sure this is an API
problem. Good tools to visualize a .osc file, as well as a way to easily get
a .osm for how a particular area looked on $date, would *really* help my DWG
work. We get cases of edit wars where it's not exactly clear what has
happened in a changeset. I can generate a .osc easily enough, but working
out what actually happened is difficult.
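Even a crude changeset summary would help here. Below is a minimal sketch
in Python (standard library only) of a first-pass .osc summarizer; it
assumes a standard osmChange file with <create>/<modify>/<delete> blocks,
and a real tool would also want per-user and per-tag breakdowns.

    import xml.etree.ElementTree as ET
    from collections import Counter

    def summarize_osc(path):
        """Count created/modified/deleted elements in an osmChange file."""
        counts = Counter()
        # iterparse keeps memory bounded even for large diffs
        for event, elem in ET.iterparse(path, events=("end",)):
            if elem.tag in ("create", "modify", "delete"):
                for child in elem:
                    if child.tag in ("node", "way", "relation"):
                        counts[(elem.tag, child.tag)] += 1
                elem.clear()  # free the subtree we just counted
        return counts

    for (action, kind), n in sorted(summarize_osc("changeset.osc").items()):
        print("%-6s %-8s %d" % (action, kind, n))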
> ## Data delivery
>
> - Diff / patch tool for imported data
Diffs/patches on geodata are hard. There's no getting around that, and I
doubt we'll ever see tools as good as those we have for source code.
Where I think the problem is more immediately solvable is for a specific
subset of data: addresses.
As "addressed" in a few of the SOTM-US talks, addresses are important and
they're an area where OSM needs improvement. Addresses also have a useful
property: in the vast majority of cases, they serve as their own unique key.
The hard part of dealing with external dataset updates is knowing when a
normally mapped feature corresponds to something in the dataset. For
addresses, this matching becomes trivial, as sketched below. All that's
stopping me from working on this is a lack of time.
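To show why the matching is trivial, here's the core of it in Python. The
field names and normalization are hypothetical (real data needs handling of
abbreviations, unit numbers and so on); the point is that the key is built
from the address itself, so no ID mapping between datasets is needed.

    def addr_key(record):
        # The address itself is the conflation key. Normalization here is
        # deliberately naive and would need work for real data.
        return (record["housenumber"].strip().lower(),
                record["street"].strip().lower(),
                record["city"].strip().lower())

    def diff_addresses(external, mapped):
        # external, mapped: lists of dicts with the address fields above
        ext = dict((addr_key(r), r) for r in external)
        osm = dict((addr_key(r), r) for r in mapped)
        to_add = [ext[k] for k in set(ext) - set(osm)]
        to_review = [osm[k] for k in set(osm) - set(ext)]
        return to_add, to_review

Everything in to_add is a candidate for import; everything in to_review
needs a human to decide whether the address is gone or just formatted
differently.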
> - ogr3osm for converting very large sets of geometries (also see
> ogr2osm)
ogr2osm works on shapefiles up to ~2 GB without issues on my home server,
albeit slowly and CPU-bound. The bigger question is what you do with the
many gigabytes of .osm that it gives you. I have high hopes for
snapshot-server but haven't had time to test it out. I also understand from
Andy that its current data loading process is sub-optimal for large files
and may not scale to extremely large datasets. The solution is likely a new
osmosis task that imports into the slightly different schema it uses.
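On why file size alone isn't the obstacle: the GDAL/OGR Python bindings let
you stream features one at a time rather than loading a layer into memory,
so the cost is mostly the per-feature translation work. A bare-bones sketch
of the reading loop (the OSM translation step is elided):

    from osgeo import ogr

    ds = ogr.Open("large.shp")
    layer = ds.GetLayer(0)
    feature = layer.GetNextFeature()
    while feature:
        geom = feature.GetGeometryRef()
        # ... translate geom and the feature's fields into OSM
        # nodes/ways here ...
        feature = layer.GetNextFeature()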
> - Support larger downloads through API (cached tile sources?)
Jxapi and Overpass support this. Long term, I think pre-splitting the data
into tiles that are kept up to date, then merging those tiles to serve
requests, is an option that needs to be considered as usage grows, but the
current system works for now.
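To make the tile idea concrete: the request side reduces to mapping a
bounding box onto the set of pre-split tiles to merge. A sketch assuming
standard slippy-map tile numbering at a fixed zoom (antimeridian clamping
omitted):

    import math

    def bbox_to_tiles(lon_min, lat_min, lon_max, lat_max, zoom):
        """Enumerate slippy-map tiles covering a bounding box."""
        def to_tile(lon, lat):
            n = 2 ** zoom
            x = int((lon + 180.0) / 360.0 * n)
            lat_rad = math.radians(lat)
            y = int((1.0 - math.log(math.tan(lat_rad) +
                                    1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
            return x, y
        x_west, y_south = to_tile(lon_min, lat_min)  # tile y grows southwards
        x_east, y_north = to_tile(lon_max, lat_max)
        return [(x, y) for x in range(x_west, x_east + 1)
                       for y in range(y_north, y_south + 1)]

Each (x, y) pair names a pre-built tile; the server would merge those and
clip the result to the requested bounding box.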