[talk-au] Mapping houses and addresses in Sydney

Warin 61sundowner at gmail.com
Wed Jun 20 11:14:02 UTC 2018


On 20/06/18 20:50, Dion Moult wrote:
> So I get the impression that everybody on this list seems pretty cool 
> with the approach of using this script to aid in ensuring that the 
> points entered in JOSM have good coordinates and have correct 
> addresses to the best knowledge of the armchair mapper. It clearly 
> isn't a waterfall import. If there are no more objections, on Friday 
> I'll send an email to the imports mailing list describing the 
> approach. Given that the script can be run by anyone and is not part 
> of a bulk import that is done by a single user, I agree that we need 
> some way to indicate the source. However, instead of a tag on each object, why don't we 
> just add a `source:import=NSW LPI Web Services` to the changeset? That 
> way anybody seeing the history will know.
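As a rough sketch of what that could look like if the script uploads through 
the OSM API 0.6 itself rather than through JOSM (the credentials and tag 
values here are only placeholders, not the actual script):

    # Minimal sketch: open a changeset carrying a source:import tag via the
    # OSM API 0.6. Username, password and tag values are placeholders.
    import requests

    API = "https://api.openstreetmap.org/api/0.6"

    changeset_xml = """<osm>
      <changeset>
        <tag k="comment" v="Add address points for one suburb"/>
        <tag k="source" v="LPI NSW Base Map"/>
        <tag k="source:import" v="NSW LPI Web Services"/>
      </changeset>
    </osm>"""

    resp = requests.put(
        f"{API}/changeset/create",
        data=changeset_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
        auth=("osm_username", "osm_password"),  # placeholder credentials
    )
    resp.raise_for_status()
    print("Opened changeset", resp.text.strip())  # API returns the new id

In JOSM the same thing is just a matter of entering those tags in the upload 
dialog's changeset tag table.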
>
> Given that there are 3.8 million addresses in total in NSW, assuming 
> it took 1 second for somebody to add an address, it would take roughly 
> 44 days of non-stop work to add every single address. This is not exactly 
> an exciting task! We can probably cover the city and immediate suburbs 
> relatively quickly, but maybe it is worthwhile investigating the bulk 
> import a bit more. Perhaps once Andrew Harvey finishes his work on 
> openaddresses, we can use that data dump and follow the New Zealand 
> approach of importing bit by bit - we can divide the dataset into an 
> alphabetical list of suburbs, and then treat each suburb's import 
> separately.
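As a sanity check on those numbers (the per-address times below are only 
guesses):

    # Back-of-envelope effort estimate for importing every NSW address point.
    ADDRESSES = 3_800_000
    SECONDS_PER_DAY = 24 * 60 * 60

    for seconds_per_address in (1, 5, 10):
        days = ADDRESSES * seconds_per_address / SECONDS_PER_DAY
        print(f"{seconds_per_address:>2} s per address -> {days:,.0f} days non-stop")

    #  1 s per address -> 44 days
    #  5 s per address -> 220 days
    # 10 s per address -> 440 days

Even at the optimistic end it is a lot of continuous work, which is why 
splitting the job up matters.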
>
> Ideas?

All sounds fine. The devil is in the detail.

I'd choose a suburb you are familiar with and do that .. even just a part 
of it .. and see what happens.
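If the data ends up as one big address file, a first pass could be as simple 
as splitting it per suburb and only loading the trial suburb into JOSM. A 
sketch, assuming a CSV with a 'suburb' column (the file name and column names 
are assumptions, not the real dump format):

    # Hypothetical sketch: split an address dump into one file per suburb so
    # a single suburb can be reviewed and imported as a trial run.
    import csv
    from collections import defaultdict

    by_suburb = defaultdict(list)

    with open("nsw_addresses.csv", newline="") as f:
        for row in csv.DictReader(f):
            by_suburb[row["suburb"].strip().upper()].append(row)

    for suburb, rows in sorted(by_suburb.items()):  # alphabetical, per suburb
        out_name = "addresses_" + suburb.replace(" ", "_") + ".csv"
        with open(out_name, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)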

And yes, it is not exciting and it is very time consuming. However, it is useful.
Once it is proven, I'd target the areas of most demand ... the 
CBDs of major centres, tourist areas, hotels, shopping centres, etc. That 
will keep most people happy most of the time.

The low-use bits .. well, those will take quite some time.
Most of the roads are now entered, so finding Fred's house in a street 
won't be too hard once you have found that street - traffic should be 
light. So I don't see that as a high priority.

I think 1 second per address is optimistic.
And people will get sick of it, so there will be a sporadic 
participation rate. I could be wrong.


>
> Dion Moult
>
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On June 18, 2018 9:23 PM, Dion Moult <dion at thinkmoult.com> wrote:
>
>> On June 18, 2018 8:56 PM, Warin <61sundowner at gmail.com> wrote:
>>
>>> On 18/06/18 20:30, Andrew Harvey wrote:
>>>> On 18 June 2018 at 19:21, Dion Moult <dion at thinkmoult.com 
>>>> <mailto:dion at thinkmoult.com>> wrote:
>>>>
>>>>     Thanks Andrew for your reply!
>>>>
>>>>     1. Thanks for the link to the import guidelines. My responses
>>>>     to the import guidelines below:
>>>>
>>>>
>>>> First up I think any changesets that import addresses in this way 
>>>> should have an extra changeset tag so if we need to we can identify 
>>>> which changesets did the import (so more than just source=LPI NSW 
>>>> Base Map). Something like import=NSW Address Points or something.
>>>
>>> source:import=LPI API via ?? something like that?
>>
>> Sure thing, I would be happy to if that is the appropriate thing to 
>> do :)
>>>
>>>
>>>>
>>>> I'm not sure about separating the address with a ";" like 
>>>> https://www.openstreetmap.org/way/593297556/history#map=19/-33.78072/151.06688&layers=N, 
>>>> could they not be two separate points? If it's a duplex, then I'd 
>>>> do it as a single building with addr:housenumber=11, then if you 
>>>> want two nodes inside the building for 11A and 11B.
>>> I have had separate buildings for A and B that share a common wall. In 
>>> some instances I have 11 then 11A .. but no B.
>>
>> Thanks for the advice! I've fixed it to use two nodes. However, 
>> please note that that particular building was not mapped as part of 
>> my import script proposal. That was mapped previously by me 
>> completely manually. If I had used the import script it would have 
>> created two nodes, one for 11A and one for 11B.
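Spelled out, those two nodes would carry tags along these lines (the street 
name is a placeholder):

    # Sketch of the tags on the two duplex nodes; the surrounding building,
    # if mapped, would keep addr:housenumber=11.
    node_11a = {"addr:housenumber": "11A", "addr:street": "Example Street"}
    node_11b = {"addr:housenumber": "11B", "addr:street": "Example Street"}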
>>>
>>>>
>>>> While I don't think there's anything wrong with 2/18 as a first 
>>>> pass, eg https://www.openstreetmap.org/node/5667899003, I think 
>>>> it's better to use addr:unit=2 addr:housenumber=18.
>>
>> Thanks :) I was wondering what was a better way of doing that. Fixed 
>> :) Again as above this was mapped manually by me and not using the 
>> script.
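If the source data ever comes through written as "2/18", one way to turn it 
into the suggested tagging could be something like this (just a sketch, not 
part of the current script):

    # Sketch: split an Australian "unit/number" style value into addr:unit
    # and addr:housenumber, per the tagging suggested above.
    def split_unit(value):
        if "/" in value:
            unit, number = value.split("/", 1)
            return {"addr:unit": unit.strip(), "addr:housenumber": number.strip()}
        return {"addr:housenumber": value.strip()}

    print(split_unit("2/18"))  # {'addr:unit': '2', 'addr:housenumber': '18'}
    print(split_unit("11A"))   # {'addr:housenumber': '11A'}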
>>
>>>>
>>>>      1. I am aware that big automatic updates can cause problems. I
>>>>     will only import addr:housenumber and addr:street and a single
>>>>     node.
>>>>
>>>>
>>>> What are you planning on doing where the address is already in OSM? 
>>>> I think in this case we should just not import that point and leave 
>>>> the existing OSM addresses.
>>> Depends .. I have come across addresses that were out of sequence. 
>>> I contacted the still active mapper (who had moved to Germany) and got no 
>>> response .. after some months I simply deleted them.
>>> So it is worth checking whether the new data is 'better' than 
>>> the present OSM data.
>>
>> With my proposal of a semi-automated approach, every single new 
>> address will have to be explicitly decided upon by a human mapper. A 
>> human mapper can decide when to import the point if the existing data 
>> look bad (based off the LPI Base Map raster background) and when to 
>> leave the existing OSM addresses.
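One way a script could support that decision is to list any address already 
mapped near the candidate point, e.g. via an Overpass query (a sketch only; 
the 30 m radius and the public Overpass endpoint are just reasonable 
defaults, not what the proposal currently does):

    # Sketch: ask Overpass whether any addr:housenumber already exists within
    # ~30 m of a candidate point, so the mapper can decide to skip or keep it.
    import requests

    OVERPASS = "https://overpass-api.de/api/interpreter"

    def existing_addresses(lat, lon, radius_m=30):
        query = f"""
        [out:json][timeout:25];
        (
          node(around:{radius_m},{lat},{lon})["addr:housenumber"];
          way(around:{radius_m},{lat},{lon})["addr:housenumber"];
        );
        out tags center;
        """
        resp = requests.post(OVERPASS, data={"data": query})
        resp.raise_for_status()
        return resp.json()["elements"]

    # Coordinates below are placeholders for a candidate address point.
    for el in existing_addresses(-33.7807, 151.0669):
        tags = el.get("tags", {})
        print(el["type"], el["id"],
              tags.get("addr:housenumber"), tags.get("addr:street"))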
>>>
>>>>
>>>>     2. Yes, you are absolutely right that this is not a huge
>>>>     automatic import - it relies on a human choosing what addresses
>>>>     to add and a human submitting it as a change. All it does is
>>>>     automate the address lookup and make sure that the node is
>>>>     neatly positioned at the correct location.
>>>>
>>>>
>>>>     3. It looks like you're grabbing their entire dataset. That
>>>>     would be the alternative approach, doing a data dump, then
>>>>     importing that dump. This can import a lot more addresses, but
>>>>     is also much more complex. Is it worth pursuing? What do you
>>>>     reckon?
>>>>
>>>>
>>>> Oh I'm not suggesting that. It makes sense for the OpenAddresses 
>>>> project to use a complete extract, but as you might have seen in 
>>>> the openaddresses ticket there are a lot of problems trying to dump 
>>>> the data, so your approach of doing it bit by bit should work much 
>>>> better for an OSM import.
>>
>> Sounds good! Sorry for the misunderstanding :)
>>
>>>>
>>>>     4. It seems odd that they would provide an API but would
>>>>     prevent anything from using it.
>>>>     5. Looks like they are doing the big data import. See 3.
>>>>
>>>>
>>>> Not quite, they did it using the approach you've described, breaking 
>>>> it down into pieces and manually importing everything.
>>>
>>> It might be good to do one section and let people have a look at it?
>>> I do think you'll find it repetitive. Maybe take a break and map 
>>> something else for a while.
>>>
>>> Good Luck.
>>
>> Yes, an incremental approach followed by regular review sounds good.
>
