[Geocoding] Nominatim script log output - how to tell progress?
Simon Nuttall
info at cyclestreets.net
Fri Jul 3 09:56:40 UTC 2015
And a further question:

Can a whole installed, running Nominatim be copied to another
machine and set running there?

Presumably this is a database dump and copy, but how practical is that?
Are there alternative approaches, such as replication or backup?
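For what it's worth, a plain dump-and-restore is the usual way to move an installation. A sketch of the sort of commands involved follows; the database name, hostname and paths are assumptions for illustration, not taken from this thread:

```shell
# Sketch only: assumes the database is called "nominatim" and that
# PostgreSQL runs on both machines. Stop the update script first so
# the dump is consistent.

# On the source machine: compressed custom-format dump
pg_dump -Fc -f /tmp/nominatim.dump nominatim

# Copy the dump plus the Nominatim settings directory (which holds the
# replication state) to the new machine
scp /tmp/nominatim.dump newhost:/tmp/
rsync -a /home/nominatim/Nominatim/settings/ \
    newhost:/home/nominatim/Nominatim/settings/

# On the target machine: recreate the database and restore in parallel
createdb nominatim
pg_restore -d nominatim -j 4 /tmp/nominatim.dump
```

After restoring, restarting the update script on the new machine should let it resume from the replication sequence recorded in the copied settings directory.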
On 3 July 2015 at 08:31, Simon Nuttall <info at cyclestreets.net> wrote:
> Further to my previous questions, I'm now seeing:
>
> string(123) "INSERT INTO import_osmosis_log values
> ('2015-06-08T07:58:02Z',25816916,'2015-07-03 06:07:34','2015-07-03
> 06:44:10','index')"
> 2015-07-03 06:44:10 Completed index step for 2015-06-08T07:58:02Z in
> 36.6 minutes
> 2015-07-03 06:44:10 Completed all for 2015-06-08T07:58:02Z in 58.05 minutes
> 2015-07-03 06:44:10 Sleeping 0 seconds
> /usr/local/bin/osmosis --read-replication-interval
> workingDirectory=/home/nominatim/Nominatim/settings --simplify-change
> --write-xml-change /home/nominatim/Nominatim/data/osmosischange.osc
>
> Which presumably means it is currently applying updates from June 8th?
> (What else can I read from this?)
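Reading that log line: the first timestamp in the INSERT is the OSM data date the replication has reached, and the later two are the local start and end times of the step, so subtracting gives the lag. A sketch, with the values copied from the log above (GNU date assumed; the local timestamp is assumed to be UTC):

```shell
# How far behind is the database? Compare the OSM data timestamp from
# the import_osmosis_log line with the local completion time.
osm_ts='2015-06-08T07:58:02Z'    # data date the update has reached
local_ts='2015-07-03 06:44:10'   # when the step finished (assumed UTC)
lag_days=$(( ( $(date -u -d "$local_ts" +%s) \
             - $(date -u -d "$osm_ts" +%s) ) / 86400 ))
echo "about $lag_days days behind"
```

So yes: this run applied the diffs up to 8 June, roughly 24 days behind the live data at that point.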
>
> Also, at what point is it safe to expose the Nominatim as a live service?
>
> http://nominatim-guildenstern.cyclestreets.net/
>
> (Ah, that reminds me: that page shows the data date, so that answers
> one of the questions.)
>
>
>
> On 3 July 2015 at 07:23, Simon Nuttall <info at cyclestreets.net> wrote:
>> On 2 July 2015 at 22:27, Sarah Hoffmann <lonvia at denofr.de> wrote:
>>> On Thu, Jul 02, 2015 at 02:19:40PM +0100, Simon Nuttall wrote:
>>>> The Nominatim build I started on 13 June is now showing:
>>>>
>>>> Done 73137484 in 821606 @ 89.017715 per second - ETA (seconds): -8.357887
>>>> Done 73138021 in 822413 @ 88.931015 per second - ETA (seconds): -14.404424
>>>> Done 73138021 in 822413 @ 88.931015 per second - FINISHED
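Those progress lines look like simple running statistics: the rate is objects done divided by elapsed seconds, and the ETA is (estimated total minus done) divided by the rate, which would explain why it drifts slightly negative once the initial estimate turns out a touch low. Recomputing the rate from the numbers above (the formula is an inference from the output, not read from the source code):

```shell
# Reconstruct the per-second rate printed in the progress line.
done_objects=73137484    # objects processed so far
elapsed_seconds=821606   # seconds spent so far
rate=$(awk -v d="$done_objects" -v e="$elapsed_seconds" \
    'BEGIN { printf "%.6f", d / e }')
echo "$rate per second"  # close to the 89.017715 printed in the log
```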
>>>>
>>>> Search indices
>>>> CREATE INDEX
>>>>
>>>>
>>>> Any idea how much more there is to go?
>>>
>>> The index creation normally takes a couple of hours. Having maintenance_work_mem
>>> set to a high value (10GB or more) will speed up the process considerably.
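In postgresql.conf that would look like the fragment below (10GB is the figure Sarah suggests; reload or restart PostgreSQL after changing it):

```
# postgresql.conf
maintenance_work_mem = 10GB    # speeds up CREATE INDEX considerably
```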
>>
>> Thanks. The 10GB value was used in our configuration. It is hard to
>> tell how long the index creation took as the output sailed by, but I
>> think it was more than a couple of hours.
>>
>> Now it is showing these again:
>>
>> Done 274 in 136 @ 2.014706 per second - Rank 26 ETA (seconds): 2467.854004
>>
>> Presumably this means it is now playing catch-up relative to the
>> originally downloaded data?
>>
>> How can I tell what date it has caught up to? (And thus get an idea of
>> when it is likely to finish?)
>>
>> Is it catching up by downloading minutely diffs or using larger
>> intervals, then switching to minutely diffs when it is almost fully up
>> to date?
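Judging from the osmosis command in the log, the answer lies in the replication settings file in the workingDirectory shown above: the stock setup always reads the minutely stream, but each invocation batches up to maxInterval worth of changes while it is behind. The file looks roughly like this; the exact values here are assumptions:

```
# settings/configuration.txt (read by osmosis --read-replication-interval)
baseUrl=http://planet.openstreetmap.org/replication/minute
maxInterval=3600    # seconds of changes fetched per run while catching up
```

So it consumes the minutely diffs in hour-sized gulps until it is current, after which each run fetches only a minute or so of changes and the disk load should drop accordingly.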
>>
>> This phase still seems very disk-intensive; will that settle down and
>> become much less demanding once it has eventually caught up?
>>
>> Simon
>>
>>>
>>> Sarah
>>
>>
>>
>> --
>> Simon Nuttall
>>
>> Route Master, CycleStreets.net
>
>
>
> --
> Simon Nuttall
>
> Route Master, CycleStreets.net
--
Simon Nuttall
Route Master, CycleStreets.net