<br><div class="gmail_quote"><br><div class="im"><blockquote style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;" class="gmail_quote">(1) It's not exactly more reliable with dedicated bulk upload scripts
either. If the API takes too long to check the uploaded osmChange for
validity, the TCP session appears to time out. The script/JOSM never
receives the OK from the API, including the new object IDs. The next
time you hit upload to resume, it will reupload that failed chunk in its
entirety, leading to (in my example) 5k duplicate objects on the
server.<br></blockquote></div>I have often tried to upload smaller and larger chunks over slow/unstable connections and have run into various problems. I have even reached the point of having only ~100 items left when the connection timed out, with the result that the entire upload had to be done again. The only way to fix it afterwards is to run a validation of the area, but with the unstable lines I suffer from time to time, that doesn't solve much either.<br>
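A minimal sketch of how an upload script could guard against this duplicate-chunk problem (the class and method names here are hypothetical stand-ins; the real OSM API does expose a changeset's committed contents via the changeset download call): after a timeout, check what the server has already applied before re-sending the chunk.

```python
# Resume-safe chunked upload sketch (hypothetical API names).
# A fake server simulates the failure mode described above: the chunk
# is committed server-side, but the OK reply (with the new object IDs)
# is lost, so a naive retry would duplicate the whole chunk.

class FakeServer:
    def __init__(self):
        self.committed = []          # chunks the server has applied
        self.drop_next_reply = True  # simulate one lost OK response

    def upload(self, chunk):
        self.committed.append(chunk)
        if self.drop_next_reply:
            self.drop_next_reply = False
            raise TimeoutError("reply lost after commit")

    def changeset_download(self):
        # Stand-in for querying the changeset's committed contents.
        return list(self.committed)

def upload_chunks(server, chunks):
    for chunk in chunks:
        try:
            server.upload(chunk)
        except TimeoutError:
            # The server may have committed the chunk even though the
            # reply never reached us: check before re-sending.
            if chunk not in server.changeset_download():
                server.upload(chunk)  # genuinely lost: re-send

server = FakeServer()
upload_chunks(server, ["chunk-1", "chunk-2", "chunk-3"])
assert server.committed == ["chunk-1", "chunk-2", "chunk-3"]  # no duplicates
```

The point of the sketch is only the check-before-retry step; matching a real osmChange chunk against the changeset contents is harder, since the new object IDs are exactly the information the lost reply was carrying.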
<br>If JOSM or a similar tool could intelligently combine nodes and ways into these chunks, and accept that "all but the last" chunk has been transferred correctly, that would improve the end result immensely.<br>
<br><br><div class="gmail_quote"><div class="im"><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
This makes sense in certain situations. E.g. you upload 20000 objects and you get "precondition failed" on the last 100 of them. That would mean:<br>
(1) - upload 19899+i objects<br>
(2) - upload is aborted by server -> get the server error<br>
(3) - fix the problem (e.g. download way and fix it)<br>
(4) - i++<br>
(5) - goto (1)<br>
<br>
If you have a slow connection, this is not acceptable.<br><font color="#888888">
<br></font></blockquote></div><div>This is exactly what I am talking about.<br></div></div><br>
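To put a rough number on "not acceptable": with the figures from the quoted example (20000 objects, the last 100 each triggering a "precondition failed", and each pass re-uploading 19899+i objects before aborting), the restart loop re-transfers on the order of two million objects just to fix the last hundred:

```python
# Back-of-the-envelope cost of the restart-from-scratch loop quoted
# above; numbers taken from the example in the quoted email.
failing = 100
transferred = sum(19899 + i for i in range(1, failing + 1))
print(transferred)  # ~100x the original payload on each slow link
assert transferred == 1_994_950
```

On a fast connection this is merely wasteful; on a slow or unstable one, each of those ~20000-object passes is itself likely to time out, which is the scenario described above.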
</div><br>