[OSM-dev] No new planet.osm files?

Immanuel Scholz immanuel.scholz at gmx.de
Tue Oct 10 08:47:06 BST 2006


Hi,

> This gives Mysql::Result. This is broadly the same as PHP, you get a
> result set back from the MySQL query.

Yes, that's what I found out yesterday evening too.
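
For reference, a minimal sketch of the query side with the ruby mysql
binding (host, credentials, table and column names here are only
placeholders):

  require 'mysql'

  dbh = Mysql.real_connect('localhost', 'user', 'password', 'openstreetmap')
  res = dbh.query('SELECT id, latitude, longitude FROM nodes')
  puts res.class   # => Mysql::Result, much like a PHP result resource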


> If there are huge numbers of nodes I can see this would use up lots of
> memory.

Nope. For me, the result set object is always a constant size, no matter
how many result rows there are. So either the code does not use up memory,
or the problem lies somewhere else.


> I'm not sure how one would best deal with this, to be honest - a huge
> result set is a problem I haven't had to deal with before. Get one node
> out of the database one at a time (I can see this being very slow)? Do
> tiled queries, say 10x10 degrees?  Shutdown osm while the export is done?
> Anyone?

MySQL supports iteration with constant memory usage; otherwise tools like
mysqldump would be impossible to implement ;).
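
If I read the binding right, the streaming variant of the C API
(mysql_use_result instead of mysql_store_result) is what keeps memory
constant. A sketch, again with placeholder names:

  require 'mysql'

  dbh = Mysql.real_connect('localhost', 'user', 'password', 'openstreetmap')
  dbh.query_with_result = false        # don't pull the whole result set over
  dbh.query('SELECT id, latitude, longitude FROM nodes')
  res = dbh.use_result                 # stream rows one at a time instead
  res.each { |id, lat, lon| puts "#{id} #{lat} #{lon}" }
  res.free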

My assumption yesterday was that I had used the wrong Ruby function and
that this function first transfers all the data into memory.
On my current platform (Windows), the implementation of
Mysql::Result#each just calls #fetch_row, which looks sane to me.
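
Conceptually, #each amounts to no more than this (a sketch from memory,
not the actual binding source):

  require 'mysql'

  class Mysql::Result
    def each                   # iterate without materializing all rows
      while row = fetch_row    # fetch_row returns nil after the last row
        yield row
      end
      self
    end
  end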

I tested the script on Windows yesterday and it ran fine, never using
more than about 8 MB of memory.


Steve: Are you sure it is planet.rb that sucks up all your memory? Can you
start the script manually, watch the process table and confirm that its
memory usage is constantly growing? Maybe the problem is somewhere else?

Maybe MySQL logging is the problem? Or buffering of planet.rb's output
stream? Is there some benchmarking/profiling in place around the planet.rb
process (ruby -rprofile)? Is the running version in sync with the svn
repository?
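
The profile library prints its report to stderr on exit, so something
like this (assuming planet.rb writes the dump to stdout) would keep the
two apart:

  ruby -rprofile planet.rb > planet.osm 2> profile.txt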


Ciao, Imi