[OSM-dev] osm2pgsql slim mode, postgis, and hard disk spindles
emilie.laffray at gmail.com
Fri Sep 4 10:58:13 BST 2009
2009/9/3 Frederik Ramm <frederik at remote.org>
> That is supported by my own experiments. I loaded the 20090819 planet,
> tarred the PostGIS partition so that I could quickly re-create the
> scenario, and then applied the 20090819-20090820 daily diff with
> osm2pgsql --slim in a number of different configurations:
> "plain" (nothing special, all on one hd)..... 270 minutes
> "plain" with -C4000.......................... 262 minutes
> with pg_xlog on other hd..................... 266 minutes
> with all indexes on other hd................. 210 minutes
> with the "slim mode" tables on other hd ..... 220 minutes
> "crossed" (normal tables on disk 1, their
> indexes on disk 2, slim tables on disk 2,
> their indexes on disk 1)..................... 213 minutes
> like "plain" but on a RAID-0 md device....... 191 minutes
> like "plain" but on a RAID-1 md device....... 281 minutes
> I have not tried combinations of these; it is to be expected that the
> -C4000 will speed up the RAID-0 value a little but that's as good as it
> gets. Since the process is still disk bound, adding a third hd to the
> RAID-0 array could again improve things.
> I haven't thoroughly tested read performance; I guess that the RAID-1
> should give slightly faster reads than RAID-0.
> I feel a bit like someone who has spent days to code something in
> assembler only to find that writing it in C and having the compiler
> produce machine code yields something more effective ;-)
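To put the quoted timings in relative terms, here is a rough calculation over the figures above (the labels are shorthand for the configurations listed, nothing more):

```python
# Minutes to apply the 20090819-20090820 daily diff, from the timings above.
timings = {
    "plain": 270,
    "plain -C4000": 262,
    "pg_xlog on other hd": 266,
    "indexes on other hd": 210,
    "slim tables on other hd": 220,
    "crossed": 213,
    "RAID-0": 191,
    "RAID-1": 281,
}

baseline = timings["plain"]
for name, minutes in timings.items():
    change = 100.0 * (baseline - minutes) / baseline
    print(f"{name}: {minutes} min ({change:+.1f}% vs plain)")
```

So moving the indexes to a second spindle buys roughly a 22% improvement, and RAID-0 close to 29%, while RAID-1 actually loses about 4% on this write-heavy workload.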
Thanks for those interesting tests; the variations between the configurations
are far from small. I am planning to invest in a decent machine to do some
work, and this is clearly giving me an idea of what I should get.
Yes, it is too bad that SSD drives are still too expensive.
I was considering a RAID 10 (1+0) configuration for the new machine, but I am
wondering whether a RAID 5 might not be more interesting in the end.
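For the capacity side of that trade-off, a quick sketch (the four 500 GB disks are a hypothetical example, not a configuration from this thread; note that RAID 5's parity read-modify-write penalty on PostgreSQL's random-write load usually matters more than raw capacity):

```python
# Usable capacity for n equal disks of size disk_gb, by RAID level.
# Illustrative only: write performance is not modelled here.
def usable_gb(n_disks, disk_gb, level):
    if level == "raid10":   # striped mirrors: half the raw capacity
        assert n_disks % 2 == 0
        return n_disks * disk_gb // 2
    if level == "raid5":    # one disk's worth of capacity goes to parity
        assert n_disks >= 3
        return (n_disks - 1) * disk_gb
    raise ValueError(level)

print(usable_gb(4, 500, "raid10"))  # 1000 GB usable, survives one failure per mirror pair
print(usable_gb(4, 500, "raid5"))   # 1500 GB usable, survives any single disk failure
```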