[OSM-dev] 0.6 move and downtime (re-scheduled)

Grant Slater openstreetmap at firefishy.com
Fri Mar 13 02:13:56 GMT 2009


Stefan de Konink wrote:
>
> Maybe a stupid question; but is your database server able to exploit 
> the above configuration? Especially related to your processor choice.

Yes, the disks are _currently_ over-spec'ed, but they won't be in six 
months' time.
Replacing the hardware for the central database server is a nightmare 
job we do not want to repeat often.
Currently 100k users, with an exponential growth curve. The USA is still 
a sleeping beast. Large imports in the pipeline.

> Now it is nice you put 32GB (extra expensive) memory in there, but 
> most likely your hot performance would be far better with more (cheap) 
> memory than more disks. At the time I wrote my paper on OSM Dec2008, 
> there was about 72GB of CSV data. Thus with lets say 128GB you will 
> have your entire database *IN MEMORY* no fast disks required.

Indexes in memory, not data.
Yes; 32GB is an intentionally low price point, with lots of room to 
expand. 8GB DDR2-ECC modules are not yet available. (They will also 
need to climb off the silly price shelf.)
Currently DB is 344GB with indexes, excluding binary logs. Significant 
jump with the 0.6 API switch. Large imports in the pipeline.
The extra available space and performance will likely lead to innovation.

>
> ...or are you actually moving from OS to Solaris to utilize those 10 
> disks for your lets say less than 100G worth of geodata using them as 
> duplicates in the pool [opposed to integrity duplicates]?

Solaris ZFS right?
We are going with Linux because it's within our current skillset.
RAID10 offers extremely high read and write performance, ideally suited 
to our database load.
I'll look again at RAID5E and RAID6 (effectively pool duplicates with 
entire-array integrity), but they will likely be discarded again due to 
slow write performance on small blocks.
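For intuition, here's a back-of-envelope sketch of the small-block write 
penalty (the disk count and per-disk IOPS are illustrative assumptions, 
not our hardware's measured figures):

```python
# Rough model of random small-block write throughput per RAID level.
# Each small write costs multiple physical disk operations:
#   RAID10: 2 ops (write the block to both mirror halves)
#   RAID5:  4 ops (read data, read parity, write data, write parity)
#   RAID6:  6 ops (as RAID5, but with two parity blocks)
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def random_write_iops(level: str, n_disks: int, per_disk_iops: int) -> float:
    """Aggregate random-write IOPS the whole array can sustain."""
    return n_disks * per_disk_iops / WRITE_PENALTY[level]

# Illustrative only: 10 spindles at ~150 IOPS each.
for level in ("RAID10", "RAID5", "RAID6"):
    print(level, random_write_iops(level, 10, 150))
```

Under those assumed numbers, RAID10 sustains roughly twice the random 
write rate of RAID5 and three times that of RAID6, which is the gap 
that matters for a write-heavy database.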

Regards
 Grant




