Raphael Jacquot sxpert at esitcom.org
Tue Apr 25 18:11:42 BST 2006

Nick Hill wrote:
> I don't know how efficient Postgres is for these data types, or if an 
> R-tree index can be created for a 4-byte integer type in Postgres or 
> MySQL.
> From my tests over the last few days, I tentatively conclude:
> 1) The geographic data types in MySQL are not suitable for GPX traces.
> 2) Assuming we can't yet set an R-tree index on a 2-column integer 
> co-ordinate, then a partitioned database scheme using a single index key 
> column would be the most efficient
> 3) Geographic data types with an object representation, passed from 
> the database through the API to the application, would likely be efficient.
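The single-index-key scheme in point 2 could be done with something like a Morton (Z-order) key, which interleaves the bits of the two integer coordinates so that a plain B-tree index can stand in for an R-tree. This is only an illustrative sketch; the function name and bit width are made up, not taken from any existing schema:

```python
def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y into one integer.

    Points that are close on the ground get numerically close keys,
    so a bounding-box query touches only a few contiguous key ranges
    in an ordinary B-tree index.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits land in even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits land in odd positions
    return key

print(morton_key(3, 5))  # -> 39
```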

I'm not sure that database size on disk is of any interest (after all, 
250G disks go for about 80€). My *experience* with postgres on a laptop 
(that is, with a 4200 rpm laptop drive) has been pretty good (running my 
navsys.py navigation / capture system).
Also, my *experience* with it on a standard Seagate ATA 100 drive on a 
40-wire cable (because that's what's inside the box) has been pretty good too.

maybe if your MySQL was using InnoDB or something it would be faster, 
but there's no way this is going to work with MyISAM (which is, after 
all, a flat file à la dBase III)

here are the data sizes:

on the laptop:
75845 gps points
11389 nodes       \
12048 segments    / only the grenoble area
30M are used

on the main server:
(no gps points yet)
296806 nodes     \
295899 segments  / a large part of the UK and France
320M are used

that's including all the indexes
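As a rough sanity check on those figures, the per-row cost (indexes included) works out like this; the row counts are from above, and reading "M" as mebibytes is my assumption:

```python
# Rough per-row storage from the figures above, indexes included.
# Assumes "30M" / "320M" mean mebibytes.

laptop_rows = 75845 + 11389 + 12048   # gps points + nodes + segments
laptop_bytes = 30 * 1024 * 1024
print(laptop_bytes // laptop_rows)    # -> 316 bytes per row

server_rows = 296806 + 295899         # nodes + segments
server_bytes = 320 * 1024 * 1024
print(server_bytes // server_rows)    # -> 566 bytes per row
```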

(for the database schema, check 
http://www.navsys.org/cvs/pynavsys/pg-osm.sql?rev=1.3&view=auto )

any questions?
