[OSM-dev] Too many slow queries in db
jburgess777 at googlemail.com
Tue Sep 4 22:11:12 BST 2007
On Tue, 2007-09-04 at 11:41 -0700, Dave Hansen wrote:
> On Tue, 2007-09-04 at 19:30 +0100, Jon Burgess wrote:
> > On Mon, 2007-09-03 at 21:57 -0700, Dave Hansen wrote:
> > > On Tue, 2007-09-04 at 00:28 +0100, Jon Burgess wrote:
> > > > I've also got some evidence that the 758M object size quoted on the
> > > > tiger stats page is wrong, with the true figure being only half that
> > > > size. I'm still downloading more of the data to confirm this.
> > >
> > > Well, for goodness sake, please keep them to yourself!
> > It was late when I replied last night and even now I cannot be certain
> > of my figures. Anyway, here goes.
> > I've downloaded all the data from the following directories:
> > AK AL AR AS AZ CA CO CT DC DE FL GA GU HI IA ID IL IN
> > KS KY LA MA MD ME MI MN MO MP MS MT NC ND OR
> You're being mean to my poor web server! ;)
> Seriously. Please don't download them all that way. I'll count them
> myself using your methods.
I'm not doing this just for the hell of it. I'm downloading them so that
they'll be available as a Mapnik layer, either on my home web server or
as another layer on the main OSM tile server.
The ones I've downloaded are on the map at:
What do you recommend as a method to acquire the data?
> > This is 33 of the 57 directories and should therefore cover over half
> > the complete data set (du -shc * gives 3.1GB). I don't know the size of
> > the complete TIGER files, so I can't rule out the possibility that there
> > are some very big files which I've not downloaded yet.
> > Counting the nodes with:
> > $ gzip -dc */*.gz | grep -c "<node "
> > 95326826
> > Repeating with segment and way, I get:
> > Node: 95M
> > Segment: 99M
> > Way: 8M
> > Total = 202M objects.
> I'm doing this:
> zcat */*.osm.gz | egrep -c '<(node|segment|way)'
> Seem good to you?
Yes, that should work.
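One caveat: grep -c and egrep -c count matching *lines*, not matches, so the two methods agree only when each element sits on its own line, as the TIGER exports do. A minimal sketch demonstrating both counting methods on invented sample data (the /tmp/tiger_demo path and file contents are hypothetical, made up for illustration):

```shell
# Build a tiny hypothetical data set mimicking the TIGER layout:
# one state directory containing a gzipped .osm file, one element per line.
mkdir -p /tmp/tiger_demo/AK
cat > /tmp/tiger_demo/AK/sample.osm <<'EOF'
<node id="1" lat="61.2" lon="-149.9" />
<node id="2" lat="61.3" lon="-149.8" />
<segment id="1" from="1" to="2" />
<way id="1" />
EOF
gzip -f /tmp/tiger_demo/AK/sample.osm

cd /tmp/tiger_demo

# Jon's method: count one object type at a time.
nodes=$(gzip -dc */*.gz | grep -c "<node ")

# Dave's method: count all three types in a single pass.
total=$(zcat */*.osm.gz | egrep -c '<(node|segment|way)')

echo "nodes=$nodes total=$total"
# prints: nodes=2 total=4
```

Summing the three per-type counts should match the single-pass total, which makes for a quick consistency check before trusting figures in the hundreds of millions.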