[josm-dev] Handling large files?

Petr Nejedly Petr.Nejedly at Sun.COM
Mon Nov 24 18:44:08 GMT 2008


Michael Speth wrote:
> Greetings,
>   I am trying to read OSM data from a large file but am unable to (the
> process keeps getting killed).  I have downloaded an OSM file from
> http://downloads.cloudmade.com/.  Uncompressed, it's about 3.2 GB.

That would be around 22 million nodes, right? You'll need close to 1 GB
of memory just to reasonably represent only the nodes in Java.
If you have that kind of memory (that is, realistically, 2 GB+ RAM)
available for the task, we can talk about an in-memory approach...
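
Rough back-of-envelope, assuming a fairly compact representation of
roughly 45 bytes per node (object header, id, lat/lon, plus the entry
in whatever map holds it):

    22,000,000 nodes * ~45 bytes/node ≈ 1 GB

and that's before you touch ways, relations or tags.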

> 
> Here is the simple code that I wrote to try and parse the OSM file.
>             Main.proj = new Epsg4326();
>             DataSet dataSet = OsmReader.parseDataSet(new FileInputStream(file), null, null);
> 
> This works for small files.
> 
> So should I be using a different parser for large files?  Does OSM handle
A different data model, rather than just a different parser, but still ...
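
Something along these lines would scale: stream the file with plain SAX
and keep only what you actually need, instead of building a whole
DataSet. Rough sketch only (a made-up class of mine, not JOSM's
OsmReader, and untested):

import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class OsmNodeScanner extends DefaultHandler {
    private long nodeCount;

    // Called for every opening tag; we only care about <node lat=".." lon="..">.
    public void startElement(String uri, String localName, String qName,
                             Attributes atts) {
        if ("node".equals(qName)) {
            long id = Long.parseLong(atts.getValue("id"));
            double lat = Double.parseDouble(atts.getValue("lat"));
            double lon = Double.parseDouble(atts.getValue("lon"));
            nodeCount++;
            // push (id, lat, lon) into your database here instead of
            // keeping it on the heap
        }
    }

    public static void main(String[] args) throws Exception {
        OsmNodeScanner handler = new OsmNodeScanner();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new File(args[0]), handler);
        System.out.println("nodes seen: " + handler.nodeCount);
    }
}

Memory use stays flat no matter how big the extract is, because nothing
is retained between elements unless you choose to keep it.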

> large files?  My end goal is to be able to search through OSM data to find a
> road based on the GPS coordinates.  Is there a better way of doing that?  I
> was thinking about putting this information in an embedded database like
> DB4O.
... yes, you'd be better off with a geographically indexed DB; you don't
want to iterate over all the nodes anyway.
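
Even without a real spatial database, a crude grid index already avoids
the full scan. Rough sketch (a toy class of my own, nothing from JOSM):
bucket node ids by lat/lon cell, so a GPS fix only has to look at its
own cell and the eight neighbours:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GridIndex {
    private static final double CELL = 0.01;   // degrees, roughly 1 km

    // packed cell key -> node ids falling into that cell
    private final Map<Long, List<Long>> cells = new HashMap<Long, List<Long>>();

    private static long key(double lat, double lon) {
        long x = (long) Math.floor(lon / CELL);
        long y = (long) Math.floor(lat / CELL);
        return (y << 32) ^ (x & 0xffffffffL);   // pack both cell indices into one long
    }

    public void add(long nodeId, double lat, double lon) {
        Long k = key(lat, lon);
        List<Long> cell = cells.get(k);
        if (cell == null) {
            cell = new ArrayList<Long>();
            cells.put(k, cell);
        }
        cell.add(nodeId);
    }

    // Candidate node ids near (lat, lon): own cell plus the 8 neighbours.
    public List<Long> nearby(double lat, double lon) {
        List<Long> result = new ArrayList<Long>();
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                List<Long> cell = cells.get(key(lat + dy * CELL, lon + dx * CELL));
                if (cell != null) {
                    result.addAll(cell);
                }
            }
        }
        return result;
    }
}

From the candidates you resolve the ways they belong to and compute the
distances to the road segments; or skip all of this and let something
like PostGIS (or any DB with a spatial index) do that part for you.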

-- 
Petr "Nenik" Nejedly, NetBeans/Sun Microsystems, http://www.netbeans.org
355/113 -- Not the famous irrational number PI, but an incredible simulation!



