[GraphHopper] Graph Storage Types and their correct initialization

Jürgen Zornig juergen.zornig at gmail.com
Fri Apr 25 08:44:20 UTC 2014


Hi!

Some spare time to push this forward again.

 > how much RAM do you assign for this process? (the -Xmx setting)

OK, sorry, that seems to be one problem. I had already set -Xmx to 3 GB 
in the NetBeans project properties, but as I now saw in the process 
manager, Java never allocated more than 1 GB. After setting the project's 
main class to my own GraphHopper implementation, instead of running my 
class directly, it now uses the memory as it should.
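For reference, this is effectively the invocation NetBeans runs now; the 
-Xmx value is the 3 GB mentioned above, and the classpath placeholder just 
stands for whatever the project build produces:

    java -Xmx3g -cp <project classpath> com.graphhopper.GipHopperIdf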

 > What version of GraphHopper do you use?

I forked your repo on GitHub a couple of months ago; it's a snapshot of 
0.3. I think I should fetch some of your commits, but most of the basic 
things I use should work in this snapshot.

 > do you apply some specific configuration or customization to the 
LocationIndexTree?

I copied and modified mostly from OSMReader.java for my IdfReader and 
from GraphHopper.java for my own GipHopperIdf implementation. The 
modifications are mostly simplifications: I removed all parts related to 
Contraction Hierarchies, for example, because I want to build a router 
that can change the graph dynamically at runtime. The LocationIndex is an 
instance of LocationIndexTree with a snap resolution set to 500 (just 
like in GraphHopper.java), roughly as sketched below. The LocationIndex 
is prepared and flushed to disk properly, by the way.
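To make that concrete, this is roughly how my GipHopperIdf builds the 
index. The class name is only for the sketch, and the method names are 
taken from the 0.3 snapshot as I read it in GraphHopper.java, so please 
treat them as assumptions rather than a confirmed recipe:

    import com.graphhopper.storage.GraphHopperStorage;
    import com.graphhopper.storage.index.LocationIndex;
    import com.graphhopper.storage.index.LocationIndexTree;

    public class IndexSketch
    {
        // called once after the graph has been filled
        static LocationIndex buildIndex( GraphHopperStorage graph )
        {
            LocationIndexTree tmpIndex = new LocationIndexTree(graph, graph.getDirectory());
            tmpIndex.setResolution(500);   // the snap resolution mentioned above, in meters
            if (!tmpIndex.loadExisting())  // reuse index files already in the directory
                tmpIndex.prepareIndex();   // otherwise build the in-memory tree and store it
            tmpIndex.flush();              // persist the index files to the graph directory
            return tmpIndex;
        }
    }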

RAM and RAM_STORE keep crashing exactly when I call graph.flush(), with 
the following ArrayIndexOutOfBoundsException in RAMDataAccess. If I don't 
call flush() I can actually route on the graph, but then it is not stored 
on disk for later use and I have to keep everything in memory:

2014-04-25 10:13:58,736 [main] INFO  reader.idf.IdfReader - Read 1207004 
records which equals 1207004 rows as expected
2014-04-25 10:13:58,736 [main] INFO  reader.idf.IdfReader - INTREST 
import took 13 seconds.
2014-04-25 10:14:00,009 [main] INFO storage.index.LocationIndexTree - 
location index created in 1.2706958s, size:1 223 484, leafs:265 528, 
precision:500, depth:4, entries:[64, 64, 64, 4], entriesPerLeaf:4.60774
2014-04-25 10:14:00,029 [main] INFO  reader.idf.IdfReader - Location 
Index built in 1 seconds.
2014-04-25 10:14:00,029 [main] INFO  reader.idf.IdfReader - Total 
seconds: 14
2014-04-25 10:14:00,029 [main] INFO com.graphhopper.GipHopperIdf - 
optimizing ... (totalMB:562, usedMB:457)
2014-04-25 10:14:00,030 [main] INFO com.graphhopper.GipHopperIdf - 
finished optimize (totalMB:562, usedMB:457)
2014-04-25 10:14:00,030 [main] INFO com.graphhopper.GipHopperIdf - 
flushing graph GraphHopperStorage|car|RAM_STORE|,,,,, 
details:edges:1 207 004(36), nodes:1 034 868(11), name: - (10), 
geo:4(0), bounds:46.3957092,49.0165,9.4787003,17.1566185, totalMB:562, 
usedMB:457)
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
     at 
com.graphhopper.storage.RAMDataAccess.setBytes(RAMDataAccess.java:213)
     at 
com.graphhopper.storage.StorableProperties.flush(StorableProperties.java:74)
     at 
com.graphhopper.storage.GraphHopperStorage.flush(GraphHopperStorage.java:1309)
     at com.graphhopper.GipHopperIdf.flush(GipHopperIdf.java:358)
     at com.graphhopper.GipHopperIdf.process(GipHopperIdf.java:159)
     at com.graphhopper.GipHopperIdf.importOrLoad(GipHopperIdf.java:137)
     at com.graphhopper.GipHopperIdf.main(GipHopperIdf.java:42)


 > Are you calling create() before you use the DataAccess files?

No, I don't... it seems you've found my second major error here. I 
inspected OSMReader to see how to do that. So when I have 1034868 nodes, 
does that mean I have to create() the GraphStorage with a byteCount of at 
least 1034868 / 50 = 20698?

I am adding nodes to my graph while reading my IDF/CSV file, so I only 
know how many nodes I have once I am completely through it. So I guess 
there is no way to make the GraphStorage increase its byteCount 
automatically while creating nodes, and I have to do an intermediate pass 
over my file first before filling the GraphStorage (which is absolutely 
OK, I just want to know whether I chose the right strategy)? A sketch of 
the flow I mean follows below.
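Just so we talk about the same thing, here is the two-pass flow I have in 
mind, following the life cycle from the GraphHopperStorage javadocs you 
quoted below. The constructor signature, the encoder string and the 
rows/50 guess are my assumptions from the 0.3 snapshot, not a confirmed 
recipe:

    import com.graphhopper.routing.util.EncodingManager;
    import com.graphhopper.storage.GraphHopperStorage;
    import com.graphhopper.storage.RAMDirectory;

    public class IdfImportSketch
    {
        public static void main( String[] args )
        {
            // (1)+(2) object creation and configuration: RAM_STORE means a
            // RAMDirectory with store=true so that flush() later persists the files
            GraphHopperStorage graph = new GraphHopperStorage(
                    new RAMDirectory("graph-location", true), new EncodingManager("car"));

            // (3) call create() BEFORE any setNode()/edge() call, with a first
            // rough byteCount guess (the rows/50 estimate from above)
            long expectedNodes = 1034868;      // counted in a first pass over the file
            graph.create(Math.max(expectedNodes / 50, 2000));

            // (4) usage: fill nodes and edges while parsing the IDF/CSV file
            graph.setNode(0, 48.2082, 16.3738);
            graph.setNode(1, 48.2100, 16.3700);
            graph.edge(0, 1, 350, true);       // distance in meters, both directions

            // (5) flush() writes the DataAccess files to disk, (6) close() releases them
            graph.flush();
            graph.close();
        }
    }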

Thanks a lot for helping me understand... again ;)

best, Jürgen

-----------------------------------------------------------------------
On 2014-04-24 10:19, Peter K wrote:
> Hey Jürgen,
>
> how much RAM do you assign for this process? (the -Xmx setting)
> What version of GraphHopper do you use?
> And do you apply some specific configuration or customization to the 
> LocationIndexTree?
>
> > I am already aware that I should not flush() the graph when I use
> > RAM or RAM_STORE (because everything is kept in memory, I guess?).
> > When I use MMAP, the reader flushes after every 100,000 items read
> > and after finishing reading.
>
> I think flush is only necessary at the end even for MMAP.
>
>
> > I am already aware that I should not flush() the graph when I use
> > RAM or RAM_STORE (because everything is kept in memory, I guess?).
>
> You should still flush at the end. When you use 'RAM', simply nothing
> happens.
>
> But when you use RAM_STORE, 'flush' writes to disk. The next time you
> start GH it can then avoid parsing and will just load the data from
> disk into memory.
>
>
> > Exception in thread "main" java.lang.NullPointerException
> >    at 
> com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:176)
>
> Are you calling create() before you use the DataAccess files?
> Or what is 'null' on line 176? Would you point me to the GitHub source
> for it?
>
> > First I have to say I'm sorry that I am not really totally focused
> > on this,
>
> No problem. Good to have some other use cases to make it robust and
> more user-friendly.
>
> > initialize GraphStorage
>
> Please see the javadocs of GraphHopperStorage:
> Life cycle: (1) object creation, (2) configuration via setters & 
> getters, (3) create or loadExisting, (4) usage, (5) flush, (6) close
>
> Regards,
> Peter.
>
>
>> Hi Peter,
>>
>> I can't get my head around the storage types in GraphHopper and how
>> to treat them correctly. I have my own little IDF file reader which
>> works perfectly with small graphs. Although there is still much work
>> to do (writing tests, refactoring some classes, actual testing), it
>> reads my IDF mockup graphs and then routes correctly.
>>
>> But when I take a larger amount of data (a graph of Austria with
>> approx. 1 million nodes and 1.2 million links), my 4 GB, 4-core Linux
>> notebook runs out of memory when I use DAType.RAM_STORE:
>>
>> 2014-04-23 20:57:20,221 [main] INFO  reader.idf.IdfReader - Starting 
>> to read nodes!
>> 2014-04-23 20:57:24,281 [main] INFO  reader.idf.IdfReader - Read 
>> 1034868 records which equals 1034868 rows as expected
>> 2014-04-23 20:57:24,281 [main] INFO  reader.idf.IdfReader - Graph has 
>> 1034868 nodes.
>> 2014-04-23 20:57:24,281 [main] INFO  reader.idf.IdfReader - Starting 
>> to read links!
>> 2014-04-23 20:57:36,024 [main] INFO  reader.idf.IdfReader - Read 
>> 1207004 records which equals 1207004 rows as expected
>> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead 
>> limit exceeded
>>     at 
>> com.graphhopper.storage.index.LocationIndexTree$InMemTreeEntry.<init>(LocationIndexTree.java:844)
>>     at 
>> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:428)
>>     at 
>> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)
>>     at 
>> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)
>>     at 
>> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)
>>
>> When I use DAType.MMAP I get a NullPointerException immediately after 
>> inserting the very first node, while creating the "newByteBuffer" in 
>> MMapDataAccess.java:
>> 2014-04-23 21:18:27,264 [main] INFO  reader.idf.IdfReader - Starting 
>> to read nodes!
>> Exception in thread "main" java.lang.NullPointerException
>>     at 
>> com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:176)
>>     at 
>> com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:150)
>>     at 
>> com.graphhopper.storage.MMapDataAccess.incCapacity(MMapDataAccess.java:103)
>>     at 
>> com.graphhopper.storage.GraphHopperStorage.ensureNodeIndex(GraphHopperStorage.java:261)
>>     at 
>> com.graphhopper.storage.GraphHopperStorage.setNode(GraphHopperStorage.java:232)
>>     at com.graphhopper.reader.idf.IdfReader.loadGraph(IdfReader.java:237)
>>     at 
>> com.graphhopper.reader.idf.IdfReader.doIdf2Graph(IdfReader.java:102)
>>     at com.graphhopper.GipHopperIdf.importINTREST(GipHopperIdf.java:200)
>>     at com.graphhopper.GipHopperIdf.process(GipHopperIdf.java:175)
>>     at com.graphhopper.GipHopperIdf.importOrLoad(GipHopperIdf.java:159)
>>     at com.graphhopper.GipHopperIdf.main(GipHopperIdf.java:42)
>>
>>
>> I am already aware that I should not flush() the graph when I use
>> RAM or RAM_STORE (because everything is kept in memory, I guess?).
>> When I use MMAP, the reader flushes after every 100,000 items read
>> and after finishing reading. In this example, where I load all nodes
>> and links of Austria, no graph files are created in the
>> graph-location folder.
>> First I have to say I'm sorry that I am not really totally focused on
>> this, because I develop this reader as a hobby project while doing
>> ten others. But I have done a lot of debugging and research, and I
>> think I am completely stuck at the moment, which is possibly a result
>> of my incomplete understanding of how to initialize a GraphStorage
>> correctly.
>>
>> Any hints appreciated.
>>
>> best regards,
>>
>> Jürgen
>>
>>
>> _______________________________________________
>> GraphHopper mailing list
>> GraphHopper at openstreetmap.org
>> https://lists.openstreetmap.org/listinfo/graphhopper
>
>
> -- 
> GraphHopper.com - Fast & Flexible Road Routing
>
>
> _______________________________________________
> GraphHopper mailing list
> GraphHopper at openstreetmap.org
> https://lists.openstreetmap.org/listinfo/graphhopper
