[GraphHopper] Graph Storage Types and their correct initialization

Peter K peathal at yahoo.de
Thu Apr 24 08:19:00 UTC 2014


Hey Jürgen,

How much RAM do you assign to this process (the -Xmx setting)?
Which version of GraphHopper do you use?
And do you apply any specific configuration or customization to the
LocationIndexTree?
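
If you are unsure how much the process really gets, a quick,
GraphHopper-independent check is to print the maximum heap from inside
your importer:

// prints the maximum heap the JVM was started with, i.e. the effective -Xmx
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}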

> I am already aware that I should not flush() the graph when I use RAM
> or RAM_STORE (because everything is kept in memory, I guess?).
> When I use MMAP, the reader flushes after every 100000 items read and
> again after finishing.

I think flush() is only necessary at the end, even for MMAP.


> I am already aware that I should not flush() the graph when I use RAM
> or RAM_STORE (because everything is kept in memory, I guess?).

You should still flush at the end. When you use 'RAM', flush() simply
does nothing.

But when you use RAM_STORE, flush() writes the data to disk. The next
time you start GraphHopper it can then skip parsing the input and just
load the data from disk into memory.
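
In code the three flavours roughly map to these Directory
implementations (an untested sketch; please check the constructors
against the GraphHopper version you use):

import com.graphhopper.storage.Directory;
import com.graphhopper.storage.MMapDirectory;
import com.graphhopper.storage.RAMDirectory;

public class DirectoryChoices {
    public static void main(String[] args) {
        // 'RAM': in-memory only, flush() of a graph on top of it does nothing
        Directory ramOnly = new RAMDirectory("graph-location", false);

        // 'RAM_STORE': in-memory while running, but flush() persists to disk
        // so the next start can loadExisting() instead of re-importing
        Directory ramStore = new RAMDirectory("graph-location", true);

        // 'MMAP': memory-mapped files, for graphs that do not fit into -Xmx
        Directory mmap = new MMapDirectory("graph-location");
    }
}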


> Exception in thread "main" java.lang.NullPointerException
>    at
> com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:176)

Are you calling create() before you use the DataAccess files?
Or what is 'null' on line 176? Would you point me to the GitHub source
for it?
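
The pattern I have in mind for the DataAccess objects is roughly the
following (a sketch only; 'somefile' is just an example name):

import com.graphhopper.storage.DataAccess;
import com.graphhopper.storage.MMapDirectory;

public class DataAccessCreateSketch {
    public static void main(String[] args) {
        DataAccess da = new MMapDirectory("graph-location").find("somefile");

        // create() or loadExisting() has to come before any read or write,
        // otherwise the underlying byte buffers are never mapped
        da.create(1000);

        da.setInt(0, 42);
        System.out.println(da.getInt(0));

        da.flush();
        da.close();
    }
}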

> First I have to say I'm sorry that I am not totally focused
> on this,

No problem. It is good to have some other use cases to make it more
robust and user-friendly.

> initialize GraphStorage

Please see the javadocs of GraphHopperStorage:
Life cycle: (1) object creation, (2) configuration via setters &
getters, (3) create or loadExisting, (4) usage, (5) flush, (6) close
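
In code that life cycle looks roughly like this (an untested sketch;
double check the exact constructor arguments against the GraphHopper
version you use):

import com.graphhopper.routing.util.EncodingManager;
import com.graphhopper.storage.GraphHopperStorage;
import com.graphhopper.storage.RAMDirectory;

public class LifeCycleSketch {
    public static void main(String[] args) {
        // (1) object creation and (2) configuration: RAM_STORE directory, car encoder
        EncodingManager em = new EncodingManager("car");
        GraphHopperStorage graph = new GraphHopperStorage(
                new RAMDirectory("graph-location", true), em);

        // (3) create a fresh graph or load a previously flushed one
        if (!graph.loadExisting())
            graph.create(1000);

        // (4) usage, e.g. from your IdfReader
        graph.setNode(0, 48.2082, 16.3738);
        graph.setNode(1, 48.3069, 14.2858);
        graph.edge(0, 1, 180000, true);

        // (5) flush persists the graph (for RAM_STORE), (6) close frees resources
        graph.flush();
        graph.close();
    }
}

Swapping the RAMDirectory for an MMapDirectory should not change the
rest of the life cycle.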

Regards,
Peter.


> Hi Peter,
>
> I can't get my head around the storage types in GraphHopper and how to
> treat them correctly. I have my own little IDF file reader which works
> perfectly with small graphs. Although there is still much work to do
> (writing tests, refactoring some classes, actual testing), it reads my
> IDF mockup graphs and routes correctly.
>
> But when I take a larger amount of data (a graph of Austria with approx.
> 1 million nodes and 1.2 million links), my 4GB 4-core Linux notebook
> runs out of memory when I use DAType.RAM_STORE:
>
> 2014-04-23 20:57:20,221 [main] INFO  reader.idf.IdfReader - Starting
> to read nodes!
> 2014-04-23 20:57:24,281 [main] INFO  reader.idf.IdfReader - Read
> 1034868 records which equals 1034868 rows as expected
> 2014-04-23 20:57:24,281 [main] INFO  reader.idf.IdfReader - Graph has
> 1034868 nodes.
> 2014-04-23 20:57:24,281 [main] INFO  reader.idf.IdfReader - Starting
> to read links!
> 2014-04-23 20:57:36,024 [main] INFO  reader.idf.IdfReader - Read
> 1207004 records which equals 1207004 rows as expected
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead
> limit exceeded
>     at
> com.graphhopper.storage.index.LocationIndexTree$InMemTreeEntry.<init>(LocationIndexTree.java:844)
>     at
> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:428)
>     at
> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)
>     at
> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)
>     at
> com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)
>
> When I use DAType.MMAP I get a NullPointerException immediately after
> inserting the very first node, in newByteBuffer() in
> MMapDataAccess.java:
> 2014-04-23 21:18:27,264 [main] INFO  reader.idf.IdfReader - Starting
> to read nodes!
> Exception in thread "main" java.lang.NullPointerException
>     at
> com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:176)
>     at
> com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:150)
>     at
> com.graphhopper.storage.MMapDataAccess.incCapacity(MMapDataAccess.java:103)
>     at
> com.graphhopper.storage.GraphHopperStorage.ensureNodeIndex(GraphHopperStorage.java:261)
>     at
> com.graphhopper.storage.GraphHopperStorage.setNode(GraphHopperStorage.java:232)
>     at com.graphhopper.reader.idf.IdfReader.loadGraph(IdfReader.java:237)
>     at
> com.graphhopper.reader.idf.IdfReader.doIdf2Graph(IdfReader.java:102)
>     at com.graphhopper.GipHopperIdf.importINTREST(GipHopperIdf.java:200)
>     at com.graphhopper.GipHopperIdf.process(GipHopperIdf.java:175)
>     at com.graphhopper.GipHopperIdf.importOrLoad(GipHopperIdf.java:159)
>     at com.graphhopper.GipHopperIdf.main(GipHopperIdf.java:42)
>
>
> I am already aware that I should not flush() the graph when I use RAM
> or RAM_STORE (because everything is kept in memory, I guess?). When I
> use MMAP, the reader flushes after every 100000 items read and again
> after finishing. In this example, where I load all nodes and links of
> Austria, no graph files are created in the graph-location folder.
>
> First I have to say I'm sorry that I am not totally focused on this,
> because I develop this reader as a hobby project while doing ten
> others. But I did a lot of debugging and research, and I think I am
> completely stuck at the moment, which is possibly the result of my
> incomplete understanding of how to initialize a GraphStorage correctly.
>
> Any hints appreciated.
>
> best regards,
>
> Jürgen
>
>
> _______________________________________________
> GraphHopper mailing list
> GraphHopper at openstreetmap.org
> https://lists.openstreetmap.org/listinfo/graphhopper


-- 
GraphHopper.com - Fast & Flexible Road Routing
