<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hi!<br>
<br>
I finally found some spare time to push this forward again.<br>
<br>
> how much RAM do you assign for this process? (the -Xmx setting)<br>
<br>
Ok, sorry, that seems to be one problem. I had already set -Xmx to 3 GB in
the NetBeans project properties, but as I saw now in the process manager,
Java never allocated more than 1 GB. After setting the main class of the
project to my own GraphHopper implementation instead of running my class
directly, it now uses the memory as it should.<br>
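<br>
For the record, a quick sanity check like this at startup shows whether the
-Xmx setting actually reaches the JVM (plain JDK call, nothing
GraphHopper-specific):<br>
<pre><font face="Courier New, Courier, monospace">public class HeapCheck {
    // Prints the maximum heap the JVM will use; with -Xmx3g this should
    // report roughly 3000 MB, with the default it stays much lower.
    public static void main(String[] args) {
        long maxMB = Runtime.getRuntime().maxMemory() / (1024L * 1024L);
        System.out.println("JVM max heap: " + maxMB + " MB");
    }
}</font></pre>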
<br>
> What version of GraphHopper do you use?<br>
<br>
I forked your repo on GitHub a couple of months ago; it's a snapshot
of 0.3. I think I should fetch some of your commits, but most of the basic
things I use should work in this snapshot.<br>
<br>
> do you apply some specific configuration or customization to
the LocationIndexTree?<br>
<br>
I copied and modified mostly from OSMReader.java for my IdfReader
and from GraphHopper.java for my own GipHopperIdf implementation.
My modifications are mostly simplifications. For example, I removed all
parts related to Contraction Hierarchies, because I want to
create a router that can change the graph dynamically at
runtime. The LocationIndex is an instance of LocationIndexTree with
a snapResolution set to 500 (just like in GraphHopper.java). The
LocationIndex is prepared and flushed to disk properly, by the way.<br>
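<br>
In essence the index setup is nothing more than this (a rough sketch from
memory of my 0.3 snapshot, so constructor and method names may be slightly
off):<br>
<pre><font face="Courier New, Courier, monospace">// Build the location index over the already filled graph storage.
// "graph" and "dir" are my storage and its Directory (sketch only).
LocationIndexTree index = new LocationIndexTree(graph, dir);
index.setResolution(500);  // the snapResolution mentioned above
index.prepareIndex();      // build the in-memory tree
index.flush();             // persist it to the graph-location folder</font></pre>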
<br>
RAM and RAM_STORE keep crashing as soon as I call graph.flush(), with
the following ArrayIndexOutOfBoundsException in the RAMDataAccess
provider. If I don't call flush(), I can actually route on the graph,
but it is not stored on disk for later use and I have to keep
everything in memory:<br>
<small><small><font face="Courier New, Courier, monospace"><br>
2014-04-25 10:13:58,736 [main] INFO reader.idf.IdfReader -
Read 1207004 records which equals 1207004 rows as expected<br>
2014-04-25 10:13:58,736 [main] INFO reader.idf.IdfReader -
INTREST import took 13 seconds.<br>
2014-04-25 10:14:00,009 [main] INFO
storage.index.LocationIndexTree - location index created in
1.2706958s, size:1 223 484, leafs:265 528, precision:500,
depth:4, entries:[64, 64, 64, 4], entriesPerLeaf:4.60774<br>
2014-04-25 10:14:00,029 [main] INFO reader.idf.IdfReader -
Location Index built in 1 seconds.<br>
2014-04-25 10:14:00,029 [main] INFO reader.idf.IdfReader -
Total seconds: 14<br>
2014-04-25 10:14:00,029 [main] INFO
com.graphhopper.GipHopperIdf - optimizing ... (totalMB:562,
usedMB:457)<br>
2014-04-25 10:14:00,030 [main] INFO
com.graphhopper.GipHopperIdf - finished optimize (totalMB:562,
usedMB:457)<br>
2014-04-25 10:14:00,030 [main] INFO
com.graphhopper.GipHopperIdf - flushing graph
GraphHopperStorage|car|RAM_STORE|,,,,,
details:edges:1 207 004(36), nodes:1 034 868(11), name: -
(10), geo:4(0),
bounds:46.3957092,49.0165,9.4787003,17.1566185, totalMB:562,
usedMB:457)<br>
Exception in thread "main"
java.lang.ArrayIndexOutOfBoundsException: 0<br>
at
com.graphhopper.storage.RAMDataAccess.setBytes(RAMDataAccess.java:213)<br>
at
com.graphhopper.storage.StorableProperties.flush(StorableProperties.java:74)<br>
at
com.graphhopper.storage.GraphHopperStorage.flush(GraphHopperStorage.java:1309)<br>
at
com.graphhopper.GipHopperIdf.flush(GipHopperIdf.java:358)<br>
at
com.graphhopper.GipHopperIdf.process(GipHopperIdf.java:159)<br>
at
com.graphhopper.GipHopperIdf.importOrLoad(GipHopperIdf.java:137)<br>
at com.graphhopper.GipHopperIdf.main(GipHopperIdf.java:42)</font></small></small><br>
<br>
<br>
> Are you calling create() before you use the DataAccess files?<br>
<br>
No, I don't... it seems you've found my second major error here. I
inspected OSMReader to see how to do that. So when I have 1034868
nodes, does that mean I have to create the GraphStorage with a
byte count of at least 1034868 / 50 = 20698? <br>
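<br>
In other words something like this (just a sketch of what I have in mind; I
am assuming create() takes an initial size hint the way OSMReader uses it,
not an exact value):<br>
<pre><font face="Courier New, Courier, monospace">// Give the storage a rough initial size hint before adding any nodes.
// The divisor 50 is simply the heuristic I copied from OSMReader.
long expectedNodes = 1034868;
long initialSize = Math.max(expectedNodes / 50, 100);
graph.create(initialSize);  // must happen before the first setNode() call</font></pre>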
<br>
I am adding nodes to my graph while reading my IDF/CSV file. I only
know how many nodes I have once I am completely through. So I guess
there is no way to make GraphStorage increase the byte count
automatically while creating nodes, and I have to do an intermediate
read of my file first before filling the GraphStorage (which is
absolutely ok, I just want to know if I have chosen the right strategy)?<br>
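<br>
What I mean is roughly this two-pass approach (sketch only; countNodes()
and readInto() are hypothetical helpers of my IdfReader, not existing
GraphHopper API):<br>
<pre><font face="Courier New, Courier, monospace">// Pass 1: only count records to learn the node total (hypothetical helper)
long nodeCount = idfReader.countNodes(idfFile);

// Size and create the storage before the real import
graph.create(Math.max(nodeCount / 50, 100));

// Pass 2: actually fill the graph with nodes and links (hypothetical helper)
idfReader.readInto(graph, idfFile);

// Only after the import: flush once so RAM_STORE persists to disk
graph.flush();</font></pre>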
<br>
Thanks a lot for helping me understand... again ;)<br>
<br>
best, Jürgen<br>
<br>
-----------------------------------------------------------------------<br>
<div class="moz-cite-prefix">Am 2014-04-24 10:19, schrieb Peter K:<br>
</div>
<blockquote cite="mid:5358C8F4.9010304@yahoo.de" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<div class="moz-cite-prefix">Hey Jürgen,<br>
<br>
how much RAM do you assign for this process? (the -Xmx setting)<br>
What version of GraphHopper do you use?<br>
And do you apply some specific configuration or customization to
the LocationIndexTree?<br>
<br>
> I am already aware that I should not flush() the graph
when I use RAM or RAM_STORE (because everything is kept in
memory, I guess?). <br>
> When I use MMAP, the reader flushes after every 100,000
items read and after finishing reading.<br>
<br>
I think flush() is only necessary at the end, even for MMAP.<br>
<br>
<br>
> I am already aware that I should not flush() the graph
when I use RAM or RAM_STORE (because everything is kept in
memory, I guess?).<br>
<br>
You should still flush at the end. When you use 'RAM', simply
nothing happens. <br>
<br>
But when you use RAM_STORE, 'flush' writes to disk. The next time
you start GH it can then avoid parsing and will just load the
data from disk into memory.<br>
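<br>
Roughly like this (sketch only, check GraphHopperStorage for the exact
signatures):<br>
<pre><font face="Courier New, Courier, monospace">// First run: import, then flush once so RAM_STORE writes the files to disk
graph.create(initialSizeHint);
// ... fill nodes and edges ...
graph.flush();

// Later runs: if the files already exist on disk, skip the whole import
if (graph.loadExisting()) {
    // graph is ready to route on, no parsing needed
}</font></pre>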
<br>
<br>
<font face="Courier New, Courier, monospace"><small><small>>
Exception in thread "main" java.lang.NullPointerException<br>
> at
com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:176)</small></small></font><br>
<br>
Are you calling create() before you use the DataAccess files?<br>
Or what is 'null' on line 176 - could you point to the GitHub
source for it?<br>
<br>
> First I have to say, I'm sorry that I am not totally
focused on this,<br>
<br>
No problem. It's good to have some other use cases to make it robust
and more user-friendly.<br>
<br>
> initialize GraphStorage<br>
<br>
Please see the javadocs of GraphHopperStorage:<br>
Life cycle: (1) object creation, (2) configuration via setters
& getters, (3) create or loadExisting, (4) usage, (5) flush,
(6) close<br>
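<br>
In code that life cycle looks roughly like this (a sketch with assumed
0.3-style constructor arguments - check the Javadoc for the exact
signatures; encodingManager is your EncodingManager instance):<br>
<pre><font face="Courier New, Courier, monospace">// (1) object creation - constructor arguments assumed, may differ in your snapshot
GraphHopperStorage graph = new GraphHopperStorage(
        new RAMDirectory("graph-location", true), encodingManager);
// (2) configuration via setters & getters
// (3) create a new storage or load an existing one from disk
if (!graph.loadExisting())
    graph.create(initialSizeHint);
// (4) usage: add nodes and edges, route, ...
// (5) flush to disk, then (6) close
graph.flush();
graph.close();</font></pre>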
<br>
Regards,<br>
Peter.<br>
<br>
<br>
</div>
<blockquote cite="mid:53581359.4010002@gmail.com" type="cite">
<meta http-equiv="content-type" content="text/html;
charset=ISO-8859-1">
Hi Peter,<br>
<br>
I can't get my head around the storage types in GraphHopper, and how
to treat them correctly. I have my own little IDF file reader
which works perfectly with small graphs. Although there is still
much work to do (writing tests, refactoring some classes, proper
testing), it reads my IDF mock-up graphs and then routes correctly.<br>
<br>
But when I take a larger amount of data (a graph of Austria with
approx. 1 million nodes and 1.2 million links), my 4 GB, 4-core
Linux notebook runs out of memory when I use DAType.RAM_STORE:<br>
<br>
<small><small><font face="Courier New, Courier, monospace">2014-04-23
20:57:20,221 [main] INFO reader.idf.IdfReader - Starting
to read nodes!<br>
</font></small></small><small><small><font face="Courier
New, Courier, monospace">2014-04-23 20:57:24,281 [main]
INFO reader.idf.IdfReader - Read 1034868 records which
equals 1034868 rows as expected<br>
2014-04-23 20:57:24,281 [main] INFO reader.idf.IdfReader
- Graph has 1034868 nodes.<br>
2014-04-23 20:57:24,281 [main] INFO reader.idf.IdfReader
- Starting to read links!<br>
2014-04-23 20:57:36,024 [main] INFO reader.idf.IdfReader
- Read 1207004 records which equals 1207004 rows as
expected<br>
Exception in thread "main" java.lang.OutOfMemoryError: GC
overhead limit exceeded<br>
at
com.graphhopper.storage.index.LocationIndexTree$InMemTreeEntry.<init>(LocationIndexTree.java:844)<br>
at
com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:428)<br>
at
com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)<br>
at
com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)<br>
at
com.graphhopper.storage.index.LocationIndexTree$InMemConstructionIndex.addNode(LocationIndexTree.java:433)</font></small></small><br>
<br>
When I use DAType.MMAP I get a NullPointerException immediately
after inserting the very first node, while the "newByteBuffer" is
created in MMapDataAccess.java:<br>
<font face="Courier New, Courier, monospace"><small><small>2014-04-23
21:18:27,264 [main] INFO reader.idf.IdfReader - Starting
to read nodes!<br>
Exception in thread "main" java.lang.NullPointerException<br>
at
com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:176)<br>
at
com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:150)<br>
at
com.graphhopper.storage.MMapDataAccess.incCapacity(MMapDataAccess.java:103)<br>
at
com.graphhopper.storage.GraphHopperStorage.ensureNodeIndex(GraphHopperStorage.java:261)<br>
at
com.graphhopper.storage.GraphHopperStorage.setNode(GraphHopperStorage.java:232)<br>
at
com.graphhopper.reader.idf.IdfReader.loadGraph(IdfReader.java:237)<br>
at
com.graphhopper.reader.idf.IdfReader.doIdf2Graph(IdfReader.java:102)<br>
at
com.graphhopper.GipHopperIdf.importINTREST(GipHopperIdf.java:200)<br>
at
com.graphhopper.GipHopperIdf.process(GipHopperIdf.java:175)<br>
at
com.graphhopper.GipHopperIdf.importOrLoad(GipHopperIdf.java:159)<br>
at
com.graphhopper.GipHopperIdf.main(GipHopperIdf.java:42)</small></small></font><br>
<br>
<br>
I am already aware that I should not flush() the graph when I
use RAM or RAM_STORE (because everything is kept in memory, I
guess?). When I use MMAP, the reader flushes after every 100,000
items read and after finishing reading. In this example, where I
load all nodes and links of Austria, no graph files are created
in the graph-location folder.<br>
<br>
First I have to say, I'm sorry that I am not totally focused on
this, because I am developing this reader as a hobby project
while doing ten others. But by now I have done a lot of debugging
and research, and I think I am completely stuck at the moment,
which is possibly a result of my incomplete understanding of how
to initialize a GraphStorage correctly.<br>
<br>
Any hints appreciated.<br>
<br>
best regards,<br>
<br>
Jürgen<br>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
GraphHopper mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:GraphHopper@openstreetmap.org">GraphHopper@openstreetmap.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="https://lists.openstreetmap.org/listinfo/graphhopper">https://lists.openstreetmap.org/listinfo/graphhopper</a>
</pre>
</blockquote>
<br>
<br>
<pre class="moz-signature" cols="72">--
GraphHopper.com - Fast & Flexible Road Routing</pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
GraphHopper mailing list
<a class="moz-txt-link-abbreviated" href="mailto:GraphHopper@openstreetmap.org">GraphHopper@openstreetmap.org</a>
<a class="moz-txt-link-freetext" href="https://lists.openstreetmap.org/listinfo/graphhopper">https://lists.openstreetmap.org/listinfo/graphhopper</a>
</pre>
</blockquote>
<br>
</body>
</html>