[OSM-dev] Re: osmeditor2 to Java, and a common Java OSM client library
Nick Hill
nick at nickhill.co.uk
Thu May 25 20:38:01 BST 2006
Hi Ben
Ben Gimpert wrote:
> You're absolutely correct -- the *relative* performance of Ruby will
> never improve. But the same could be said about writing assembly and C.
> C will always be relatively slower than assembly, because of the design
> of the two languages. The important point is that *we don't care*!
> Assembly is only rarely used because it is so difficult to read and
> write, compared to higher level languages like C (2GL's). And with
> contemporary computers, the performance difference is not enough to
> justify "assembly's" development hassle.
I understand that optimising C compilers generally produce faster
binaries than can be written directly in assembly by hand. The
optimising C compiler re-arranges the order of evaluation to minimise
pipeline stalls. I guess the same algorithm could be applied directly
to assembly at the RTL level.
Even assuming a higher-level language is slower, the abstraction and
language differences can often, though not always, lead the programmer
to a better/faster algorithm that offsets the slower execution. This is
certainly the case in particular problem domains when comparing
declarative languages such as Lisp with imperative languages such as C,
Java or Ruby. I haven't considered whether our problem domain falls
into this category.
> If there's one almost-absolute I try to stand by, it's to develop with
> the platform providing: a) the highest level of abstraction, and b) the
> largest set of libraries. (In that order.) I'm just too busy to give a
> shit about absolute performance. (Note, I did not say "algorithm
> performance.")
I don't understand the distinction between algorithm performance and
absolute performance. If what you are executing is an algorithm, which
is surely the case for any computer program, aren't the two the same?
I can think of an example where the selection of Ruby wouldn't be
suitable. Say there is a need to serve data based on mathematical
transformations, and say that transformation is similar to a Mandelbrot
function. Let's say we need to serve 250 such requests per second.
Let's say the average request took 11.2M CPU cycles to process when
written in C and 9Bn CPU cycles when written in Ruby (tests bear this
relationship out as being realistic for a Mandelbrot-type function).
We could service all requests with one 3GHz CPU if written in C. How
many CPUs would we need if written in Ruby?
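The arithmetic works out like this. A quick sketch in Ruby, using the
illustrative figures above (the 3GHz clock, 250 requests/second and the
two per-request cycle counts); `cpus_needed` is just a helper name I've
made up:

```ruby
# Hypothetical capacity calculation: how many CPUs do we need to
# serve the load, given the measured cost per request?
CLOCK_HZ       = 3.0e9   # one 3GHz CPU
REQUESTS_PER_S = 250
CYCLES_C       = 11.2e6  # cycles per request, C implementation
CYCLES_RUBY    = 9.0e9   # cycles per request, Ruby implementation

def cpus_needed(cycles_per_request)
  # Total cycles demanded per second, divided by what one CPU
  # supplies, rounded up to whole CPUs.
  (REQUESTS_PER_S * cycles_per_request / CLOCK_HZ).ceil
end

puts "C:    #{cpus_needed(CYCLES_C)} CPU(s)"    # 2.8e9 cycles/s  -> 1
puts "Ruby: #{cpus_needed(CYCLES_RUBY)} CPU(s)" # 2.25e12 cycles/s -> 750
```

So on those figures the C version fits comfortably on a single CPU,
while the Ruby version would need 750 of them.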
>>Would your suggestion to use Ruby require a complete re-write of the
>>client side?
>
>
> Not really. I'm suggesting Nick use Ruby only because *he's* talking
> about doing a greenfield (clean slate) rewrite. I do not suggest Imi
> (for example) rewrite the lovely JOSM as ROSM. Though I'd be happy if
> he did... ;)
Do you think there is a good argument for building a common code base
and data representation for client-side programs in Java?
>>Could we deploy a Ruby implementation conveniently as an applet in web
>>browsers with reasonable performance like we can for Java?
>
>
> If we're going with the heavy client model, then the answer is "no."
> Nothing compares to the convenience of Flash and Java applets. If we're
> talking about a thin client (AJAX-y), Ruby's perfect.
>
>>Could we make statically linked binary distributions of such a ruby
>>client side for various platforms (point and click to run), along with
>>adequate performance to work on moderate (say 500Mhz/256M) platforms
>>like we will be able to for free java?
>
>
> Free Java's performance is a joke. I genuinely wish this were not the
> case -- because I love what the GNU CLASSPATH people are doing -- but
> the native output of gcj doesn't hold a candle to Sun's JIT-ed JVM
> bytecode. (If you don't believe me, try a natively compiled Eclipse...
> It's almost funny.)
I guess that is an issue with the implementation you used. If you
install Ubuntu Dapper, enable universe, then install Eclipse, it will
install bytecode Eclipse, with Eclipse's SWT compiled against GTK,
running on the GIJ virtual machine. GIJ performs just-in-time
compilation of the same code as GCC.
You will find that on a P3-500 with 500M RAM, performance is good. Not
blistering, but certainly no slouch, and certainly usable. Everything
else being equal, if the code were pre-compiled, it *should* be faster.
By compiling classes into ELF-linked objects, the symbol tables are
smaller, so they take less memory as well. Perhaps the implementation
you used re-compiled classes each time it looked for them, or something
similarly insane.
I have seen benchmarks of GCC-compiled binaries built from Java source.
They perform comparably to Sun's HotSpot-enabled VM.
>
> Oh, and why do we want statically linked binaries anyway? Ruby's
> interpreter works on the AST so it doesn't really get you more than code
> obfuscation, which is stupid for Free software anyway. Or maybe we want
> the convenience of not requiring a hundred supporting downloads --
> library JAR's or interpreters / VM's.
If we made a statically linked binary, the end user would not need a VM
and would not need JARs. They would simply have a piece of code to run:
just click on the EXE. No external dependencies; the code is genuine
native code.
>
> There *are* language features at a higher level than Ruby provides,
> which I hope to someday use prolificly, but the library support hasn't
> kept up. I'm more a "high level" zealot than a Ruby zealot.
Perhaps we won't need c=d+e :-)
Nick. Computer: End program.
Computer: The holodeck is unstable. Cannot comply.