[Openstreetmap] Re: [Openstreetmap-dev] OSM's Schema - moving it forwards.

Ben Gimpert ben at somethingmodern.com
Thu Dec 1 10:25:37 GMT 2005


On Thu, Dec 01, 2005 at 12:25:10AM +0100, Tommy Persson wrote:
> And what is the problem with using well written and tested libraries?
> The code to parse it using the library is usually pretty trivial.
> 
> Where do you find these buggy XML-parsers?  I have not used them and
> the name of them could be good to know so I can avoid them.

Umm, they're neither well written nor particularly well tested. Take a
look at the bloat-tastic Xerces JAR, just as one example. I've suffered
through a number of projects using it as the data model parser, and we
had numerous problems. Maybe the newer versions are cleaned up, but it's
still almost a megabyte compiled. For just a parser. On a platform that
was originally all about lean web applications (remember applets?).

> And to parse CSV data you usually have to read the manual.  I consider
> it to be a bug if data files are not resistant to extensions of the
> format.  How do you add new fields in a CSV format without breaking old
> files?

CSV parsing does not require a manual. It's in most good, standard
libraries (perl-CSV, csv.rb) and is simple enough to "just use." It
streams well -- line by line -- and it also works fine if you want to
read everything up front (a la JDOM).
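
For what it's worth, here is the kind of "just use it" streaming I mean,
sketched in Java since that's the platform under discussion. The file
name and column layout are made up for illustration, and the naive
split() deliberately ignores quoted fields:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CsvStream {
        public static void main(String[] args) throws IOException {
            // Hypothetical dump file with "lat,lon,name" on each line.
            BufferedReader in = new BufferedReader(new FileReader("nodes.csv"));
            String line;
            while ((line = in.readLine()) != null) {
                // Naive split: fine for simple data, ignores quoted commas.
                String[] fields = line.split(",");
                double lat = Double.parseDouble(fields[0]);
                double lon = Double.parseDouble(fields[1]);
                String name = fields.length > 2 ? fields[2] : "";
                // ... process one row at a time, in constant memory ...
            }
            in.close();
        }
    }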

How do you add new fields to an XML data model, if it breaks every
version of the DTD / Schema out there? (Yeah yeah, it can be an optional
tag, but that leaves different versions of the DTD / Schema floating
around.) What about the prevalence of crappy SAX event parsers that make
assumptions about tag order?
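
To make the SAX point concrete, here is a rough sketch of the defensive
style I'd rather see -- dispatch on the element name instead of counting
events in a fixed order. The element and attribute names are only
loosely modelled on OSM's XML, and the input file name is invented:

    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class NodeHandler extends DefaultHandler {
        // A brittle handler keeps a step counter and assumes <node> and its
        // <tag> children always arrive in one fixed order; this one just
        // looks at each element's name, so reordering or new optional
        // elements don't break it.
        @Override
        public void startElement(String uri, String local, String qName,
                                 Attributes attrs) {
            if ("node".equals(qName)) {
                double lat = Double.parseDouble(attrs.getValue("lat"));
                double lon = Double.parseDouble(attrs.getValue("lon"));
                // ... start a new node ...
            } else if ("tag".equals(qName)) {
                String k = attrs.getValue("k");
                String v = attrs.getValue("v");
                // ... attach the key/value pair wherever it turns up ...
            }
            // Anything else is ignored, so a later schema revision that adds
            // an optional element doesn't kill old parsers.
        }

        public static void main(String[] args) throws Exception {
            SAXParserFactory.newInstance().newSAXParser()
                .parse("planet.osm", new NodeHandler());  // invented file name
        }
    }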

		Ben




