[OSM-dev] How do I build the latest db schema?
Brett Henderson
brett at bretth.com
Sun Jul 8 07:11:29 BST 2007
Frederik Ramm wrote:
> Is the following correct:
>
> (a) if you want to run the rails API against the data, then assuming
> you have freshly loaded the history tables using osmosis, you can simply
>
> insert into current_nodes select * from nodes;
> insert into current_segments select * from segments;
> insert into current_ways select id,user_id,timestamp,visible from ways;
Yes, it should be as simple as that. I'll add this into the --write-mysql
task soon. Please let me know if the above SQL statements are all that
is required.
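If memory serves, the 0.4 schema also keeps way tags and way segments in
separate history tables, so those would need the same treatment. Something
along these lines, although the column lists are from memory and may need
checking against the actual schema:

-- assumes way_tags(id,k,v,version) and way_segments(id,segment_id,sequence_id,version)
insert into current_way_tags select id,k,v from way_tags;
insert into current_way_segments select id,segment_id,sequence_id from way_segments;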
> (b) if you only want to read the "current" tables as if you were
> operating on live data, but don't need write access, then you can
>
> drop table current_nodes; create view current_nodes as select * from nodes;
> drop table current_segments; create view current_segments as select * from segments;
> drop table current_ways; create view current_ways as select id,user_id,timestamp,visible from ways;
I guess that would work, I hadn't thought of that. It should be fine so
long as each entity only has a single version, which is the case when
importing a planet from scratch.
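A quick way to check that assumption before creating the views (an
illustrative query, not something osmosis runs itself):

-- any rows returned mean some ways already have multiple versions,
-- so the views would expose duplicate ids
select id, count(*) from ways group by id having count(*) > 1 limit 10;

The same check applies to the nodes and segments tables.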
> (don't know about the efficiency of MySQL views though.)
>
> Do you plan to make osmosis use the ability to store multiple versions
> of the same object? I.e. when I import the planet file each week and a
> change is detected, the old object is kept and a new version is
> created. This would of course never be the same as on our central
> server because osmosis will lack intermediate changes (and thus, the
> "way" version numbering wold not be the same), but it would be as
> close to the real thing as one can get without downloading each
> individual object history.
Yes, this is the plan. There is a --write-mysql-change task which is
intended to do just that. I've checked it in but haven't tested it yet,
so it will probably have a few kinks to iron out. With the current
planet exports, the --derive-change task can compare two planet files
and generate a change set. This can be fed into the --write-mysql-change
task, which will apply it to an existing database holding the previous
planet's contents and create new history elements for all affected
entities.
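To give an idea of how the pipeline hangs together, the invocation would
look something like the line below. This is untested, the argument names
are my assumptions, and it's worth checking which of the two inputs
--derive-change treats as the newer one:

osmosis --read-xml file="planet-a.osm" --read-xml file="planet-b.osm" \
  --derive-change \
  --write-mysql-change host="localhost" database="osm" user="osm" password="osm"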
I thought about making the --read-mysql-change task read full history
when processing a time interval, but I decided full history wasn't as
important as being able to derive the minimal change set. Extracting
full history also adds a few complications when applying changes, so
I've ignored it for now.
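To illustrate the distinction: for a given interval the minimal change set
only needs the latest row per entity, roughly along these lines (purely
illustrative, with a made-up date range; this isn't what the task actually
runs):

-- latest state of each node touched during the interval
select n.*
from nodes n
join (
  select id, max(timestamp) as latest_ts
  from nodes
  where timestamp between '2007-07-01 00:00:00' and '2007-07-08 00:00:00'
  group by id
) t on n.id = t.id and n.timestamp = t.latest_ts;

Full history would instead return every row in the interval, including all
the intermediate versions.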
The replication tasks may take some time to get working. I'm worried
that the main OSM database will have some unusual data that doesn't
occur on my local database (e.g. nullable fields that I'm relying on).