[OSM-dev] Recording and Playback of camera movement in OSM2World

Ryan Cole rcyoalne at gmail.com
Thu Mar 19 19:12:31 UTC 2015


Hello,

    My name is Ryan Cole, and I am interested in the Recording and Playback
of Camera Movement in the OSM2World project that the OpenStreetMap
community suggested as a project for Google Summer of Code 2015.  I have
been exploring OpenStreetMap, OSM2World, and Planet.osm in order to
construct a general idea of how the program should be designed before I
apply, but I have some questions.  Since a video is simply a sequence
of images, each displayed on screen for a very short period of time,
one after the other, the program needs to render one image per frame
from a given set of co-ordinates.  If the user passed in a beginning
and an ending location in latitude/longitude, a frame rate, and a
duration, the co-ordinates for each frame could be calculated.
Planet.osm could then be queried for the .osm files that correspond to
those co-ordinates, which OSM2World would render into a series of
images.  Finally, the images could be composited into a video for
playback.  Before I ask anything else, are there any glaring problems
with this design that would render it ineffective?
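    To make the design above concrete, here is a rough sketch in plain
Java of how the per-frame co-ordinates could be computed from the
user's input.  The class and method names are invented for
illustration and are not taken from the existing OSM2World code, and
it assumes simple linear interpolation between the start and end
points:

    import java.util.ArrayList;
    import java.util.List;

    public class CameraPath {

        /** an invented container for a latitude/longitude pair */
        public static class LatLon {
            public final double lat;
            public final double lon;
            public LatLon(double lat, double lon) {
                this.lat = lat;
                this.lon = lon;
            }
        }

        /** one camera position per frame, start and end included */
        public static List<LatLon> framePositions(LatLon start, LatLon end,
                double durationSeconds, double fps) {
            int frames = Math.max(1, (int) Math.round(durationSeconds * fps));
            List<LatLon> positions = new ArrayList<>();
            for (int i = 0; i <= frames; i++) {
                double t = (double) i / frames; // 0.0 at the start, 1.0 at the end
                positions.add(new LatLon(
                        start.lat + t * (end.lat - start.lat),
                        start.lon + t * (end.lon - start.lon)));
            }
            return positions;
        }
    }

    Each of those positions would then be rendered to one image, and
the images written out in frame order before being composited into the
video.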
    For the beginning and ending sets of co-ordinates, is there a
particular way that they should be passed in?  On the OpenStreetMap
website, it seems that there are two methods of handling the map
co-ordinates.  The first method is the one displayed by the export
function, in which a maximum and minimum latitude and longitude are given.
The second method is the one shown in the URL, in which a central
latitude, longitude, and what appears to be a zoom level are given,
and from which a maximum and minimum set of co-ordinates could be
calculated.  Is there a preferred method for use in this project?  The
first method would require fewer calculations, but the second would
let the user choose a zoom level and keep it constant throughout the
recording, so that the camera does not drift up and down during
playback.  My second question is about file formats.  OSM2World
clearly supports .osm files, but are .pbf files supported as well?
According to the Planet.osm page on the OpenStreetMap wiki, .pbf files
are smaller to download and faster to process, and the page recommends
using them whenever possible.
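    To illustrate what I mean by the second method, a central point
and zoom level could be converted into the same kind of bounding box
that the export function uses.  The sketch below is plain Java, it
assumes the standard 256-pixel Web Mercator ("slippy map") tiles and a
chosen viewport size in pixels, and none of the names come from
OSM2World:

    public class ZoomToBBox {

        /** returns {minLat, minLon, maxLat, maxLon} */
        public static double[] boundingBox(double centerLat, double centerLon,
                int zoom, int viewportWidthPx, int viewportHeightPx) {
            double scale = 256.0 * Math.pow(2, zoom); // world size in pixels
            // centre of the view in Web Mercator pixel co-ordinates
            double cx = (centerLon + 180.0) / 360.0 * scale;
            double latRad = Math.toRadians(centerLat);
            double cy = (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad))
                    / Math.PI) / 2.0 * scale;
            // half a viewport in each direction; larger y means further south
            double minX = cx - viewportWidthPx / 2.0;
            double maxX = cx + viewportWidthPx / 2.0;
            double minY = cy - viewportHeightPx / 2.0;
            double maxY = cy + viewportHeightPx / 2.0;
            return new double[] {
                    pixelYToLat(maxY, scale), pixelXToLon(minX, scale),
                    pixelYToLat(minY, scale), pixelXToLon(maxX, scale) };
        }

        private static double pixelXToLon(double x, double scale) {
            return x / scale * 360.0 - 180.0;
        }

        private static double pixelYToLat(double y, double scale) {
            double n = Math.PI * (1.0 - 2.0 * y / scale);
            return Math.toDegrees(Math.atan(Math.sinh(n)));
        }
    }

    If that is roughly right, both input methods could share the same
internal bounding-box representation.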
    My final question is about extensions to this particular project.  I
noticed that OpenStreetMap can provide navigation from one location to
another.  Would it be feasible to use this video playback feature to
let the user visualize a driving route from one point to another?  It
seems like a logical and useful addition to the project, but making
sure that the camera representing the vehicle stays on the roads seems
like a daunting task.  Is this a realistic extension, or should the
idea be saved for a future Summer of Code project?  Other possible
extensions are smooth camera movements (e.g. speeding up at the
beginning of the path and slowing down at the end) and a keyframe
system similar to 3D animation tools such as Blender and Maya: the
user would supply several co-ordinates and camera angles as keyframes,
and OSM2World would compute the intermediate frames to produce a video
that follows a smooth path from keyframe to keyframe.
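    To sketch those last two ideas, the snippet below (plain Java
again, with an invented Keyframe type rather than anything from
OSM2World) uses a smoothstep curve for the ease-in/ease-out motion and
applies the same curve when interpolating between user-supplied
keyframes:

    import java.util.List;

    public class KeyframeInterpolator {

        /** an invented keyframe type: a camera position at a point in time */
        public static class Keyframe {
            public final double timeSeconds, lat, lon, heightMetres;
            public Keyframe(double timeSeconds, double lat, double lon,
                    double heightMetres) {
                this.timeSeconds = timeSeconds;
                this.lat = lat;
                this.lon = lon;
                this.heightMetres = heightMetres;
            }
        }

        /** cubic ease-in/ease-out: slow near 0 and 1, fastest in the middle */
        private static double smoothstep(double t) {
            return t * t * (3.0 - 2.0 * t);
        }

        /** camera state at an arbitrary time, given keyframes sorted by time */
        public static Keyframe at(List<Keyframe> keyframes, double time) {
            Keyframe a = keyframes.get(0);
            Keyframe b = keyframes.get(keyframes.size() - 1);
            for (int i = 0; i < keyframes.size() - 1; i++) {
                if (time >= keyframes.get(i).timeSeconds
                        && time <= keyframes.get(i + 1).timeSeconds) {
                    a = keyframes.get(i);
                    b = keyframes.get(i + 1);
                    break;
                }
            }
            double span = b.timeSeconds - a.timeSeconds;
            double t = span == 0 ? 0 : smoothstep((time - a.timeSeconds) / span);
            return new Keyframe(time,
                    a.lat + t * (b.lat - a.lat),
                    a.lon + t * (b.lon - a.lon),
                    a.heightMetres + t * (b.heightMetres - a.heightMetres));
        }
    }

    The straight start-to-end case from my first paragraph would then
simply be the special case of two keyframes.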
    Please provide input on anything I have mentioned here, as well as
any other ideas you may have about this project; everything helps.
Thank you for your time, and I hope to have the opportunity to work
with you this summer.

Ryan Cole