[OSM-dev] Playback of Camera Movement in OSM2World Question

Oleksiy Muzalyev oleksiy.muzalyev at bluewin.ch
Sat Mar 19 09:43:17 UTC 2016


Hi Peter,

I am intrigued. I plan to install OSM2World and follow it.

I approach the problem from another side: I work on using real flights to capture data for 3D mapping and on making it readily available to all mappers.

It is possible to publish 3D aerial images both in a Wikipedia article and in a corresponding Wikimedia Commons category. There is also an OSM tag for referencing such a category.

Here are the aerial images of Collège Madame de Staël in Geneva that I made with the DJI Phantom 3 this week:

https://fr.m.wikipedia.org/wiki/Coll%C3%A8ge_Madame_de_Sta%C3%ABl#

https://commons.m.wikimedia.org/wiki/User:Alexey_M./gallery

An advantage of Wikipedia is that it keeps the original image resolution, so building:levels are clearly visible. Google Maps automatically reduces resolution.

The DJI Phantom 3 has a range of 2.2 km, and the new Phantom 4 has a range of 4.5 km (3 miles). However, the range is also limited by the constant visual contact required by law, so in practice it is about 1 km (with a maximum allowed altitude of 150 meters in this case). That is still enough to cover quite a large area.

The Phantom 4 has increased reliability due to redundant systems, but in any case it is recommended to perform hundreds of training flights before even considering a takeoff in a city.

The DJI Phantom writes coordinates and altitude into the JPEG Exif data automatically, so automation is possible.
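For example, the position can be read back out of each photo with a few lines of Python. This is only a minimal sketch using the Pillow library; the file name is hypothetical:

    from PIL import Image

    GPS_IFD = 0x8825  # Exif pointer to the GPS sub-IFD

    def dms_to_degrees(dms, ref):
        # Exif stores latitude/longitude as (degrees, minutes, seconds) rationals
        deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
        return -deg if ref in ("S", "W") else deg

    exif = Image.open("DJI_0001.JPG").getexif()
    gps = exif.get_ifd(GPS_IFD)
    # GPS tag IDs: 1/2 = latitude ref/value, 3/4 = longitude ref/value, 6 = altitude
    lat = dms_to_degrees(gps[2], gps[1])
    lon = dms_to_degrees(gps[4], gps[3])
    alt = float(gps[6])
    print(lat, lon, alt)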

Best regards
Oleksiy

Sent from my acer Liquid Z630

On Mar 18, 2016 10:11 PM, Peter Barth wrote:
>
> Hi Wilson,
>
> Thank you for your interest in OSM and GSoC.
>
> The project idea you're talking about is actually meant in a
> different way. What we'd like to achieve is a way to generate
> virtual camera flights, i.e. you use the OSM2World GUI to define
> waypoints and direction vectors to look at, i.e. the definition
> of the camera movement. The student's code would try to
> interpolate those positions to generate a smooth trajectory in 3D
> space and afterwards render the images along this
> trajectory. Rendering might either happen directly with OSM2World
> to output PNG images or via POV-Ray. And in the end, ffmpeg/libav
> will take these images and make a video out of the single frames.
>
> Of course there's more to it: the trajectory might be input via
> a GPX file or by defining your own format. Also, the interpolation
> of the camera movement might be as simple as a polyline or
> something more sophisticated such as a B-spline approximation or
> whatever else.
>
> But in the end the details are up to the student. We have a rough
> idea, and we also have an idea of how we would implement it. But we
> want to hear and see your ideas as long as they match our overall
> goal :)
>
> Hope that helps,
> Peda
>
> _______________________________________________
> dev mailing list
> dev at openstreetmap.org
> https://lists.openstreetmap.org/listinfo/dev
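To illustrate the interpolation step Peter describes above: assuming camera waypoints as (x, y, z) triples, a cubic B-spline can be fitted through them and sampled once per frame. This is only a sketch (the waypoint values, frame rate, and file names are made up, and it uses Python/SciPy rather than OSM2World's Java):

    import numpy as np
    from scipy.interpolate import splev, splprep

    # Hypothetical camera waypoints: x, y (metres) and altitude z
    waypoints = np.array([
        [0.0,     0.0, 100.0],
        [200.0, 150.0, 120.0],
        [400.0, 100.0, 150.0],
        [600.0, 300.0, 130.0],
    ])

    # Fit a cubic B-spline through the waypoints (s=0 -> exact interpolation)
    tck, _ = splprep(waypoints.T, s=0, k=3)

    # Sample one camera position per frame: 10 s of video at 25 fps
    u = np.linspace(0.0, 1.0, 10 * 25)
    xs, ys, zs = splev(u, tck)
    # Each (xs[i], ys[i], zs[i]) would be rendered to frame_%04d.png and
    # the frames assembled with e.g.:
    #   ffmpeg -framerate 25 -i frame_%04d.png -c:v libx264 flight.mp4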

