[osg-users] LOD capabilities

Jason Beverage jasonbeverage at gmail.com
Tue Apr 26 08:49:51 PDT 2016


Hi Bruno,

We have a point cloud capability that we've developed here at Pelican
Mapping that integrates nicely with osgEarth.  You can see a video of it
in action, rendering around 5 billion points, here:
https://youtu.be/lUeF4Y8yGNI.  We can easily perform picking on individual
points, so you can get all the information about a point (location, RGB,
intensity, etc.), but we don't have any support for modifying the point
cloud (although that is something we could add).  Editing a giant paged
dataset would take some special consideration, but it's definitely doable.
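
In stock OSG, a simple way to do this kind of picking is
osgUtil::PolytopeIntersector, which intersects a small screen-space box
around the cursor.  A minimal sketch (not our actual implementation;
'viewer', 'mouseX' and 'mouseY' are placeholders supplied by your event
handler):

    #include <osgUtil/PolytopeIntersector>
    #include <osgUtil/IntersectionVisitor>
    #include <osgViewer/Viewer>
    #include <iostream>

    void pickPoint(osgViewer::Viewer& viewer, float mouseX, float mouseY)
    {
        const double w = 5.0;  // half-width of the pick box, in window pixels
        osg::ref_ptr<osgUtil::PolytopeIntersector> picker =
            new osgUtil::PolytopeIntersector(osgUtil::Intersector::WINDOW,
                                             mouseX - w, mouseY - w,
                                             mouseX + w, mouseY + w);
        osgUtil::IntersectionVisitor iv(picker.get());
        viewer.getCamera()->accept(iv);

        if (picker->containsIntersections())
        {
            // primitiveIndex is the index of the picked point in the
            // geometry's vertex array; use it to look up RGB, intensity, etc.
            const osgUtil::PolytopeIntersector::Intersection& hit =
                picker->getFirstIntersection();
            std::cout << "picked point index " << hit.primitiveIndex
                      << std::endl;
        }
    }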

Let me know if you want any more information; I'm happy to chat sometime.

Thanks!

Jason

On Tue, Apr 26, 2016 at 11:28 AM Robert Osfield <robert.osfield at gmail.com>
wrote:

> Hi Bruno,
>
> On 26 April 2016 at 16:04, Bruno Oliveira
> <bruno.manata.oliveira at gmail.com> wrote:
>
>> the tool I'm creating is designed to handle clouds from 100 million
>> points to 1 billion, hence the out-of-core rendering.
>> By "with the already existing code", I mean code from OSG.
>>
>
> 100 million to 1 billion does qualify as a "huge" point cloud.
>
> The issues aren't directly related to the OSG, but to how you manage that
> amount of data on disk, in main memory and on the GPU.  The OSG itself can
> be used to store the data in main memory and on the GPU, and even provides
> mechanisms for paging data in and out from disk (via PagedLOD/DatabasePager).
> You will be responsible for how you set up the scene graph; how you do this
> is the crux of the task, and it's not trivial given the amount of data you
> are dealing with.
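>
> As a rough illustration (only a sketch, with hypothetical file names and
> ranges, not a recipe from the OSG itself), one paged tile could look like
> this: coarse points always resident, plus a finer child that the
> DatabasePager loads from disk when the eye gets close enough:
>
>     #include <osg/PagedLOD>
>     #include <osg/Geode>
>     #include <cfloat>
>
>     // 'coarseGeode' holds a decimated sample of the tile;
>     // "tile_0_children.osgb" is a hypothetical file with the finer data.
>     osg::ref_ptr<osg::PagedLOD> buildTile(osg::Geode* coarseGeode,
>                                           const osg::BoundingSphere& bs)
>     {
>         osg::ref_ptr<osg::PagedLOD> plod = new osg::PagedLOD;
>         plod->setCenter(bs.center());
>         plod->setRadius(bs.radius());
>         // Child 0: coarse geometry, shown when the eye is far away.
>         plod->addChild(coarseGeode, 30000.0f, FLT_MAX);
>         // Child 1: paged in from disk when within 30000 units.
>         plod->setFileName(1, "tile_0_children.osgb");
>         plod->setRange(1, 0.0f, 30000.0f);
>         return plod;
>     }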
>
> When learning what can be done you need to start by doing the maths on how
> much memory each point in your cloud requires.  A single vec3 per point
> for 100 million points requires 1.2 GB; 1 billion requires 12 GB.  When
> passing data to OpenGL you need to put the data into the OpenGL FIFO, which
> then gets copied into driver memory.  If you are using VBOs then you'll
> potentially end up with one copy of the data in application memory, one in
> driver memory, and then, when it finally gets rendered, a copy on the GPU
> too.  This means you'll need at least 2.4 GB of main memory and 1.2 GB on
> the GPU just for 100 million vertices - and that's without anything else.
>
> This is without any colours, normals or textures.  You haven't mentioned
> anything about these, but perhaps you should, as they make a big difference
> to the memory footprint we are talking about.  I'm not expecting that
> you'll just have white points...
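>
> To make the arithmetic concrete (simple sums, not measurements): a
> position is 3 floats = 12 bytes, a colour packed as 4 unsigned bytes is
> 4 bytes, and a float normal is another 12 bytes, so 100 million points
> with positions and packed colours is roughly 1.6 GB per copy.  A sketch
> of a point geometry that keeps colours packed (assuming OSG 3.2+ for the
> setColorArray binding overload):
>
>     #include <osg/Geometry>
>
>     osg::ref_ptr<osg::Geometry> makePoints(osg::Vec3Array* vertices,
>                                            osg::Vec4ubArray* colours)
>     {
>         osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
>         geom->setUseDisplayList(false);         // display lists duplicate data
>         geom->setUseVertexBufferObjects(true);  // stream via VBOs instead
>         geom->setVertexArray(vertices);
>         geom->setColorArray(colours, osg::Array::BIND_PER_VERTEX);
>         geom->addPrimitiveSet(new osg::DrawArrays(GL_POINTS, 0,
>                                                   vertices->getNumElements()));
>         return geom;
>     }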
>
> So when you start asking whether the OSG can do it: you can get it to
> scale to handling multi-terabyte databases thanks to the built-in paging
> support, but only if you give it an appropriately built scene graph - point
> clouds are a niche that doesn't have open source tools that will build the
> database for you.  You need to be realistic about the memory management;
> handling really large amounts of data requires far more skill than just
> rendering a few pretty textured polygons.  I don't know what your
> background knowledge is - what can we assume?
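>
> Pre-building the database typically means writing each tile to its own
> file so that PagedLOD nodes like the one sketched above can pull it in.
> A hypothetical sketch (the naming scheme and the subdivision strategy are
> entirely up to you):
>
>     #include <osgDB/WriteFile>
>     #include <sstream>
>
>     bool writeTile(osg::Node& tile, unsigned int level, unsigned int index)
>     {
>         std::stringstream name;
>         name << "cloud_L" << level << "_" << index << ".osgb";
>         return osgDB::writeNodeFile(tile, name.str());
>     }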
>
> If you don't have the skills right now then you'll need to patiently
> develop them, OR just pay a 3rd party engineer who has the skills to work
> with you on it.
>
> Robert.