[osg-users] RTT Cameras for computer vision simulation
Matt Donahoe
matt at skyd.io
Tue Apr 14 20:37:00 PDT 2015
Hi,
I am looking to use OpenSceneGraph to simulate multi-camera images in real time, but my frame rates are not great with the number of cameras I need. I currently have 6 cameras each rendering to a 640x480 image, and I see ~8 fps at best, with a lot of jitter.
The osg examples have been a great help, but I wanted to ask for general advice before I dig much deeper. Everything I have read indicates that GPU->CPU readback is a very slow operation, but I am wondering whether I am actually maxed out or just doing something dumb.
Here is my setup for N cameras (a code sketch follows the list):
0. Populate the scene with objects
1. Create the N osg::Cameras with a RenderTargetImplementation of FRAME_BUFFER_OBJECT (PIXEL_BUFFER doesn't work for me yet)
2. Create N 640x480 RGB osg::Images and attach one to each camera
3. Add my scene to each camera
4. Add a draw callback to each camera.
5. Add each camera to the root scene.
6. Create a viewer with the root scene.
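In code, each camera is set up roughly like this (a simplified sketch; makeRttCamera and CaptureCallback are just names I am using here, not from any example):

#include <osg/Camera>
#include <osg/Image>

// Final draw callback: fires once this camera has finished drawing.
struct CaptureCallback : public osg::Camera::DrawCallback
{
    virtual void operator()(osg::RenderInfo& /*renderInfo*/) const
    {
        // mark this camera's image as ready for publishing
    }
};

osg::ref_ptr<osg::Camera> makeRttCamera(osg::Node* scene, osg::Image* image)
{
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->setRenderOrder(osg::Camera::PRE_RENDER);
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    camera->setViewport(0, 0, 640, 480);
    camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Backing the color buffer with an osg::Image makes OSG copy the
    // rendered pixels back to CPU memory each frame.
    image->allocateImage(640, 480, 1, GL_RGB, GL_UNSIGNED_BYTE);
    camera->attach(osg::Camera::COLOR_BUFFER, image);

    camera->setFinalDrawCallback(new CaptureCallback);
    camera->addChild(scene);
    return camera;
}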
Now, in a custom main loop, I do the following (again, a sketch follows the list):
0. Read the requested camera poses from a message queue
1. Adjust the pose of each camera
2. Call viewer.frame()
3. Each camera's draw callback fires
4. When all camera callbacks have fired, I read .data() from each Image.
5. Publish the N images in a single message, which gets serialized and sent over UDP to other processes.
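Roughly, in code (poseQueue, Message, and publish stand in for my messaging layer, and this assumes the viewer runs single-threaded so frame() returns only after the draw completes):

while (!viewer.done())
{
    // 0/1. apply the latest requested pose to each camera
    for (size_t i = 0; i < cameras.size(); ++i)
        cameras[i]->setViewMatrix(poseQueue.latestViewMatrix(i)); // placeholder

    // 2/3. render; each camera's final draw callback fires in here
    viewer.frame();

    // 4. the draw has completed, so each image's data() is current
    Message msg;  // placeholder for my message type
    for (size_t i = 0; i < images.size(); ++i)
        msg.addImage(images[i]->data(), images[i]->getTotalSizeInBytes());

    // 5. serialize and send over UDP
    publish(msg); // placeholder
}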
I still need to profile this to find the bottleneck, but my hunch is that I can speed things up by rendering and reading back each camera independently instead of all at once. I am not sure how to do that yet, but I assume "Pixel Buffer Objects" will be part of the answer.
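From what I've read, the pattern PBOs enable at the GL level is an asynchronous readback: glReadPixels into a bound GL_PIXEL_PACK_BUFFER returns immediately and the driver copies the pixels in the background, and you map the buffer a frame later once the transfer has finished. A raw-GL sketch of the idea (not the OSG API, and untested on my side):

// One-time setup: a pixel pack buffer sized for one 640x480 RGB frame.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 640 * 480 * 3, 0, GL_STREAM_READ);

// Frame N: start the GPU->CPU copy. With a PBO bound, glReadPixels
// does not block; the driver transfers the data in the background.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, 0);

// Frame N+1: map the buffer; by now the transfer should be done,
// so the map does not stall the pipeline. Using two PBOs per camera
// in ping-pong fashion keeps the readback fully overlapped.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels)
{
    // copy/publish pixels here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);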
Any suggestions on where to start would be appreciated!
Cheers,
Matt
------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=63398#63398