[osg-users] Multiple cameras sharing multisample fbo

Ravi Mathur ravidavi at utexas.edu
Sat Mar 11 17:06:12 PST 2017

OK, after thinking about this, reading MANY other forum posts, and looking into the osg::Camera code, I've realized that my post title itself was incorrect. Sharing FBOs is not the appropriate approach, since an FBO is internal to a camera's RenderStage, which itself doesn't exist until the cull traversal.

I really should have asked: How can I have a camera render on top of an existing texture, if that camera has a multisample attachment? The answer to that question is much easier to find: use a nested camera that draws a view-aligned quad textured with the desired "background" texture.

Here's the code I'm using for this:


// Create Texture that will be used for rendering
osg::ref_ptr<osg::Texture2D> colorTex = new osg::Texture2D(); // Also perform texture setup, not shown here...

// Create Camera that draws the background scene
osg::ref_ptr<osg::Camera> backCamera = new osg::Camera();
backCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT); // Render to FBO
backCamera->setRenderOrder(osg::Camera::PRE_RENDER, 0); // Render first
backCamera->attach(osg::Camera::COLOR_BUFFER0, colorTex); // Not multisampled

// Create Camera that draws the foreground scene
osg::ref_ptr<osg::Camera> frontCamera = new osg::Camera();
frontCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT); // Render to FBO
frontCamera->setRenderOrder(osg::Camera::PRE_RENDER, 1); // Render after backCamera
frontCamera->setClearMask(GL_DEPTH_BUFFER_BIT); // Don't erase backCamera's results
frontCamera->attach(osg::Camera::COLOR_BUFFER0, colorTex, 0, 0, false, 4, 4); // Multisampled: 4 samples, 4 color samples

// Create view-aligned textured quad
osg::Geometry* geom = osg::createTexturedQuadGeometry(osg::Vec3(), osg::Vec3(1, 0, 0), osg::Vec3(0, 1, 0));
osg::Geode* quad = new osg::Geode;
quad->addDrawable(geom); // Attach the quad geometry to its Geode

// Create Camera that copies colorTex to the current render buffer
osg::ref_ptr<osg::Camera> copyCamera = new osg::Camera();
copyCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF); // Use our own view/projection matrix
copyCamera->setProjectionMatrixAsOrtho2D(0, 1, 0, 1); // Same as textured quad's bounds
copyCamera->addChild(quad); // Draw the textured quad
osg::StateSet* ss = copyCamera->getOrCreateStateSet();
ss->setMode(GL_LIGHTING, osg::StateAttribute::OFF); // Don't need lighting to copy a texture
ss->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF); // Don't want depth test to clip out texture
ss->setTextureAttributeAndModes(0, colorTex, osg::StateAttribute::ON); // Bind colorTex

// Add copy camera as child of frontCamera
copyCamera->setRenderOrder(osg::Camera::NESTED_RENDER, -100); // Render first, within frontCamera's render stage
frontCamera->addChild(copyCamera);

This seems to work correctly. The render order is:
1: backCamera renders its scene directly to colorTex without multisampling
2: frontCamera creates its multisampled renderbuffer
3: copyCamera renders colorTex to frontCamera's renderbuffer via the textured quad
4: frontCamera renders its scene to its renderbuffer
5: frontCamera resolves its renderbuffer to colorTex
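To sanity-check my understanding of why the steps come out in this order, here is a small stdlib-only sketch (not OSG code; the struct and names are hypothetical) of the two ordering rules at work: PRE_RENDER cameras run in ascending order number, and a NESTED_RENDER child with a negative order number draws inside its parent's render stage before the parent's own scene:

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Toy model of a camera: a PRE_RENDER order number, whether its FBO
// attachment is multisampled, and any NESTED_RENDER children (orderNum, name).
struct Cam {
    std::string name;
    int preOrderNum;
    bool multisampled = false;
    std::vector<std::pair<int, std::string>> nested;
};

std::vector<std::string> renderSequence() {
    Cam back{"backCamera", 0, false, {}};
    Cam front{"frontCamera", 1, true, {{-100, "copyCamera"}}};

    // PRE_RENDER cameras run in ascending order number.
    std::vector<Cam> cams{front, back};
    std::sort(cams.begin(), cams.end(),
              [](const Cam& a, const Cam& b) { return a.preOrderNum < b.preOrderNum; });

    std::vector<std::string> seq;
    for (const Cam& c : cams) {
        if (c.multisampled)
            seq.push_back(c.name + ": create multisampled renderbuffer");
        // Nested children with negative order numbers draw before the
        // parent's own scene, inside the parent's render stage.
        auto nested = c.nested;
        std::sort(nested.begin(), nested.end());
        for (const auto& n : nested)
            if (n.first < 0)
                seq.push_back(n.second + ": draw colorTex quad into renderbuffer");
        seq.push_back(c.name + ": render scene");
        if (c.multisampled)
            seq.push_back(c.name + ": resolve renderbuffer to colorTex");
    }
    return seq;
}
```

Running this produces the same five passes listed above, which matches what I observe in practice.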

I can confirm that, using this method, colorTex contains both the background and foreground scenes in the correct order.

So now my question is: is this the correct way to approach the problem of chaining a texture between multisampled cameras?
