<div dir="ltr"><div><div><div><div><br><br></div>I am finding that with the following modification to PixelBufferWin32.cpp I can get my floating point PBuffer easily (no nvidia specific extensions required)<br><br>    fAttribList.push_back(WGL_PIXEL_TYPE_ARB);<br>    if (_traits->red == 32 && _traits->green == 32 && _traits->blue == 32)<br>#define WGL_TYPE_RGBA_FLOAT_ARB 0x21A0<br>        fAttribList.push_back(WGL_TYPE_RGBA_FLOAT_ARB);<br>    else<br>        fAttribList.push_back(WGL_TYPE_RGBA_ARB);<br><br></div>Right now the presence of 32 bit color components in the context traits triggers the use of floating point texture format.<br><br></div>My use case would be fast readback of scientific results from a GLSL shader, performing only off-screen rendering.  I am basing this on the osgscreencapture example.<br><br></div>Christian<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">2016-07-22 14:48 GMT+02:00 Christian Buchner <span dir="ltr"><<a href="mailto:christian.buchner@gmail.com" target="_blank">christian.buchner@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Hi all,<br><br></div>I spent the last 3 hours trying to coerce OSG to give me a floating point pbuffer. Just setting the required bits for color components to 32 bits in the graphicscontext traits isn't working.<br><br></div>Turns out, on nVidia cards you also have to give the WGL_FLOAT_COMPONENTS_NV flag as "true" to get a valid pixel format on Windows. The following code does this:<br><div><br>    std::vector<int> fAttribList;<br><br>    fAttribList.push_back(WGL_SUPPORT_OPENGL_ARB);<br>    fAttribList.push_back(true);<br>    fAttribList.push_back(WGL_PIXEL_TYPE_ARB);<br>    fAttribList.push_back(WGL_TYPE_RGBA_ARB);<br><br>    fAttribList.push_back(WGL_RED_BITS_ARB);<br>    fAttribList.push_back(32);<br>    fAttribList.push_back(WGL_GREEN_BITS_ARB);<br>    fAttribList.push_back(32);<br>    fAttribList.push_back(WGL_BLUE_BITS_ARB);<br>    fAttribList.push_back(32);<br>    fAttribList.push_back(WGL_ALPHA_BITS_ARB);<br>    fAttribList.push_back(32);<br>    fAttribList.push_back(WGL_STENCIL_BITS_ARB);<br>    fAttribList.push_back(8);<br>    fAttribList.push_back(WGL_DEPTH_BITS_ARB);<br>    fAttribList.push_back(24);<br>    fAttribList.push_back(WGL_FLOAT_COMPONENTS_NV);<br>    fAttribList.push_back(true);<br>    fAttribList.push_back(WGL_DRAW_TO_PBUFFER_ARB);<br>    fAttribList.push_back(true);<br>    fAttribList.push_back(WGL_DOUBLE_BUFFER_ARB);<br>    fAttribList.push_back(false);<br><br>    fAttribList.push_back(0);<br><br>    unsigned int nformats = 0;<br>    int format;<br>    WGLExtensions* wgle = WGLExtensions::instance();<br>    wgle->wglChoosePixelFormatARB(hdc, &fAttribList[0], NULL, 1, &format, &nformats);<br>    std::cout << "Suitable pixel formats: " << nformats << std::endl;<br><br></div><div>On my GTX 970 card here this returns exactly one suitable pixel format (3 if you drop the DOUBLE_BUFFER_ARB requirement even)..<br><br></div><div>It seems that the implementation of PixelBufferWin32 cannot currently be given any user-defined attributes to the wglChoosePixelFormatARB function. Is this a capability that we should consider adding? 
Or should we automatically sneak in this vendor specific flag if the color components the traits specify have 32 bits and a previous call to wglChoosePixelFormatARB returned 0 matches?<br><br></div><div>I am leaving this up for debate.<br><br></div><div>Is there a vendor-neutral alternative to the WGL_FLOAT_COMPONENTS_NV flag?<br></div><div><br>For now, I can simply patch my local copy of the OSG libraries to support floating point pbuffers on nVidia cards.<span class="HOEnZb"><font color="#888888"><br></font></span></div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Christian<br><br></div></font></span></div>
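A minimal sketch (untested) of the off-screen float readback setup referenced above, loosely modelled on the osgscreencapture example: it assumes the PixelBufferWin32 patch at the top of this message, and the scene file name, image size and single-threaded setup are only placeholders.

    #include <osg/GraphicsContext>
    #include <osg/Image>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

    int main()
    {
        // Request a pbuffer with 32 bits per color channel; with the patch above
        // this selects WGL_TYPE_RGBA_FLOAT_ARB on Windows.
        osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
        traits->width  = 1024;                 // placeholder size
        traits->height = 1024;
        traits->red = traits->green = traits->blue = traits->alpha = 32;
        traits->depth = 24;
        traits->doubleBuffer = false;
        traits->pbuffer = true;                // off-screen only, no window

        osg::ref_ptr<osg::GraphicsContext> pbuffer =
            osg::GraphicsContext::createGraphicsContext(traits.get());
        if (!pbuffer.valid()) return 1;        // no floating point pbuffer available

        osgViewer::Viewer viewer;
        viewer.setThreadingModel(osgViewer::Viewer::SingleThreaded);
        viewer.setSceneData(osgDB::readNodeFile("scene.osgt"));   // placeholder scene
        viewer.getCamera()->setGraphicsContext(pbuffer.get());
        viewer.getCamera()->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));

        // Attach a GL_FLOAT image so OSG reads the color buffer back after drawing.
        osg::ref_ptr<osg::Image> result = new osg::Image;
        result->allocateImage(traits->width, traits->height, 1, GL_RGBA, GL_FLOAT);
        viewer.getCamera()->attach(osg::Camera::COLOR_BUFFER, result.get());

        viewer.realize();
        viewer.frame();    // result->data() now holds the shader output as 32-bit floats

        return 0;
    }

Attaching the image to the camera lets OSG perform the readback at the end of the draw traversal, which should avoid having to make the pbuffer context current manually in application code.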