[osg-users] Latency
David Heitbrink
david-heitbrink at uiowa.edu
Mon Oct 1 10:25:19 PDT 2018
I currently have an odd problem I am stuck on. I have a system with 2 Quadro P5000 cards driving 3 displays that are warped and blended on a 120 degree dome, running Windows 10. The application is ground vehicle simulation, so I have pretty high rates of optical flow. Each display runs its own process, and each process receives a UDP broadcast with position update information.

What I am seeing is that 1 of the displays is off by 1 frame 95% of the time. When this happens, my blending fails and I get a large seam in my scene. I added logging of the eye point position as well as the high frequency counter time.
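To make the setup concrete, each channel's receive path looks roughly like this; the packet layout, port handling, and names here are illustrative sketches rather than my exact code:

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

#pragma pack(push, 1)
struct PositionUpdate            // hypothetical layout, for illustration only
{
    unsigned int frameNumber;    // data frame number, shared by all channels
    double       eye[3];         // eye point position
};
#pragma pack(pop)

// Open a UDP socket that will receive the broadcast on the given port.
SOCKET openBroadcastSocket(unsigned short port)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in addr = {};
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   // accept datagrams on any interface
    bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    return s;
}

// Blocking receive of one position datagram; all three display
// processes see the same broadcast packet.
bool receiveUpdate(SOCKET s, PositionUpdate& out)
{
    int n = recv(s, reinterpret_cast<char*>(&out), sizeof(out), 0);
    return n == static_cast<int>(sizeof(out));
}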
From what I can tell from the logs, the returns from the VSyncs (viewer->frame()) are all within 200 microseconds of each other, and the eye point position and data frame number (i.e. the frame number for my incoming data) are the same across all of the channels.
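The logging is along these lines; the function and variable names are illustrative, not my actual code:

#include <windows.h>
#include <osgViewer/Viewer>
#include <cstdio>

// Stamp each viewer->frame() return with the high frequency counter so the
// per-channel logs can be lined up offline.
void runLoggedLoop(osgViewer::Viewer& viewer, const char* channelName)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);

    while (!viewer.done())
    {
        viewer.frame();                        // returns after the swap/vsync

        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        double usec = 1e6 * now.QuadPart / freq.QuadPart;

        osg::Vec3d eye, center, up;
        viewer.getCamera()->getViewMatrixAsLookAt(eye, center, up);

        std::printf("%s t=%.1fus eye=(%.3f %.3f %.3f)\n",
                    channelName, usec, eye.x(), eye.y(), eye.z());
    }
}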
So I strongly suspect this has something to do with the graphics card/driver's own internal frame buffering, and that there is not a lot I can do about it.
This leaves me with a couple of real issues:
1) I cannot programmatically tell whether a channel is a frame behind or not. Basically, I added buffering of the position information for the other 2 channels, and my seam goes away 95% of the time (see the sketch after this list).
2) Since the channels are not 100% in step, I still randomly get a seam 5% of the time (with the buffering in place).
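The buffering in point 1 is essentially a one-frame delay on the channels that run ahead, along these lines (a simplified sketch, not my exact code):

#include <optional>

// Holds back the incoming position by one frame: the channels that appear
// to be a frame ahead render last frame's update instead of the newest one.
template <typename T>
class OneFrameDelay
{
public:
    T push(const T& incoming)
    {
        T toRender = m_previous.value_or(incoming);  // first frame: nothing to delay yet
        m_previous = incoming;
        return toRender;
    }

private:
    std::optional<T> m_previous;   // last frame's update
};

Each of the two leading channels pushes the received position update through one of these before setting its camera; the lagging channel uses the update directly.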
At this point I don't know what to do. I have talked to NVidia about this; they mentioned making sure DPI scaling in Windows is set to 100% and/or setting the app up to be DPI aware. I have done this, but I get the same result.
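For reference, the programmatic DPI-aware setup is just a call at startup, before any windows are created; this sketch assumes a Windows 10 1703+ SDK:

#include <windows.h>

int main(int argc, char** argv)
{
    // Opt the process out of DWM DPI virtualization before creating any windows.
    // (On older SDKs, SetProcessDPIAware() is the fallback; a manifest entry
    // achieves the same thing.)
    SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

    // ... create the viewer and windows as usual ...
    return 0;
}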
Any advice and/or speculative guesses on this would be great.