<div dir="ltr">Hi Marlin,<br><div><br><div class="gmail_quote"><div dir="ltr">On Wed, 11 Jul 2018 at 19:59, Rowley, Marlin R <<a href="mailto:marlin.r.rowley@lmco.com">marlin.r.rowley@lmco.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div link="blue" vlink="purple" lang="EN-US">
<div class="m_614407391003027124WordSection1">Are you suggesting that I compute the world space normal in the application and pass it to the shader? I absolutely need world space coordinates. If so, how would I get the normal of the triangle before evaluating the shader?
<div><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in"><p class="MsoNormal"></p></blockquote></div></div></div></blockquote><div>No, I'm suggesting that you shouldn't be computing anything on the GPU in world coordinates: not positions, not normals, not anything. Any time you convert coordinates from local object coordinates into world coordinates you introduce precision issues. On the CPU you can work around this by using doubles, but on the GPU this typically isn't an option, so you are stuck with floats that just can't handle the precision required.</div><div><br></div><div>So when you say you "absolutely need world space coordinates", I'd say: take a step back and come up with an algorithm that uses eye coordinates or local coordinates for its calculations, as this is (absolutely :-) required to avoid precision issues.<br></div><div><br></div><div>This may require being a bit creative, but in the end your whole system will work far better: it'll scale better and it'll avoid precision issues.<br></div><div><br></div><div>Robert. <br></div></div></div></div>
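To put concrete numbers on the precision point above, here is a minimal sketch (illustrative only, not from the original thread; it assumes NumPy is available) that prints the smallest representable step of a 32-bit float versus a 64-bit double at growing world-coordinate magnitudes:

```python
import numpy as np

# Illustrative sketch (assumes NumPy): at large world-coordinate
# magnitudes, the gap between adjacent representable float32 values
# grows, while float64 (the "doubles on the CPU" option) keeps far
# finer resolution at the same magnitudes.
for magnitude in (1.0, 1e3, 1e6, 1e7):
    f32_step = np.spacing(np.float32(magnitude))  # distance to next float32
    f64_step = np.spacing(np.float64(magnitude))  # distance to next float64
    print(f"{magnitude:>10.0f} units: float32 step = {f32_step:g}, "
          f"float64 step = {f64_step:g}")
```

At one million units a float32 can only resolve steps of 0.0625 units, and at ten million units only whole units, which is why large world-space coordinates jitter on the GPU while doubles on the CPU still resolve sub-nanometre steps at the same magnitudes.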