Noesis wouldn't have to do the math; that's all built in. It would just have to draw a line, and the VR rendering software that's built into the game engine will do the warping for you (and, for that matter, the latency reduction, etc.).
If someone 'renders' a cube, they don't have to warp it. They just draw a cube.
Noesis wouldn't have to do much more than that.
I imagine it might need a scaling factor so that it knows where to draw, but that could be based on the object's in-game dimensions. If I put a XAML on top of a cube that spans 0 to 4, then it knows that 0 to 300 becomes 0 to 4 when it renders the XAML. From what I can gather (still learning the UI), it obviously already does that mapping, since if my screen changes dimensions the XAML still stretches the whole way. So Noesis is getting the screen dimensions and scaling the XAML.
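To make that scaling idea concrete, here's a rough C++ sketch of the arithmetic only. It is not Noesis or engine API, and the 300 px / 4 unit numbers are just the hypothetical ones from above:

[code]
// Rough sketch of the scaling math only (not Noesis or engine API).
// Hypothetical numbers: a 300 px wide XAML on a cube face 4 world units wide.
#include <cstdio>

struct UiToWorldScale
{
    float uiWidthPx;   // layout width of the XAML, e.g. 300 px
    float worldWidth;  // width of the object it sits on, e.g. 4 world units

    // Convert a horizontal UI coordinate (px) into a world-space offset (units).
    float UiToWorld(float xPx) const { return xPx * (worldWidth / uiWidthPx); }
};

int main()
{
    UiToWorldScale s{300.0f, 4.0f};
    std::printf("150 px -> %.2f world units\n", s.UiToWorld(150.0f)); // prints 2.00
    return 0;
}
[/code]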
BTW, whoops, typo: I said 'screen space' but meant world space. Touches / mouse / controller clicks are a little more interesting, but there are helper functions that do the mapping.
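For the input side, here's the same idea run in reverse, again as a generic sketch and not the actual helper functions: take the world-space hit point on the quad the UI is textured onto and scale it back into XAML pixel coordinates. The quad layout and all of the numbers are made up:

[code]
// Generic sketch (not the actual helper API): map a world-space hit point on the
// quad the UI lives on back into XAML pixel coordinates.
#include <cstdio>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// The quad the XAML is projected onto, with made-up dimensions.
struct UiQuad
{
    Vec3  origin;      // world position of the quad's top-left corner
    Vec3  rightAxis;   // unit vector along increasing UI x
    Vec3  downAxis;    // unit vector along increasing UI y
    float worldWidth;  // e.g. 4 units
    float worldHeight; // e.g. 3 units
    float uiWidthPx;   // e.g. 300 px
    float uiHeightPx;  // e.g. 225 px
};

// World-space hit point (from a raycast / controller pointer) -> UI pixels.
void WorldHitToUiPixels(const UiQuad& q, const Vec3& hit, float& outX, float& outY)
{
    Vec3 local = Sub(hit, q.origin);
    outX = Dot(local, q.rightAxis) / q.worldWidth  * q.uiWidthPx;
    outY = Dot(local, q.downAxis)  / q.worldHeight * q.uiHeightPx;
}

int main()
{
    UiQuad q{{0, 0, 0}, {1, 0, 0}, {0, -1, 0}, 4.0f, 3.0f, 300.0f, 225.0f};
    float x, y;
    WorldHitToUiPixels(q, {2.0f, -1.5f, 0.0f}, x, y);
    std::printf("hit -> UI pixel (%.1f, %.1f)\n", x, y); // prints (150.0, 112.5)
    return 0;
}
[/code]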
== John ==
P.S. Yes, I know that section ... there are other reasons to avoid objects close to the POV, namely the vergence-accommodation conflict. In fact, that article says "our eyes are unable to focus on something so close," which is inaccurate. In an HMD, you CAN focus on everything; it's the way the optics are built. The HMD lenses focus the display at a simulated infinite distance, so every pixel that is rendered is in focus. The problem isn't that you can't focus. The problem is that your brain 'knows' that something up close requires different focusing than something far away, in addition to the stereoscopic effect. So you have a higher chance of getting queasy because your brain says "wait, something that close would require focusing differently, and I don't have to do that, it's right in focus ..." See this article for example:
https://medium.com/vrinflux-dot-com/ver ... ab1a7d9ba