gemelo
Topic Author
Posts: 3
Joined: 12 Feb 2018, 09:43

CUDA / Noesis

21 May 2021, 13:02

I am looking into using Noesis for a project that renders a XAML-based UI in front of a texture created with CUDA.

Currently we are using WPF with a D3DImage, but that suffers from poor performance (no stable frame rate) and screen tearing. I can transfer the CUDA texture directly on the GPU to either DirectX/Direct3D or OpenGL.

I've already looked at quite a few Noesis samples, read the documentation, and played with the code myself, but I still lack understanding in various areas. I would definitely like to use the Managed Application Framework from Noesis, as it is very close to the WPF framework.

Maybe you can help me with the following questions to get started:

1. Is there a preferred RenderContext when comparing DirectX 11 and OpenGL? A stable frame rate and VSync are especially important; screen tearing must be avoided at all costs.
2. Where do I need to hook in to render the texture behind the UI? Is this possible at all with the RenderContexts from the Application Framework, or do I have to modify/reimplement them?
3. The documentation says that I have to save the GPU state before RenderOffscreen() and restore it afterwards. I must admit that I don't understand exactly what is meant by this. Is there perhaps an example where this is implemented?

Many, many thanks for any help :)

BTW: In the Application Framework for Windows there is a problem: a window that should be maximized at startup (WindowState=Maximized) is not displayed maximized correctly. If you maximize it later (e.g. in response to an event), it works correctly. I have already checked the code but could not find the error quickly.

gemelo
Topic Author
Posts: 3
Joined: 12 Feb 2018, 09:43

Re: CUDA / Noesis

25 May 2021, 10:05

A small follow-up to my post and my questions:

Re 1.:
I have now decided to use OpenGL. The performance in my first tests was convincing. Nevertheless, I would be interested to know whether there is any experience comparing the two technologies in relation to Noesis.

Re 2./3.:
I have discovered the Noesis TextureSource class for my purposes. After some trial and error, I also found the correct parameter assignment when calling WrapGLTexture(). Here is a little hint for everyone else facing this question:

This is the C# method signature:
public static Texture WrapGLTexture(object texture, IntPtr nativePointer, int width, int height, int numMipMaps, bool isInverted);
The nativePointer parameter must contain the OpenGL texture ID itself, not a pointer to a variable holding that ID, as you might assume from the IntPtr type. So you just have to cast the ID (a uint) to an IntPtr:
uint openGlTextureID = ...; // assign the OpenGL texture ID
var nativePointer = (IntPtr)openGlTextureID;
The TextureSource also eliminates the need for me to intervene in the rendering process. Is it correct that, when using TextureSource, the render process does not need to transfer image data from GPU memory to CPU memory and back?

Thanks
 
jsantos
Site Admin
Posts: 3139
Joined: 20 Jan 2012, 17:18

Re: CUDA / Noesis

25 May 2021, 12:38

Sorry for the late reply, these days are busier than normal now that the first beta of 3.1 is almost ready to be published.
gemelo wrote:
I have now decided to use OpenGL. The performance in my first tests was convincing. Nevertheless, I would be interested to know whether there is any experience comparing the two technologies in relation to Noesis.
Under normal circumstances, both renderers should have similar performance, but in the past we have observed many more buggy drivers in OpenGL than in D3D. Right now, if your GPU supports buffer storage (core in OpenGL 4.4, or via the GL_EXT_buffer_storage extension), OpenGL could even be a bit faster than D3D, at least until we implement D3D12 (coming with Noesis 3.1).
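One way to check for the extension mentioned above is to scan the extension string the driver reports. A minimal sketch (the HasExtension helper is my own, not part of Noesis; the string would normally come from glGetString(GL_EXTENSIONS), or glGetStringi on core profiles, and is passed in here so the helper needs no live GL context):

```cpp
#include <cstring>

// Checks whether a space-separated OpenGL extension string contains the
// given extension name as a whole token. A plain substring search alone
// is unsafe: "GL_EXT_buffer_storage2" would wrongly match
// "GL_EXT_buffer_storage".
bool HasExtension(const char* extensions, const char* name)
{
    const size_t len = std::strlen(name);
    const char* p = extensions;
    while ((p = std::strstr(p, name)) != nullptr)
    {
        const bool startOk = (p == extensions) || (p[-1] == ' ');
        const bool endOk = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;
        p += len; // keep searching past this partial match
    }
    return false;
}
```

On OpenGL 4.4+ you can skip the string check entirely, since buffer storage is core there.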
gemelo wrote:
The TextureSource also eliminates the need for me to intervene in the rendering process. Is it correct that, when using TextureSource, the render process does not need to transfer image data from GPU memory to CPU memory and back?
That's correct. WrapTexture is the fastest way, as we use that handle directly without any transfer.
 
jsantos
Site Admin
Posts: 3139
Joined: 20 Jan 2012, 17:18

Re: CUDA / Noesis

25 May 2021, 12:42

gemelo wrote:
3. The documentation says that I have to save the GPU state before RenderOffscreen() and restore it afterwards. I must admit that I don't understand exactly what is meant by this. Is there perhaps an example where this is implemented?
Our reference renderers change the state of the passed device. For example, imagine you have a GL context and you have enabled blending on it. To avoid redundant state sets, you keep that information cached. When you invoke Noesis, we may change the blending state, and after that your cache is no longer correct, which can lead to incorrect rendering. You need to flush your cache and set all states again to a state "known" to your code.
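That caching pattern can be modeled without a live GL context. The sketch below is my own illustration (the driver and renderer names are stand-ins, not Noesis API): an engine caches the blend state to skip redundant driver calls, external code changes the state behind its back, and the cache must be invalidated afterwards so the next set actually reaches the driver.

```cpp
// Stand-in for the real driver-side state (e.g. glEnable(GL_BLEND)).
static bool g_driverBlendEnabled = false;

static void DriverSetBlend(bool enabled) // the "expensive" driver call
{
    g_driverBlendEnabled = enabled;
}

// Engine-side cache that skips redundant driver calls.
class StateCache
{
public:
    void SetBlend(bool enabled)
    {
        // Only hit the driver when the cached value is stale or different
        if (!mValid || mBlendEnabled != enabled)
        {
            DriverSetBlend(enabled);
            mBlendEnabled = enabled;
            mValid = true;
        }
    }

    // Must be called after external code (e.g. Noesis' RenderOffscreen)
    // may have changed the GPU state behind our back.
    void Invalidate() { mValid = false; }

private:
    bool mBlendEnabled = false;
    bool mValid = false;
};

// Stand-in for Noesis touching the device state during offscreen rendering.
static void ExternalRenderOffscreen() { DriverSetBlend(false); }
```

Without the Invalidate() call after ExternalRenderOffscreen(), a later SetBlend(true) would be skipped as "redundant" and the driver would stay in the wrong state; that is exactly the bug the documentation's save/restore requirement protects against.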
gemelo wrote:
BTW: In the Application Framework for Windows there is a problem: a window that should be maximized at startup (WindowState=Maximized) is not displayed maximized correctly. If you maximize it later (e.g. in response to an event), it works correctly.
I created #2035 to solve this. Thanks for the report!
