gemelo
Topic Author

CUDA / Noesis

21 May 2021, 13:02

I am looking into using Noesis for a project that puts a XAML-based UI in front of a texture created with CUDA.

Currently we are using WPF with a D3DImage, but that suffers from poor performance (no stable frame rates) and from screen tearing. I could transfer the CUDA texture directly on the GPU to DirectX/Direct3D as well as to OpenGL.

I've already looked at quite a few Noesis samples and the documentation and played with the code myself, but I still lack some understanding in various areas. I would definitely like to use the Managed Application Framework from Noesis, as it is very close to the WPF framework.

Maybe you can help me with the following questions to get started:

1. Is there a preferred RenderContext when comparing DirectX 11 and OpenGL? A stable frame rate with VSync is especially important for us; screen tearing must be avoided at all costs.
2. Where do I have to hook in to get the texture into the background? Is this possible at all with the RenderContexts from the Application Framework, or do I have to modify/reimplement them?
3. The documentation says that I have to save the GPU state before RenderOffscreen() and restore it afterwards. I must admit that I do not understand what exactly is meant by this. Is there perhaps an example where this is implemented?

Many, many thanks for any help :)

BTW: In the Application Framework for Windows there is a problem: a window that should be maximized at startup (WindowState=Maximized) is not displayed maximized correctly. If you maximize it later (e.g. in response to an event), it works correctly. I have already checked the code, but could not find the error quickly.
 
gemelo
Topic Author

Re: CUDA / Noesis

25 May 2021, 10:05

A small follow-up to my post and my questions:

Regarding 1.
I have now decided to use OpenGL; the performance in my first tests was convincing. Nevertheless, I would be interested to know whether there is any experience comparing the two technologies in relation to Noesis.

Regarding 2. and 3.
I have discovered the Noesis TextureSource class for my purposes. After a bit of trial and error, I also found the correct parameter assignment when calling WrapGLTexture(). Maybe a little hint for everyone else facing this question:

This is the C# method signature:

public static Texture WrapGLTexture(object texture, IntPtr nativePointer, int width, int height, int numMipMaps, bool isInverted);

The nativePointer parameter must contain the OpenGL texture ID itself, not a pointer to a variable holding that ID, as the IntPtr type might suggest. So you just cast the ID (a uint) to an IntPtr:

uint openGlTextureID = ...; // assign the ID
var nativePointer = (IntPtr)openGlTextureID;
Using a TextureSource also removes the need for me to intervene further in the rendering process. Is it correct that when using TextureSource, the render process does not need to transfer image data from GPU memory to CPU memory and back?
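
For anyone putting the pieces together, here is a minimal sketch of the complete flow on my side. This assumes the TextureSource(Texture) constructor of the managed API; I pass null for the first WrapGLTexture parameter (which worked for me), and the Image element and width/height values come from your own code:

using System;
using Noesis;

static void SetBackgroundTexture(Image backgroundImage, uint glTextureId,
                                 int width, int height)
{
    // The IntPtr carries the GL texture name itself, not an address:
    var nativePointer = (IntPtr)glTextureId;

    // numMipMaps = 1; isInverted = true because my GL texture is bottom-up:
    Texture wrapped = Texture.WrapGLTexture(null, nativePointer,
        width, height, 1, true);

    // TextureSource exposes the wrapped texture as an ImageSource for XAML:
    backgroundImage.Source = new TextureSource(wrapped);
}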

Thanks
 
jsantos
Site Admin

Re: CUDA / Noesis

25 May 2021, 12:38

Sorry for the late reply, we are having busier days than usual now that the first beta of 3.1 is almost ready to be published.
> Regarding 1.
> I have now decided to use OpenGL; the performance in my first tests was convincing. Nevertheless, I would be interested to know whether there is any experience comparing the two technologies in relation to Noesis.
Under normal circumstances both renderers should have similar performance, but in the past we have observed far buggier drivers in OpenGL than in D3D. Right now, if your GPU supports buffer storage (core in OpenGL 4.4, or through the GL_EXT_buffer_storage extension), OpenGL could even be a bit faster than D3D, at least until we implement D3D12 (coming with Noesis 3.1).
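
If you want to check support at runtime from C#, a quick sketch (assuming OpenTK for the GL bindings; GL_ARB_buffer_storage is the desktop extension that was promoted to core in 4.4, GL_EXT_buffer_storage is the GLES variant):

using OpenTK.Graphics.OpenGL;

static bool HasBufferStorage()
{
    // Core since OpenGL 4.4:
    int major = GL.GetInteger(GetPName.MajorVersion);
    int minor = GL.GetInteger(GetPName.MinorVersion);
    if (major > 4 || (major == 4 && minor >= 4))
        return true;

    // Otherwise scan the extension list:
    int count = GL.GetInteger(GetPName.NumExtensions);
    for (int i = 0; i < count; ++i)
    {
        string ext = GL.GetString(StringNameIndexed.Extensions, i);
        if (ext == "GL_ARB_buffer_storage" || ext == "GL_EXT_buffer_storage")
            return true;
    }
    return false;
}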
> Using a TextureSource also removes the need for me to intervene further in the rendering process. Is it correct that when using TextureSource, the render process does not need to transfer image data from GPU memory to CPU memory and back?
That's correct. WrapTexture is the fastest way, as we use that handle directly without transferring anything.
 
jsantos
Site Admin

Re: CUDA / Noesis

25 May 2021, 12:42

> 3. The documentation says that I have to save the GPU state before RenderOffscreen() and restore it afterwards. I must admit that I do not understand what exactly is meant by this. Is there perhaps an example where this is implemented?
Our reference renderers change the state of the device that is passed to them. For example, imagine you have a GL context with blending enabled. To avoid redundant state changes, you keep that information cached. When you invoke Noesis, we may change the blending state; after that, your cache is no longer correct and can lead to incorrect rendering. You need to flush your cache and set all states back to a state your code knows.
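
In a GL render loop the per-frame pattern looks roughly like this (a sketch, not our exact integration code; RestoreMyGLState and DrawBackground stand in for your own application code, and I assume OpenTK bindings on the C# side):

using OpenTK.Graphics.OpenGL;

void RenderFrame(Noesis.View view, double time, int width, int height)
{
    view.Update(time);                  // tick animations and layout
    view.Renderer.UpdateRenderTree();
    view.Renderer.RenderOffscreen();    // may change blending, FBO bindings, etc.

    // Noesis has modified the GL state, so any state cache you keep is stale.
    // Re-bind your targets and re-apply everything your code relies on:
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    GL.Viewport(0, 0, width, height);
    RestoreMyGLState();                 // flush your cache to a known state

    DrawBackground();                   // e.g. the CUDA texture
    view.Renderer.Render();             // draw the UI on top
}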
> BTW: In the Application Framework for Windows there is a problem: a window that should be maximized at startup (WindowState=Maximized) is not displayed maximized correctly. If you maximize it later (e.g. in response to an event), it works correctly. I have already checked the code, but could not find the error quickly.
I created #2035 to solve this. Thanks for the report!
 
gemelo
Topic Author

Re: CUDA / Noesis

05 May 2022, 21:53

Sorry for reactivating this thread. We are working on a small update for the project this thread was about.

We also updated the NuGet packages to the current version as a test. However, we noticed that the WrapGLTexture method apparently no longer exists in the new version:
public static Texture WrapGLTexture(object texture, IntPtr nativePointer, int width, int height, int numMipMaps, bool isInverted);
I tried to find a replacement or an alternative approach in the documentation and in the class hierarchy, but so far I have not been able to find anything.

What would be the approach in the new version to use an existing OpenGL texture in Noesis?

Thanks, Peer
 
gemelo
Topic Author

Re: CUDA / Noesis

06 May 2022, 10:48

I think I found the replacement. The method moved to the RenderDeviceGL class, which makes sense:
public static Texture WrapTexture(object texture, IntPtr nativePointer, int width, int height, int numMipMaps, bool isInverted, bool hasAlpha)
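
For completeness, the call on my side now looks like this (a sketch with the same usings as my earlier snippet; I pass hasAlpha: false because our CUDA texture is opaque, and still null for the first parameter as before):

static ImageSource WrapForXaml(uint glTextureId, int width, int height)
{
    Texture wrapped = RenderDeviceGL.WrapTexture(null, (IntPtr)glTextureId,
        width, height, 1, isInverted: true, hasAlpha: false);
    return new TextureSource(wrapped);
}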
 
sfernandez
Site Admin

Re: CUDA / Noesis

06 May 2022, 13:17

That's right, we reorganized those functions a bit to have everything inside the corresponding render device implementation.
