CUDA / Noesis
I am looking into using Noesis for a project that puts a XAML-based UI in front of a texture created with CUDA.
Currently we are using WPF with a D3DImage, but that suffers from poor performance (no stable frame rates) and screen tearing. I could transfer the CUDA texture directly on the GPU to DirectX/Direct3D as well as to OpenGL.
I've already looked at quite a few Noesis samples and the documentation and played with the code myself, but I still lack some understanding in various areas. I would definitely like to use the Managed Application Framework from Noesis, as it is very close to the WPF framework.
Maybe you can help me with the following questions to get started:
1. Is there a preferred RenderContext when comparing DirectX 11 and OpenGL? It is especially important to have a stable frame rate and to use VSync. Screen tearing must be avoided at all costs.
2. Where do I have to hook in to get the texture into the background? Is this possible at all with the RenderContexts from the Application Framework, or do I have to modify/reimplement them?
3. The documentation says that I have to save the GPU state before RenderOffscreen() and restore it afterwards. I must admit that I do not understand what exactly is meant by this. Is there perhaps an example where this is implemented?
Many, many thanks for any help :)
BTW: In the Application Framework for Windows there is a problem: a window that should be maximized at startup (WindowState=Maximized) is not correctly displayed as maximized. If you maximize it later (e.g. due to an event), it works correctly. I have already checked the code, but could not find the error quickly.
Re: CUDA / Noesis
A small follow-up to my post and my questions:
to 1.
I have now decided to use OpenGL. The performance in the first tests was convincing. Nevertheless, I would be interested to know if there is any experience comparing the two technologies in relation to Noesis.
to 2./3.
I have discovered the Noesis TextureSource class for my purposes. After a little trial and error, I also found the correct parameter assignment when calling WrapGLTexture(). Maybe a small hint for anyone else facing this question:
This is the C# method signature:
Code:
public static Texture WrapGLTexture(object texture, IntPtr nativePointer, int width, int height, int numMipMaps, bool isInverted);
The nativePointer parameter must contain the OpenGL texture ID itself, not a reference to a variable holding that ID, as you might assume from the IntPtr type. So you just have to cast the ID (a uint) to an IntPtr:
Code:
uint openGlTextureID = ....; // Assign ID
var nativePointer = (IntPtr)openGlTextureID;
The TextureSource also eliminates the need for me to intervene additionally in the rendering process. Is it correct that when using TextureSource, the render process does not need to transfer image data from GPU memory to CPU memory and back?
Thanks
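To make that cast explicit, here is a minimal, self-contained C# sketch. The texture ID is a made-up value (in real code it would come from glGenTextures); the point is that the IntPtr passed to WrapGLTexture carries the GL handle as its numeric value, not the address of a variable:

```csharp
using System;

class WrapPointerDemo
{
    static void Main()
    {
        // Stand-in for a real OpenGL texture name (normally from glGenTextures).
        uint openGlTextureID = 42;

        // Correct: cast the ID itself into the IntPtr. The pointer's numeric
        // value *is* the GL handle; it is not dereferenced as a memory address.
        IntPtr nativePointer = (IntPtr)openGlTextureID;

        Console.WriteLine(nativePointer.ToInt64()); // prints 42
    }
}
```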
Re: CUDA / Noesis
Sorry for the late reply, we are having busier days than normal now that the first beta of 3.1 is almost ready to be published.
> I have now decided to use OpenGL. The performance in the first tests was convincing. Nevertheless, I would be interested to know if there is any experience comparing the two technologies in relation to Noesis.
In normal circumstances, both renderers should have similar performance. But in the past we have observed far more buggy drivers in OpenGL than in D3D. Right now, if your GPU supports buffer storage (core in OpenGL 4.4, or through the GL_EXT_buffer_storage extension), OpenGL could even be a bit faster than D3D, at least until we implement D3D12 (it is coming with Noesis 3.1).
> The TextureSource also eliminates the need for me to intervene additionally in the rendering process. Is it correct that when using TextureSource, the render process does not need to transfer image data from GPU memory to CPU memory and back?
That's correct. WrapTexture is the fastest way, as we will use that handle without any transfer.
Re: CUDA / Noesis
> 3. The documentation says that I have to save the GPU state before RenderOffscreen() and restore it afterwards. I must admit that I do not understand what exactly is meant by this. Is there perhaps an example where this is implemented?
Our reference renderers change the state of the passed device. So, for example, imagine you have a GL context and you enabled blending on it. To avoid redundant sets, you have that information cached. When you invoke Noesis, we may change blending, and after that your cache is no longer correct and can lead to incorrect rendering. You need to flush your cache and set all states again to a "known" state for your code.
> BTW: In the Application Framework for Windows there is a problem: a window that should be maximized at startup (WindowState=Maximized) is not correctly displayed as maximized. If you maximize it later (e.g. due to an event), it works correctly. I have already checked the code, but could not find the error quickly.
I created #2035 to solve this. Thanks for the report!
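The cache-invalidation issue described above can be sketched in plain C#. This is a hypothetical, self-contained illustration (no real GL calls; `FakeDriver` and `GlStateCache` are invented names): the application caches state to skip redundant driver calls, the UI library silently changes the real state during its offscreen pass, and the cache must be invalidated so the next set actually reaches the driver again:

```csharp
using System;

// Invented stand-in for the driver-side GL state.
class FakeDriver
{
    public bool BlendEnabled;
}

// Invented cache that skips redundant state changes, as many engines do.
class GlStateCache
{
    readonly FakeDriver driver;
    bool? cachedBlend; // null = unknown, must re-send next time

    public GlStateCache(FakeDriver d) { driver = d; }

    public void SetBlend(bool enabled)
    {
        if (cachedBlend == enabled) return; // redundant set skipped
        driver.BlendEnabled = enabled;
        cachedBlend = enabled;
    }

    // Call after external rendering: forget everything we think we know.
    public void Invalidate() { cachedBlend = null; }
}

class StateCacheDemo
{
    static void Main()
    {
        var driver = new FakeDriver();
        var cache = new GlStateCache(driver);

        cache.SetBlend(true);        // sent to the driver
        driver.BlendEnabled = false; // external code changed state behind our back

        cache.SetBlend(true);        // skipped: cache still believes "true"
        Console.WriteLine(driver.BlendEnabled); // False -> stale state!

        cache.Invalidate();          // what the docs ask you to do
        cache.SetBlend(true);        // now actually re-sent to the driver
        Console.WriteLine(driver.BlendEnabled); // True
    }
}
```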
Re: CUDA / Noesis
Sorry for reactivating this thread. We are working on a small update for the project this thread was about.
We also updated the NuGet packages to the current version as a test. However, we noticed that the WrapGLTexture methods apparently no longer exist in the new version.
I tried to find a replacement or an alternative methodology in the documentation and in the class hierarchy, but so far I was not able to find anything.
What would be the approach in the new version to use an existing OpenGL texture in Noesis?
Thanks, Peer
Re: CUDA / Noesis
I guess I found the replacement. The method moved to the RenderDeviceGL class, which makes sense:
Code:
public static Texture WrapTexture(object texture, IntPtr nativePointer, int width, int height, int numMipMaps, bool isInverted, bool hasAlpha)
sfernandez (Site Admin)
Re: CUDA / Noesis
That's right, we reorganized those functions a bit to have everything inside the corresponding render device implementation.