How to change the GPU used to run graphics simulations

Hey, so my current code runs everything on my AMD APU, and I want to offload the graphics part of my code to my dedicated GPU. Is there any way I can do that? The graphics part is a separate Python file, independent of my calculations file; the only link between them is the transfer of final data points from my main code to the visualisation code.

As I said in another thread:

Hmm… That might be a matter of the settings for your graphics card. I know that mine has a control-program that provides options specifying whether and when to use its integrated or dedicated graphics chips.

However, note that Python code will not, I think, be run on the GPU, regardless of such settings. Shader code, on the other hand, should be.

It’s then just a matter of getting your computer to run your shaders (and other graphics operations) on your dedicated GPU rather than on the integrated chip.

You’ll probably have to use your GPU’s control app, which is usually provided by the GPU drivers themselves. If you cannot find it, you can try a third-party utility. This differs from OS to OS: on Windows you’re most likely to find it pre-installed on your device, or you can download it from AMD’s website. On Linux this is slightly trickier, as most GPU manufacturers target Windows first and Linux tends to be an afterthought for them. Good luck!

Oh, and by the way: if you want to run Python code on your GPU, I recommend checking out Numba. It is an LLVM-based JIT compiler for Python that speeds up numerical code, and it also supports running Python kernels on the GPU without many modifications to pre-existing code. However, I cannot guarantee any compatibility with Panda3D due to the entire FFI thing, but I’m sure @rdb or someone who actually knows what they’re doing here can answer this.
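For illustration, here’s a minimal sketch of what a Numba GPU kernel looks like. It assumes an NVIDIA card with the CUDA toolkit installed (Numba’s GPU backend is CUDA), so it may not apply to an AMD setup as-is; the array and kernel here are made up for the example.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(points, factor):
    # Each GPU thread handles one element of the array.
    i = cuda.grid(1)
    if i < points.size:
        points[i] *= factor

# Hypothetical data standing in for the simulation's final data points.
data = np.arange(1_000_000, dtype=np.float64)
d_data = cuda.to_device(data)           # copy to GPU memory

threads_per_block = 256
blocks = (data.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](d_data, 2.0)

result = d_data.copy_to_host()          # copy back to the CPU
```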


It really depends on what that code looks like and what you want it to do.

The way to run generic code on the GPU without external libraries is with compute shaders:

https://docs.panda3d.org/1.10/python/programming/shaders/compute-shaders
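As a rough sketch of what that looks like on the Python side, loosely following the linked manual page (the GLSL file name my_compute.glsl and the 16×16 work-group size are assumptions, and the shader source itself is omitted):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Shader, ComputeNode, Texture

base = ShowBase()

# Load a compute shader from a (hypothetical) GLSL file.
shader = Shader.load_compute(Shader.SL_GLSL, "my_compute.glsl")

# A texture the shader can write its results into.
tex = Texture("result")
tex.setup_2d_texture(512, 512, Texture.T_float, Texture.F_rgba32)

# A ComputeNode dispatches the shader each frame while it is in the scene graph.
node = ComputeNode("compute")
node.add_dispatch(512 // 16, 512 // 16, 1)   # work groups, assuming a 16x16 local size

np_node = base.render.attach_new_node(node)
np_node.set_shader(shader)
np_node.set_shader_input("result", tex)

base.run()
```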

It is also possible to read from and write to SSBOs (generic buffer objects) via ShaderBuffer; I just noticed that they are not documented.
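For what it’s worth, a rough sketch of binding one from Python might look like this (the buffer name, contents, and usage hint are placeholders, and a matching buffer block still has to be declared in the shader; np_node is the NodePath from the sketch above):

```python
import struct
from panda3d.core import ShaderBuffer, GeomEnums

# Pack some hypothetical simulation data into bytes.
data = struct.pack("4f", 1.0, 2.0, 3.0, 4.0)

# Create an SSBO initialised with that data.
buf = ShaderBuffer("DataBuffer", data, GeomEnums.UH_static)

# Bind it so the compute shader's matching buffer block can read/write it.
np_node.set_shader_input("DataBuffer", buf)
```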


I am sorry if there was confusion before; let me elaborate. I am writing simulation software, so I have code that does all the calculations on the CPU. That code then sends the relevant data to the Panda code, which plots the simulation for the user to see. The problem is that the visualisation code runs on the integrated GPU and not the discrete one. So my question is: is there any way to run Panda on the discrete GPU?

My understanding is that this is likely something controlled not by Panda, but by a settings program related to your graphics card.

For example, in my case, using an NVIDIA card under Ubuntu Linux, I have a program called “NVIDIA X Server Settings”. There I can specify whether I want the system to prefer the dedicated graphics card, the integrated graphics chip, or to have the system decide automatically in each case. I can also (if I so desire) specify “application profiles” in order, I believe, to have it treat specific applications in specific ways–but I don’t usually bother with that, and have just told it to generally prefer the dedicated card.

So, I’d suggest looking for something similar for your machine (or your client’s machine, if this is for a third party), appropriate to your graphics card and OS.

The manual agrees with Thaumaturge
https://docs.panda3d.org/1.10/python/optimization/performance-issues/motherboard-integrated-video

If you are on Linux, you may be able to select the discrete GPU by setting the environment variable DRI_PRIME=1 or by launching through the optirun command. There is also Feral’s gamemoderun command, which does some additional tuning that may be worth checking out.
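If you’d rather not set it in the shell, here is a small sketch of setting it from inside the script itself. This only helps on Linux with PRIME offloading, and the variable has to be set before Panda3D opens its window and creates the OpenGL context:

```python
import os

# Ask the driver's PRIME offloading to use the discrete GPU (Linux only).
# Must be set before Panda3D creates the OpenGL context.
os.environ["DRI_PRIME"] = "1"

from direct.showbase.ShowBase import ShowBase

app = ShowBase()
app.run()
```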


Which OS are you on?

There are ways to build an executable that always uses the dedicated graphics card, but they are difficult to use from Python since they would require modifying the Python executable:

https://docs.nvidia.com/gameworks/content/technologies/desktop/optimus.htm

There is an open feature request for our distribution system to be able to set these flags:


I am on Windows 11, with Python 3.10 and the latest Panda3D version.