I mainly develop on Linux-based OSes. I’m using Panda3D to display a model composed of a few .dae meshes, with a custom skybox and a basic tessellated ground plane. Everything runs perfectly smoothly on Ubuntu: about 250 FPS for a single model, and about 30 FPS when duplicating the model roughly 500 times in the scene. However, if I run the exact same code on Windows, it runs at about 22 FPS even with a single model… And yes, I have specified ‘load-display pandagl’ and my GPU driver is up to date. I’m using a laptop with an NVIDIA GeForce RTX 2070 Max-Q.
My specific suggestion is that it may be worth opening your graphics card’s settings application and seeing whether you can specify that it prefer the high-performance GPU over the (presumably present) integrated one.
Is it possible to detect on the Python side whether the NVIDIA GPU is being used by Panda3D? Ideally I would like to run smoothly on both; detecting that it is running on the integrated chipset would let me enable a kind of degraded mode (disabling things like multisampling, shadows, etc.) and throw a warning.
I want to know the GPU vendor before setting options such as 'framebuffer-multisample' and 'multisamples' using loadPrcFileData. However, as far as I understand how Panda3D works, base.win.gsg.driver_renderer is not defined before a graphics window is created, so by then it is too late. But maybe I’m wrong; I’m not very familiar with Panda3D.
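For context, these are standard Panda3D PRC variables, normally set before the window opens, either via loadPrcFileData or in a Config.prc file. A fragment like this (sample values are illustrative):

```
framebuffer-multisample 1
multisamples 4
```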
Yes, unfortunately it’s not possible to detect this before opening a window.
However, in most modern games with postprocessing effects, you probably don’t want the main window to have multisamples anyway. Instead, you render the main scene into a buffer texture, postprocess it (via FilterManager/CommonFilters), and only then render the result to the main window; it is that scene buffer, not the window, that needs the multisamples. So it’s not really an issue.
Alternatively, you would need to reopen the window, or first open an offscreen buffer.
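The offscreen-buffer approach could look roughly like this: a minimal sketch, assuming Panda3D is installed, where a hidden offscreen ShowBase is opened just to read the driver’s vendor string and is then destroyed. The helper names (choose_settings, probe_vendor) and the vendor heuristics are my own, not part of Panda3D:

```python
def choose_settings(vendor: str) -> dict:
    """Hypothetical helper: map the reported GL vendor string to PRC options.

    Discrete NVIDIA/AMD GPUs get multisampling; anything else (e.g. an
    integrated Intel chipset) falls back to a degraded mode.
    """
    vendor = vendor.lower()
    if "nvidia" in vendor or "amd" in vendor or "ati" in vendor:
        return {"framebuffer-multisample": "1", "multisamples": "4"}
    return {"framebuffer-multisample": "0", "multisamples": "0"}


def probe_vendor() -> str:
    """Open a throwaway offscreen ShowBase just to query the driver vendor."""
    from panda3d.core import loadPrcFileData
    from direct.showbase.ShowBase import ShowBase

    loadPrcFileData("", "window-type offscreen")
    probe = ShowBase()
    vendor = probe.win.gsg.driver_vendor  # e.g. "NVIDIA Corporation"
    probe.destroy()
    # Restore the default so the real window opens onscreen afterwards.
    loadPrcFileData("", "window-type onscreen")
    return vendor
```

After probing, you would apply the chosen options with loadPrcFileData before creating the real window.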
Hmm, I get what you mean, but it looks more advanced than my current knowledge of Panda3D. I will give it a try. Do you think it is related to how the simplepbr shader works, or does it have nothing to do with it?
Actually, if you use simplepbr, it should handle this for you. Don’t enable multisampling on the main window; instead, pass msaa_samples to the simplepbr init call. You should be able to call simplepbr.init after the Panda window has been created (so you can determine what to pass based on the capabilities reported by the driver), and, I believe, even call it again if you wish to change the settings.
For now I’m not using it, but yes, I’m trying it out. It seems to fix multiple issues at once for me. I still have a washed-out colors issue, but apart from that it looks good!
EDIT: By the way, I completely forgot that I already open a dummy offscreen buffer anyway for an unrelated reason, so it was already possible for me to do what you suggested first. No need for simplepbr after all; it should already be straightforward. Still, I think I will keep using it if I solve my issue, because it looks really neat.