SLI mode

I have a problem running a Panda application on SLI:

I have a system with two GeForce GTX 460 graphics boards running in SLI mode, an Intel quad-core i5 processor, a P7P55D-E PRO motherboard, and 4 GB of memory. The OS is Windows 7 Professional.

The Panda3D application I am making drives three displays at 1680 x 1050 x 32 each (5040 x 1050 total) through a Matrox TripleHead Digital Edition video splitter. The application should run faster than 30 Hz, but it occasionally drops below 20 Hz. With a single GTX 460 the framerate is the same as with the two cards in SLI, so SLI gives no improvement at all.

I am certain that SLI itself is working properly: in the 3DMark Vantage benchmark, the two cards in SLI give framerate improvements of around 170-180% compared to SLI disabled. The NVIDIA control panel is also set to maximize 3D performance with SLI.

The performance drops in the Panda3D application are definitely related to graphics load (according to PStats) and not CPU-limited: they occur when more objects need to be rendered.

Does anyone have any experience with Panda3D applications in SLI? I really need to get SLI working in Panda3D.

Well, what exactly do you mean by “more objects”? Note that SLI will not improve performance bottlenecks caused by too many Geoms in the scene. That limit is due to the PC bus and the CPU cost of issuing draw calls, and has little to do with your graphics card. So if you’re losing performance as your Geom count increases, then everything is working as it should, but you still have to reduce your Geom count to get better performance.

As a rule of thumb, aim for no more than 300 to 600 Geoms onscreen at once. PStats can tell you how many Geoms you’re rendering.


By “more objects” I mean more polygons to be rendered, because more models (buildings, trees, etc.) come into view. Part of this drop in performance is caused by increased culling time, but the biggest part is increased drawing time. I don’t think the cards are simply overloaded with polygons, because I have a comparable OSG application with the same databases that renders this at about 60 Hz, using the same bus and CPU. But I don’t like OSG as much; I prefer Panda3D because development is so much faster.
The drop in performance is not related to the CPU or the bus, because CPU load stays well below 100%. I would expect that if performance drops because the polygon count increases, an additional graphics board in SLI would improve performance; that’s the whole point of SLI. So what I would like to know is whether other people have used SLI with a Panda3D application and encountered the same issue. If the limitation really is the drawing speed of the graphics board, SLI should almost double the framerate.

SLI will only double your framerate if the bottleneck is pixel fill. It could potentially come close to doubling it if the bottleneck were polygon count, but polygons are almost never the bottleneck on modern hardware. SLI will not affect your framerate at all if the bottleneck is Geom count, and Geom count is by far the most common bottleneck in modern applications.

You tell me that SLI is not improving your framerate. Ergo, I strongly suspect your bottleneck is Geom count.

Are you sure that the bottleneck you’re seeing is polygon count and not Geom count? What is the number of Geoms you’re rendering? (You won’t see the PC bus saturation show up in CPU utilization.)


You are right, it’s the Geom count: when the framerate drops, the number of Geoms rises above 700. So I guess I need a card with higher bandwidth, like the GTX 590. Thank you for your help. The application looks fantastic though, at 3 x 1680 x 1050. Panda3D is really great to work with.

Again, it’s not the card. The Geom limit is the PC bus; there is nothing you can buy to raise it. You actually have to reduce the Geom count, for instance by judicious use of flattenStrong().