Lighting limits with p3d_LightSourceParameters

Hi, I was playing around with custom shader-based lighting in Panda3D and was testing / pushing the shader generator to its limits. To be precise, I was mostly experimenting with the p3d_LightSourceParameters uniform structure. With my hardware, I can push the structure to a maximum of 19 lights, which is nice, but I was hoping to push that number higher, maybe to something like 32 or 50 lights rendered actively at once.

I have looked at techniques like UBOs and SSBOs but don’t really know where to start (I’m not very experienced with them). Before I dig into these techniques, I would like to know a few things.

  1. In the next version of Panda3D (1.11), are there going to be new light shader inputs like p3d_SpotlightParameters (for example, every light type getting its own structure type) so that you can separate the lights into separate structures? If so, it might be worth waiting for these to be added instead of trying to create something unnecessarily complex that does the same job.

  2. Which of UBOs and SSBOs is the most efficient / fastest at updating (for example, I want to change a light’s position and have it updated in the shader)?

  3. I heard you can directly input light nodes into a shader through setShaderInput('light', lightnp), but I’m not really sure how this works. If someone could explain what this does inside a GLSL shader, that would be nice.

  4. Is it even worth having more lights actively rendered in the scene? I have currently created a priority system in Panda3D where lights are automatically disabled if they are not in the camera’s view.

From a little research, I know that SSBOs require newer hardware than UBOs, but that isn’t really too much of a problem for me. I am just looking for efficiency and ease of use.

Another problem with the current p3d_LightSourceParameters input is that point light shadows aren’t supported. So having something to solve this small issue would also be nice.

Any help is appreciated!

I can’t speak to all of your questions, but here are a few answers, as best I know them:

Unless some light-specific code exists, I think that this is simply an instance of the fact that Panda allows one to have a NodePath–any NodePath–as a shader-input. (Lights being, at their core, nodes like everything else.)

This then allows one to get various pieces of data about the NodePath in question–its world-space position, or its model-space position, etc.

So, for example, let’s say that you have two NodePaths, named “myNPShaded” and “myNPInput”, and a shader applied to “myNPShaded” that is intended to make use of the model-space position of “myNPInput”.

You could then do something like this:

In Python:

myNPShaded.setShaderInput("aNodePath", myNPInput)

In GLSL:

// Model-space position of the input "aNodePath"
uniform vec4 mspos_aNodePath; // Note the name here: "mspos_" + <name of input>

You can see more about this on the manual page that lists Cg inputs–they need a little translation into GLSL, but they should still work in that language.

I mean, that depends on what you want to do with them–whether your project calls for the increased number of lights.

It’s not really a one-size-fits all thing, I feel: some projects can use just a single light (or no lights!), while others may call for a great many!


can these node inputs be in the form of structures?

for example:

uniform struct SpotLights{
   vec4 color;
   vec3 position;
   vec3 direction;
   ...
} spotlights[];

If so, how would these GLSL translations flow into this structure?

Coming back to this: I want as much flexibility as possible, and I think a flexible number of active lights is in the range of 32 to 50, or possibly even more if it’s doable.

I haven’t done it myself, but based on the list of GLSL inputs, it looks like something like that should work.

And searching the forum, I found this post:

(It looks like the preview loses some formatting, so click through for a readable version.)

The post is old, so things may have changed, but it looks like it may have the information that you’re looking for.
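For illustration, filling such a struct array might look something like the sketch below. This rests on an assumption on my part: that Panda matches uniform names, including struct members like "spotlights[0].color", against shader-input names. The names are made up, so treat it as untested:

```python
def struct_member_inputs(array_name, lights, members):
    """Build (shader-input-name, value) pairs such as
    ('spotlights[0].color', ...) for an array-of-structs uniform.
    'lights' is a list of dicts keyed by member name."""
    pairs = []
    for i, light in enumerate(lights):
        for member in members:
            pairs.append(("%s[%d].%s" % (array_name, i, member), light[member]))
    return pairs

# Panda3D side (untested sketch):
#   for name, value in struct_member_inputs("spotlights", lights,
#                                           ["color", "position", "direction"]):
#       myNPShaded.setShaderInput(name, value)
```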

If that’s the number of lights that you think that your project calls for, then it seems to me that it’s worth having that number of lights!


That’d be a good feature to add, but they’re not in place at the moment.

UBOs are a smidge faster, but they’re not universally well-supported by Panda at the moment.

This works mostly as you expect, except that there’s no array allowed since you can’t currently pass an array of NodePaths to a shader input.

This really depends on your scene’s requirements. But the more lights, the higher the pressure on the GPU will be, because the fragment shader will become more complex. This is especially a problem at higher screen resolutions.


Hmm, does this mean that SSBOs are better supported? If I’m correct, RenderPipeline uses SSBOs.
I’ve looked a bit into the manual on ShaderBuffer() and am a bit confused about how you update and write data to send to the shader. Do you feed values into a list and then convert them to bytes to put into the ShaderBuffer object? Is there a clean way of updating the ShaderBuffer that doesn’t require an entire rewrite of the data?

If I can’t pass an array of light nodes, it would require me either to make my own sub-shader generator that dynamically adds or removes light uniform inputs in my render pipeline, or to make a custom light structure and feed raw values into the shader rather than their NodePaths. At this point, I think alternatives like SSBOs and UBOs would be better.

That is true; games should really focus on how many active lights are rendered on screen at once. The fragment shader will have to work harder the bigger the resolution is, since more fragments being shaded means more light calculations.
I do, however, want a comfortable number of lights being rendered at once. I have already implemented some CPU-side culling techniques for lights, where shadow quality is reduced the further away the camera is, and the shadow buffer is deactivated beyond a certain distance so that shadows stop being updated. This means the fragment shader is under less load and could potentially render more lights in the scene, if more lights could be supplied to the shader.
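The LOD policy is roughly of this shape (a simplified sketch; the thresholds and resolutions here are placeholders, not my actual values):

```python
def shadow_lod(distance, near=20.0, far=60.0, max_res=1024, min_res=128):
    """Choose a shadow-map resolution from the camera distance.
    Returns None beyond 'far', meaning: deactivate the shadow buffer."""
    if distance >= far:
        return None
    if distance <= near:
        return max_res
    t = (distance - near) / (far - near)   # 0 at 'near', 1 at 'far'
    return max(min_res, int(max_res + t * (min_res - max_res)))
```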

Hi, I’m kind of stuck on SSBOs. How do I update the data inside an SSBO efficiently and re-send it to the GPU (I’m updating light data like positions)? Is there a way to do this efficiently?

I think you need to remember:

You need to pass a new ShaderBuffer object to the shader every frame if you change it. There are practically no restrictions on the data.

Alternatively, you can use a PTA_LMatrix4f array or another similar type by passing it to the shader input once. When updating it, you don’t need to worry about passing it to the shader again on a new frame.

However, the amount of data it can hold is limited in terms of components.

Won’t creating and updating a new shader every frame be very slow after a certain amount of data is created?

So far I have experimented with buffer textures (1D textures), and they look very promising. I have positions, directions, exponents, attenuations, colors and light FOVs working to some degree. However, for the positions and directions, I realize you have to convert them to view space. I’m currently doing this on the GPU side in GLSL, but maybe I should switch to the CPU side. How do I get the view-space coordinates of a light’s position and direction?

The other big problem I’ve encountered is implementing shadows: how can I support more sampler2DShadow inputs? The best thing I can think of right now is storing, in another texture, an integer index into an array of sampler2DShadows inside the shader. I have looked into sampler2DArrayShadow, but it also presents some issues. The first is: how do I change the buffer output location of a light’s shadow map so that it is stored into a layer of the array? The second is: each light may want its own shadow-map resolution, but these shadow arrays can only store a single resolution.

Are there ways to increase the number of shadow buffers that can be input into a shader?

I didn’t say that you need to generate a shader every frame; the update is done by calling a method:

NodePath.setShaderInput()

You probably mean the shadow map; you don’t have to use different buffers for each light source. You can generate your own shadow map in which you sum up all the shadows.

Oops, I did mean generating a new ShaderBuffer object every frame; that was a typo.

Are you referring to a shadow atlas? I’m not very familiar with them. I don’t know how to store a light’s shadow buffer into a single shared texture. Do you have an example?

Yes, you need to create a new ShaderBuffer every frame and pass it to the shader using the .setShaderInput() method. There’s nothing wrong with that; it’s unavoidable even in plain OpenGL, and a well-trodden path.
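The repacking pass itself is cheap if done in a single struct.pack-style sweep. A minimal sketch, where the per-light layout, the block name, and the GLSL fragment are my own illustrative assumptions:

```python
import struct

def pack_lights(lights):
    """Pack per-light data as vec4 position + vec4 color
    (std430-friendly: 32 bytes per light, 16-byte aligned)."""
    out = bytearray()
    for position, color in lights:
        out += struct.pack("4f", *position)
        out += struct.pack("4f", *color)
    return bytes(out)

# Each frame (Panda3D side, sketch):
#   buf = ShaderBuffer("LightBlock", pack_lights(lights), GeomEnums.UH_dynamic)
#   render.setShaderInput("LightBlock", buf)
#
# Matching GLSL (sketch):
#   struct Light { vec4 position; vec4 color; };
#   layout(std430) buffer LightBlock { Light lights[]; };
```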

I have not delved into the shadow implementation myself, so I do not have an example. I worked through lighting with these materials and put the shadows aside.

Do you reckon ShaderBuffers are a better alternative to 1D textures? When I want to update a light’s position, it requires me to iterate over all the lights in the scene and pack them into a ShaderBuffer again. And the lights don’t just have a direction and position to repack: the exponent and cosine cutoff values, which don’t need updating, would have to be repacked too. With a texture, you can just change the pixel values and not repack the entire texture; it updates automatically in the shader (if you keep the image in RAM).

I have no idea which is more appropriate, since it can only be found out experimentally.

I think for now the texture-based approach looks the most promising, so I will stick with that. However, the problem now is with the shadows: what can I do to increase the number of renderable shadows? What makes shadows harder to implement is that I have a shadow LOD system that has to be brought into the equation. A shadow atlas could be a splendid solution, but may not be a good choice, as each light’s shadow-map size changes dynamically. This also rules out shadow-map texture arrays, as each one can only store a single resolution.

But doesn’t the Cascaded Shadow Maps (CSM) approach eliminate all your resolution issues?


Just having a read through the documentation you provided: the example uses sampler2DArray and resolution upscaling, so that the shadow map, even when it’s at a lower resolution than the 3D texture needs, is upscaled to the right resolution. I’m still not sure which way I should go about this. Should I upscale the shadow map, or pad it with fill pixels?

This looks like a promising approach. How do I render a shadow map to a sampler2DArray (in Panda3D, of course)?

This has already been done many times; if you use the search, you will find examples and code.


I’m now having some trouble getting the shadow-map buffer from a light (via lightnp.node().getShadowBuffer(base.win.getGsg())). It keeps returning None whatever I do. I’ve tried forcing some frames to render with base.graphicsEngine.renderFrame(), but it doesn’t work.

Is there a way to force the light’s shadow buffer to render, or should I create my own? I’m trying to create my own right now, but I’m having trouble: I keep getting a black window.

I’m also having some trouble transforming the light position into view space; I don’t know the calculation for that. I’m currently computing it in GLSL:

vec3 light_pos = (p3d_ViewMatrix * vec4(world_light_position.xyz, 1.0)).xyz - vertex_view_position;

I would like to do this on the CPU side, but don’t know how to get the view matrix out of the camera.
According to this part of the documentation: Shaders and Coordinate Spaces — Panda3D Manual
How can I calculate the light’s view-space position, like the p3d_LightSourceParameters.position vector? (This also applies to the direction vector.)
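For the record, a CPU-side sketch of what I’m attempting. Hedged: getPos(base.cam) should give the light’s position relative to the camera in Panda’s Z-up frame, but whether the extra Z-up-to-Y-up swizzle below is needed depends on which convention p3d_ViewMatrix uses in the shader, which I still need to verify:

```python
def zup_to_gl_view(x, y, z):
    """Convert Panda's Z-up camera space (forward +Y, up +Z)
    to GL-style view space (forward -Z, up +Y)."""
    return (x, z, -y)

# Panda3D side (sketch, untested; names are made up):
#   p = lightnp.getPos(base.cam)        # light position relative to the camera
#   vx, vy, vz = zup_to_gl_view(p.x, p.y, p.z)
#   myNPShaded.setShaderInput("light_view_pos", (vx, vy, vz, 1.0))
#
# A direction vector gets the same swizzle, but with w = 0 and no
# translation, e.g. from lightnp.getQuat(base.cam).getForward().
```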