How do I add another rendertarget?

Retracted.

Okay, actually this is the wrong approach.

I currently have one makeTextureBuffer that one set of shaders renders into …
… and a second set of shaders that displays the output.

I want to be able to read from and write into another texture, i.e. feed the shader the same texture
it is going to write to. Since it only ever reads from a coordinate before it writes to that same
coordinate, there shouldn't be issues in that regard … I believe.

But I can't seem to set it up.

I use this for the first:

		# Off-screen buffer that the first set of shaders renders into
		RenderBuffer = self.win.makeTextureBuffer("RenderBuffer", HPixels, VPixels)
		RenderTexture = RenderBuffer.getTexture()
		RenderTexture.setComponentType(Texture.TFloat)

		# Scene root and camera for the first pass
		RenderTarget = NodePath("RenderTarget")
		RenderCam = base.makeCamera(RenderBuffer, lens=base.camNode.getLens())
		RenderCam.reparentTo(RenderTarget)

...

		# The second set of shaders samples the result of the first pass
		SecondShaderSet.setTexture(RenderTexture)

And this actually works for some unknown reason.

But how do I add a second render texture into this mix, avoiding the FilterManager?
And how do I default that texture to (0, 0, 0, 1)?

I feel like adding another camera to RenderBuffer is silly and adds unnecessary bureaucracy.
There must be an easy way, but I can't figure out how to do it.
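
For reference, something along these lines is what I imagine the setup should look like (untested, and I'm only guessing at addRenderTexture and the aux bitplane; FirstShaderSet just stands for whatever node the first set of shaders is applied to):

from panda3d.core import Texture, GraphicsOutput

# Second texture the first shader set should read from and write to.
ScratchTexture = Texture("ScratchTexture")
ScratchTexture.setComponentType(Texture.TFloat)

# Attach it to the existing buffer as an additional render target
# (RTPAuxRgba0 = first auxiliary colour bitplane).
RenderBuffer.addRenderTexture(ScratchTexture,
                              GraphicsOutput.RTMBindOrCopy,
                              GraphicsOutput.RTPAuxRgba0)

# Default it to black once, but don't clear it every frame.
RenderBuffer.setClearActive(GraphicsOutput.RTPAuxRgba0, False)
ScratchTexture.setClearColor((0, 0, 0, 1))
ScratchTexture.clearImage()

# Feed it back into the first shader set.
FirstShaderSet.setShaderInput("ScratchTexture", ScratchTexture)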

TL;DR:

Have RenderTarget. First shaders render into it, second shaders display it.
Want a second RenderTarget. Want it to default to black, but not be cleared every frame.
Want the second RenderTarget to also be fed into the first set of shaders, which should both read from and render to it.

Thanks!

Don’t worry, I’m trying to solve this problem as well.

And so I just bumped into this panda3d.org/manual/index.ph … troduction
… and wonder why it is so overly complicated to add another texture …
… when it should be as simple as “setTexture(Texture, 1)”.

Suddenly there are weird TextureStages involved.
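
For context, the manual's approach boils down to something like this (from memory, so details may be off; SecondTexture stands in for whatever the extra texture is):

from panda3d.core import TextureStage

# Multitexturing goes through a TextureStage instead of a plain index.
extra_stage = TextureStage("extra_stage")
SecondShaderSet.setTexture(extra_stage, SecondTexture)
# With a custom shader, the stages show up as p3d_Texture0, p3d_Texture1, ...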

Yes, I am always ranting about high-level stuff seemingly being more complicated than low-level stuff.

I need my ranting, it keeps me going. :stuck_out_tongue:

I've found the FilterManager class in filtermanager.py and will dissect it …
… but why does it deal with FBOs?

It seems render-to-texture works without dealing with that?

That's not possible, due to how GPUs work. You could theoretically use imageLoadStore for that, but it leads to undefined behaviour.
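
(For completeness, the Panda-side setup for that route would look roughly like this, with FirstShaderSet as a placeholder node and assuming a GLSL shader that declares a matching uniform image2D; as said, the read/write behaviour is undefined:)

from panda3d.core import Texture

scratch = Texture("scratch")
scratch.setup2dTexture(HPixels, VPixels, Texture.TFloat, Texture.FRgba32)
scratch.setClearColor((0, 0, 0, 1))
scratch.clearImage()

# The shader declares "layout(rgba32f) uniform image2D scratch;"
# and uses imageLoad()/imageStore() on it.
FirstShaderSet.setShaderInput("scratch", scratch)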

What exactly are you trying to do?

Thank you for your reply, tobspr.

I want to use a second render target as a texture to check for something, and make the subsequent
output on screen depend on it. I would also want to modify that second target accordingly.

Hell, it can be anything, as long as I can read from and write to it. I know there are fancy new OpenGL buffers nowadays,
or have been for a while now, but I haven't looked into that.

I already have two passes and don't want to add a third one. I know I could do it that way,
but I also know there's no reason to, and it makes no sense to take a slower route.

When I realized I was having issues simply setting up another render target …
… (why is it so complicated? It works so easily for one, but doesn't for a second?) …
… I tried going for the single target instead, but that doesn't seem to work.

I've had two render targets in a single shader once, using the FilterManager.
I found its Python source, but that's not helpful.
There's tons of bureaucracy, and it's higher-level than I want.
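
From what I remember, the FilterManager route looked roughly like this (paraphrased from memory, with my_shader as a placeholder):

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture

manager = FilterManager(base.win, base.cam)
color_tex = Texture()
aux_tex = Texture()

# Renders the scene into both textures and returns a fullscreen quad
# whose shader can then combine them.
quad = manager.renderSceneInto(colortex=color_tex, auxtex=aux_tex)
quad.setShader(my_shader)
quad.setShaderInput("color", color_tex)
quad.setShaderInput("aux", aux_tex)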

The things most in my way are “higher-level abstractions” of actually more useful lower-level calls.

It’s nice to have helper classes, but I’d really prefer getting closer to the metal.

The time spent on understanding the higher-level stuff could just as well be used to understand
the lower-level stuff, for which there's tons of documentation out there already anyway. :stuck_out_tongue:

So …

The FirstPass renders to a texture, which gets passed to the SecondPass.
I want to add another texture the FirstPass can render to.

I see examples using FBOs and question the point, because they weren't needed before.
I'm supposed to add a camera to the FirstPass texture and don't see the point, because I already have a camera.

I really just want an easy way to add a render target and be able to modify it in the same pass.
Or at least just another render target, and I'll modify my approach.

rant :stuck_out_tongue:

You could use my RenderTarget (github.com/tobspr/RenderTarget); the code for your case would look like this:

target1 = RenderTarget("target1")
target1.add_color_texture()
target1.add_aux_texture() # Add a second texture
target1.prepare_offscreen_buffer()
target1.set_shader(fancy_shader)

target2 = RenderTarget("target2")
target2.add_color_texture()
target2.prepare_offscreen_buffer()
target2.make_main_target() # Display the target on screen
target2.set_shader(display_shader)
target2.set_shader_input("texture1", target1["color"])
target2.set_shader_input("texture2", target1["aux0"])

That way your “fancy_shader” can write to two textures, and your “display_shader” can use those two textures and display them.

I'm not sure what you are actually doing, so I'm mostly guessing; if you provide some further information, that'd probably help.

Wow, I actually forgot that you made this!
I’ll test this and dissect your code. :stuck_out_tongue:

I’m looking for ways to speed up my ray marcher.

This is a shot from a few days ago. I'm already at 40fps+ at 1600x1000,
but that's still far from what I want to reach. I have a few tricks up my sleeve that I know will work,
and I have a few additional ideas that I'm not sure I can implement … but that's the fun part. :slight_smile:

Your description and the code sound fine! I'm on a somewhat older Panda 1.10,
and will report back with results by tomorrow at the latest. :slight_smile:

Thank you for your help!

Can I just alter your code so it fits my needs more easily?
Like replacing the card with my own plane.egg, positioning and flattening it?

Reminds me that I need to recheck whether it makes a difference performance-wise…

Why do you want to have your own plane geometry?

If you are raymarching, most of the time is spent in your shader. I believe you are over-optimizing, while the main performance problem is your shader; the FBO creation and setup barely affects performance at all.
If you enable “pstats-gpu-timing #t”, connect to PStats, and select your GPU at the top, you should see that most of the time is spent in your shader. So your first step should probably be to optimize your shader, not the render target setup.
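
For example (the Prc setting has to be loaded before the window is opened, I believe):

from panda3d.core import loadPrcFileData, PStatClient

# Enable per-call GPU timing queries, then connect to a running PStats server.
loadPrcFileData("", "pstats-gpu-timing #t")
PStatClient.connect()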

The RenderTarget already uses a single fullscreen quad and sets various options on it to make sure no performance problems occur (like disabling culling on the camera and NodePath, etc.). So I'm not sure what you expect to gain by using your own plane. If you really want to try it, you can (but I don't recommend it!); be sure to update to the latest commit:

target.remove_quad()
my_geometry.reparent_to(target.get_node())

Oh oh oh nono!

I was switching from the card to the plane.egg and noticed a difference,
but it could have been something else, so I will check again.
It's only a minor change, so I won't spend hours on it anyway. :slight_smile:

I try everything eventually, just to see what happens and how things change!
I haven't yet fully grasped how things work. I used this as a bedtime story …
fgiesen.wordpress.com/2011/07/0 … 011-index/
… which is written by ryg from FarbRausch.

Thanks to this, I've changed the whole thing from one quad to GL_Points with one vertex per pixel,
adaptable to one vertex for several pixels by adjusting the point size. I wanted to see what happens! :slight_smile:
I had a weird idea, but for some reason I get wrong coordinates when I march in the vertex shader.
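
A stripped-down sketch of that kind of point grid (names made up, not my exact code):

from panda3d.core import (Geom, GeomNode, GeomPoints, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter, NodePath)

# One point per pixel, laid out on a plane facing the camera.
vdata = GeomVertexData("points", GeomVertexFormat.getV3(), Geom.UHStatic)
writer = GeomVertexWriter(vdata, "vertex")
points = GeomPoints(Geom.UHStatic)

for y in range(VPixels):
    for x in range(HPixels):
        writer.addData3(2.0 * x / HPixels - 1.0, 0, 2.0 * y / VPixels - 1.0)
        points.addVertex(y * HPixels + x)

geom = Geom(vdata)
geom.addPrimitive(points)
node = GeomNode("pointgrid")
node.addGeom(geom)
grid = NodePath(node)
grid.setRenderModeThickness(1)  # raise this so one vertex covers several pixels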

Yes, absolutely, most of the time is spent in my shaders,
which is exactly why I need the render targets!

I haven't looked at PStats yet, but I will soonish, I guess.

I'm also totally looking forward to learning GPU assembly,
if that's still a thing and still supported!

I’m just weird like that.

Thanks for the update, I will check.
You did not answer my question though. :stuck_out_tongue: