water reflections without a PlaneNode?

I’ve seen examples of water reflections using PlaneNode.
They use a calculation which uses a PlaneNode method, getReflectionMat().
[water reflection] (rdb’s post)

I don’t want to use PlaneNode, though my mesh is still flat.

What should I do instead of this?:

watercamera.setMat(base.cam.getMat(render) * waterplane.getReflectionMat())

Why would you want to avoid using a PlaneNode?

Actually, I just don’t understand what the PlaneNode does, now that I think about it.

It exists so you can place a Plane in the scene graph, so that its coordinate space becomes defined and meaningful.

That doesn’t tell me much.
There’s already a plane created with CardMaker. What does PlaneNode actually do that that plane can’t? (I know it has a method that is used in the calculation, but I don’t know what it’s for, so I can’t tell whether it can be replaced.)
Say you have a flat mesh for your ocean, but you don’t want water right below your island: you make a grid and remove the faces right below the island. I’d like to know what PlaneNode is for so I know what to do here.

A Plane is just a set of numbers, that have no relationship to the scene. What do the numbers represent? What are they relative to? In order to make that question meaningful, you have to put it somewhere in the scene graph. That’s what PlaneNode does.

I don’t think you understand my question.
What is it for? There’s already a visible plane made with CardMaker. The only place I see the PlaneNode NodePath used is here

tmpnp = NodePath('StateInitializer')

and here

self.wcamera.setMat(base.cam.getMat(render) * self.wplane.getReflectionMat())

I don’t understand what either of these does.

BTW, I’m not sure what the “StateInitializer” NodePath does either


I’ve already explained twice what it’s for. The Plane has to be defined in some sort of coordinate system. A Plane is not a scene graph object, so it has no meaningful coordinate system. That’s why PlaneNode is used to put it into the scene.

A Plane is comparable to a Point3 object, in a way; it holds a mathematical description of a point, but it doesn’t specify which coordinate space the point is in, so you won’t know whether that point is relative to the camera, render, the player node, or any other node.
But when you apply it to a node, it suddenly makes sense - for instance, when you pass it to nodePath.setPos, you’re telling Panda that this point is relative to the parent of that node.

To elaborate, your Plane object holds the normal vector and origin of your water plane. But what are this normal vector and origin point relative to, to which node? The only way you can answer that in a meaningful way is by allowing one to put a Plane into the scene graph, which is what PlaneNode is for. For instance, if you reparent your PlaneNode to render, you can say “OK, so the normal vector and origin points of this plane are relative to the coordinate space of render.”

As for your StateInitializer question: well, when the other camera renders the scene for the reflection, it needs to render it a bit differently than normal. For one, instead of rendering the front faces, it should render the back faces, since it is rendering a reflection.
Secondly, it only needs to render geometry that is above the water plane, and not below. This is done by setting a clip plane attribute.
(NB: this is especially where the PlaneNode comes in; Panda requires a PlaneNode here because it needs to know in which coordinate space your Plane is. So, to reiterate: you cannot avoid using PlaneNode; there is no reason to, and it would not make sense to.)

All of this information (clip planes, reverse culling) is held by a RenderState object, which in turn holds a number of RenderAttrib objects for individual attributes (ColorAttrib, ClipPlaneAttrib, TextureAttrib, etc).
The RenderState object is held under the PandaNode, and its attributes are what actually gets modified when you use setColor or setClipPlane or setAttrib.

Anyway, so to render the same scene differently using a different camera (with these different properties), setInitialState is used. This means that all of the nodes rendered by that camera will default to having the attributes specified by that RenderState. This ensures that when the reflection camera renders the scene, it is rendered using these different attributes (but you can still override individual attributes on a per-node level).

Now, you could create a RenderState object and add the appropriate RenderAttrib objects to make the desired state, and pass that to setInitialState. But the RenderState API is quite low-level and not easy to use. So we need a high-level wrapper around this RenderState that allows you to easily specify these attributes, and then get the underlying RenderState to pass to setInitialState.
But we already have this high-level wrapper, which is NodePath. So, we create a new node called StateInitializer (the name is unimportant), and using the NodePath interface, we set these attributes like the clip plane and CullFaceAttrib.
Then, get the underlying state object using getState(), and pass that to setInitialState.

As for why the CardMaker does not suffice: CardMaker exists to make two triangles to describe a visible, renderable quad. On the other hand, the Plane object is like a mathematical formula to allow you to calculate the reflection of a transformation (using getReflectionMat), which you need for calculating the proper matrix for the reflection camera (see next paragraph).

As for what the setMat call does: in order to be a reflection camera, it needs to be on the opposite side of the water plane; if the normal camera is two metres above the sea level, then the reflection camera is at the same point but two metres below the sea level.
Multiplying the transformation matrix of the camera by the reflection matrix of the plane mirrors it over the water plane, resulting in a new transformation matrix that can be applied to the reflection camera.

I wasn’t asking what PlaneNode is for given that there is a Plane object already; I was asking why there is a Plane/PlaneNode at all when there’s already geometry for the water. I understood it was used for some calculations, but I was wondering whether the visible plane geometry could be used for the calculation instead.
Well, you explained that now too, thanks.

A quad isn’t a plane. A plane is a normal vector and a point, basically - it is infinite and as such divides 3D space into two subsets of that 3D space. It’s purely mathematical.
A quad, on the other hand, is a set of two triangles. It has no further mathematical meaning, it is only useful for showing on a screen. It does not hold the right data for one to do linear algebraic calculations with.