I'll try to explain:
(1)
This was my first experience with shaders, and I worked by trial and error at first. Right at the beginning I had problems with passing multiple textures to a shader, so I tried several things, among them setting texture priorities. But the problem turned out to be somewhere else.
The priorities are still left over from this trial-and-error phase, and they are useless here. What matters is the order in which the texture stages are added. This manual page gives some info on what priorities are actually good for:
http://panda3d.org/manual/index.php/Texture_Order
(2)
I admit it would have been nicer to add the textures in a consistent order, e.g. in the order of the elevation ranges they are used for.
Please have a look at the terrain shader code:
...
in uniform sampler2D tex_0 : TEXUNIT0,
in uniform sampler2D tex_1 : TEXUNIT1,
in uniform sampler2D tex_2 : TEXUNIT2,
out float4 o_color : COLOR )
{
float4 dirtSample = tex2D( tex_0, l_texcoord0 );
float4 fungusSample = tex2D( tex_1, l_texcoord0 );
float4 grassSample = tex2D( tex_2, l_texcoord0 );
// texture blending
o_color = dirtSample;
o_color = o_color * l_blend.y + ( 1.0 - l_blend.y ) * grassSample;
o_color = o_color * l_blend.x + ( 1.0 - l_blend.x ) * fungusSample;
...
The first texture (“dirt.png”) is used in the shader as variable “tex_0” (bound to TEXUNIT0). When I sample this texture I use an explicit name again: “dirtSample”. And so on for the other two textures.
Then I blend the three samples, and since I used explicit names I didn't realize that the textures are not in the same order as the elevation ranges.
Hmm… by the way, now that I see the code again, I think using lerp( ) would be cleaner (and possibly faster) than blending by hand.
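To illustrate the lerp( ) remark: the hand-written blend is just lerp( ) with the arguments rearranged, since lerp(a, b, t) = a * (1 - t) + b * t. A small plain-Python sketch with made-up sample values (not Panda3D or shader code, just the arithmetic):

```python
# lerp() as defined in Cg: blend from a to b by weight t
def lerp(a, b, t):
    return a * (1.0 - t) + b * t

# made-up scalar values standing in for the texture samples and blend weights
dirt, fungus, grass = 0.2, 0.5, 0.8
blend_x, blend_y = 0.3, 0.7

# blending by hand, as in the shader
color = dirt
color = color * blend_y + (1.0 - blend_y) * grass
color = color * blend_x + (1.0 - blend_x) * fungus

# the same blend written with lerp()
color2 = dirt
color2 = lerp(grass, color2, blend_y)
color2 = lerp(fungus, color2, blend_x)

print(color, color2)  # both forms give the same result
```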
(3)
Now this is a bit tricky. At first the water reflections had some flaws, among them the fact that parts of the terrain below the water surface were reflected too.
My first approach to this problem was to use an additional clip plane on the camera that renders the reflection. This would have worked if I hadn't used shaders for the terrain. ynij_jo found out that when using shaders, clipping seems to happen at geom level, not at pixel level.
So if I wanted to clip away the underwater parts of the terrain I would have to divide the mesh into two parts, one over-water and one under-water. Not nice.
The work-around is in the shaders again. There are two shaders for the terrain: “terrainNormal.sha” and “terrainClipped.sha”. The only difference is these lines in “terrainClipped.sha”:
// clipping
if ( l_mpos.z < 36.0f ) discard;
If the interpolated position of a pixel is below z = 36 then the pixel is discarded, i.e. not rendered. (Hmm… again something that should be fixed: hard-coding the water level in the shader as 36.0f is bad.)
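By the way, one way to avoid the hard-coded water level would be to pass it in as a shader input. A sketch, assuming the application calls setShaderInput( 'waterlevel', 36.0 ) on the terrain NodePath; Panda3D then exposes the input to the Cg shader as k_waterlevel:

```
// in the shader parameter list:
in uniform float4 k_waterlevel,
...
// clipping against the water level passed in from Python
if ( l_mpos.z < k_waterlevel.x ) discard;
```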
Fine so far, but now I have another problem: the main camera has to render the terrain using “terrainNormal.sha”, and the reflection camera has to render the terrain using “terrainClipped.sha”. This is what the tags are for.
If you look at the part of the demo where the terrain NodePath is created, you will notice that I assign no shader to this NodePath, as I did for grassNP, skyNP and so on. Instead I assign two tags, “normal” and “clipped”. Whenever a camera sees the terrain NodePath, it applies the render state that it has associated with these tags. In _setupCamera( ) I equip the two cameras with the proper render states: the main camera (cam0) gets a render state containing the shader “terrainNormal.sha”, associated with the tag “normal”, and the reflection camera (cam1) gets another render state containing the shader “terrainClipped.sha”, associated with the tag “clipped”.
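In Panda3D code the mechanism roughly looks like this. A sketch from memory, not the demo's exact code; the dummy-NodePath trick is just one way to build a RenderState that contains a shader:

```
# the terrain carries one tag per camera-specific state
terrainNP.setTag( 'normal', 'true' )
terrainNP.setTag( 'clipped', 'true' )

# main camera: apply the normal shader to every node tagged 'normal'
dummy = NodePath( 'dummy' )
dummy.setShader( loader.loadShader( 'terrainNormal.sha' ) )
cam0.node().setTagStateKey( 'normal' )
cam0.node().setTagState( 'true', dummy.getState() )

# reflection camera: apply the clipped shader to nodes tagged 'clipped'
dummy2 = NodePath( 'dummy' )
dummy2.setShader( loader.loadShader( 'terrainClipped.sha' ) )
cam1.node().setTagStateKey( 'clipped' )
cam1.node().setTagState( 'true', dummy2.getState() )
```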
A very powerful mechanism, in my opinion. This manual page explains it too:
http://panda3d.org/manual/index.php/Multi-Pass_Rendering
I hope I was able to help with understanding the code.
enn0x