Instancing-based flocking

Hi all,

Since [url=http://www.panda3d.org/blog/?p=44]hardware-based instancing[/url] has been introduced in Panda3D, I was wondering whether I could drive instancing from a flocking simulation. I was trying to use the flocking from [url=https://discourse.panda3d.org/viewtopic.php?t=7192]PandAI[/url], and I was wondering about the best way to get something like this done.

My idea right now is to compute each boid's direction vector and orientation with the flocking algorithm, apply the resulting transformation in the shader, and use instancing to render large flocks without much noticeable delay.

I have a few questions: Is this feasible? Is there a better way to achieve the same result? And would this be a useful feature?

Hm, very interesting approach.

I think it may very well be feasible, but you'll need the latest CVS version of Panda3D to achieve this (for its support for passing arrays to shaders). Also, be prepared for some matrix math in the shader.
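To sketch what I mean, the shader side could look roughly like this (untested; the input name flock_mats, the array size, and NUM_BOIDS are placeholders, and the INSTANCEID semantic needs a Cg profile that supports it):

[code]
from panda3d.core import Shader

FLOCK_SHADER = """//Cg
void vshader(in float4 vtx_position : POSITION,
             in int l_id : INSTANCEID,
             uniform float4x4 mat_modelproj,
             uniform float4x4 k_flock_mats[256],
             out float4 l_position : POSITION)
{
    // Per-instance world transform first, then model-to-clip.
    l_position = mul(mat_modelproj, mul(k_flock_mats[l_id], vtx_position));
}

void fshader(out float4 o_color : COLOR)
{
    o_color = float4(1.0, 1.0, 1.0, 1.0);
}
"""

model = loader.loadModel("models/ralph")
model.reparentTo(render)
model.setShader(Shader.make(FLOCK_SHADER))
model.setInstanceCount(NUM_BOIDS)  # hardware instancing
[/code]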

Actually, I did it this way:
I used a dummy NodePath (many nodes, but no models attached) from which I got the matrices, and another NodePath (a single node with a model attached) parented to render. This way nothing needs to be changed in PandAI.
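The per-frame update then looks roughly like this (untested sketch; dummies is the parent of the PandAI-driven nodes, model is the instanced NodePath, and this assumes a build with PTA_LMatrix4f array inputs):

[code]
from panda3d.core import PTA_LMatrix4f

def update_flock(task):
    # Gather the world matrices PandAI computed for the dummy nodes
    # and hand them to the shader as a single array input.
    mats = PTA_LMatrix4f()
    for dummy in dummies.getChildren():
        mats.pushBack(dummy.getNetTransform().getMat())
    model.setShaderInput("flock_mats", mats)
    return task.cont

taskMgr.add(update_flock, "update-flock")
[/code]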

I have a question, though:
Why is mat_modelview (the shader parameter) different from transpose(nodePath.getNetTransform().getMat())?
To get things right I have to do:

out_pos = mat_modelproj * transpose(k.getNetTransform().getMat()) * vertex

and I don’t understand why this is different from:

out_pos = mat_projection * transpose(k.getNetTransform().getMat()) * vertex

since the modelview matrix of k (printed out from the app) is the identity.

By the way, it seems that on my desktop I can't pass an array bigger than 250-300 4x4 matrices. That should generally be enough, but if we want 1k Ralphs, we'd need to send the data through a texture.
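Something along these lines might work for the texture route (untested sketch; note that Panda stores ram images with the components reversed, so the shader fetch may need to compensate):

[code]
import struct
from panda3d.core import Texture

def matrices_to_texture(mats):
    # Four RGBA32-float texels per row hold the sixteen floats
    # of one 4x4 matrix; one row of texels per boid.
    tex = Texture("flock-matrices")
    tex.setup2dTexture(4, len(mats), Texture.TFloat, Texture.FRgba32)
    tex.setMagfilter(Texture.FTNearest)
    tex.setMinfilter(Texture.FTNearest)
    data = bytearray()
    for mat in mats:
        for row in range(4):
            for col in range(4):
                data += struct.pack("<f", mat.getCell(row, col))
    tex.setRamImage(bytes(data))
    return tex
[/code]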

I don't quite understand your question. mat_modelview is shorthand for trans_model_to_apiview, which translates to get_external_transform()->get_mat() * _cs_transform->get_mat() in the GSG.

mat_projection is the transform from apiview space to apiclip space. So mat_modelproj = mat_modelview * mat_projection, composed in Panda's row-vector convention.
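In the shader's column-vector convention the composition reads in the opposite order, so these two lines should give the same result:

position_clip = mul(mat_modelproj, vertex);
position_clip = mul(mat_projection, mul(mat_modelview, vertex));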

Why are you transposing the net transform of the node and right-multiplying it with the projection matrix? That doesn't make any sense to me.

I am right-multiplying because:
position_clip = mul(mul(mat_projection, mat_modelview), vertex); or, equivalently,
position_clip = mul(vertex, mul(tps_modelview, tps_projection));

The modelview and projection matrices in Panda3D are transposed when passed to the shader. I should probably get rid of the transpose too, by passing the matrix from the main app column-wise instead of row-wise.
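To illustrate the convention difference with a tiny standalone check (translateMat puts the translation in the bottom row because Panda uses row vectors):

[code]
from panda3d.core import LMatrix4f, LVecBase4f

m = LMatrix4f.translateMat(1, 2, 3)
v = LVecBase4f(0, 0, 0, 1)

# Panda's xform computes the row-vector product v * M:
print(m.xform(v))  # prints LVecBase4f(1, 2, 3, 1)

# Cg's mul(M, v) computes M * v with v as a column vector instead,
# which is why a transpose has to happen somewhere along the way.
[/code]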

My question basically is:
How do I get mat_modelview from a node in Panda3D?
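For what it's worth, the scene graph can give you the relative transform directly; this should be the modelview of a node in Panda's own Z-up coordinates (mat_modelview additionally folds in the GSG's coordinate-system conversion for the rendering API):

[code]
# Net transform of "node" relative to the camera, i.e. the
# model-to-view matrix in Panda coordinates.
modelview = node.getMat(base.cam)
[/code]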

In case anyone was wondering whether we got the instancing-based flocking working: please do take a look at it on our website at

http://www.etc.cmu.edu/projects/pandase/gallery.html