Manipulators / gizmos

Hey Craig,

Thanks for your feedback. The last axis used will stay highlighted so you can middle-mouse click and drag anywhere in the viewport and continue transforming. If you want to deselect an axis, press the key corresponding to the current gizmo again to select the default axis. I’ve tried to copy the behavior of Maya’s gizmos as much as possible.

Perhaps I don’t quite understand the other issues mentioned by Nemesis#13 as I was fairly sure I fixed those. Can you indulge me and give specific steps to repro? Thanks!

I see. A feature, not a bug.

Maybe I misunderstood his report. Anyway, I noticed that when scaling, after each scale along an axis, the widget snaps back to unit size (the scaled item is still scaled properly, though). This might be the intended behavior, though I think it’s a bit odd. I’m not a 3D modeler, though, so I don’t know how that stuff normally works.

Also, my middle mouse button is mapped to something globally (and I don’t have a middle click on my laptop); perhaps right click could be set to do the same thing as middle click (maybe requiring a modifier key if you really want).

Bug: + and - still rescale the widgets even when none are visible (so I can press - a few times with nothing showing, and the next time I show one, it’s tiny).

If I hit - a bunch, the widgets turn kind of inside out (the translation and scaling ones, anyway), the selection of what I clicked gets flaky, and the dragging acts strangely. Perhaps you should enforce a minimum (and maybe a maximum) size.

Where the widgets show up when I select multiple items seems random (if one of the selected items already had the widget, it stays there; otherwise it moves to a seemingly random one). Perhaps the widgets should go at the center, or the average position, or something like that?
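
Something along these lines is what I had in mind (just a rough sketch; the function and the selection list are made up):

    from panda3d.core import Point3

    # rough sketch: place the widget at the average position of the selection
    def get_widget_pos(selected_nps, reference_np):
        total = Point3(0, 0, 0)
        for np in selected_nps:
            total += np.getPos(reference_np)
        return total / float(len(selected_nps))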

Anyway, very nice.

kurohyou,
Thanks for this. Excellent work and very useful to me. I got the latest version; am I reading your license correctly, that the code can be freely used in any type of project as long as the credit file is kept?

Great work on your site, too, BTW. By coincidence, I had been learning about Unity recently and am very interested in what you are doing.

Thanks! Glad to hear you like it :smiley:

Yes, it should be a pretty liberal license (BSD - from memory, same as Panda) so long as you keep the file alongside the code, so feel free to use it in commercial projects. Don’t hesitate to email me about bugs either :slight_smile:

Thanks for the site feedback too. I should be putting up some more Panda related shenanigans shortly, specifically related to dynamically attaching scripts to node paths in a Unity-ish way.

I’ve been trying to make this work in my project. I’m not using a default panda window, so I’ve been stripping out all references to render, render2d, base.mouseWatcherNode, base.camera and such.

Anyway, I’m getting close to having it working. This is really going to help my application. Thanks!

If my changes end up being pretty general fixes, I’ll setup a public repository for it and upload my fixes there.

Hi Craig,

I’m interested to know what changes you are making, so please let me know if/when you post your code. From memory, the latest release should allow you to supply your own root and camera nodes, as I had what you’re describing in mind when I designed them initially. That being said, I haven’t tried using them outside of a base.render / base.cam environment.

Hi kurohyou,

Recently I started implementing a rotation gizmo for my own project, so I was interested in comparing it to yours. To hide the parts of the axis rings behind the rotation sphere, I clip them with a PlaneNode parented to the camera.
The way you do it - applying a billboard effect to half-circle arcs - is also very nice, but I see some problems.
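
For reference, the basic setup looks roughly like this (a rough sketch with made-up names, not my actual code; gizmo_dist stands for the camera-space distance to the gizmo center and would need to be updated whenever the camera or the gizmo moves):

    from panda3d.core import PlaneNode, Plane, Vec3, Point3

    # a clip plane parented to the camera, with its normal pointing back towards
    # the camera, so only the front halves of the rings are rendered
    clip_plane = PlaneNode("ring_clip_plane")
    clip_plane.setPlane(Plane(Vec3(0., -1., 0.), Point3(0., gizmo_dist, 0.)))
    clip_plane_np = base.camera.attachNewNode(clip_plane)

    # clip only the ring geometry, not the rest of the scene
    rings_np.setClipPlane(clip_plane_np)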

As it seems that only up-vectors perpendicular to the Y-axis lead to the desired billboard effect, you are manually adjusting the transformation of the lookAt object (cameraHelper) to correctly orient the Y-axis ring, but that kinda defeats the purpose of an effect that is supposed to perform all of the necessary math implicitly.
To me, a better solution is the following:

  • treat the Y-axis ring as a special kind of Z-axis ring and use Vec3(0, 0, 1) as its up-vector;
  • create an additional NodePath, between the Arc NodePath and the one you normally reparent this Arc to;
  • set the pitch of this additional NodePath to 90 degrees. Done :slight_smile: !
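
In code, the above boils down to roughly this (a minimal sketch with made-up names; it assumes the arc is billboarded around its local Z-axis, e.g. with setBillboardAxis):

    # extra NodePath between the arc and the node it is normally reparented to
    pivot = gizmo_root.attachNewNode("y_ring_pivot")
    pivot.setP(90)  # pitch it by 90 degrees
    y_arc.reparentTo(pivot)

    # billboard around the local Z-axis, i.e. treat it as a Z-axis ring
    y_arc.setBillboardAxis(base.cam, 0)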

There also seems to be a minor culling problem with the billboarded arcs; when you pan the camera such that the rotation gizmo is nearly out of view, it may happen that what remains visible of the arcs suddenly vanishes when it shouldn’t (e.g. as the gizmo gets close to the right edge of the window, the blue ring disappears unexpectedly). A bug in Panda3D, maybe?

Lastly, on your site I read this:

True, that would be slow, IF you needed to render the entire scene to a large buffer texture… but you don’t.
In fact, all you need is a “picking camera” with a very small field of view and a buffer texture consisting of… wait for it… ONE SINGLE PIXEL. Yup, for that one point right under the mouse cursor.
If you want to hear more, let me know :wink: .

Hi Epihaius,

Yes, you’re absolutely right! Why use a task to orient the helper when I can orient the rings themselves? Truth be told, I was quite chuffed when I worked out how to use billboards to get the effect but got stuck on the last one. From memory the last axis didn’t work as expected and I couldn’t work out if it was my code or a bug in Panda. For consistency I added the helper node you found and kept the billboards. Thank you for your solution, I’ll attempt a patch on my end.

I haven’t noticed the culling problem you’ve described but I will take another look. I would also be interested in seeing your version if at all possible.

Awesome :slight_smile: I hadn’t thought of that!

Most definitely. I want to use this kind of picking in my editor, as using a ray will be too slow for complex scenes. I think I gave the idea away when I realised you would have to put a custom shader on every object, which would defeat the purpose of having a WYSIWYG editor. Is this not the case?

You can put a custom shader input (the special color) on each model that you want to be able to pick, then use one shader for the whole scene via camera.setInitialState for the one pixel camera. Of course, this requires going through the whole culling process, which can be kinda slow.
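
Roughly like this (a sketch only; the shader file name and the list of models are placeholders):

    from panda3d.core import NodePath, Shader, Vec4

    # sketch only: give each pickable model a unique color as a shader input
    for i, model in enumerate(pickable_models):
        model.setShaderInput("picking_color", Vec4((i + 1) / 255.0, 0, 0, 1))

    # one picking shader for everything the one-pixel camera renders
    state_np = NodePath("picking_state")
    state_np.setShader(Shader.load("picking.sha"))  # placeholder shader file
    picking_cam.node().setInitialState(state_np.getState())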

Depending on the scene, in some cases rendering an extra bitplane for picking might be faster, but that really would need custom shaders (or just adding one line if you use my shader meta language ( github.com/Craig-Macomber/Panda … -Generator ), but no one does that).

Actually, I’m thinking about making an example (gizmos + picking), but that could take some time.
Meanwhile, I will give you some pointers on how to set things up for the picking.

Well I’m not using shaders at all (not really my cup of tea, sorry Craig :smiley: ); instead, I’m applying the picking color to the vertices (since all of the models in my project are created procedurally).
For loaded models that could be a bit problematic, but as your main concern seems to be to allow users to select proxies for different kinds of objects, setting the vertex colors for those shouldn’t be a problem, right?
Anyway, this is how I use picking colors:

  • generate a unique picking color and assign it to the vertices using a GeomVertexWriter;
  • set the regular color (to be rendered by the main camera) using NodePath.setColor();
  • call setColorOff(), setLightOff() and setRenderModeThickness() for the NodePath whose state you’ll use for picking_camera.setInitialState().
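
To give you a more concrete idea, here is a rough sketch (made-up names; it assumes the vertex data already has a "color" column):

    from panda3d.core import GeomVertexWriter, NodePath

    # write the unique picking color into the vertex colors of the model
    def apply_picking_color(vertex_data, picking_color):
        col_writer = GeomVertexWriter(vertex_data, "color")
        while not col_writer.isAtEnd():
            col_writer.setData4f(picking_color)

    # the regular color, as rendered by the main camera
    model_np.setColor(1., 1., 1., 1.)

    # the state used by the picking camera: ignore setColor() and lights so the
    # vertex picking colors show through, and thicken points/lines for picking
    state_np = NodePath("picking_state")
    state_np.setColorOff()
    state_np.setLightOff()
    state_np.setRenderModeThickness(10)
    picking_cam.node().setInitialState(state_np.getState())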

Now about the picking camera itself.

The lens of this camera needs a very small field of view, since it would otherwise be far too imprecise (the picking color of a point relatively far from the cursor would be rendered to the one-pixel buffer, as if that point were directly under the mouse); a value of 0.1 seems to work very well(*). (Smaller values might cause problems due to rounding errors, so be careful.)

Since GeomPoints or GeomLines would only be detected when the picking camera is pointed straight at them (they are culled when outside the frustum, regardless of their render thickness), it would require pixel-perfect clicking to select them, and that is not very user-friendly. To remedy this, bigger culling bounds need to be set (depending on the desired render thickness). A good value for the field of view corresponding to the culling frustum is 1.0, which accommodates a render thickness of about 10 pixels(*).
You can set the FoV and the culling bounds like this:

    node = picking_cam.node()
    lens = node.getLens()
    lens.setFov(1.0) # this defines the size of the culling frustum
    cull_bounds = lens.makeBounds()
    lens.setFov(0.1) # this defines the size of the viewing frustum
    node.setCullBounds(cull_bounds)

(*)Please note that the given values work well with the default FoV of the main camera (40 degrees) and a window size of 800 x 600; other settings might require different values.

The rendering done by the picking camera plus the texture lookup seems to be fast enough to let it run in a task - useful for highlighting objects and changing cursors on mouse-over.
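
In case it helps, such a setup could look roughly like this (a sketch only; the to_ram copy and the way the looked-up color is handled are assumptions on my part):

    from panda3d.core import Texture, LColor

    # a one-pixel buffer whose texture is copied to RAM (to_ram=True),
    # so it can be looked up on the CPU each frame
    picking_tex = Texture("picking_texture")
    picking_buffer = base.win.makeTextureBuffer("picking_buffer", 1, 1, picking_tex, True)
    picking_cam = base.makeCamera(picking_buffer)

    def check_picking(task):
        peeker = picking_tex.peek()
        if peeker:
            color = LColor()
            peeker.lookup(color, .5, .5)  # the single pixel
            # ...map the color back to an object and highlight it /
            # change the cursor here...
        return task.cont

    taskMgr.add(check_picking, "check_picking")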

Now I’m off to make that example - wish me luck!

You can find the promised example here - hopefully it will be of some use to you :slight_smile: .

My translation and scaling gizmos are a bit different from yours, in that they are more like those in 3ds Max. I also managed to get rotation working as expected when dragging outside of the gizmo. Feel free to adapt the code to make it work with yours as well.

Enjoy :slight_smile: !