It really seems to me that the distance used for LOD triggering should be in the coordinate system of the node on which those distances are specified (the LOD node). Instead, it's done in camera coordinates.
If I have a highly scaled item (in my case a deferred shaded light, but it really does not matter), its LOD transitions happen at completely the wrong times. I can work around this if I know how big the lights are going to be when I create them, and account for the scale by scaling the switch distances via setLodScale, but this is kinda messy, and breaks if I reparent things around, or rescale anything.
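The workaround above can be sketched like this (a pure-Python sketch; `lod_compensation` is a hypothetical helper, and in Panda3D the resulting factor would be applied with `LODNode.setLodScale`):

```python
# Hypothetical helper: compute the lodScale factor that cancels out
# the accumulated scale applied above (and on) an LOD node.
def lod_compensation(ancestor_scales):
    net = 1.0
    for s in ancestor_scales:
        net *= s
    return 1.0 / net

# A light scaled 10x under a parent scaled 2x needs lodScale 1/20:
print(lod_compensation([2.0, 10.0]))  # 0.05
# In Panda3D this would be applied as something like:
#   lodNP.node().setLodScale(lod_compensation(...))
```

As the post says, this breaks as soon as anything above the node is reparented or rescaled, because the compensation is computed once rather than tracked.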
Is there a reason it's done in camera coordinates? Can we have an option (a boolean flag on the LOD node) to do the computation in the LOD node's space?
Either way, documenting it would be awesome. I can update the manual; I'll do so after we reach a conclusion here.
Non-uniform scaling makes things strange, but I don’t think it breaks anything here.
Also, are the dev docs out of date? I can't find this anywhere, and apparently it's committed: bugs.launchpad.net/panda3d/+bug/1009603 (You can do the same thing by scaling the camera, but this might be handy)
If I understand you correctly, you want an algorithm that more closely approximates the distance from the camera to the front face of the object, rather than the distance from the camera to the center of the object. And you want to approximate this by taking scaling into account, e.g. the bigger the scaling of the object, the closer its front face is to the camera.
Your situation is not generalizable, and only works for objects whose size is comparable to their distance from the camera. For example, if I have an object whose size is 40 units and the distance from the camera to the object's center is 100 units, then scaling the object by +/- 50% has a huge effect on the distance from the camera to the front face of the object. However, what about other situations, when the object is only 1 unit big? In this case, scaling the object by +/- 50% will have practically no effect on the distance from the camera to the object's front face.
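The two cases can be checked with quick arithmetic (a sketch; `front_face_distance` is a hypothetical helper approximating the front-face distance as the center distance minus half the scaled size):

```python
# Hypothetical helper: approximate the camera-to-front-face distance
# as the center distance minus half of the scaled object size.
def front_face_distance(center_dist, size, scale):
    return center_dist - (size * scale) / 2.0

# 40-unit object, center 100 units away: scaling matters a lot.
print(front_face_distance(100, 40, 1.0))  # 80.0
print(front_face_distance(100, 40, 1.5))  # 70.0

# 1-unit object, center 100 units away: scaling barely matters.
print(front_face_distance(100, 1, 1.0))   # 99.5
print(front_face_distance(100, 1, 1.5))   # 99.25
```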
The latter case is actually much more frequently encountered. For your specific situation, I suggest you just wrap whatever you're doing in a custom Python interface/class rather than make changes to the underlying engine.
If I have a tree that is 1 meter tall, I want it to cull when you are 20m away. If it's 1000 meters tall (scaled by 1000), it still culls when you are 20m away. This is bad. You want it to cull (or switch to lower quality) when the tree is the same apparent size, and thus 20,000m away. This is all relative to the centers of the objects, or the fronts, or any part. That's the neat thing about coordinate systems: it just works out nicely.
The goal of LOD is partly to keep the apparent triangle size on screen about the same, and thus what matters are: camera FOV, model size, a constant per-model scale factor (to adjust for different models being different), camera distance to the model (which is only correct in model space; otherwise scaling of the model must be handled separately), screen resolution, and optionally a quality setting.
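The model-space comparison being argued for can be sketched as dividing the camera-space distance by the node's net scale before comparing it against the switch distances (a sketch of the idea, not Panda3D's actual implementation; `select_lod` is a hypothetical function):

```python
# Hypothetical LOD selection in the LOD node's own coordinate space.
def select_lod(camera_dist, switch_distances, net_scale=1.0):
    """Pick an LOD index by expressing the camera distance in the
    node's space, then comparing it against the switch-in distances
    (sorted nearest-first). Returns None if the node is culled."""
    model_space_dist = camera_dist / net_scale
    for i, switch_in in enumerate(switch_distances):
        if model_space_dist < switch_in:
            return i
    return None  # beyond the last switch: culled

# 1m tree with switches at 10 and 20: culled past 20m.
print(select_lod(25, [10, 20]))           # None
# Same tree scaled 1000x: highest quality at 25m away...
print(select_lod(25, [10, 20], 1000))     # 0
# ...and only culled past 20,000m, matching the same apparent size.
print(select_lod(25000, [10, 20], 1000))  # None
```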
I can deal with all of these globally by screwing with the camera, except for the model-space distance/model scale.
If I scale my terrain to make everything in meters instead of feet, for example, all my LOD nodes are off by a factor of ~3, while every single other thing in Panda3D works great. It seems inconsistent. All my shader inputs, collisions, LERPs, etc. work exactly the same when I scale their parent nodes. LOD is the ONLY exception. Basically, I can't scale anything that contains LOD nodes, or it breaks. When attaching LOD nodes to things that are already scaled, I have to get the scale of the entire scene graph above them and manually apply that to my LOD nodes, every time I add one or rescale anything.
Also, I'm not asking to change existing behavior. I want a boolean flag to toggle how it works, to enable a much more consistent and useful mode, and docs saying what it does. And I'd love to have access to a per-camera LOD scale setting like the patch I mentioned provides (since this mode would break the camera-scaling hack, which is the only way to get that currently).
Also, scale your camera, and the only things that change are the near/far clip on the camera and the quality of all your LOD nodes; otherwise everything looks identical. Thus, if I re-parent my camera to something that's scaled, it breaks all my LOD nodes. This is very annoying. I can compensate by doing the same thing as with the LOD nodes and setting their scale to a constant relative to render (and updating it when re-parenting or rescaling anything above them in the scene graph). This does prevent instancing though, which is annoying, since I need to be able to fix the scale of each independently.
As far as I know, there is no way to do this other than running code every frame to check the relative scale and rescale the LOD node, or implementing my own LOD node that also needs to run code every frame. Given that I need to serialize my models to BAM files, and I have 1000s of LOD nodes, this would be a huge pain, performance hit, and load-time hit. This is a nasty thing to try to do from Python. If there is a decent way that I'm missing, though, I'd love to hear it.
And what happens to some other user whose tree is only 0.001 meters tall and 20 meters away and gets scaled by 1000, and who doesn't want an LOD change because 1m is still very small to him? It doesn't seem to me that this is very generalizable.
As for your specific situation, you're implying that your object is rescaling every frame, because otherwise why can't you just reset the LOD distances of that object's node when you scale the object? If your object is indeed rescaling every frame, it seems like a very project-specific problem to me, which requires a specific C++ LOD node rather than a change to the basic LOD node itself.
I also understand that you just want a boolean flag. But the reality is that even a boolean flag is going to tack extra stuff onto the basic LOD node, and the next person who wants to extend the LOD node is going to have to deal with it. What you want, I think, is a new class of LOD node, not a boolean flag on the basic LOD node.
I don't understand this statement. What is scaled relative to what? Scaling the player (assuming the camera is parented to it) or scaling the node have different effects here, even though they would look identical if not for LOD nodes.
If you scale the player up (with a parented camera), the LOD level of objects you are looking at (even though they are the same size on screen) goes up. This is wrong.
If you scale up the object you are looking at, even though it is larger on screen, its LOD level stays the same. This is wrong.
Here is an example image showing the second issue:
Across the middle we have 3 nodes at different scales, which all render at the same quality (LOD) level. This is bad. Consider the node on the left: no matter how small you make it, even less than 1 pixel, it still renders at medium quality at this distance. I don't see how this is considered a good design. Likewise, the one on the right: no matter how big you make it, it will never transition to high quality, even if a single poly of it covers the whole screen, or you are inside it.
Above we have a "fixed" node that renders at the correct LOD level, in my view. I manually set the LodScale value to 1/lodNode's scale to compensate.
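Numerically, the fix works like this (a sketch, assuming, as the usage above implies, that lodScale multiplies the computed camera-space distance before it is compared against the switch distances; `effective_distance` and the threshold values are hypothetical):

```python
# Sketch of the compensation: assume the effective distance compared
# against the switch thresholds is camera_dist * lodScale.
def effective_distance(camera_dist, lod_scale=1.0):
    return camera_dist * lod_scale

SWITCH_HIGH = 10.0  # hypothetical: render high quality when closer than this
camera_dist = 15.0

# Uncompensated: a node scaled 5x still reads as 15 units away, so it
# never switches to high quality, no matter how big it looks on screen.
print(effective_distance(camera_dist) < SWITCH_HIGH)             # False

# With lodScale = 1/5 the effective distance drops below the switch
# threshold, so the scaled-up node renders at high quality.
print(effective_distance(camera_dist, 1.0 / 5.0) < SWITCH_HIGH)  # True
```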
If I can get a new LOD node class that works with BAM files and does not have horrible performance issues, that would be a great solution. I don’t think I can get that without breaking into C++ though.