It is, take a look at the screenshot of a scene graph of an example scene:
I haven’t. Maybe I didn’t explain my issue enough, sorry. I’ll go into details. It is not that the skydome is not rendering. It does, but relative to the “rig” cameras, so from their point of view it looks like a floating sphere far away.
As long as the object from which I render the cube map is inside the skydome, it looks correct. But if the object gets out of bounds, the issue appears.
Hmm… One simple solution might be to duplicate the sky-dome, producing one dome per camera, and update their positions each frame much as you have been doing. You could then hide each from all but their own camera, so that they only appear in that camera.
(This can be done, if I’m not much mistaken, by setting a mask on the cameras, and then hiding the sky-domes from the combination of the masks of all cameras but theirs. See the API for details, looking for Camera’s “setCameraMask” method and NodePath’s “hide()” method.)
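The mask arithmetic behind that idea can be sketched in pure Python, with plain ints standing in for Panda3D's BitMask32 (the camera count here is made up; in Panda3D itself you would call setCameraMask on each camera node and hide()/show() with a mask on each dome copy):

```python
# Sketch of the per-camera mask arithmetic, using plain ints in place of
# Panda3D's BitMask32. In Panda3D you would assign each cube-map camera its
# own bit via setCameraMask, then hide each dome copy from the combined
# mask of every *other* camera.

NUM_CAMERAS = 3  # hypothetical number of cube-map rigs

camera_masks = [1 << n for n in range(NUM_CAMERAS)]  # one bit per camera
all_cameras = 0
for m in camera_masks:
    all_cameras |= m

def dome_show_mask(own_index):
    """The dome copy is shown only to its own camera."""
    return camera_masks[own_index]

def dome_hide_mask(own_index):
    """...and hidden from the combined mask of all the others."""
    return all_cameras & ~camera_masks[own_index]

# Dome 1 is visible to camera 1 and hidden from cameras 0 and 2:
assert dome_show_mask(1) == 0b010
assert dome_hide_mask(1) == 0b101
```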
My first intention was moving the skydome around, but the drawing from different cameras is controlled by the makeCubeMap method of the GraphicsOutput class, if I am not mistaken. So I would have to replicate its functionality in Python code for this, which could be slower than the built-in method.
So, I will have to go with duplication - create as many instances of the skybox, each with its own bitmask, as there are reflective objects. But still, they can be out of bounds if the skysphere is not big enough.
I wonder how this is done in professional-grade 3D games.
P.S. I hope you won’t abandon “A Door to the Mysts”. I enjoyed playing it last September/October.
As long as you update their positions to match those of your cube-map cameras, as you have been doing with the sky-sphere that’s rendered by the main camera, I would think that they shouldn’t go out of their camera’s bounds. However, I’ve not worked with Panda’s cube-map tools, so it’s possible that I’m missing something!
Thank you very much indeed–I really appreciate that!
And indeed, I currently have no intentions of abandoning it–work on it continues!
If you want to keep up-to-date on it, you can follow my Twitter account ( https://twitter.com/EbornIan/ ) or my devlog–the latter of which can be found on:
Great, thanks. I didn’t see there was a page on IndieDB; I am currently subscribed only to your YouTube.
Now, about cubemaps - if they are rendered from the center of the object, there won’t be any problems, indeed. For some reason I thought the 6 cameras were placed outside of the bounding volume of the object.
I should also mention this is only an issue for dynamic cubemaps; with pre-rendered cubemaps I can move the skybox around and create all the cubemap textures.
Hmm… fair enough–I’m honestly not sure; as I said, I’m not terribly familiar with Panda’s cube-map tools. I was imagining that they were rendering from the centre of their intended point-of-view, looking outwards. However, that may be entirely incorrect!
… That said, since a sky-box is intended to be outside of everything, I would still expect that placing said sky-box at the location of the cube-map camera should work–as long as you set up the sky-box to render behind everything else. But again, I may be mistaken in that!
A cubemap camera is really just six cameras with 90° FOV at 90° angles of each other, rendering from a single point. Parenting a skybox to it, with a mask applied to it so that only the cubemap camera renders it, should work.
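That six-camera layout can be sketched in pure Python: with a 90° FOV per face, every direction from the shared render point falls inside exactly one face's frustum, selected by the dominant axis of the direction vector (the face names and selection helper here are just for illustration):

```python
# The six faces of a cube map look down +x, -x, +y, -y, +z and -z from one
# shared point, each with a 90-degree FOV. With that FOV, any direction in
# space lands in exactly one face's frustum, chosen by its dominant axis.

FACES = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def face_for_direction(d):
    """Pick the cube-map face whose 90-degree frustum contains direction d."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

assert face_for_direction((0.9, 0.1, -0.2)) == "+x"
assert face_for_direction((0.1, -0.8, 0.3)) == "-y"
```

This is why a skybox parented to (and masked for) the cubemap camera works: no matter which of the six faces is being rendered, the box is centred on the shared render point.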
Thanks for the reply, rdb. It’s a solution that I am currently implementing.
But I think there must be a better way. I think I need to reset skybox model matrix, so it would appear at zero coordinates for all the cameras. I think this can be done with a shader. Will this work?
Also, since you know Panda3D internals, maybe you can tell - is there already some flag/mask implemented in the renderer that does this?
I think something like that should work. There’s likely an easier way, but a simple approach might be something like this, done in the vertex-shader:
Add your camera’s position as a shader-input
Instead of applying the model-view-projection matrix, apply only the rotation and scale elements of the model-matrix
Add the resulting vertex-position to the camera-position
Apply the view-projection matrix to the result
I haven’t tested the above, but I think that it’s about right.
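The steps above can be sketched in pure Python, with plain lists standing in for GLSL matrices (this is a sketch of the idea, not a working shader). The point it demonstrates: because only the rotation/scale part of the model matrix is applied, the sky-dome's own translation never enters the result, so the dome effectively follows the camera:

```python
# Pure-Python sketch of the vertex-shader steps, using 3x3 matrices and
# 3-vectors in place of GLSL types.

def mat3_mul_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def sky_vertex_world_pos(vertex, model_rot_scale, model_translation, cam_pos):
    # Steps 1-2: apply only the rotation/scale part of the model matrix
    # (model_translation is deliberately ignored).
    p = mat3_mul_vec(model_rot_scale, vertex)
    # Step 3: re-centre on the camera position (passed as a shader input).
    return [p[i] + cam_pos[i] for i in range(3)]
    # Step 4 (not shown): apply the view-projection matrix as usual.

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
v = [0.0, 0.0, 100.0]            # a vertex at the top of the dome
cam = [50.0, -20.0, 3.0]

# Wherever the dome node itself sits, the output is the same:
a = sky_vertex_world_pos(v, identity, [0, 0, 0], cam)
b = sky_vertex_world_pos(v, identity, [9999, 9999, 9999], cam)
assert a == b == [50.0, -20.0, 103.0]
```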
There might be some trickery that you could perform by working directly in clip-space, but I’m not sufficiently familiar with that–and am perhaps a bit too tired–to be confident in approaching it at the moment.
Honestly, unless you’re seeing performance degradation due to the duplicated sky-sphere, I’d suggest going with that as the simpler, easier option.
I think that you can likely use the “row/col_x_to_y” inputs from Panda’s CG support even in GLSL, but I may be mistaken.
I don’t want to reinvent the wheel. Skybox + reflection maps is a simple feature every modern game has. I guess this is how it is done in modern games, with a shader.
Although I like 2000s-era programming tricks, it is not necessary to use them now, and I want a clean and more universal solution.
Not necessarily: I think that you might be surprised at how much is done in simple ways when there’s no compelling reason to do it in a more-complex way.
The shader approach might be a little cleaner and more universal–but not by much. In addition, unless you think that you’re going to expand this feature significantly beyond its current use, there’s an argument to be made that building a more universal approach provides little benefit.
I think this would indeed do what you want. You probably want to disable culling on the model, since it’s no longer where Panda thinks it is, so it may be culled away if Panda thinks it’s out of view.
For a solution that would work for the fixed-function pipeline, you can set the “contents” value of the vertex column in the GeomVertexFormat to C_clip_point instead of C_point, which indicates that the points are already pre-transformed. I think this does not work with shaders, however, and it won’t respond to camera rotations, so this may be a non-starter unless your sky is a fixed, static background image.
One could alternatively experiment with using 4-component vertices and setting the fourth coordinate to 0. This will mean that the vertices will not be affected by any translation transformations, only by rotation and scale. I am not 100% sure whether this will work without trying it. If it does, it should work for both shaders and the fixed-function pipeline, and might be an elegant solution.
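The algebra behind that idea can be checked in pure Python: with w = 0, the translation column of a 4x4 transform contributes nothing, so the vertex is affected by rotation and scale only. (Whether Panda3D's pipeline accepts such vertices end-to-end is exactly the untested part.)

```python
# With w = 0, the translation column of a 4x4 matrix is multiplied by zero,
# so only the upper-left 3x3 (rotation/scale) part affects the vertex.

def mat4_mul_vec4(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A transform that translates by (10, 20, 30); translation sits in the last
# column here (row-vector conventions would transpose this).
translate = [
    [1, 0, 0, 10],
    [0, 1, 0, 20],
    [0, 0, 1, 30],
    [0, 0, 0, 1],
]

p_point = mat4_mul_vec4(translate, [1, 2, 3, 1])  # ordinary point, w = 1
p_dir   = mat4_mul_vec4(translate, [1, 2, 3, 0])  # w = 0: translation ignored

assert p_point == [11, 22, 33, 1]
assert p_dir == [1, 2, 3, 0]
```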
Short of that, an elegant way to support this in the engine might be to extend CompassEffect to allow setting coordinates relative to “whatever the current camera is”.
Oh, and if you’re going with shaders, one alternative approach is just to render a fullscreen quad behind everything and just do the projection of the cube map entirely in the shader itself.
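The fullscreen-quad approach boils down to reconstructing, per fragment, the world-space view ray through that fragment, and using the ray as the cube-map lookup direction. A rough pure-Python sketch of the ray reconstruction (in a real GLSL fragment shader this runs per pixel; the tan(fov/2) scaling assumes a symmetric perspective frustum, and the camera basis vectors are made-up inputs):

```python
import math

def view_ray(ndc_x, ndc_y, forward, right, up, fov_y_deg, aspect):
    """Un-normalized world-space ray through an NDC point in [-1, 1]^2.
    This direction would be fed straight into the cube-map lookup."""
    t = math.tan(math.radians(fov_y_deg) / 2.0)
    return [forward[i] + ndc_x * aspect * t * right[i] + ndc_y * t * up[i]
            for i in range(3)]

# Camera looking down +y, with x right and z up, 90-degree vertical FOV:
ray_centre = view_ray(0.0, 0.0, (0, 1, 0), (1, 0, 0), (0, 0, 1), 90.0, 1.0)
assert ray_centre == [0.0, 1.0, 0.0]  # centre pixel looks straight ahead
```

Since only a direction is needed, the camera's position never enters the lookup at all, which is why this approach sidesteps the out-of-bounds problem entirely.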
Thank you for this tutorial. It’s a pity I can’t mark 2 posts as solutions! I will try both of them.
Short of that, an elegant way to support this in the engine might be to extend CompassEffect to allow setting coordinates relative to “whatever the current camera is”.
This would be a useful feature for the engine, sure.
When I am good enough at C++ coding and shaders, I will make a pull request, someday.