Here’s an illustration of what needs to be done:
A texture has to be projected from a point in space onto a mirror (1 in the image), which bounces the projection onto a surface (2).
Here is the texture that is going to be projected, although in practice a real video file will be used.
The tricky thing is the mirror (the surface marked as 1 in the illustration) is not flat but curved as illustrated below:
The point is to see how the projected image or video will appear on the final surface (2), given a projection angle (FOV) and a curved bounce mirror.
This is for a video for a project in which I need to illustrate how a curved mirror deforms a projection from a video projector, laser projector, or similar. In the animation, the shape of the mirror gradually changes (via shape keys), which in turn changes the size and deformation of the projected image/video.
If Panda can do this somewhat accurately, I can use it for simulations as well, which is a plus because of the superior render quality compared to specialized CAD programs.
Also, if this is possible with Panda3D and we have an answer here, it makes Panda a powerful tool for people working with video projectors. I'll write a simple simulation program from the answers here and send it to the admins at ProjectorCentral.
This isn’t much but offering 100 USD for a working solution.
I think raytracing is the only way to do it. Cast a ray per pixel of the video onto the first screen, see where it hits and reflects, then cast another ray from that point onto the final screen. I imagine it could be done on OpenGL 4.x, where you have image store; on older hardware things get tricky. The only idea I have there is to use a lot of tiny, instanced quads.
If by instanced quads you mean a colored fixed-size quad facing the camera for each pixel, then that would work for me, provided 1920x1080 quads wouldn't destroy my GTX 1070. An OpenGL 4-only solution would also work for me.
Sadly, this shader programming is above my experience level. The $100 offer is still up if anyone is interested. I'll probably send more than that if it gets done soon.
Here are some solutions Blender users provided, to be used inside Blender. Maybe one of them can be ported to Panda3D.
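The per-pixel bounce described above boils down to the standard mirror-reflection formula, r = d - 2(d·n)n. A minimal sketch of that step in Python with NumPy (the function name is mine, not from any posted code):

```python
import numpy as np

def reflect(direction, normal):
    """Reflect a ray direction about a surface normal (both unit vectors)."""
    direction = np.asarray(direction, dtype=float)
    normal = np.asarray(normal, dtype=float)
    return direction - 2.0 * np.dot(direction, normal) * normal

# A ray travelling straight down bounces off a floor facing up:
d = reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])
print(d)  # [0. 0. 1.]
```

In GLSL the same operation is the built-in `reflect()`, so the shader side of this step is a one-liner.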
blender.stackexchange.com/quest … -a-surface
I think I have an idea of how to do it in a really simple way that should run on potato-level hardware (OpenGL 1.3+). I should have a proof of concept by this time tomorrow; I just need to figure out how to do line-plane intersection in GLSL (can't be that hard, right?).
Well, my idea didn't work as well as I imagined. I made it the way I described in my first post:
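For reference, line-plane intersection is just a ratio of two dot products. Here is the same math a GLSL version would do, sketched in Python (the function name and epsilon value are mine):

```python
import numpy as np

def line_plane_intersect(origin, direction, plane_point, plane_normal, eps=1e-8):
    """Return the point where a ray hits a plane, or None if it misses.

    origin, direction: ray start and (unit) direction.
    plane_point, plane_normal: any point on the plane and its normal.
    """
    denom = np.dot(direction, plane_normal)
    if abs(denom) < eps:  # ray is (nearly) parallel to the plane
        return None
    t = np.dot(np.asarray(plane_point, float) - np.asarray(origin, float),
               plane_normal) / denom
    if t < 0.0:           # intersection lies behind the ray origin
        return None
    return np.asarray(origin, float) + t * np.asarray(direction, float)

# A ray 5 units above the XY plane, pointing straight down, hits the origin:
hit = line_plane_intersect([0, 0, 5], [0, 0, -1], [0, 0, 0], [0, 0, 1])
print(hit)  # [0. 0. 0.]
```

The parallel-ray check (`abs(denom) < eps`) is the same border case discussed further down in the thread.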
drive.google.com/file/d/0B81FE0 … sp=sharing
The reflection is made out of 262,144 tiny quads; some may overlap, some may have gaps. Quality can be improved by making more of them and using a bigger buffer. You could also render the UVs instead of the color, render the second screen into a texture, and run a cellular automaton in a ping-pong buffer setup to interpolate the UVs where there are gaps.
I had nothing good to make the screen from, so I just used the 'smiley' model and scaled it a bit to get different curves. The texture is also projected onto the back side of the screen model; that can be fixed by checking the dot product of the surface normal and the projector view vector, or by making a model with only one side.
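The back-face check mentioned here is a one-liner: a point faces the projector when the dot product of its normal and the vector toward the projector is positive. Sketched in Python (names are mine):

```python
def facing_projector(surface_normal, to_projector):
    """True if the surface point faces the projector.

    surface_normal: outward normal at the surface point.
    to_projector: vector from the surface point toward the projector.
    A positive dot product means the front side is lit.
    """
    dot = sum(n * v for n, v in zip(surface_normal, to_projector))
    return dot > 0.0

print(facing_projector([0, 0, 1], [0, 0, 1]))   # True  (front side)
print(facing_projector([0, 0, 1], [0, 0, -1]))  # False (back side)
```

In a shader the back-facing fragments would simply be discarded instead of returning a boolean.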
Nice. So the only issue is artifacts on the projected frame, in the form of overlaps or gaps between the "pixels of the projection" (the quads)? Because the calculation of the reflections seems correct.
Overlap may be a deal breaker: if the mirror focused many rays onto the same spot, you'd get many overlapping, z-fighting quads instead of a hot-spot caustic. I'm not sure whether rendering the quads with additive blending would help.
There's also the border case where a ray is parallel to the screen plane or lies on that plane; the raytrace shader should write some special value, and the shader placing the quads should collapse the quad and/or discard its fragments.
Well, I haven't had much time to test this out, but it seems good enough for a simulation. In any case, simulations don't take focusing or divergence into account (relevant for laser-beam-steering video projectors), so they aren't 100% optically accurate to begin with.
Is there a way to set “lens shift” in Panda3D camera?
I could just add a black border to the top or bottom of the source image, but maybe there's an undocumented class method that can do this.
PM me your Paypal address.
I think you'll get that effect if you scale down the projection texture coordinates and give them an offset. I'll add that to the code and wrap it in a more convenient class tomorrow.
Actually, now that you mention it, that will probably work.
One last thing this could use: support for more than one bounce mirror (source -> mirror -> mirror -> surface).
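The scale-and-offset trick is plain texture-coordinate arithmetic, independent of Panda3D. A sketch of the mapping (function name and parameter conventions are mine):

```python
def shift_uv(u, v, scale=1.0, shift_u=0.0, shift_v=0.0):
    """Scale texture coordinates about the frame centre, then offset them.

    scale < 1.0 shrinks the image inside the projection frustum;
    shift_v > 0.0 mimics a vertical lens shift (image moves up).
    """
    u = (u - 0.5) * scale + 0.5 - shift_u
    v = (v - 0.5) * scale + 0.5 - shift_v
    return u, v

# With a +0.5 vertical shift, the frame centre now samples the texel
# that used to sit half a frame lower:
print(shift_uv(0.5, 0.5, shift_v=0.5))  # (0.5, 0.0)
```

In Panda3D the same result should be reachable without a custom shader via `NodePath.setTexScale` and `NodePath.setTexOffset` on the projection's `TextureStage`.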
With this setup the first mirror can be any shape, but anything after it must be simple enough to run collision detection in a shader, so this can work for flat mirrors or mirrors that are part of a sphere.
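A spherical mirror is cheap to intersect analytically in a shader, which is why it qualifies here: it's just the quadratic formula. The same test in Python (assumes a unit-length ray direction; the function name is mine):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Nearest positive hit distance of a ray and a sphere, or None.

    direction must be a unit vector; origin and center are 3D points.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0         # nearer root first
    if t < 0.0:
        t = (-b + math.sqrt(disc)) / 2.0     # origin may be inside the sphere
    return t if t >= 0.0 else None

# A ray from the origin along +x hits a unit sphere centred at (3, 0, 0):
print(ray_sphere_intersect((0, 0, 0), (1, 0, 0), (3, 0, 0), 1.0))  # 2.0
```

A flat "relay" mirror would use the line-plane intersection from earlier in the thread instead; both fit comfortably in a fragment shader.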
Updated code is here:
drive.google.com/file/d/0B81FE0 … sp=sharing
It's still using one mirror and one screen. I can add more flat reflection screens if you want; that's easy enough. Spherical screens will be harder but still doable. Custom screen geometry, though, is beyond me at this point.
One or two flat "relay" mirrors are sometimes used before a spherical mirror, either to make sure the projection is big enough before it reaches the spherical mirror or to reposition the beam in the projection system (think two mirrors at 45 degrees used to change the offset of the beam along one axis). Something like that would be nice.
Custom screen geometry would be nice for previewing things like curved screens or half dome screens.
Still, it's already a very useful prototyping tool for video projector operators and video mappers as-is.
I'll add some GUI myself for loading custom mirrors, rotating, positioning, and scaling them, and inputting some beam parameters. How's the OBJ and Collada import support in Panda3D right now? I only need one popular format that can store polygons and normals; nothing else is needed.
I'm going to release my edited program as an open-source project, so please add a license so you'll be properly credited in the source code (real name or username, with an optional link).
If you're using a dev version of Panda3D, it ships with Assimp; FBX and OBJ load quite well with it (I never tested the other formats, but lots are supported).
Is the ISC license OK? I like it because it's short. If so, then that's the license the code above is under:
Copyright (c) 2017, wezu (firstname.lastname@example.org)
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Any chance you can add a visualization of the beam path (lines from the four edges of the texture), as well as support for more than two mirrors?