GLSL dithering filter help

OK. I’m not entirely sure how that will work, did you get it?

I think so; if you are rendering to a 24-bit color texture and need to render 24 1-bit images, you can render the scene once into the first bit, once into the second bit, once into the third bit, etc. from different angles using glLogicOp.

For example, the first time you render the scene, you do it with a render state or shader that only writes bit 1 for white and nothing for black, the second time you do it with one that writes bit 2 or nothing, etc. The logic op makes sure that when writing the result to the frame buffer it uses a logical OR operation, meaning only the specified bit is affected. So then you have your frames ready to be sent to the projector without needing a separate step to assemble the various frames into a 24-bit image to be sent to the projector.

Come to think of it, though, you’re using postproc dithering, which does need more information from the frame than just a 0 or 1 black/white value. That means you still need all the different color textures separately so you can do the dithering. You could use glLogicOp in the postprocessing pass to get the desired effect, but then you could also just write your postproc shader to process 24 frames at a time (hardware permitting) and essentially do the 24-bit image assembly in your postproc shader, which might even be more efficient. So I’m not sure if glLogicOp is that useful in your approach.
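Just to illustrate what the OR-style packing amounts to, independent of whether it happens via glLogicOp or in your postproc shader, here is a plain NumPy sketch (array names and sizes are made up):

import numpy as np

# 24 hypothetical 1-bit frames (one per angle), each 768x768, values 0 or 1.
height, width = 768, 768
binary_frames = np.random.randint(0, 2, size=(24, height, width), dtype=np.uint32)

# OR each frame into its own bit of a 24-bit value per pixel -- this is
# exactly what GL_OR would do when each pass writes only one bit.
packed = np.zeros((height, width), dtype=np.uint32)
for bit, frame in enumerate(binary_frames):
    packed |= frame << bit

# Split the 24-bit value into the three 8-bit colour channels.
b = (packed & 0xFF).astype(np.uint8)          # bits 0-7
g = ((packed >> 8) & 0xFF).astype(np.uint8)   # bits 8-15
r = ((packed >> 16) & 0xFF).astype(np.uint8)  # bits 16-23
rgb = np.dstack([r, g, b])                    # one 24-bit RGB frame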

They say they used a dithering shader too.

If I’m understanding this correctly, then a dithering shader would in the end be pretty useless for a black-and-white posterized frame, so maybe it works some other way? I can ask him specific questions; the guy is being very helpful.

I think I can write the shader code for you, I just have to know some more details. Basically, each rendered frame contains the scene from 24 different angles, split over a 180 degree range and packed into the bits of the scene output?

How does the image have to get passed to the projector? Should Panda just output it to the screen?
In which order should the angles be? Should bit 0 in the color texture correspond to angle 0 or to angle 180?

Also, I guess you only have grayscale images when using 1 bit, right?

Hey, thanks!

Yes, that’s right. The scene is rendered 24 times offscreen, but packed into 1 frame.
And 120 of these frames are rendered each second, which gives the 120*24 frame and refresh rate needed.

That’s right. Since the diffuser/reflector rotates around itself and not around one of its edges, it only needs to rotate 180 degrees to create a full volume.

Good question. According to the Lightcrafter/Pico developers you just have to set your GPU to 120 Hz, and it works just like any other video projector, except that it optionally breaks each 24-bit frame into 24 binary frames if set to work in 2880 Hz binary mode instead of 120 Hz 24-bit RGB mode.
So as long as the GPU sends the frames to the projector via HDMI it should work.

I’ll need to check. But it shouldn’t really matter; I’ll just have to change which way the motor rotates.

Yeah, just black and white pixels, faking grayscale by using ordered dithering (shader code is in this thread, I haven’t made modifications).

As for the projector, I can turn the red, green and blue LEDs on in binary mode to get black and white projections on the rotating mirror, or any combination of them. Blue+green looks very nice; 2 of the 3 previous projects used this combination.

I’ve had this idea of adding a few low-res cameras on the bottom of the case and a microphone, for an example motion-tracking-based volumetric virtual pet program. The cameras could also help guess where to position the Panda3D lights dynamically. Getting ahead of myself, but I didn’t have anything else to work on.

I have prepared a small sample (see attachment), just run main.py. It renders the scene from 24 angles over a 180 degree range and outputs the 24 dithered frames to the screen. I’m not sure how your projector needs the frame, but for now, I pack the values from left to right (so having RGB, each channel 8 bits, the first 8 angles are in the B channel, the second 8 in the G channel and the third 8 in the R channel). You might have to disable V-Sync in your GPU configuration panel to get those 120 frames.
Let me know if it works for you! :slight_smile:
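In case it helps you verify the ordering on your side, unpacking one of those packed frames back into the 24 angle masks would look roughly like this on the CPU (plain NumPy; check the shader for the exact bit order within each channel, lowest-bit-first is just what I assume here):

import numpy as np

def unpack_angles(rgb):
    # rgb: uint8 array of shape (h, w, 3) in R, G, B order.
    # Returns a (24, h, w) array of 0/1 masks: indices 0-7 from the B
    # channel, 8-15 from G, 16-23 from R (lowest bit first within a channel).
    r = rgb[..., 0].astype(np.uint32)
    g = rgb[..., 1].astype(np.uint32)
    b = rgb[..., 2].astype(np.uint32)
    packed = (r << 16) | (g << 8) | b
    return np.stack([(packed >> bit) & 1 for bit in range(24)])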

It uses my RenderTarget class to simplify the creation of the buffers. When you want to move the camera you will have to move / rotate the cameraRig instead of base.camera.

On my GTX 670 I get up to 220 frames per second, so up to 5280 total frames. I think this could be optimized further if needed.
Dithering.zip (8.3 KB)

Awesome! Thank you. rdb, is this similar to what you had in mind?
Running on integrated graphics at 70 fps; I haven’t checked the code yet, but if it’s right then it’s pretty good. And the projector resolution is only 768x768, so at that resolution it will run even faster.
I’ll get my hands on a GTX 970 Mini soon. That will likely be even faster.

I think Panda’s vsync won’t go over 60 Hz even if the GPU is set to 120 Hz…
But vsync can be turned off and the frame rate can be capped at 120 by doing:

from panda3d.core import *

# Disable vsync so Panda isn't limited to the monitor's default 60 Hz.
loadPrcFileData('', 'sync-video false')

# Cap the frame rate at 120 instead.
globalClock = ClockObject.getGlobalClock()
globalClock.setMode(ClockObject.MLimited)
globalClock.setFrameRate(120)

# then
import direct.directbase.DirectStart
# or your own ShowBase instance.

Also, I would like to use Panda for everything, but have the ability to play back prerendered scenes created in a 3D modeller, with the frames processed into this kind of video later.
I checked for options and only found the H264 codec supporting lossless compression. I created a sample lossless H264 video and tried loading it in Panda. It failed, though the first frame was loaded.
What might be the issue? Are there alternative formats I don’t know about? It has to support 24-bit color, 120 fps and lossless compression.
The file sizes of such videos are very large, but since they are going to be under a minute long and looping, that’s fine.
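For reference, this is roughly how I’m trying to load and show it (simplified, and the filename is just a placeholder), in case I’m doing something obviously wrong:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import CardMaker

base = ShowBase()

# Panda decodes video files through ffmpeg; loading one as a texture
# gives a movie texture that plays back on whatever it is applied to.
tex = base.loader.loadTexture("lossless.avi")

# Show it on a fullscreen card.
cm = CardMaker("video-card")
cm.setFrameFullscreenQuad()
card = base.render2d.attachNewNode(cm.generate())
card.setTexture(tex)

base.run()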

Yes, that is almost exactly like I was thinking.

Hmm, I’m not really certain why it wouldn’t load. We use ffmpeg for decoding video, which supports most formats out there. Do you get any error message? Perhaps you can send the video so that I can see whether it might be a bug?

Here’s the file: mediafire.com/watch/nxkebvlk … osless.avi
I’m running on Windows 7 64 bit with Panda3D 1.9.0 64 bit btw.

After getting the projector’s dimensions, I have a rough design of the device.

I decided to use an external PSU to save space, a 12V DC one with 40A. Those are ugly so I’ll get a cover cut for it. Then a smaller and very thin DC->DC ATX adapter could easily fit inside the case.

The size of the dome will likely change and the diffuser/mirror dimensions are placeholders. After I learn more about the projector’s lens I’ll have a real clue how big the volume will be and the position/angle of the relay mirrors. The diffuser will also need a rigid holder cut to attach it to the rotating base.
I also don’t know the size of the ring motor I’ll be using; I can’t find those anywhere. A gear system with a regular motor would also work, but it would make tons of noise and be harder to calibrate and maintain, so I still hope to find one somewhere today.

But in any case the diameter of the base can be 36 cm and the height, unless my estimated motor size is way off, a bit over 12 cm, and still allow air flow.

I can’t believe I missed this. The bolded part is wrong.
Each second, the scene is rendered 2880 times. There are 144 angles split over a 180 degree range, not 24. Here’s where the number 144 comes from:

I have chosen a 20 Hz refresh rate. The refresh rate of a swept-volume volumetric display means the refresh rate of each of its individual 2D slices (2D frames) that combined form the 3D volume.
When your projector’s refresh rate is 2880 Hz, and you want a 20 Hz refresh/frame rate for each of its slices, you simply do 2880 / 20 = 144 slices. Since the 2D plane rotates around itself, not its edges, only a 180 degree turn is needed to form a volume, not 360, hence the 180 degree range for the 144 slices and not 360.

Now, where does the need to pack 24 frames into 1 come from? That’s simply due to how the projector communicates with the PC. Via HDMI/GPU I can’t send 2880 monochrome frames each second. I can send 120 24-bit RGB frames each second though. Since we can pack 24 1-bit frames into 1 24-bit frame, we can still send 2880 monochrome frames each second, by first packing each consecutive 24 frames into 1 24-bit frame. In fact that’s how these kinds of high-speed projectors work.

So the number 24 is here only as a workaround for sending the data we need via HDMI.
The number of slices/renders which need to be sent each real frame is 144, or 144 * 120 (Hz) each second.

I hope it’s clear this time, but if not I can make an animated illustration.

Well, that’s no problem, I will update the code then, give me some time :slight_smile:

Thanks, sorry for misreading that part of the question earlier.

Shoot, a mistake again…

That last part should have been:

It’s still 24 frames packed into 1 frame. It’s just that the 144 angles are split over the 180 degree range, not 24. That’s 6 of these packed frames per sweep. The refresh rate of the PC/GPU is 120; for the device it is 20 (120/20 = 6).
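To spell out the numbers (just arithmetic, matching what’s described above):

# Projector input: 120 packed RGB frames per second, 24 binary slices each.
packed_fps = 120
slices_per_packed_frame = 24
binary_slices_per_second = packed_fps * slices_per_packed_frame         # 2880

# One 180 degree sweep of the diffuser uses 144 slices,
# i.e. 6 packed frames per sweep.
slices_per_sweep = 144
packed_frames_per_sweep = slices_per_sweep // slices_per_packed_frame   # 6

# Volume refresh rate: how many full sweeps fit into one second.
volume_refresh_hz = binary_slices_per_second // slices_per_sweep        # 20
print(binary_slices_per_second, packed_frames_per_sweep, volume_refresh_hz)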
I need to get some sleep…

I have updated the sample; it now sends out 144 angles split over 6 frames :slight_smile: Hope that works for you!
Dithering.zip (8.56 KB)

Almost; there are a few things which need to be changed, but I’ll handle the rest and post when everything is complete. Thanks a million.

However, there are two things I’d like to see which I don’t know how to implement:

  1. Floyd-Steinberg dithering ( en.wikipedia.org/wiki/Floyd%E2% … _dithering )
    This looks way better than ordered dithering, but for animations it adds noise, which may or may not be worth it depending on the use case and is also a preference thing. For this, since there is already flickering, noise won’t be an issue I think. I can’t find GLSL code for this one.

  2. 16x16 ordered dithering.
    Also looks smoother in my opinion than 4x4 or 8x8.

  3. What is the most accurate way to get the current fps in Panda? I’d like to display a warning screen saying the scene is too complex when the fps drops below 120, rather than showing a mess instead of a 3D volume.

  1. The problem with that method is that it cannot be done in parallel: each pixel depends on the result of its neighbour pixels. However, the GPU works in parallel, so such an algorithm is very difficult to implement (if possible at all with reasonable performance). See the sketch after this list for what that dependency looks like.

  2. You just have to generate a new dither texture with CreateDitherTable.py. I’ve attached a modified version which uses a 16x16 dither filter, so you don’t have to do that yourself.

  3. I think globalClock.getAverageFrameRate() is what you are looking for, it reports the average frame rate over the past frames. If you want to look for peaks, have a look at (1000.0 / globalClock.getMaxFrameDuration()) which returns the lowest fps over the past frames.
    Dithering16x16.zip (9.86 KB)
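Here is what Floyd-Steinberg looks like on the CPU, just as an illustration of the serial dependency (plain NumPy, not a shader; the standard 7/16, 3/16, 5/16, 1/16 error weights):

import numpy as np

def floyd_steinberg(gray):
    # 1-bit Floyd-Steinberg dithering of a float image in [0, 1].
    # Pixels are processed strictly left-to-right, top-to-bottom, and each
    # pixel's quantization error is pushed onto not-yet-visited neighbours,
    # which is the dependency that does not map well onto parallel GPU threads.
    img = gray.astype(np.float32).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img  # contains only 0.0 and 1.0 afterwards

# Tiny demo on a horizontal gradient:
gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
print(floyd_steinberg(gradient))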

Thanks. I think the code part of the project is ready. I’m thinking about what will be the fastest way to do geometric correction on the projected frames.

I will post the parts list and STL designs when I get the device built and running.
I wish we could hack a cheap video projector to do this, that way more people could build this kind of display, but sadly the only way is a custom-programmed chip (FPGA) and a custom daughterboard for it, which is expensive. So Texas Instruments’ DLP prototyping projector is still the cheapest solution. I think if you want to build one that doesn’t do the rendering itself and needs a PC connection to run, you can get all the parts for around $2000. That’s including the projector, mirrors, motor+bearing+gears/belt and the CNC-milled aluminum parts.

One last thing I would like to mention. A display which creates 3D volumes by rotating and rapidly changing what it displays is called a ‘swept volume volumetric display’. There are many ways to categorize the different types invented so far, but there are two main groups that I’m puzzled are not mentioned much:

  1. Isotropic volumetric display
  2. Anisotropic volumetric display

I’m going to build the second one. But changing it to isotropic is trivial: just swapping the material the final frame is projected onto from a reflector like mylar to a diffuser, and changing the code.
Anisotropic allows perspective scenes without occlusion issues. It does not allow vertical parallax.
youtu.be/8gvPS1m40gw

Isotropic allows true horizontal and vertical parallax, but everything appears translucent; nothing can be displayed completely solid and with proper occlusion.
youtube.com/watch?v=HUPn_FxDGeI

To get an isotropic volumetric display, besides replacing the reflector with a diffuser, we will need to change the rendering code. If you’re interested, we can try both and see how each looks.

The code for the isotropic diffuser will have to not simply render the scene from different angles, but render slices of the scene, like an MRI scan. With real-time geometry each rendered frame will look like an outline of the 3D geometry.
I’m assuming we could do this by using an orthographic camera rotating in the center of the scene, with the near and far planes set to very close values.
If you have a better idea please let me know.
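Something roughly like this is what I have in mind, in Panda terms (untested sketch; the volume size, camera distance and slab thickness are made-up placeholders):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import OrthographicLens

base = ShowBase()
base.disableMouse()  # take manual control of the camera

# Made-up numbers: a 2x2x2 unit volume centred on the origin, viewed by an
# orthographic camera orbiting it at distance 10, with a very thin near/far
# slab that clips away everything except one slice through the centre.
VOLUME_SIZE = 2.0
CAM_DIST = 10.0
SLICE_THICKNESS = 0.02

lens = OrthographicLens()
lens.setFilmSize(VOLUME_SIZE, VOLUME_SIZE)
lens.setNearFar(CAM_DIST - SLICE_THICKNESS / 2.0,
                CAM_DIST + SLICE_THICKNESS / 2.0)
base.cam.node().setLens(lens)

# Put the camera on a pivot at the centre; rotating the pivot sweeps the
# clipping slab through the 180 degree range, one angle per slice.
pivot = base.render.attachNewNode("slice-pivot")
base.camera.reparentTo(pivot)
base.camera.setPos(0, -CAM_DIST, 0)  # default orientation looks along +Y

def set_slice_angle(angle_deg):
    pivot.setH(angle_deg)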
Dithering is going to be tricky with this method though, any ideas?

And again, thanks for all the help.

EDIT: Oh, and about the unrelated question of loading lossless video in Panda so my device can display very complex prerendered scenes from 3ds Max/Maya/Blender/etc., but still use a Panda program for everything:
I still can’t load a lossless H264 video.
An example file is here: mediafire.com/watch/nxkebvlk … osless.avi
Download button is on the top-right.

For what it’s worth, the video loads and plays just fine for me in Panda. What is the error message you’re getting, and on which platform are you trying it?

It’s Windows 7, 64 bit, official 1.9.0 release. Probably something wrong on my part, I’ll recheck. Thanks.