# How do I set up multipass?

Hello, I am new to Panda3D and Python; there is a lot to learn.
I just read about multipass rendering in the manual, and it may be a solution to my problem:
I want to create a game with a visual range of 50 km or more, while still seeing nearby objects (1 meter or less) clearly. The Z-buffer is surely not precise enough for such a large visual range.
So using multipass, with several cameras rendering the far-away scene and the close scene separately, may solve this problem.
My setup should be: all cameras located at the same position and with the same angles, but with different near-clip and far-clip values.
Currently I only have the default camera in the scene.
How should I set up the cameras in Python? Thanks in advance.

I say, don’t make it complicated until you have to. Try the Z-buffer on your existing visual range. It doesn’t sound outrageous to me.

David

Hi,
the reason I said the Z-buffer is not enough for a scene with a 1 m near clip and a 50 km far clip is this:
the Z-buffer has at most 24 bits of precision. With nearclip = 1 m and farclip = 50000 m, it cannot tell the difference between 1000 m and 1000.01 m, so when two objects are close to each other you get the Z-fighting phenomenon.
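The arithmetic behind this claim can be sketched quickly. This is a rough model of standard perspective Z-buffer quantization (the stored value is proportional to 1/distance, quantized into 2^bits steps); the near/far/bit values are the ones from the post:

```python
def depth_resolution(d, near, far, bits=24):
    """Approximate smallest depth difference (in meters) a perspective
    Z-buffer can resolve at eye-space distance d.

    The buffer stores a value proportional to 1/d, quantized into
    2**bits steps over [1/far, 1/near], so the step size at distance d
    is roughly d**2 * (1/near - 1/far) / 2**bits.
    """
    return d * d * (1.0 / near - 1.0 / far) / (2 ** bits)

if __name__ == "__main__":
    # With near = 1 m and far = 50 km, at 1 km the buffer resolves only
    # about 6 cm, so a 1 cm separation (1000 m vs. 1000.01 m) does Z-fight.
    print(depth_resolution(1000, 1, 50000))
    # Pushing the near plane out to 4 m shrinks that to roughly 1.5 cm.
    print(depth_resolution(1000, 4, 50000))
```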

If the camera is on uneven ground, this is not a big problem, since the visual range is small anyway. But if the camera is up in the sky, looking down at the ground, the problem is very apparent.

That may be true (although by my back-of-the-napkin arithmetic, 24 bits ought to be enough; it's close).

But still, the much simpler solution is to create your model such that there aren’t objects that are visible from the sky that are so close to each other that they require 1cm of depth precision to resolve them. That is, give them 2cm or 4cm of separation if necessary.

Or, if you really don’t want to do that, consider adjusting the near plane automatically when the camera lifts into the sky, so that you move the near plane farther out (maybe 2m or 4m) when the camera is likely to see many things from far away, and closer in (1m or smaller) when the camera is near the ground and likely to need to see things close to the camera.
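A minimal sketch of that automatic adjustment, assuming a standard Panda3D ShowBase app; the height thresholds and near values here are made-up examples for illustration, not recommendations:

```python
def near_for_height(h):
    """Pick a near-clip distance from camera height (hypothetical thresholds)."""
    if h > 200.0:
        return 8.0   # high in the sky: trade close-up range for depth precision
    if h > 50.0:
        return 4.0
    return 1.0       # near the ground: allow objects close to the camera

def install_auto_near(base, far=50000.0):
    """Attach a per-frame task that retunes the default lens (Panda3D)."""
    def adjust(task):
        h = base.camera.getZ(base.render)           # camera height above origin
        base.camLens.setNearFar(near_for_height(h), far)
        return task.cont
    base.taskMgr.add(adjust, "auto-near")
```

You would call `install_auto_near(base)` once after setup; snapping between a few fixed near values (rather than scaling continuously) avoids the near plane visibly crawling every frame.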

It’s only a bit of advice, because the alternative you propose (merging multiple parts of the scene rendered with different depth buffers) just sounds like so much trouble. It’s technically possible, but you’ll have a devil of a time keeping the scene separated into close things and distant things. And how will you handle objects that span across that gap?

David

I have considered adjusting the clip values as my camera moves up and down; it can work to some degree.
In the code I posted in another thread there is an adjustment function for when the camera zooms in, but that is still experimental.
I read about multipass in the manual and thought it could solve the problem better than adjusting the clip values for different situations,
because I have seen the method of dividing the whole scene into near/medium/far passes in some commercial games, and the result is satisfying. Imagine a Quake-type game with indoor scenes where the character can also fly into the sky and see the outdoor scene.

If an object spans two distance ranges, its range is normally judged by the object's pivot point; but for something large like the terrain, I think it would be judged by the spot closest to the camera. I haven't thought it through completely, though.

Well, let me not stand in the way of your intended design.

The easiest way to do this thing in Panda is to open a second DisplayRegion that overlays the first. You can use camera masks to control which objects get drawn into which DisplayRegion, or you can create two completely different scene graphs, one for each region.

It will be up to your application code to decide when to move objects from one region to the other.
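A sketch of that setup, assuming a ShowBase app. The geometric layer split, the sort values, and the clear settings are my guesses at a reasonable default, not the only way to wire it:

```python
def split_layers(near, far, count):
    """Split [near, far] into `count` ranges with equal far/near ratios,
    so each layer gets a similar relative depth precision."""
    ratio = (far / near) ** (1.0 / count)
    return [(near * ratio ** i, near * ratio ** (i + 1)) for i in range(count)]

def make_layer_cameras(base, layers):
    """One camera + DisplayRegion per (near, far) layer, far layer drawn first
    (Panda3D)."""
    from panda3d.core import Camera, PerspectiveLens, BitMask32
    cams = []
    for i, (near, far) in enumerate(reversed(layers)):  # far-to-near draw order
        lens = PerspectiveLens()
        lens.setNearFar(near, far)
        cam = Camera("layer-cam-%d" % i)
        cam.setLens(lens)
        cam.setCameraMask(BitMask32.bit(i + 1))  # objects opt in per layer
        np = base.camera.attachNewNode(cam)      # same position, same angles
        dr = base.win.makeDisplayRegion()
        dr.setSort(10 + i)                # later regions draw on top
        dr.setClearDepthActive(True)      # fresh depth buffer for each layer
        dr.setCamera(np)
        cams.append(np)
    return cams
```

Usage would be something like `make_layer_cameras(base, split_layers(1.0, 50000.0, 3))`, then routing objects with `hide()`/`showThrough()` against the matching mask bits, and masking out (or disabling) the default camera so nothing is drawn twice.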

David

Wow, this way is much easier than using a graphics buffer and render-to-texture.
I divided the scene among 3 cameras (near/mid/far) and 3 display regions, and it solved my problem:
my terrain consists of a flat sea polygon and a GeoMipTerrain that is partially under the sea. With only one camera, I always saw flickering at the shorelines where triangles of the GeoMipTerrain intersect the sea polygon. Now, with 3 cameras, the flickering is almost totally gone.
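For reference, routing an object into one or more of those layers can look like the sketch below. It assumes the three layer cameras were given camera-mask bits 1/2/3 for near/mid/far; those bit numbers are my own convention, not anything Panda3D requires:

```python
NEAR_BIT, MID_BIT, FAR_BIT = 1, 2, 3  # camera-mask bit indices (a convention)

def mask_value(bits):
    """Combine bit indices into a single mask word."""
    word = 0
    for b in bits:
        word |= 1 << b
    return word

def assign_layers(np, bits):
    """Show NodePath `np` only to the layer cameras listed in `bits` (Panda3D)."""
    from panda3d.core import BitMask32
    np.hide()                                     # hide from every camera...
    np.showThrough(BitMask32(mask_value(bits)))   # ...then re-show per layer

# e.g. assign_layers(sea, [MID_BIT, FAR_BIT]); assign_layers(player, [NEAR_BIT])
```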