since i’m in need of more than today’s usual 4 textures rendered on one surface (speaking of a terrain), i need a second render pass for the terrain.
how to approach that?
create an offscreen buffer, render the terrain node to this buffer, take the buffer as a texture with user-perspective texture projection, then render the whole scene again?
just asking if this is the right way. guess i can code it myself… with some studying.
hope this works, since 2 passes with 4 textures each would allow 3 actual textures, a lightmap and… something else…
anyone who can confirm that this approach could work?
… shouldn’t be that much of a performance hit since only the terrain is rendered twice… or at least i hope so.
thx in advance
You could do the multipass thing and use some shaders. That will work. However, since it seems you’re already familiar with multi-texturing, it will be easier (and probably faster) to just load a copy of the same terrain geometry and then multi-texture that with an alpha for the places where you want the underlying layers to show.
Of course you’ll run into Z-fighting problems, so you may want to use the render bins and depth offsets to help.
… well, the terrain is pretty much… huge… so loading it twice is out of the question, since you’d have to deal with double the vertices, too. and i’d like to avoid z-fighting too, since it’s not on my feature list
(i’m pretty sure that having several terrain meshes is much slower than 2nd-pass rendering. think of it: 3 textures would need 3 meshes, which is already 3 times slower to render, and they’re partly transparent -> even slower… while multitexturing only costs a 2nd pass)
using shaders, hm… one possibility, but i’d like to avoid them.
and it would still involve a second render pass, so i’ll need that anyway.
since i still haven’t found out how to render only a certain node, i guess i’ll try a slightly more forceful way and put all non-terrain nodes in a separate tree which can be detached while rendering the terrain-only pass.
guess i still have to read more about panda’s render process, since i don’t want the camera to move between the passes, nor do i want images from different frames to be mixed…
but thx for sharing your thoughts.
i guess i’ll have most of it down… just one thing left:
how can i tell the renderer to render only certain texture stages - or - how can i switch texture stages on and off for rendering?
so far i intended to use an offscreen buffer as a texture in a texture stage of the second pass (the on-screen one). setting the order of the render passes shouldn’t be the problem, but how do i solve the problem of different stages being rendered in each pass?
anyone any ideas?
If you want to use different texture stages for different render passes, render tags and render attributes are your answer. The interface is kind of clunky, but it works. Essentially you tag cameras and make it so that a particular NodePath has one TextureAttrib on one camera and a different one on another camera.
Then you’d have to use panda’s texture coordinate generator each frame to generate the screen-space projection.
Although it may not be intuitive and I have no real data to back it up, I’m almost positive that having multiple instances of the same geometry will be faster than multi-pass. I would love to be proved wrong; I’m simply speaking from past experience.
Modern as well as semi-modern graphics cards can handle literally 100,000 polys without batting an eye, whereas multi-pass rendering (depending on how you do it) may need to switch render contexts, which is really slow.
As for memory consumption, if you use an instance as in loader.loadModelCopy(), it uses the same GeomNode and doesn’t take any extra memory. On the hardware, all those vertices are cached and actually draw very quickly. As for alpha blending, it only takes a very long time if panda has to sort the nodes. This can be fixed by putting them into their own separate bin and then fixing their render order.
hey, thx for the hint with the tags… i’m still unsure how to use them, but it looks like it could work and i already have a vague idea of how it has to be done.
speaking of multiple meshes… well, it definitely works, but i still think it’s slower than using the features of the card. and by the time i want to add a lightmap i could run into trouble and end up with quite a lot of meshes. as soon as you need a lightmap and at least 3 textures, you’d have 4 meshes…
well, i guess 2 textures and no lightmap won’t hurt with multiple meshes… but that works fine with a single pass and one mesh anyway.
once i have the multitexturing code up i’ll do some benchmarking and check at which point multitexturing gives a performance bonus.
well, thx for all your help. i’ll try to come up with some code soon (although i can’t promise anything). if anyone else is interested, just ask for it.
once again thx for your help
for those interested: this is part of a simple “see how far you can get” test, starting off with a blockwise-streamed terrain/world (including an editor).
2nd step would be some kind of interactive nature demo with all kinds of stuff on the terrain (no physics, just mere “pick an apple from the tree” stuff).
3rd would include multiple players on the same terrain over the network.
4th - basic communication, item trading, fighting
5th - completion into a scalable online rpg
i still hope to make it past step 1 =), cause this seems to be the “dry” part to code. anyone who wants to join in to kill some time, waste some code or whatever is welcome.
2 meshes are ok as long as you have no lightmap; since a lightmap affects both meshes it could be quite troublesome…
well, since this is just a fallback for cards with only 4 textures per pass, it’ll be more comfortable to 2nd-pass it, since i don’t want to code the terrain-loading stuff twice
guess i’ll have to write my applications for studying next year first, though.
perhaps i’ll spend some free hours on this part from time to time till i’ve finished the paperwork. after all, i have 5 weeks of holidays left, so there should be time to get started
i’ll post results as soon as i have some
trouble… big trouble… and very troublesome too… very troublesome indeed… one thing i didn’t take into account is that it can happen (if you look through a hill) that faces which are actually hidden behind others get rendered, show through due to the transparency, and get mixed down onto the solid layer… so you can see one or 2 textures through the ground on top of the first one…
that’s a real point for using 2 meshes - anyone know how to avoid z-fighting there?
(btw… got some very interesting-looking stuff while trying to multitexture in general)
a solution similar to the one used in dungeon siege would be another approach. they use 4x4 or 4x8 meter 3d patches (which are actual 3d meshes) tiled together into a world. i guess using those, a tileset of textures (still using multitexturing with decal mode for extra details) and a bunch of vegetation should do the job. with a fitting home-made editor one should be able to achieve almost endless worlds in a reasonable amount of time, while keeping a maximum degree of freedom in map design and performance. a fitting editor would even allow non-programmers to create new maps.
guess i’ll go down that road, skipping lightmaps =)
for those interested… that’s about dungeon siege: http://www.drizzle.com/~scottb/gdc/continuous-world.htm
anyway, thanks for all your efforts and brainstorms.
btw… scorched3d has some very interesting code when it comes to terrain rendering and optimizing… perhaps someone is skilled enough to bring it into panda.
at least it looks like 1 texture stage can store 2 alpha channels (one in the real alpha, the other as rgb greyscale). i tried around with the stuff, but i just got my normal texture darker, white, or unchanged. neither the opacity nor the actual color changed (not into white or “darker-normal”).
can anyone explain how those combine modes work? which source changes which texture stage? maybe a small working example code?
a big thanks in advance.
well, if i got the above wrong and i can’t access alpha and rgb separately, i’ll have to fall back to my 2nd-pass stuff…
stupid me figured out that rendering the bottom layer (the completely opaque one) first solves the through-the-ground problem. another problem is the texture size… 1024x768 isn’t that good for a texture… actually no screen resolution really is.
so, anyone know how to use these advanced combine modes?
i’m grateful for any hints…
if you want to know what i need the stuff for:
- sorry, no panda ingame… just a blender render. except for the self-shadows this is my goal (never mind trees, bushes and the viewing range… i put this together in a real hurry… terrain is what matters here)
Your first question: Z-fighting
Z-fighting is not a trivial thing to solve. The easiest way is actually to use the ms flag. This is OK for most things, but it does degrade the texture quality somewhat. I would still recommend it; you can’t really tell unless you’re absolutely looking for it.
Basically you can use setAlphaMode(EggRenderMode.AMMs)
The next thing you can do is be very careful about the model: make sure the second copy can be shifted up slightly so that the two don’t intersect. This way you can put them into a “fixed” render bin and just set the render order manually.
As for your texture blending modes: it’s pretty tricky. I’ve honestly never combined two alpha channels using multi-texturing; it may not work. But I do know that you can specify alpha-channel textures to get the effect you want. Just make sure your color texture doesn’t actually have alpha. To help, think about the texture modes like this:
Color is a value from 0.0 to 1.0, stored as RGBA, where alpha is 0 for transparent and 1.0 for opaque, so (0,0,0,1) is black and (1,1,1,1) is white. If you add 2 textures you get (0,1,0) + (1,0,0), which is green + red = (1,1,0). For alpha it’s the same thing, only think about it in terms of transparency: adding 0.5 + 0.5 = 1.0 gives fully opaque, whereas modulating 0.5 * 0.5 = 0.25 gives 25% opaque.
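The arithmetic above, spelled out in plain Python just to make the add-vs-modulate difference concrete:

```python
def blend_add(a, b):
    # Add mode: components sum, clamped to 1.0.
    return tuple(min(1.0, x + y) for x, y in zip(a, b))

def blend_modulate(a, b):
    # Modulate mode: components multiply.
    return tuple(x * y for x, y in zip(a, b))

green = (0.0, 1.0, 0.0, 1.0)
red = (1.0, 0.0, 0.0, 1.0)
print(blend_add(green, red))  # green + red = yellow: (1.0, 1.0, 0.0, 1.0)

# Alpha behaves the same way: 0.5 + 0.5 = 1.0 is fully opaque,
# while 0.5 * 0.5 = 0.25 is 25% opaque.
print(blend_add((0.0, 0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 0.5))[3])       # 1.0
print(blend_modulate((1.0, 1.0, 1.0, 0.5), (1.0, 1.0, 1.0, 0.5))[3])  # 0.25
```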
thx for your answer… well, i guess 2 meshes are no solution for me.
your answer was pretty good and understandable… but my problem isn’t texture blending itself, it’s texture combining. (normal blend modes are working fine, but i can’t get this combining stuff working)
since i’m not trying to combine 2 alpha textures, but to get the 2 alpha textures out of one normal texture with an alpha channel, it’s even more tricky.
a small illustration might explain it better…
but no matter what i tried, those advanced combine modes didn’t behave in any way they should or even could.
i wouldn’t mind adding an alpha channel to the normal textures, as long as it works.
can anyone try and check whether those combine modes are working properly at all?
thx again for any ideas
I’ve never had a problem using the texture combine modes. They are a little complicated to use, and there are limits to the way you can combine them (these limits are imposed by the graphics hardware). Some combinations of texture operations are simply not possible without using a shader.
In particular, I don’t think you’ll be able to combine four textures the way you’ve drawn the picture without using a shader. But you can combine three of them, say the first two green textures in your picture. This will require three texture stages.
The first two texture stages will be your green textures 1 and 2. These will be applied in mode CMReplace. You will need to call setSavedResult(True) on the first texture stage, so you can get a handle to it later.
The third texture will be a grayscale (or alpha) blobby texture shown as “RGB taken as alpha”, and it will be applied in CMInterpolate. Its three sources should be, in order, CSLastSavedResult (to reference the first texture stage), CSPrevious (to reference the second texture stage), and CSTexture (to reference this grayscale texture, which switches between the first two).
The reason you can’t do this again to get another layer is that DirectX doesn’t support multiple different saved-result textures, so there’s no way for a later grayscale texture stage to reference a texture other than the first texture or the most recent texture. OpenGL does support this on some graphics cards, but Panda tries to present the same interface for both APIs, which means Panda can’t take advantage of OpenGL’s flexibility. In any case, it would require five texture stages even if you could use the OpenGL functionality, but you could do it in four with a shader (by combining the two grayscale textures into a single RGB/alpha texture, as you have drawn in your picture).
hey, thx for the detailed info.
i’m using opengl since i’m in love with linux^^
hmm… well, at least i can blame DirectX for not being flexible enough
so i can’t access rgb and alpha separately with these texture combine modes?
hm, that’s bad… i really do need 3 textures.
i have no idea how to write a shader at all, and the other thing is that panda’s shader samples won’t work (not a single one, and i know that my card and driver support shaders)
well, i guess i’ll have to try around a little more…
drwr… would you mind posting a small code snippet showing how to use these combine modes? the manual still gives me headaches.
To me, it totally sounds like you should use shaders. You can even cut down on the required number of textures: you can do something weird like have multiple alpha textures stored in one by putting them in R and G or something. Probably a little annoying, but it might save you a pass!
… well, i don’t need to cut it down further… 2 alphas in one texture are enough for 3 actual textures (i don’t need transparent terrain).
well, the problem remains… i neither know how to write such a shader, nor does panda recognize that my graphics card can use shaders (a gf4 ti4200 certainly does have shaders, both vertex and pixel, i guess v1.1 or something like that).
i saw an article where someone used a color map, a lightmap (stored in the vertex colors) and 3 blended detail maps (greyscale) with a geforce2 in a single pass, using the single color channels to mix the textures.
a geforce2 only has 2 textures per pass, so it !should! definitely be possible to get my stuff working without shaders (guess you’d have to modify the engine a little… but the same goes for my shader problem)
Though the GeForce 4200 does indeed support simple shaders, it doesn’t support the more sophisticated shaders used by the demo programs that ship with Panda. See this thread for more discussion on this card.
It also, I believe, only supports 4 texture stages, so you wouldn’t be able to use the 5-texture-stage approach to your texture layering, even if Panda gave you the interfaces for it. But you can use the 3-texture-stage approach to switch between two different color textures. Here is some code that demonstrates this:
from direct.directbase.DirectStart import *
from pandac.PandaModules import *
s = loader.loadModel('smiley.egg')
# Two textures to provide color
tex1 = loader.loadTexture('maps/smiley.rgb')
tex2 = loader.loadTexture('maps/frowney.rgb')
# One grayscale texture to switch between the different color layers
swtex1 = loader.loadTexture('maps/grid.rgb')
# A TextureStage for each color layer, with explicit sort values so the
# switch stage is guaranteed to be applied last.
ts1 = TextureStage('ts1')
ts1.setSort(10)
ts2 = TextureStage('ts2')
ts2.setSort(20)
# A TextureStage for the switch layer, to choose between ts1 and ts2.
swts1 = TextureStage('swts1')
swts1.setSort(30)
# ts1 and ts2 simply replace their input; ts1 saves its result so the
# switch stage can refer back to it later.
ts1.setCombineRgb(TextureStage.CMReplace,
                  TextureStage.CSTexture, TextureStage.COSrcColor)
ts1.setSavedResult(True)
ts2.setCombineRgb(TextureStage.CMReplace,
                  TextureStage.CSTexture, TextureStage.COSrcColor)
# swts1 interpolates between the saved result (ts1) and the previous
# result (ts2), using its own grayscale texture as the switch.
swts1.setCombineRgb(TextureStage.CMInterpolate,
                    TextureStage.CSLastSavedResult, TextureStage.COSrcColor,
                    TextureStage.CSPrevious, TextureStage.COSrcColor,
                    TextureStage.CSTexture, TextureStage.COSrcColor)
# Now apply all the textures. Use an override of 1 to replace the
# texture that is already on the model.
s.setTexture(ts1, tex1, 1)
s.setTexture(ts2, tex2, 1)
s.setTexture(swts1, swtex1, 1)
i thought i could use a specific texture stage instead of the “previous” and “last saved result” stuff…
guess i’ll use neither shaders nor multitexturing in that sense.
my fallback was using a tileset similar to the “good old strategy games”, but i was too stupid to use them the right way at first. ended up with thousands of nodes instead of one node with several geoms. head-on-table-bang
guess i’ll have another look at this when i try to add spec maps or shadows or whatever.
once again, thx for all your efforts.
i’ll finally consider my problem solved, or at least worked around.
conclusion: either shaders, a better graphics card, or changes to panda’s renderer.