project once?

np.projectTexture() seems to start a task which gets updated every frame. I just want to project something and leave it like that and use my projector node to project other things.

I am completely unable to get the same results as projectTexture with setTexProjector() or setTexTransform(). And calling clearTexProjector() will remove the effect completely (remove texture). Help. … Screen.php

This may be what you’re looking for.

Although I’m not sure how to set it up, if it does what I think it does, then I was looking for something like that too, thanks.
But I also want to project on an actor in this case, so this probably won’t work or will be slow.

Yes, it would be too slow for dynamic things. But then, if you want to project onto Actors, you actually want it to be updated every frame. Unless I’m missing something.

I mean project and keep the projection. Why does it need to be updated? Like blood on a character.

BTW, does the ProjectionScreen create a new polygon at the place of the projection, like in the Overgrowth game we talked about? It’s cool that Panda has something like this built in, but I can’t really get how to use it. Any tips? I think you posted some code from your editor which did something similar.

Also, this is off-topic, but how’s your editor going? It seems to support a lot from what I’ve seen. Are you working on a game that uses it? Sounds very interesting.

Ah, so you misunderstood the projection screen and I misunderstood your misunderstanding… If that makes any sense.

The projection screen is not like the Overgrowth decals. It would be nice if it was, but that’s something I had to write by myself :stuck_out_tongue:.

It’s exactly what it says – projective texturing. It sets texture coordinates projected according to its lens on everything below its node. It doesn’t create any additional geometry.

This means that it’s not good for projecting stuff dynamically onto moving objects (it’s very slow), but it could work for projecting stuff once every now and then. Even onto an actor, because it’s just texcoords – the projected texture will then be “animated” normally.

However, I wouldn’t advise it for stuff like blood, because you’ll quickly run out of texture stages. It would simply be one stain == one texture stage and one set of texture coordinates. If you’re willing to delve into the area of shaders, you might be able to implement a system for injuries like the one Valve used in Left 4 Dead 2. You could simplify it and easily support simple wounds. Other than that, I don’t know how you could approach it.

It’s shaping up very well, thanks. Yes, I’m building it with a specific game in mind (which doesn’t mean it won’t be useful outside of that, obviously, but it will be best suited for FPP/TPP games). I’m now in the process of building the pipeline from the editor to the game, which was using Blender and Eggs for levels before. That means I have some work to do on the game’s side, in level loading and lighting (to use custom shaders, directional lightmaps and ambient probes in place of Panda’s lights and shader generator).

Odd, the doc string of generateScreen() says:

Synthesizes a polygon mesh based on the projection area of the indicated projector. 

What does it mean then? I’m totally confused with this class.

As for the 4 textureStage limit of the fixed-function, I thought I could use this:

BTW, you are taking the lead: [Screenshot thread]
I wish these would be added to the gallery. The Irrlicht gallery gets updated with user screenshots every month, but looking at the Panda gallery alone, it feels like nobody has used the engine since 2007.

I don’t use that method. I even managed to forget it’s there. You shouldn’t concern yourself with it. It doesn’t do what you think it does.

What you need to do is this:

  1. Parent all the stuff that you need to project the texture onto to the ProjectionScreen node (or rather, to its NodePath, to be exact).
  2. Call projectionScreen.recompute() to generate new texture coordinates.
  3. Reparent stuff back to where it belongs.

You should also set the lens to the right position and rotation, but that’s rather obviously the first thing to do.

I’m still not convinced you’ll be able to use the ProjectionScreen (or the MultitexReducer, for that matter) to do blood stains. I’m just not sure how to explain why… Every time you recompute your texture coords with the ProjectionScreen (which will happen every time a character is hit), the 0-1 set will be inside the lens. And there’s just one lens. This means that you will only have one stain at a time.

I don’t think the multitex reducer is smart enough to merge textures that use different texture coords correctly, and even if it was, I don’t think it’s worth it in this case.

Plus, I think both systems may be too slow for that use, especially if you wanted to have more than a few characters running around with assault rifles.

What I would advise doing is just using a PNMImage and drawing blood on that. It’s fast, simple, and for blood stains you don’t need much more. Of course, in the case of a character you would need to translate the 3D hit point to the 2D texture coordinates of the pixel (or set of pixels) you need to color, but that’s fairly easy and I can help with that.

The last solution I proposed has one big disadvantage, though: memory inefficiency. You would need a PNMImage for every character, and also a lookup table to store the 3D -> 2D translation, because computing that on the fly could slow you down too much. The good news is that the “blood texture” could be relatively small (assuming your actors are low-poly enough and assuming optimal unwrapping).

That’s probably the best and the simplest (I like it when the two come together) solution you can get without using shaders. In fact, with shaders you would practically do the same, just with better and faster results.

I know, feels good to know others like my work :slight_smile:.

Well, I don’t think you should rule out an option just because you feel it won’t work, unless you can find an explanation. I think there’s a high chance it would be too slow. But then again, using PNMImage might not be faster.

Anyway, you said you know how to convert 3D hit points to 2D coordinates for the PNMImage-based texture. How?

Also, I’m still not sure what the ProjectionScreen does that NodePath.projectTexture() doesn’t, or why they are different.

No, no, I’m not talking about removing anything. You should experiment.

I’m just trying to help you find the best possible solution, showing you other approaches that might work (or not). Although I still think the best option is shaders.

Anyway, I think PNMImage could still be faster. At best, you would only need to change a few pixels and reload the texture. Assuming the texture uses an optimal UV set, it can be small and it should be fast.

With the ProjectionScreen, you would need to recompute the texture coordinates every time your character is hit. I’m not sure which approach would be faster, but I would assume it’s PNMImage.

Besides, if you want to use the AutoShader, I would assume that every time you use the ProjectionScreen, the shader would be recomputed. With the PNMImage approach, that would not happen.

Linear interpolation. I have code for that, although it would probably need to be optimized (and maybe “cythonized”) to work better in real time. Plus, there would need to be a method for quickly finding the area that was hit.
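To illustrate what I mean by linear interpolation (this is a generic, unoptimized sketch, not the actual code from my editor): once you know which triangle the hit landed on, you can compute the barycentric weights of the 3D hit point and use them to blend the triangle’s UVs.

```python
# Hypothetical helper: map a 3D hit point on a triangle to UV
# coordinates by barycentric (linear) interpolation.
def barycentric_uv(p, tri, uvs):
    """p: 3D hit point on the triangle's plane.
    tri: the triangle's three 3D vertices; uvs: their 2D texcoords."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    v0, v1, v2 = sub(tri[1], tri[0]), sub(tri[2], tri[0]), sub(p, tri[0])
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom  # weight of tri[1]
    w = (d00 * d21 - d01 * d20) / denom  # weight of tri[2]
    u = 1.0 - v - w                      # weight of tri[0]
    return (u * uvs[0][0] + v * uvs[1][0] + w * uvs[2][0],
            u * uvs[0][1] + v * uvs[1][1] + w * uvs[2][1])

# A point a quarter of the way into a right triangle maps to the
# same spot in UV space when the UVs mirror the vertex layout.
uv = barycentric_uv((0.25, 0.25, 0.0),
                    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                    [(0, 0), (1, 0), (0, 1)])
```

The resulting UV, scaled by the texture size, gives you the pixel to color in the PNMImage. Finding the hit triangle quickly is the part that needs the extra method I mentioned.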

The ProjectionScreen is just projecting a texture, or to be more precise, projecting texture coordinates. That’s it. It does that on the CPU and it does it once (when you call recompute, or when the scene graph below it changes, IIRC). It projects onto everything that’s below its node in the scene graph.

Just try it out. Take a simple cube with a texture, make a ProjectionScreen, parent the cube to it and call recompute. That should project the texture coordinates on the cube according to the ProjectionScreen’s lens. And that’s all it does.