Texture Painting

I’m trying to create an app that allows you to paint your textures in a 3D environment. A picture paints a thousand words, so for a better explanation:

The problem I’ve encountered is how to convert the 3D position to the pixel position on the texture. I’ve tried a few methods, without much success. Currently I’m painting to a PNMImage and putting that over the original texture as a new layer, but that’s very inaccurate.

Can someone give me a suggestion where to look for answers, or give me an educated guess?

Maybe you could use the following manual pages:
panda3d.org/manual/index.php/Click … 3D_Objects
panda3d.org/manual/index.php/Examp … 3D_Objects
This example demonstrates how to get a 3D point from a 2D mouse position.

Finding the 3D intersection of a mouse click is not the problem; the problem is finding where that 3D position falls on my texture.

Thanks for the reply though.

I think you would have to walk through the vertex list of the object you clicked on, find the nearest vertex to the 3-D point of intersection, and use the UVs from that vertex.

That would get you close. To get precisely there, you would have to figure out the surrounding three vertices, and interpolate between them to a point in the middle of the triangle.
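A minimal sketch of that interpolation step, in plain Python with barycentric coordinates (the triangle vertices and UVs here are made-up example data, not taken from any particular model):

```python
def barycentric_uv(p, tri, uvs):
    """Interpolate UVs at point p inside a triangle.

    tri: three (x, y, z) vertex positions; uvs: their (u, v) coordinates.
    Uses the standard barycentric-coordinate formulation.
    """
    a, b, c = tri

    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom   # weight of vertex b
    w2 = (d00 * d21 - d01 * d20) / denom   # weight of vertex c
    w0 = 1.0 - w1 - w2                      # weight of vertex a
    u = w0 * uvs[0][0] + w1 * uvs[1][0] + w2 * uvs[2][0]
    v = w0 * uvs[0][1] + w1 * uvs[1][1] + w2 * uvs[2][1]
    return (u, v)
```

For example, the point (0.25, 0.25, 0) inside a unit right triangle with UVs matching its corners comes back as UV (0.25, 0.25).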

It might be asking a bit much to do this sort of math in Python, every time you click with the mouse. However, you might be able to precompute a lot of this work: when you load the object, run through its vertices once, and build up a table of some kind mapping 3-D coordinates (vertex position) to texture coordinates. Then you can consult that table when you get the 3-D position of the mouse click.

Of course, then it comes down to representing this table efficiently. :slight_smile:
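The table itself could be as simple as a list of (position, uv) pairs filled in once at load time; here is a plain-Python sketch of the nearest-vertex lookup against such a table (the table contents are made-up example data, and the linear scan is O(n), so a k-d tree or grid hash would be the natural upgrade for a big model):

```python
def nearest_uv(table, point):
    """Return the UV of the vertex closest to the clicked 3-D point.

    table: precomputed list of ((x, y, z), (u, v)) pairs, built once
    when the model is loaded.
    """
    best_uv, best_d2 = None, float('inf')
    for pos, uv in table:
        # Compare squared distances; no need for a sqrt.
        d2 = sum((pc - qc) ** 2 for pc, qc in zip(pos, point))
        if d2 < best_d2:
            best_d2, best_uv = d2, uv
    return best_uv
```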

David

Thanks, I didn’t know it was possible to get access to the raw vertex positions. Going to try your suggestion :slight_smile:

I don’t suppose it’s possible to just project the lines over the spot and then somehow merge that with the texture?

-Edit-
Just to be sure:
Is the vertex position returned through the GeomVertexReader in local space (relative to the model root)?
And the texture coordinates associated with the vertex: is that the relative position on the texture? (It seems to run from 0 to 1.)

Nope, sorry. That’s not what your graphics hardware is built to do. :slight_smile:

Yes.

Right. See the chapter on Texturing in the manual for more specifics.

David

Here’s a crazy idea: create a texture that’s black in the lower left corner, red in the lower right, green in the upper left, and yellow in the upper right, and a smooth gradient across the rest of it. Replace the regular texture with this gradient texture. The color of the pixel now corresponds to the texture coordinate of the pixel. You can use an offscreen buffer for this operation, so you don’t see all the crazy colors.

Problem: only 8-bit accuracy. Perhaps you would need to do it in two passes: once to get a rough estimate, then again with a texture-coordinate transform to get a more precise measurement.
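The 8-bit limit is easy to quantify: encoding a texture coordinate as a single colour channel quantizes it to 256 steps, so on a 1024-pixel texture the decoded position can be several pixels off. A small sketch of that encode/decode round trip (pure Python, just to illustrate the quantization):

```python
def uv_to_byte(t):
    """Encode a 0..1 texture coordinate as one 8-bit colour channel."""
    return min(255, int(t * 256.0))

def byte_to_uv(b):
    """Decode the channel back to 0..1 (centre of the quantization step)."""
    return (b + 0.5) / 256.0
```

The round-trip error is bounded by half a step, i.e. 1/512 in texture space, which is exactly why a second, finer pass is needed for precise painting.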

I actually thought about indexing by colours, but then by using all RGB values (255 × 255 × 255):

But I haven’t seen a clue as to how I could retrieve that value from the Panda render window.

Ah, that is pretty good! You could actually apply this same trick twice, using two offscreen buffers, to get better accuracy. The first buffer has the texture mapped in the usual scale, 1 : 1. The second buffer has the texture mapped at the scale of 1 : 256 (with wrap mode set to repeat, of course). Thus, the pixel color in the first buffer provides the high eight bits, and the pixel color in the second buffer provides the low eight bits.

Repeat with more buffers as necessary to get arbitrary precision. :slight_smile:
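The arithmetic behind the two-buffer trick can be sketched in plain Python (in the real thing the hardware produces the two channel values by rendering; here they are just computed directly): the 1:1 buffer contributes the high byte and the 1:256 repeating buffer contributes the low byte, giving 16-bit precision.

```python
def two_pass_channels(t):
    """Simulate the two render passes for a 0..1 texture coordinate t:
    the 1:1 pass quantizes t directly (high byte); the 1:256 repeating
    pass quantizes the fractional part of t * 256 (low byte)."""
    hi = min(255, int(t * 256.0))                   # first buffer, scale 1:1
    lo = min(255, int((t * 256.0 % 1.0) * 256.0))   # second buffer, 1:256
    return hi, lo

def decode(hi, lo):
    """Recombine the two channel values into a 16-bit coordinate."""
    return (hi * 256 + lo) / 65536.0
```

Decoding gives the coordinate back to within 1/65536, i.e. sub-pixel accuracy even on large textures.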

David

You’re getting into some advanced rendering tricks here, but you would render to an offscreen buffer. You would associate a texture with that buffer using buffer.addRenderTexture(). You would have to specify GraphicsOutput.RTMCopyRam in the addRenderTexture() call, which would make the texture image available to your process.

Then you could copy the texture image to a PNMImage with texture.store(), and the PNMImage class has methods to read the color of any particular pixel.

You would only have to redo this PNMImage copy operation every time the viewpoint changed, not with every mouse click.

David

Isn’t that one of the nice new Blender features in version 2.43 :wink: ?

Awesome, the colour mask works great!

Here is the test code I used to get this working; perhaps someone else will find it useful someday:

def createBackgroundRender(self):
    # Set up an offscreen buffer for the colour index.
    self.pickTex = Texture()
    self.pickLayer = PNMImage()
    self.buffer = base.win.makeTextureBuffer("pickBuffer", 800, 600)
    self.buffer.addRenderTexture(self.pickTex, GraphicsOutput.RTMCopyRam)

    self.backcam = base.makeCamera(self.buffer, sort=-10)
    self.background = NodePath("background")
    self.backcam.reparentTo(self.background)
    self.backcam.setPos(0, -2, 1)
    self.background.setLightOff()

    # Render the model with the colour-index texture applied.
    tester = loader.loadModel('models/female')
    tester.reparentTo(self.background)
    tester.find("**/avatar/head").setTexture(loader.loadTexture("index.png"), 1)
    tester.find("**/avatar/upper").setTexture(loader.loadTexture("index.png"), 1)
    tester.find("**/avatar/lower").setTexture(loader.loadTexture("index.png"), 1)
    base.graphicsEngine.renderFrame()

    # Copy the rendered index image into RAM for pixel lookups.
    self.pickTex.store(self.pickLayer)

def paint(self):
    if not base.mouseWatcherNode.hasMouse():
        return

    # Convert the mouse position (-1..1 in both axes) to buffer pixels.
    mpos = base.mouseWatcherNode.getMouse()
    x = int(((mpos.getX() + 1) / 2) * 800)
    y = 600 - int(((mpos.getY() + 1) / 2) * 600)

    # Decode the 24-bit index from the picked pixel's RGB value.
    p = self.pickLayer.getRedVal(x, y)
    p += self.pickLayer.getGreenVal(x, y) * 256
    p += self.pickLayer.getBlueVal(x, y) * 256 * 256

    # Turn the index back into texture-pixel coordinates.
    x = int(p % 256)
    y = int(p / 512)

    # Paint a 6x6 black dot onto the working layer.
    for i in range(6):
        ny = i + y
        if ny < 512:
            for j in range(6):
                nx = j + x
                if nx < 512:
                    self.workLayer.setXel(nx, ny, 0, 0, 0)

    # Display the modified texture.
    self.workTex.load(self.workLayer)

It could be used for a terrain editor, too =) For example, to paint opacity maps when using splatted textures, or simply to paint the heightfield image… quite some possibilities.

Feel free to use the code I’ve pasted. Though I think it would be ill-suited for painting heightfields, since you wouldn’t see the changes in real time.

But for vertex painting, perhaps it would be useful…

If the terrain algorithm supports updates, it’s no problem. And you can do this with many algorithms =)

What I rather meant is that it would be very CPU-heavy to draw a pixel, load in the new texture, and rebuild the terrain. It wouldn’t run in real time. In that case you might be best off manipulating the vertices directly, and saving the height data to a heightmap image.

Just my two cents.

Anyway, I’ve looked further into the parasite buffer. Since it lowers my frame rate a lot, I decided to only render once when the camera position changes. I’m using the ParasiteBuffer.setOneShot flag for this, but some of the API on this subject is not clear to me.

Now I’m wondering: does this effectively destroy the buffer, or does it keep the thing in memory waiting to be reattached?
Also, what happens to any cameras and nodes associated with this buffer?

Besides those things, is the frame rate capped somehow? Because even at a blank screen, it won’t go beyond 75.1 fps according to base.setFrameRateMeter(True).

As a side note: I believe this is the way the ‘old’ isometric tile games worked. Wonderful to see these old techniques being reused.

The setOneShot() call will destroy the buffer after its one frame has rendered. If you want to keep it around and re-use it from time to time, you should use camera.setActive(0) or displayRegion.setActive(0) to temporarily stop rendering.

By default, the frame rate is set to cap at your video sync rate, which appears to be 75Hz in your case. This is a good thing, because there’s not much point in generating frames faster than your monitor can display them, and syncing with the video refresh prevents some visual artifacts that can occur if you don’t sync, and the frame changes while the monitor is redrawing. You can, of course, turn this video sync off if it really bugs you not to be able to see numbers like “215 fps” in the corner of your screen. :slight_smile:

David

Great, that should help. At the moment, I’m recreating the buffer each time I need an index update, which puts a damper on the precious frame rate.

Talking about frame rate: it bugs me in the sense that I have a mid/high-end system, and I’d like this app to be usable on a lower-end system as well. It helps to be able to see the true frame rate there. :slight_smile:

And yes, having 215 fps in the corner does spice up the screenshots :wink:

Put:

sync-video 0

in your Config.prc to disable the normal sync-to-video-rate behavior and allow any frame rate.

David