Texture Painting

I created a working demo out of the code snippet above. It may be of use to anyone trying to implement this in their application, or to those who just want to try it out.

You have to zoom out before you can see anything (as in pview).
Pressing v toggles the color-picking buffer.

http://public.nouser.org/~rspoerri/Development/python/panda3d/examples/uv-texture-pickling_v0.3.zip

I have also included a script which generates the required color images for any size up to 4096x4096 pixels.

I’m interested in this. Could you repost your working demo, Hypnos? It seems to be gone now.

/me thinks Blender’s texture painting is much better…

@treeform: while this is true, in-Panda texture painting can be quite interesting. For example, you could paint roads, textures, or even the height of a terrain directly in Panda, sort of in realtime. ^^

I uploaded it:

http://public.nouser.org/~rspoerri/Development/python/panda3d/examples/uv-texture-pickling_v0.3.zip

It still works; just zoom out to see the object to paint on. One thing that has changed since I released it is the mouse-down-repeat behaviour, so you may want to have a look at the code to be able to paint continuously on the object. (Currently you can only paint one pixel per click.)
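The mouse-down-repeat idea can be reduced to a small, framework-free sketch: paint once per frame while the button is held, rather than once per click. In the actual demo the per-frame call would live inside a Panda3D task; the class and method names below are mine, not from the demo code.

```python
# Hypothetical sketch of continuous painting: track the button state in
# event handlers and perform the paint action once per frame while held.
class ContinuousPainter:
    def __init__(self):
        self.pressed = False
        self.painted_pixels = []

    def on_mouse_down(self):
        # Bound to the "mouse1" event in a real Panda3D app.
        self.pressed = True

    def on_mouse_up(self):
        # Bound to the "mouse1-up" event.
        self.pressed = False

    def per_frame(self, uv):
        # Called once per frame (e.g. from a Panda3D task); paints the
        # pixel under the cursor only while the button is held.
        if self.pressed:
            self.painted_pixels.append(uv)

painter = ContinuousPainter()
painter.on_mouse_down()
painter.per_frame((0.1, 0.2))
painter.per_frame((0.3, 0.2))
painter.on_mouse_up()
painter.per_frame((0.5, 0.2))  # ignored: button already released
```

Dragging the mouse then leaves a trail of painted pixels instead of a single dot.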

Works over here, cheers! It’d be fun to develop this into more of a 3D MS Paint.

This looks really interesting,

lots of runtime possibilities,
creating new character skins on the fly, mhmm

c.

Can someone give a hint on how to generate the gradient image discussed above?

Also, Hypnos’s link is dead.

Just create a PNMImage and fill it in one pixel at a time with a pair of nested for loops.

David

If it were a simple horizontal or vertical gradient, that info would be enough. How do we properly generate a four-color gradient like that?

Simple. The red value corresponds to the X coordinate, and the green value corresponds to the Y coordinate (flipped upside-down, because of Panda’s v=up convention). Or the other way around, I’m not sure how the original code was set up.

Doesn’t sound simple to me.

from panda3d.core import PNMImage

x_size = 256
y_size = 256
img = PNMImage(x_size, y_size)

# Scale pixel indices into the [0, 1] color range.
x_fac = 1.0 / (x_size - 1)
y_fac = 1.0 / (y_size - 1)

for x in range(x_size):
    for y in range(y_size):
        red = x * x_fac
        # Flip vertically: PNMImage rows run top-down, but Panda's
        # V coordinate runs bottom-up.
        green = (y_size - 1 - y) * y_fac
        blue = 0
        img.set_xel(x, y, red, green, blue)

img.write("gradient.png")

This script produces an image that is black in the lower-left corner, red in the lower-right corner, green in the upper-left corner, and yellow in the upper-right corner.

I find the thread hard to follow. Would you mind describing what exactly you need?

Is it a PNMImage with a gradient of four colors in it?

This is what he needs:

And the code I gave generates exactly that. Or, use this more compact, hard-coded version that is perhaps easier to understand:

from panda3d.core import PNMImage

img = PNMImage(256, 256)
for x in range(256):
    for y in range(256):
        img.set_xel_val(x, y, x, 255 - y, 0)

img.write("gradient.png")

The idea is to map colour values to U, V coordinates. For any sampled colour, the red value will correspond to the U coordinate and the green value to the V coordinate.
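Inverting that mapping when a pixel is sampled from the picking buffer could look like the sketch below. The helper name `color_to_uv` and the assumption of 8-bit channels are mine, not from the demo code; it also assumes the gradient was built as above, with green already flipped to match Panda’s bottom-up V axis.

```python
def color_to_uv(red, green, max_val=255):
    # Hypothetical helper: recover texture coordinates from a color
    # sampled out of the gradient-textured picking buffer.  Assumes
    # 8-bit channels and a gradient built with green = max_val - y,
    # which already compensates for PNMImage's top-down rows.
    return red / max_val, green / max_val

# e.g. a pure red pixel maps to the lower-right corner of the texture:
u, v = color_to_uv(255, 0)  # -> (1.0, 0.0)
```

The returned (u, v) pair can then be scaled by the target texture’s size to find which pixel to paint.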

Thanks rdb.
Do we have to create an “invisible” copy of our Actor/NodePath for the offscreen buffer, or can we have a TextureStage only be set to be rendered by a specific camera/display region?

This is explained here:
Multi-Pass Rendering

Long story short, you can use something like this:

dummy = NodePath("tmp-state-dummy")
dummy.set_texture(gradient, 1000)

second_cam.node().set_tag_state_key("uv-painting")
second_cam.node().set_tag_state("on", dummy.get_state())

Now, if you want a particular node to be UV-painted on, you simply need to set the tag:

node.set_tag("uv-painting", "on")

This will cause it to get the gradient texture when rendered by the second camera.

If you want all nodes rendered by the secondary camera to have this gradient texture, you can simply use set_initial_state instead of set_tag_state, as described on the linked manual page. You may wish to combine these approaches and use set_texture_off(999) on all other nodes being rendered. Alternatively, you can use draw masks, or simply set the scene node for the camera appropriately, to limit what your second camera renders.

The link to Hypnos’ file seems temporarily broken, but you can still retrieve the file here:
http://www.nouser.org/WD/Development/python/panda3d/examples/uv-texture-pickling_v0.3.zip

(Besides, I got some PHP error messages when visiting the site.)