Is there a way to draw sharp rounded edges for alpha cutouts on low resolution textures?

My Question is pretty much what the title says. In my project I am trying to use 3d-textures to draw damage into a vessel’s volume and to define the layout of damageable systems. To save on memory and keep damage calculations reasonable, I would like to go rather low resolution with that.

Is there a way to rework the edges of alpha cutouts on polygons to make these look less grid-based?

It is not possible to make images such as 32×32 as detailed as 2048×2048. You simply don’t have enough pixels for that.

That’s too bad. I do like the concept thus far, though. Gonna try to optimise the damage system and then take whatever resolutions I can get without too much impact on performance.

Thanks for the info.

I haven’t tried this, so it may not work, but if you’re using shaders then you might be able to do something involving the fragment UV, the calculated UV of the texture pixel-centre, and the distance between the two–and possibly samples from neighbouring pixels to determine whether to employ such rounding, and in which direction.

It is actually possible. It’s called signed distance fields. You just have to blur the texture while using a binary cutoff mode (you can do this with setTransparency(TransparencyAttrib.M_binary)). This allows high-quality curves with low-quality textures.

You can add a shader with a smoothstep function to antialias the edge if desired.
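To illustrate the difference, here is a small plain-Python sketch of the two cutoff strategies (the function names and the 0.05 softness value are my own, not from the thread or a Panda3D API): a binary cutoff maps each sampled alpha straight to 0 or 1, while a smoothstep around the threshold gives a narrow soft band that antialiases the edge.

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def binary_cutoff(alpha, threshold=0.5):
    """What M_binary effectively does: a hard 0/1 decision per fragment."""
    return 1.0 if alpha >= threshold else 0.0

def antialiased_cutoff(alpha, threshold=0.5, softness=0.05):
    """Smoothstep variant: a narrow soft transition band around the threshold."""
    return smoothstep(threshold - softness, threshold + softness, alpha)
```

In a real fragment shader the same smoothstep call would be applied to the sampled alpha before writing the fragment, with the softness tuned to roughly one screen pixel.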


To average the values of neighbouring pixels you still need pixels, and there are not many of them here. I wonder how it works.

I was curious and checked.

from direct.showbase.ShowBase import ShowBase
from panda3d.core import TransparencyAttrib
from direct.gui.DirectGui import DirectFrame

class MyApp(ShowBase):

    def __init__(self):
        ShowBase.__init__(self)

        # Binary alpha cutoff: each pixel is either fully opaque or fully transparent.
        img = DirectFrame(frameColor=(1, 1, 1, 0),
                          image=self.loader.loadTexture('1.png'),
                          scale=1)
        img.setTransparency(TransparencyAttrib.M_binary)

app = MyApp()
app.run()




There is some anti-aliasing, of course, but only a little; perhaps the result would be better if the neighbouring pixels were a different colour.

But if you blur the texture, producing something like this:
You get a result (when using the “M_binary” mode) that looks like this:

Which is quite amazing for so low-resolution a texture! 0_0

A comparison using a slightly less-simple shape, at the same resolution:

Input image:


Quite impressive, I think!


Of course it’s cool, but the texture itself will be blurry. What haunts me now is where this extra information is stored; I suspect that a new, higher-resolution image is being created.

I don’t think so; I think that this result is calculated directly from the blurry low-resolution texture. The curves are inferred, not stored.

I tried it now on a model. TransparencyAttrib.M_binary seems to do just a little. I also tried a few filter settings to blur it out, which didn’t do much either.
Maybe it doesn’t work so well with 3D textures in general. The pixels already get kind of blurry there from how the texture transitions between pixels.
Or the resolutions I am trying to work with are just too low for any of this.

texture to model scale: 1 pixel per 1 cubic meter

texture to model scale: 8 pixels per 1 cubic meter

Looking at your approach in the damage system, the idea arises to make additional divisions in the geometry. If it is damaged, just delete parts of the polygon. You can use math to recalculate vertexes for smooth shapes, and when combined, you can get almost any shape without blurring the texture.

For example, you can make a grid of hexes and remove hexes when damaged, it won’t look bad.
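A sketch of that hex-grid bookkeeping in plain Python (the class and method names are hypothetical, not an engine API): each hex is tracked by axial coordinates and removed from the set when damaged; a renderer would then rebuild geometry only from the surviving hexes.

```python
class HexDamageGrid:
    """Track damageable hexes by axial coordinates (q, r)."""

    def __init__(self, radius):
        # Build a hexagon-shaped grid of the given radius around the origin.
        self.hexes = {(q, r)
                      for q in range(-radius, radius + 1)
                      for r in range(-radius, radius + 1)
                      if abs(q + r) <= radius}

    def damage(self, q, r):
        """Remove a hex when it takes damage; return True if it was still intact."""
        if (q, r) in self.hexes:
            self.hexes.remove((q, r))
            return True
        return False

    def neighbours(self, q, r):
        """Surviving neighbours of a hex, useful for rebuilding edge geometry."""
        offsets = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
        return [(q + dq, r + dr) for dq, dr in offsets
                if (q + dq, r + dr) in self.hexes]
```

A radius-2 grid contains 19 hexes; after `grid.damage(0, 0)` the centre hex no longer appears among its neighbours’ surviving edges, so only the geometry along that boundary would need rebuilding.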

Just to check: Did you try it with a pre-blurred texture? That is, not blurred via texture-filtering (as by a min-filter setting), but with a texture that has itself been blurred?

I had that thought too, before. But I’m not sure how I would make a somewhat efficient way to pick specific faces, especially with later meshes that will have some internal structures modelled into them.
The program is currently using a mesh collider for testing, but eventually I want to wrap models in collision shapes to detect projectiles at close range and then calculate potential damage from there.

Well, kind of? I have a beam prototype that writes a few pixels at runtime, with slightly weaker colouring at the edges.
I should have tested with a prepared texture layout sooner.

The edges are still not getting much better, even with strongly blurred textures. But if burn spots and holes are set somewhat large in scale, a density of 8 pixels would look decent enough. Maybe I can do more about it when I take a closer look at shaders, eventually.

Hmm… Perhaps I’m missing something, but it strikes me as odd that your burn-patterns still show the blurriness of the original texture: in alpha-cutout mode I would expect sharp edges, as shown in the tests posted above. Perhaps alpha-cutout mode isn’t being applied as expected…?

That’s possible. Not sure though where exactly the problem would be then.
Seraga’s program works fine on my panda installation, so it’s probably not specific to the engine version I’m working with. Also, turning off autoshader doesn’t change anything either.

edit: Strangely, I’m still seeing alpha cutouts even with transparency explicitly turned off.

I think I found my issue. For some reason TransparencyAttrib.M_binary doesn’t work with the material settings I used on my model’s hull. I will still have to look closer for the exact settings that cause problems, but I think I can work with this.

Thanks for all the help.

Texture to Model scale: 1 pixel per 1 cubic meter

Edit: I kinda wish I could mark more than one post as the solution. It took me a few steps to learn how to put little holes into a model.

Edit 2: Looks like Panda takes issue with materials that are associated with a texture in the egg file (at least on models exported from Blender). When all textures are applied at runtime, M_binary works fine, though.


Your damage-holes are looking good, I think; I’m glad that you’ve made progress with them! :slight_smile:


All the additional information is stored in the intermediate alpha values between 0 and 1 of the pixels around the edge. In effect, in the blurred texture, the alpha value of each pixel doesn’t just store whether it is inside or outside the shape, but it stores the distance to the nearest edge of the shape.
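To make that concrete, here is a small plain-Python sketch (the blur radius and example values are my own choice): box-blurring a hard-edged binary alpha row produces intermediate values that fall off with distance from the edge, which is exactly the extra information the reconstruction relies on.

```python
def box_blur_row(row, radius=1):
    """1D box blur with edge clamping; a stand-in for blurring the texture."""
    out = []
    for i in range(len(row)):
        window = [row[min(max(j, 0), len(row) - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

hard = [1, 1, 1, 0, 0, 0]   # hard binary edge between indices 2 and 3
soft = box_blur_row(hard)   # approx. [1.0, 1.0, 0.667, 0.333, 0.0, 0.0]
```

In the blurred row, an alpha of 0.667 versus 0.333 tells the sampler which side of the edge a pixel lies on and roughly how far away the edge is, rather than just inside/outside.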

The fragment shader runs at a much higher resolution than the original texture when rendering the object. When it samples the pixel from the texture, instead of reading a binary value, it is reading an approximation of the distance value to the nearest edge, and it can use that to much more accurately calculate whether a given pixel coordinate should be considered inside or outside the shape.
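A plain-Python illustration of that sampling step (the grid values are made up for illustration): bilinearly interpolating a blurred low-resolution alpha channel at many sub-texel positions and then thresholding at 0.5 recovers an edge position that falls between the original pixel centres, at finer precision than the texture itself.

```python
def bilinear_sample(grid, x, y):
    """Bilinearly interpolate a 2D list of alpha values at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    a = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
    b = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
    return a * (1 - fy) + b * fy

# A tiny "blurred" alpha edge: opaque on the left, transparent on the right.
blurred = [
    [1.0, 0.75, 0.25, 0.0],
    [1.0, 0.75, 0.25, 0.0],
]

# Sampled at a higher resolution than the texture, the 0.5 cutoff lands
# between texel columns 1 and 2, not on a texel boundary.
inside = [1 if bilinear_sample(blurred, x / 4.0, 0.5) >= 0.5 else 0
          for x in range(12)]
```

The transition in `inside` occurs partway between two texel columns, which is why the reconstructed edge can curve smoothly even though the source texture is tiny.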

This trick, of course, only works for binary alpha. It does not work for anything that needs semi-transparent areas (i.e. 0 < a < 1). It also struggles with sharp corners, which always end up being rounded, as can be seen in @Thaumaturge’s example.

(There are, for the curious, variations of this method that do support sharp corners, such as this.)