My question is pretty much what the title says. In my project I am trying to use 3D textures to draw damage into a vessel’s volume and to define the layout of damageable systems. To save on memory and keep damage calculations reasonable, I would like to go rather low resolution with that.
Is there a way to rework the edges of alpha cutouts on polygons to make these look less grid-based?
I haven’t tried this, so it may not work, but if you’re using shaders then you might be able to do something involving the fragment UV, the calculated UV of the texture pixel-centre, and the distance between the two, and possibly samples from neighbouring pixels to determine whether to employ such rounding, and in which direction.
It is actually possible. The technique is called signed distance fields. You just have to blur the texture while using a binary cutoff mode (which you can enable with setTransparency(TransparencyAttrib.M_binary)). This allows high-quality curves with low-resolution textures.
You can add a shader with a smoothstep function to antialias the edge if desired.
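To illustrate what such a shader would compute, here is a plain-Python sketch of the smoothstep function (it mirrors GLSL’s built-in smoothstep; the 0.45–0.55 band around the usual 0.5 cutoff is just an illustrative choice):

```python
def smoothstep(edge0, edge1, x):
    # Clamp x into the [edge0, edge1] band, remapped to 0..1.
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    # Hermite interpolation: a smooth S-curve instead of a hard step.
    return t * t * (3.0 - 2.0 * t)

# A hard binary cutoff at alpha 0.5 produces stair-stepped edges.
# Running the sampled alpha through a narrow smoothstep band instead
# fades the edge over roughly one pixel, antialiasing it.
hard_edge = [1.0 if a >= 0.5 else 0.0 for a in (0.3, 0.5, 0.7)]
soft_edge = [smoothstep(0.45, 0.55, a) for a in (0.3, 0.5, 0.7)]
```

In an actual fragment shader the same three lines operate on the alpha value sampled from the blurred texture, with the output written to the fragment’s alpha.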
I tried it now on a model. TransparencyAttrib.M_binary seems to do just a little. I also tried a few filter settings to blur it out, which didn’t do much either.
Maybe it doesn’t work so well with 3D textures in general. The pixels already get kind of blurry there from how the texture transitions between pixels.
Or the resolutions I am trying to work with are just too low for any of this.
Looking at your approach to the damage system, an idea comes to mind: make additional divisions in the geometry. When a section is damaged, just delete that part of the polygon. You can use math to recalculate vertices for smooth shapes, and by combining them you can get almost any shape without blurring the texture.
For example, you can make a grid of hexes and remove hexes when damaged, it won’t look bad.
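As a sketch of the hex-pick step that idea would need, here is a pure-Python conversion from a 2D hit point to axial hex coordinates (the formulas are the standard pointy-top axial conversion; all names are illustrative, not from any posted code):

```python
import math

def hex_round(q, r):
    # Round fractional axial coordinates to the nearest hex by going
    # through cube coordinates and fixing the axis with the largest error.
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return rq, rr

def point_to_hex(x, y, size):
    # Map a 2D point (e.g. a projectile hit) onto a pointy-top hex grid.
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    return hex_round(q, r)
```

A damage system could then keep a set of removed hex coordinates and skip those cells when rebuilding the hull geometry.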
I had that thought before, too. But I’m not sure how I would find a somewhat efficient way to pick specific faces, especially with later meshes that will have some internal structures modeled into them.
The program is currently using a mesh collider for testing, but eventually I want to wrap models in collision shapes to detect projectiles in close range and then calculate potential damage from there.
Well, kind of? I have a beam prototype that writes a few pixels at runtime, with slightly weaker coloring at the edges.
I should have tested with a prepared texture layout sooner.
The edges still aren’t getting much better, even with strongly blurred textures. But if burn spots and holes are set at a somewhat large scale, a density of 8 pixels would look decent enough. Maybe I can do more about it when I eventually take a closer look at shaders.
Hmm… Perhaps I’m missing something, but it strikes me as odd that your burn-patterns still show the blurriness of the original texture: in alpha-cutout mode I would expect sharp edges, as shown in the tests posted above. Perhaps alpha-cutout mode isn’t being applied as expected…?
That’s possible. Not sure though where exactly the problem would be then.
Seraga’s program works fine on my Panda installation, so it’s probably not specific to the engine version I’m working with. Also, turning off the auto-shader doesn’t change anything either.
edit: Strangely, I’m still seeing alpha cutouts even with transparency explicitly turned off.
I think I found my issue. For some reason TransparencyAttrib.M_binary doesn’t work with the material settings I used on my model’s hull. I will still have to look closer for the exact settings that cause problems, but I think I can work with this.
Edit: I kind of wish I could mark more than one post as the solution. It took me a few steps to learn how to put little holes into a model.
Edit 2: Looks like Panda takes issue with materials that are associated with a texture in the egg file (at least on models exported from Blender). When all textures are applied at runtime, M_binary works fine though.
All the additional information is stored in the intermediate alpha values between 0 and 1 of the pixels around the edge. In effect, in the blurred texture, the alpha value of each pixel doesn’t just store whether it is inside or outside the shape, but it stores the distance to the nearest edge of the shape.
The fragment shader runs at a much higher resolution than the original texture when rendering the object. When it samples the pixel from the texture, instead of reading a binary value, it is reading an approximation of the distance value to the nearest edge, and it can use that to much more accurately calculate whether a given pixel coordinate should be considered inside or outside the shape.
This trick, of course, only works for binary alpha. It does not work for anything that needs semi-transparent areas (i.e. 0 < a < 1). It also struggles with sharp corners, which always end up being rounded, as can be seen in @Thaumaturge’s example.
(There are, for the curious, variations of this method that do support sharp corners, such as this.)
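To make the mechanism above concrete, here is a brute-force pure-Python sketch: build a distance field from a tiny binary mask, then threshold the *interpolated* field at a 4x higher resolution, which is what the GPU’s bilinear texture filter plus a binary cutoff effectively does. It is purely illustrative; a real pipeline precomputes the field offline and lets the texture filter do the sampling.

```python
import math

def distance_field(mask):
    # For each cell, distance to the nearest cell of the opposite state:
    # positive inside the shape, negative outside. O(n^2), demo only.
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = min(math.hypot(x - ox, y - oy)
                    for oy in range(h) for ox in range(w)
                    if mask[oy][ox] != mask[y][x])
            out[y][x] = d if mask[y][x] else -d
    return out

def sample(field, fx, fy):
    # Bilinear interpolation, mimicking the GPU's linear texture filter.
    x0, y0 = int(fx), int(fy)
    x1 = min(x0 + 1, len(field[0]) - 1)
    y1 = min(y0 + 1, len(field) - 1)
    tx, ty = fx - x0, fy - y0
    top = field[y0][x0] * (1 - tx) + field[y0][x1] * tx
    bot = field[y1][x0] * (1 - tx) + field[y1][x1] * tx
    return top * (1 - ty) + bot * ty

# An 8x8 binary mask of a rough disc, standing in for a low-res cutout.
mask = [[1 if (x - 3.5) ** 2 + (y - 3.5) ** 2 < 9 else 0
         for x in range(8)] for y in range(8)]
field = distance_field(mask)

# "Render" at 4x the texture resolution: threshold the interpolated
# distance at 0 instead of reading the raw binary mask, giving a
# much smoother curve than the 8x8 grid could store directly.
scale = 4
upscaled = [[1 if sample(field, x / scale, y / scale) > 0 else 0
             for x in range(8 * scale)] for y in range(8 * scale)]
```

In-engine, the blurred alpha channel plays the role of `field`, and the M_binary cutoff plays the role of the `> 0` threshold.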