Stutter when loading textures dynamically

This is an old thread, but it’s the closest I can find to touching on my issue. I’m experimenting with my own take on a terrain/LOD system (with its own strengths and weaknesses compared to every other terrain/LOD system out there, of course).

Anyway, I’m dynamically loading a /lot/ of textures on the fly and noticing stutters.

I am doing the trick where I create a separate geometry/texture loader thread. In that thread I load the texture into a PNMImage first and then make the texture from it (code snippets; pretend the file is always there):

    with open(file, "rb") as f:
        data = f.read()
    p = PNMImage()
    p.read(StringStream(data))
    tex = Texture()
    tex.load(p)

Then I attach that texture to my geometry and send the node back to the main render thread to attach to the scene graph (simultaneously with removing other lower-level-of-detail stuff). The worker thread seems fine and doesn’t seem to hit the render thread’s update rate at all.

However, when I attach the new node to the main render graph, I get a stutter. My theory is that this is due to making the mipmaps. When I turn off mipmaps, the stutter length drops significantly (though it can still drop a frame or two).

I saw advice to make/load an egg file and the egg file loader knows how to keep the model/texture loading transparent to the render thread, but I’m creating some of my geometry on the fly. I saw somewhere else that it’s possible to create a memory-only egg file and attach it?

So my question is: Are egg file formats still the way to go? Will this address my issue, or am I just stuck with stuttering when paging textures over to the video card? If I can do egg files, I want to do these in memory only. I have found some low level egg file api documentation, but for the life of me, I haven’t found the higher level explanation of how to build the egg file in memory.

My needs are super simple. I have a pile of triangles (not even strips or fans), I have the texture, I have the texture coordinates and vertex normals. I just need some tutorial or example of how to make the egg file format in memory.

Or is there a way to make the mipmaps and page in the texture in my loader thread so all that work is already done and the render thread doesn’t have to take the performance hit?

Sorry if these are all dumb questions that are explained somewhere already … I have been searching but not finding much. Thanks in advance for any thoughts, hints, or pointers to “for dummies”-level documentation. :)

In fact, Panda has a native way to store an instance of a class in byte form (the .bam format). You can save the texture with its mipmap levels generated in advance.

from panda3d.core import TexturePool

tex = TexturePool.load_texture("noise.png")

At the moment I don’t have any code for generating the mipmap levels, and I don’t even know whether it works. Alternatively, you can use the DDS format, which can store mipmap levels directly.

Any such instance can be loaded back:

file = BamFile()
file.open_read("file path")
instance_object = file.read_object()
file.resolve()  # resolve object pointers after reading

Or we save it:

file = BamFile()
file.open_write("file path")
file.write_object(instance_object)
file.close()

Thanks Serega,

I did some more hunting and added a call to tex.generateRamMipmapImages() in my loader thread. That seemed to help to some degree. I still get stutters with each new node I add/remove to the scene graph, but they are at least shorter stutters.

Random comments: A lot of useful Panda3D docs are filed under their c_plus_plus_function_names() rather than their pythonFunctionNames(), and that has me searching for the wrong things when I forget it. (Leading to me probably asking more than my fair share of dumb questions here.)

I’m really hoping to avoid a step that generates the mipmaps and writes out a bam/egg/dds-type file, just to read it back in and presumably delete it from disk. I already have the top-level (no-mipmap) textures saved as JPG or PNG, so I’m hoping to avoid wasting more disk space on intermediate versions … unless that’s the only path to smoothly paging textures, in which case you do what you have to do, I guess … (though I don’t know whether it’s worth chasing that down, whether it would help or hurt, or whether I’ll just discover that some other aspect of connecting a new node into the scene graph is causing the stutters.)

I was mostly asking about egg files because I saw a reference suggesting I could create a memory-only egg file using the egg builder API, connect that to the scene graph, and allegedly all the magic would happen from that point on … I just couldn’t find sufficient documentation for how to do this, so I haven’t been able to try it. Is there a tutorial or example out there that I’m missing? I just have a simple pile of triangles, vertices, normals, texcoords … really basic stuff from the model/node perspective.


The egg file is just a text structure that stores a description of the model as tagged text, something like HTML. It is parsed at load time, and the corresponding instances are created on the C++ side. From this we can conclude that going through egg is a waste of CPU time. You can generate the procedural model directly, without egg as an intermediary.

Here is an example of use; this is what the egg loader does under the hood.

from panda3d.core import GeomVertexData, GeomVertexFormat, Geom, GeomTriangles, GeomVertexWriter, GeomNode, Texture, TextureAttrib, NodePath, RenderState
from direct.showbase.ShowBase import ShowBase

class MyApp(ShowBase):

    def __init__(self):
        ShowBase.__init__(self)

        # Creating vertex data.
        vdata = GeomVertexData('name', GeomVertexFormat.getV3n3t2(), Geom.UHStatic)

        vertex = GeomVertexWriter(vdata, 'vertex')
        normal = GeomVertexWriter(vdata, 'normal')
        texcoord = GeomVertexWriter(vdata, 'texcoord')

        # Adding vertex data for the first quad.
        vertex.addData3(-1, -1, 0)
        vertex.addData3(1, -1, 0)
        vertex.addData3(1, 1, 0)
        vertex.addData3(-1, 1, 0)

        normal.addData3(0, 0, 1)
        normal.addData3(0, 0, 1)
        normal.addData3(0, 0, 1)
        normal.addData3(0, 0, 1)

        texcoord.addData2(0, 0)
        texcoord.addData2(1, 0)
        texcoord.addData2(1, 1)
        texcoord.addData2(0, 1)

        # Adding vertex data for the second quad.
        vertex.addData3(-1, 1.40914, 0)
        vertex.addData3(1, 1.40914, 0)
        vertex.addData3(1, 3.40914, 0)
        vertex.addData3(-1, 3.40914, 0)

        normal.addData3(0, 0, 1)
        normal.addData3(0, 0, 1)
        normal.addData3(0, 0, 1)
        normal.addData3(0, 0, 1)

        texcoord.addData2(0, 0)
        texcoord.addData2(1, 0)
        texcoord.addData2(1, 1)
        texcoord.addData2(0, 1)

        # Creating primitive - a.
        prim_a = GeomTriangles(Geom.UHStatic)
        prim_a.addVertices(0, 1, 2)
        prim_a.addVertices(0, 2, 3)

        geom1 = Geom(vdata)
        geom1.addPrimitive(prim_a)

        # Creating primitive - b.
        prim_b = GeomTriangles(Geom.UHStatic)
        prim_b.addVertices(4, 5, 6)
        prim_b.addVertices(4, 6, 7)

        geom2 = Geom(vdata)
        geom2.addPrimitive(prim_b)

        # Create textures (empty placeholders here; load real ones as needed).
        tex1 = Texture("Texture1")
        tex2 = Texture("Texture2")

        # Create a render state per geom.
        state_a = RenderState.make(TextureAttrib.make(tex1))
        state_b = RenderState.make(TextureAttrib.make(tex2))

        # Create the geom node and attach both geoms with their states.
        geom_node = GeomNode('Plane')
        geom_node.add_geom(geom1, state_a)
        geom_node.add_geom(geom2, state_b)

        # Attach the geom node to the scene graph.
        NodePath(geom_node).reparent_to(self.render)

app = MyApp()
app.run()

I think that if you move terrain generation and chunk loading to the C++ side, you can reduce the stutter time further.

However, storing the texture in binary form is preferable, since you don’t waste CPU time unpacking a JPEG or PNG container. The price is an increased data size on disk.

As for the stutter, I think the problem is the transforms that propagate in a chain reaction down the NodePath hierarchy, plus the updating of render attributes and the regeneration of shaders, if any were set on the node.

At the moment, Panda does a lot of work right in the core to generate dynamic data for shading. For example, it creates and updates data such as:

p3d_Material and p3d_LightSource

I think this is bad practice; a plug-in (module) should be responsible for generating this data, but at the moment Panda does it by default. I don’t know yet how to avoid the NodePath methods that execute hierarchically over all nodes. The situation worsens if you use .setCollideMask(BitMask32.bit(1)) or something else on parent nodes.

I think we should have an empty NodePath with no hierarchy, just to add to the graph for rendering mesh data.

However, only rdb can give an answer to such complex questions.

I moved your posts to a new thread. In the future, please create a new thread for a new question.

Normally mipmap generation is done by the driver. It’s possible that the driver is doing this synchronously, rather than doing this on a separate graphics queue, so forcing Panda to generate the mipmaps may indeed be faster - but note that calling generateRamMipmapImages() does not release the Python GIL (this is something we could change in Panda) so it can’t actually happen concurrently with other operations that don’t release the GIL (but it could happen concurrently with rendering). If you set driver-generate-mipmaps false in Config.prc I believe that it will happen as part of the texture load operation, so if that happens asynchronously then the mipmap generation will be too.
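For reference, that setting goes in your Config.prc (a minimal fragment; the variable name is as given above):

```
driver-generate-mipmaps false
```

With this set, Panda generates the mipmap levels itself during the texture load rather than leaving it to the driver, so an asynchronous load carries the mipmap cost with it.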

To analyse where exactly the stutter is coming from, you would need a version of PStats from a development build of Panda3D, which has a timeline view that allows you to view individual start/stop events.

All C++ methods are also available under their original function_names() nowadays, since many users prefer snake_case - it’s up to you which style you prefer.

I presume you mean a .txo file (which is a .bam file but for textures). These can be faster to read, by virtue of them directly storing the in-memory representation, including any generated mipmaps, at the cost of no compression. You can generate them using egg2bam. However, I do not think that doing this is required to solve your problem.

What you could consider is on-demand asynchronous texture loading. If you enable this, a Texture object is created containing a low-resolution version of the texture plus the filename that the full version can be loaded from, and the moment that a texture object comes into view, Panda will automatically load the texture in a background thread:
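If I recall the manual’s asynchronous-loading page correctly, the relevant Config.prc settings look roughly like this (treat the exact names as a sketch to verify against the manual):

```
preload-textures false
preload-simple-textures true
allow-incomplete-render true
```

With these, textures start out as low-resolution placeholders and the full versions stream in from a background thread as they come into view.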

As for loading models, Panda can also already do model loading asynchronously in a background thread via a blocking=False parameter; this is also explained on the page above. Panda normally also automatically creates a cached .bam version of any .egg model, so normally the egg loader is not even invoked except the first time you load a particular model. If you want to create models dynamically, do not use the egg loader, but instead construct a GeomVertexData and friends.

I don’t understand @serega-kkz’s last post very well, it does not sound related to the problem at hand.


I just wanted to say that there is no point in creating an egg file to analyze it later and get GeomVertexData again.

However, features like loading from the cache and loading textures as they enter the camera’s field of view are of little use when streaming chunks and textures.

The question arises: what is the point of the cache if, over time, all the terrain geometry chunks end up in it anyway? What prevents you from using .bam directly?

Or take a texture that was loaded in one location (the winter biome), but since then you have moved to the dune biome. The question arises of what to do with the winter-biome texture still in memory. It turns out you need to find it in the TexturePool and delete it manually, and at that point the advantage of this approach is not clear.

Can this somehow be used with the BamFile interface? For example, I create an instance of BulletTriangleMesh and save it in RAM form. Can it also be loaded asynchronously?

There is no need to delete textures from the TexturePool, because Panda deletes the RAM image (by default) from RAM when it is uploaded to the GPU. The Texture object continues to exist but it only contains some metadata about the size and which file it can be reloaded from, while the actual RAM image data is deleted. Panda also has features to unload the texture from GPU memory automatically when GPU memory becomes full.

If you activate that feature with graphics-memory-limit, the least recently used textures will be unloaded first. So if you enter the dune biome, and you run out of GPU memory, (some of) the winter biome textures will be unloaded to make space. If you then reenter the winter biome, the winter textures will be reloaded from disk.
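As a Config.prc sketch (I believe the value is in bytes; here roughly 256 MB, but verify the units against the manual):

```
graphics-memory-limit 268435456
```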

Without graphics-memory-limit, you can still use this technique by manually calling releaseAll() on the Texture object (and clearRamImage() possibly if it has one).

The biggest cost of loading is the latency of disk access, so asynchronous loading exists to mitigate that. So no, there is no automatic feature to asynchronously load from memory - you would have to use a thread for this.
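That last point can be sketched with stdlib threading alone (no Panda3D here; all names are illustrative, and the "mipmap" work is simulated): a worker thread does the expensive decode-style work on in-memory bytes and hands finished results to the main loop through a queue, so the main loop only ever picks up completed work.

```python
import threading
import queue

def loader_worker(requests, results):
    """Simulate expensive decode/mipmap-style work off the main thread."""
    while True:
        name, data = requests.get()
        if name is None:          # sentinel: shut down the worker
            break
        # Pretend this is the costly part: build 4 shrinking "mip levels".
        prepared = {"name": name,
                    "levels": [data[:max(1, len(data) >> i)] for i in range(4)]}
        results.put(prepared)

requests = queue.Queue()
results = queue.Queue()
t = threading.Thread(target=loader_worker, args=(requests, results), daemon=True)
t.start()

# Main loop side: submit a request, then pick up the finished result.
requests.put(("grass", b"\x00" * 16))
ready = results.get(timeout=5)    # a real main loop would poll get_nowait()
requests.put((None, None))        # shut the worker down
t.join()
print(ready["name"], len(ready["levels"]))
```

In a real application the main loop would poll `results` with `get_nowait()` once per frame and attach whatever has finished, which is essentially the pattern described at the top of this thread.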