Gracefully loading levels

Okay, if I do

import direct.directbase.DirectStart
from pandac.PandaModules import *

def myFunction():
    print "Model finished loading"

model = loader.loadModel('panda', callback = myFunction)
model.reparentTo(render)

run()

I get “Callback instance has no attribute ‘reparentTo’”.
I’m guessing the optional callback argument makes loadModel return a different class instance. Why?

From direct/src/showbase/Loader.py:

    def loadModel(self, modelPath, loaderOptions = None, noCache = None,
                  allowInstance = False, okMissing = None,
                  callback = None, extraArgs = [], priority = None):
        """
Attempts to load a model or models from one or more relative
pathnames.  If the input modelPath is a string (a single model
pathname), the return value will be a NodePath to the model
loaded if the load was successful, or None otherwise.  If the
input modelPath is a list of pathnames, the return value will
be a list of NodePaths and/or Nones.

loaderOptions may optionally be passed in to control details
about the way the model is searched and loaded.  See the
LoaderOptions class for more.

The default is to look in the ModelPool (RAM) cache first, and
return a copy from that if the model can be found there.  If
the bam cache is enabled (via the model-cache-dir config
variable), then that will be consulted next, and if both
caches fail, the file will be loaded from disk.  If noCache is
True, then neither cache will be consulted or updated.

If allowInstance is True, a shared instance may be returned
from the ModelPool.  This is dangerous, since it is easy to
accidentally modify the shared instance, and invalidate future
load attempts of the same model.  Normally, you should leave
allowInstance set to False, which will always return a unique
copy.

If okMissing is True, None is returned if the model is not
found or cannot be read, and no error message is printed.
Otherwise, an IOError is raised if the model is not found or
cannot be read (similar to attempting to open a nonexistent
file).  (If modelPath is a list of filenames, then IOError is
raised if *any* of the models could not be loaded.)

If callback is not None, then the model load will be performed
asynchronously.  In this case, loadModel() will initiate a
background load and return immediately.  The return value will
be an object that may later be passed to
loader.cancelRequest() to cancel the asynchronous request.  At
some later point, when the requested model(s) have finished
loading, the callback function will be invoked with the n
loaded models passed as its parameter list.  It is possible
that the callback will be invoked immediately, even before
loadModel() returns.  If you use callback, you may also
specify a priority, which specifies the relative importance
of this model over all of the other asynchronous load
requests (higher numbers are loaded first).

True asynchronous model loading requires Panda to have been
compiled with threading support enabled (you can test
Thread.isThreadingSupported()).  In the absence of threading
support, the asynchronous interface still exists and still
behaves exactly as described, except that loadModel() might
not return immediately.
        """

I don’t know why this documentation isn’t showing up in the generated API docs.

David

It’s quoted in the manual, though. I just don’t understand this paragraph very well.

Okay, what do you mean by “model(s)”? Can you load multiple models with a single loadModel?
“the callback function will be invoked with the n loaded models passed as its parameter list”
So the argument(s) of my function must be the model references?

Thread.isThreadingSupported() returns 1, which is True, right?

You can pass a list of model filenames to loader.loadModel(), instead of just a single filename, if you like. This is especially useful when you are loading asynchronously with callbacks, because in that case you can’t simply call loadModel() repeatedly for each new model you want to load. But you can also pass a list even if you aren’t using callbacks; in that case, loader.loadModel() will just return a list of models instead of a single model.

When you pass callback = fn, you change the way loader.loadModel() works. In this new mode, instead of returning the loaded model, it calls fn(model) instead. If you pass a list of three filenames, it will call fn(model1, model2, model3).

When you pass callback = fn, the return value of loader.loadModel() is not a model. Of course it can’t be, because the function returns before the model has been loaded! Instead, the return value is a handle to the loading request, which you can just ignore if you like. Most times you don’t need to care about the return value, and it’s probably better to forget about it for now, until you have a clearer grasp of the asynchronous load process. When you’re ready to think about the return value, you can use it to stop the models from loading if you change your mind about it later for some reason (for instance, if the user clicks a button to quit the game or something).
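
The handle-and-callback pattern described here can be sketched in plain Python. This is purely illustrative: `AsyncRequest` and `load_model_async` are invented names modeling the idea of "return a request handle now, deliver models to the callback later", not Panda3D’s actual implementation.

```python
import threading

class AsyncRequest(object):
    """Hypothetical stand-in for the handle that a callback-mode load
    returns; cancelling it is analogous in spirit to loader.cancelRequest()."""
    def __init__(self):
        self.cancelled = False

    def cancel(self):
        self.cancelled = True

def load_model_async(paths, callback):
    # Kick off a background "load" and return a handle immediately.
    request = AsyncRequest()

    def worker():
        models = ['<model:%s>' % p for p in paths]  # pretend to load from disk
        if not request.cancelled:
            callback(models)  # one list argument, as found later in this thread

    t = threading.Thread(target=worker)
    t.start()
    t.join()  # joined here only to make this demo deterministic
    return request

loaded = []
handle = load_model_async(['smiley', 'frowney'], loaded.extend)
print(loaded)  # → ['<model:smiley>', '<model:frowney>']
```

Note that the caller gets `handle` back right away; the models only ever arrive through the callback, which is exactly why the original snippet’s `model.reparentTo(render)` on the return value fails.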

David

I did some experimenting and it looks like it actually passes one list argument, not an argument list where you would use *args.

Modification to Anon’s code:

import direct.directbase.DirectStart 
from pandac.PandaModules import * 

def myFunction(models):
    for model in models:
        model.reparentTo(render)
    print "Models finished loading" 

loader.loadModel(['smiley', 'frowney', 'jack', 'teapot'], callback = myFunction)

run()

Or, if you like, here is some probably hard to understand code I put together last night with two versions of a “fade and load” function.
asyncLoadWithFade is the asynchronous version, in which I used an EventGroup to ensure things happen in the correct order. stLoadWithFade does it the regular way, but suffers from some of the problems discussed above: with many/large models it can lag enough that the second fade is skipped entirely.

from direct.showbase.ShowBase import CardMaker, Vec4
from direct.showbase.EventGroup import EventGroup

def asyncLoadWithFade(filelist):
    # Build a fullscreen card to fade the scene to black and back.
    cm = CardMaker('BlackScreenCard')
    cm.setFrameFullscreenQuad()
    fsq = render2d.attachNewNode(cm.generate())
    fsq.setColor(Vec4(0, 0, 0, 1))
    fsq.setTransparency(1)
    # fii fades the black card in (screen to black); foi fades it back out.
    fii = fsq.colorScaleInterval(3, Vec4(1, 1, 1, 1), Vec4(1, 1, 1, 0))
    foi = fsq.colorScaleInterval(3, Vec4(1, 1, 1, 0), Vec4(1, 1, 1, 1))
    fii.setDoneEvent('FadeInDone')
    foi.setDoneEvent('FadeOutDone')
    # Fire 'FadeReady' only once both the fade-in and the async load are done.
    base._fadeEG = EventGroup('FadeAndLoad', ('ModelsLoaded', 'FadeInDone'), 'FadeReady')

    def dowork():
        # Screen is black and models are loaded: attach them, then fade out.
        print "dowork called"
        for node in base._nodes:
            if node:
                node.reparentTo(render)
        foi.start()

    def cleanup():
        print "cleanup called"
        base._fadeEG.destroy()
        del base._fadeEG
        del base._nodes
        fsq.removeNode()

    def saveNodes(nodes):
        # Async load callback: stash the NodePaths and signal the EventGroup.
        print "saveNodes called"
        base._nodes = nodes
        messenger.send('ModelsLoaded')

    base.acceptOnce('FadeReady', dowork)
    base.acceptOnce('FadeOutDone', cleanup)
    fii.start()
    loader.loadModel(filelist, okMissing=True, callback=saveNodes)


def stLoadWithFade(filelist):
    # Same fade card setup as above, but the load happens synchronously.
    cm = CardMaker('BlackScreenCard')
    cm.setFrameFullscreenQuad()
    fsq = render2d.attachNewNode(cm.generate())
    fsq.setColor(Vec4(0, 0, 0, 1))
    fsq.setTransparency(1)
    fii = fsq.colorScaleInterval(3, Vec4(1, 1, 1, 1), Vec4(1, 1, 1, 0))
    foi = fsq.colorScaleInterval(3, Vec4(1, 1, 1, 0), Vec4(1, 1, 1, 1))
    fii.setDoneEvent('FadeInDone')
    foi.setDoneEvent('FadeOutDone')

    def dowork():
        # Screen is black: load everything synchronously.  This is the
        # part that can lag long enough to skip the second fade.
        print "dowork called"
        for node in loader.loadModel(filelist, okMissing=True):
            if node:
                node.reparentTo(render)
        foi.start()

    def cleanup():
        print "cleanup called"
        fsq.removeNode()

    base.acceptOnce('FadeInDone', dowork)
    base.acceptOnce('FadeOutDone', cleanup)
    fii.start()

Note: This only contains the functions. If you want to try it out you’ll have to write some code to call them.

You know, your modified snippet gives an error.

@drwr, that’s clearer. The first thing that comes to mind now is: are those models referenced, then?

Sorry, it works for me. Unless I moved one of those models previously. Is the teapot usually in the models folder? Oh well.

No, your first snippet gives the same error message as mine…

Question… if they’re being run by another thread (the task chain), isn’t that what you want to mess with to fix the problem of the lag with many/large models?

Also… is there any way to manually pause a thread until the main thread says it’s OK (even if the thread is in the middle of loading a model)? I know you can cancel it after the fact, but I mean right in the middle.

I seem to be suffering from this when my game loads and starts the preloading process. The player can’t type in the text boxes until the preloading is done.

I don’t know what you mean by “referenced” in this context. You mean the reference count is held so they won’t be deleted? Of course that’s still true.

What error is that, exactly? Come on, Anon, you’ve been doing this for a while now. You really should know better by now than to be so imprecise when you describe an error. How can anyone hope to help you otherwise?

Then something’s not right with your threading. Note that if you’re writing your own thread functions, you need to call Thread.considerYield() from time to time within those functions, or they will hog all of the CPU until they’re done. This shouldn’t be an issue if you’re just using the callback feature of loader.loadModel(), as described in this thread.
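
The cooperative-yield idea, and the pause/unpause behavior asked about above, can be sketched in plain Python with a `threading.Event` gate. This is an illustrative pattern only (`PausableWorker` is an invented name, not part of Panda3D); it checks a flag between jobs, much as a thread function would call Thread.considerYield() between units of work.

```python
import threading

class PausableWorker(object):
    """Sketch of a background worker that the main thread can pause and
    resume -- e.g. pause while a text box has focus, resume afterwards."""
    def __init__(self, jobs):
        self.jobs = list(jobs)
        self.done = []
        self._run = threading.Event()
        self._run.set()  # start in the unpaused state
        self._thread = threading.Thread(target=self._work)

    def _work(self):
        for job in self.jobs:
            self._run.wait()       # blocks here whenever paused
            self.done.append(job)  # stand-in for loading one model

    def start(self):
        self._thread.start()

    def pause(self):
        self._run.clear()

    def resume(self):
        self._run.set()

    def join(self):
        self._thread.join()

worker = PausableWorker(['smiley', 'frowney', 'jack'])
worker.pause()    # e.g. a text box just took focus
worker.start()    # thread starts but immediately waits at the gate
worker.resume()   # focus lost: let loading continue
worker.join()
print(worker.done)  # → ['smiley', 'frowney', 'jack']
```

Note the limitation: this only pauses *between* jobs, not in the middle of a single load, so one very large model will still block for its full load time.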

David

I know. I’m using the code described here within my own code. It works fine for smaller objects, but when it gets to the larger ones, it still freezes the user out of typing text in the message boxes. Sometimes it just lags so that you can type but nothing shows until the next frame (from what I can guess). From what I have read (which I may have misunderstood), the callback creates a sub-thread for the models to be loaded in. This is why I asked about somehow stopping or slowing the way it works, sort of like considerYield() would, but manually: if the text box has focus, it pauses until it loses focus. By stop I mean pause/unpause.

Well, the only error message I posted was this:

It’s a few posts above…

EDIT: Wow, the problem was I had accidentally edited the default model directory. :confused: