Creating an audio-only node object

Hello,
I am very new to Panda3D and am looking for a way to create a node object without needing to supply an image. I just want the object to have a sound. What would be the best way of doing this?
I was thinking I could subclass either a model or an Actor, but there has to be a better way.
thank you,

You can make a node that has no geometry, nor any visual representation.
Use:

my_node = render.attachNewNode('my_node')

or

my_node = NodePath('my_node') 
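
A minimal sketch of the second form, assuming ShowBase has already been instantiated so that render exists (the name and position are just examples):

from panda3d.core import NodePath

# A NodePath created on its own is not part of any scene graph yet.
my_node = NodePath('my_node')

# Reparent it to render (or any other node) when you want it in the scene,
# then position it like any other node.
my_node.reparentTo(render)
my_node.setPos(5, 10, 0)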

That worked great!
So do I need to render something even without an image? Does render do more than just display to the screen?

This maybe should go in a new topic, but the sounds attached to a node come out as relative to the listener, not in world space. So in the following code, the position (0, 0, 0) sounds as if it is on top of the listener when it should be to the left.

# Note that at y = 100 the sound is almost at 0 but not quite. At y = 0, x = 1 the sound is in the right ear, and at y = 0, x = -1 the sound is in the left ear.


from direct.showbase.ShowBase import ShowBase
from direct.showbase import Audio3DManager

app = ShowBase()

app.camera.setPos(5, 5, 0)

# A bare node with no geometry; it only provides a position for the sound.
item = app.render.attachNewNode('my node')
x = 0
y = 20
item.setPos(x, y, 0)

# Set up 3D audio with the camera as the listener.
audio3d = Audio3DManager.Audio3DManager(app.sfxManagerList[0], app.camera)
audio3d.attachListener(app.camera)

sound = audio3d.loadSfx('step.ogg')
audio3d.attachSoundToObject(sound, item)

def cy(num):
	global y
	y += num
	item.setPos(x, y, 0)

def cx(num):
	global x
	x += num
	item.setPos(x, y, 0)

def printer():
	print("X: %s, Y: %s" % (x, y))

app.accept('space', sound.play)
app.accept('arrow_up', cy, [1])
app.accept('arrow_down', cy, [-1])
app.accept('arrow_right', cx, [1])
app.accept('arrow_left', cx, [-1])
app.accept('p', printer)

app.run()

Try base.disableMouse() before setting the initial camera position. It disables the default mouse control; otherwise camera.setPos will not affect the real camera position. More specifically, the position will be overwritten by the default controller.
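
Something like this, for example (just a sketch; the position is an arbitrary example):

from direct.showbase.ShowBase import ShowBase

app = ShowBase()
app.disableMouse()           # disable the default mouse-driven camera controller
app.camera.setPos(5, 5, 0)   # now this actually moves the camera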

As for the previous question, can you explain what you need?

That worked great!
But still, when the sound object is at the same Y coordinate as the listener, it goes instantly into only the ear for the side it is on.
So (5, 10, 0) for the object when the player is at (1, 10, 0) will sound only out of the left ear, whereas it should really still have some of both.
Is there a way to:
When the object reaches the same Y coordinate as the player, jump it to +1 or -1?
When the object is behind the player, add the Doppler shift rather than just making it sound as though it were in front?
I can do this by hand, but as this is pretty common 3D world stuff, I would think these things would already be built in.

Is there also a way to change the facing direction of the player? So instead of north, I would like to face east. It seems as if Z is up and down…
I am wanting to create
audio games,
so everything Panda3D has, but with the graphics removed. So physics, networking, sound, event handling… I will be creating everything out of blank nodes: objects like swords, monsters and whatnot.

Is there a way to edit posts?
Some programmers put extra stuff in the render area because they expect everything to be rendered. If I would like everything to be treated the same, just without pictures, do I need to put my nodes in the render tree?

Here is a bug, I think:
Normal (non-3D) sounds do not pan when using OpenAL, although one can get really close with 3D audio.

from direct.showbase.ShowBase import ShowBase
app = ShowBase()

# Does not work: setBalance appears to have no effect with the OpenAL audio manager
sound = app.loader.loadSfx('step.ogg')
sound.setBalance(1)

app.accept('space', sound.play)
app.run()

OK, I found how to control the camera, but I am wondering if I can change the default keyboard and mouse actions without needing to deal with positioning myself. For example,
app.useDrive()

has a great movement and turning script by default, but it is weird and doesn’t stop turning when a key-up event is triggered. Where can I get control of this default movement script? I would like to be able to change the velocity and set a footstep sound task to play while movement is active.
Is it normal for people to override the typical camera movement? Also, what is the difference between doing what the camera documentation says (totally disabling the default actions) and what the tutorial does?
If I need to override it, it seems as if there are processing functions made specifically for the camera, although the camera tutorial does not directly talk about them.
I can just use the HPR and position functions in my custom movement code, but if I do that, how do I get the current position of a node? I don’t see an API for nodes anywhere.
Are there also ready-made movement functions that check for collisions, remove sounds that are a certain distance away, and whatnot?
The hardest thing as a new person coming to Panda3D is being hit with all the new stuff. It is very detailed and not very clean. For example, why is
from direct.showbase.ShowBase import ShowBase
at the top of every script? Why couldn’t it be something like:
import panda3d
app = panda3d.App(title="my app")

Also, camelCase for methods, variables and functions is not very Pythonic.

Keep in mind that “render” isn’t a particularly special node; it’s just a regular node, usually used to represent the root of the scene graph. It can contain both renderable and non-renderable (i.e. collision, audio reference, etc.) nodes.

Audio isn’t 3-D by default, you need to load it as such and attach it to the scene graph if you want 3-D audio to work. For more information, visit the manual page on 3D Audio.

It is indeed normal to disable the typical camera movement, as in practice pretty much every game will eventually need a custom camera control interface anyway.

To get a position from a node:
panda3d.org/manual/index.ph … te_Changes

Or the NodePath API reference for more detailed information on the operations you can do on nodes:
panda3d.org/reference/devel … e.NodePath

The reason for camelCase is mostly historical. We’re trying to transition to a snake_case style, with every module imported from “panda3d.core” already having snake_case methods as an alternative to camelCase. For instance, NodePath has a get_pos alternative to getPos.
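
For example, these two spellings call the same method on a NodePath (a small sketch; the name and coordinates are arbitrary):

from panda3d.core import NodePath

np = NodePath('example')
np.setPos(1, 2, 3)       # classic camelCase spelling
print(np.get_pos())      # snake_case alias of getPos(), same method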

The idea to have a panda3d.App class is not a bad one. We might consider renaming ShowBase. :slight_smile:

That is good to know thanks!
Are there basic movement equations I can utilize that do the math for movement, so all I need to do is add the numbers? I have my own, but I would like to stay with Panda3D modules as much as possible.
So I can do something like:
app.camera.move.forward()
and have it calculate the squares I walk based on my app.camera.get_h() value and update my app.camera.set_x and app.camera.set_y values based on the set velocity. It would be nice as well if it threw an event when there was a collision.

BTW, I am not getting emails when I get a reply here even though the box is checked.

Not exactly, but you can do this:

app.camera.set_pos(app.camera, (0, 1, 0))

This will move it 1 unit on the Y axis relative to its own coordinate system, ie. 1 unit forward. In reality, you’d want to do this in a task, and replace 1 with globalClock.get_dt() times the desired speed.
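
For instance, a sketch of that kind of task (the speed value and task name are just examples; "app" is the ShowBase instance from earlier, and globalClock is a builtin that ShowBase provides):

SPEED = 5.0  # units per second

def move_forward(task):
    dt = globalClock.getDt()                          # seconds since the previous frame
    app.camera.setPos(app.camera, 0, SPEED * dt, 0)   # move forward in the camera's own space
    return task.cont

app.taskMgr.add(move_forward, 'move_forward')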

Hmm, are the e-mails perhaps ending up in your spam folder?

That does not seem to work for me.
Also, why multiply the speed by dt? What is the global dt?
Here is what I have that should work, but doesn’t:

from direct.showbase.ShowBase import ShowBase
from direct.showbase import Audio3DManager
app = ShowBase()
app.disableMouse()
audio3d = Audio3DManager.Audio3DManager(app.sfxManagerList[0], app.camera)
sound = app.loader.loadSfx('step.ogg')

app.moving = False

def forward(task):
	sound.play()
	app.camera.set_pos(app.camera, (0, 1, 0))
	return task.again

def stop():
	app.taskMgr.remove('step')

def start():
	app.taskMgr.doMethodLater(0.3, forward, "step")

app.accept('arrow_up', start)
app.accept('arrow_up-up', stop)

def spk(stuff):
    print(stuff)

app.accept('space', sound.play)
app.accept('enter', spk, extraArgs=["x: %s, y: %s, facing: %s" % (app.camera.get_x(), app.camera.get_y(), app.camera.get_h())])

app.run()

Also, back to the 3D sound: I am saying that the handling of sounds behind the listener doesn’t work very well, and when you have something right at your side, it is too strong in one ear; it makes me feel huge…
And the panning code for typical sound objects doesn’t work with OpenAL.

There is also a bug I found on my Windows 7 64-bit machine: the window will not accept key presses after focus has been switched away and returned. And after running about 100 Panda windows, it changed, so now it only accepts key presses after I switch to another window and go back to the Panda3D window.
I believe the key code is sometimes different for keys after focus has been returned to a window after being away. This is a bug in Pygame as well.

Do I take it correctly that you have no visible geometry, and are determining whether the camera has moved by listening to the sound that you load?

If so, then I think that part of the problem may be the way in which you’re loading that sound: as rdb mentioned, sound is by default not 3D. In order to load the clip as a 3D sound, I think that the proper procedure is to load the clip through the 3D audio manager that you create (rather than through the default audio manager), and then also place the resultant NodePath into the scene graph by parenting it to another (such as render). At the moment you appear to be loading the clip through the default audio manager, and not placing its NodePath into the scene graph.
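
A minimal sketch of that procedure, reusing the 'step.ogg' file from earlier (the node name and position are just examples):

from direct.showbase.ShowBase import ShowBase
from direct.showbase import Audio3DManager

app = ShowBase()
app.disableMouse()

# Create the 3D audio manager with the camera as the listener.
audio3d = Audio3DManager.Audio3DManager(app.sfxManagerList[0], app.camera)
audio3d.attachListener(app.camera)

# Load the clip through the 3D manager (not loader.loadSfx) and attach it
# to a node that is parented into the scene graph.
emitter = app.render.attachNewNode('emitter')
emitter.setPos(0, 20, 0)
sound = audio3d.loadSfx('step.ogg')
audio3d.attachSoundToObject(sound, emitter)

app.accept('space', sound.play)
app.run()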

If you’re not relying on the sound to determine camera position, are you printing out the camera’s position? If you’re thoroughly stuck, perhaps try adding some “print” commands into your events and your movement task in order to check that they’re being called as expected.

I am printing, and it prints 0.0 for everything, x, y and h, no matter how many times the move function is called.
Basically the above is just a walking scenario without any 3D sound; I just want to get walking down before I add in sounds.
A question, though: does that movement approach do collision detection with any parts of Panda?

Ah, I see the printout now (sorry, I was tired when I posted my previous message ^^; )–and that seems to be where the problem is.

Specifically, the message that you’re providing into “extraArgs” contains your calls to the camera’s get-methods (“get_x”, “get_y” and “get_h” in this case), which are then executed immediately upon the “accept” line being reached, and the results–the camera’s initial position–stored in the string that you place into “extraArgs”. This means that get_x, get_y and get_h are being called only once, before you move the camera, rather than with each press of the enter key, as you seem to intend.

Rather than calling those methods where you are, I see two options:

  1. Have the function called by the event (“spk”) take references to the get-methods as parameters (see the sketch after the code below).
    Or, more simply:
  2. Since “app” is a global reference, have the function just call the get-methods itself. Something like this:
def spk():
    print("x:", app.camera.get_x(), "y:", app.camera.get_y(), "facing:", app.camera.get_h())

app.accept('enter', spk)
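
For completeness, a sketch of the first option: pass the bound get-methods themselves through “extraArgs”, so that they are only called when the event actually fires.

def spk(get_x, get_y, get_h):
    # The methods are called here, at key-press time, not at accept() time.
    print("x:", get_x(), "y:", get_y(), "facing:", get_h())

app.accept('enter', spk,
           extraArgs=[app.camera.get_x, app.camera.get_y, app.camera.get_h])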

Oh wow! Sorry, I am so not used to dealing with functions to get values from classes! Duh!
Something weird, though: calling globalClock.get_dt()
does not throw an error, even though I have not imported globalClock. I am lost…
Also, there is some mention in the collision-detection documentation of a drive function, did I understand correctly? Something that will do collision detection, turning and moving?