[SOLVED] Getting the actual sound heard by one or more listeners

Hi,

I’m looking for an example or tip on how to get the current sound that a listener is hearing at a given moment. For example, in a multiplayer game I can create a camera and get its output through a camera buffer to simulate a player’s vision, but I still haven’t figured out how to do the same with audio.

I’ve read that I can use an AudioManager to get positional sound from a given position in the world (panda3d.org/manual/index.php/3D_Audio), but when the sound plays it goes directly to my speaker, while I’d like to get the exact sound (including volume attenuation due to distance from the source, etc.) reaching this AudioManager AND any other AudioManagers in my world, so I could treat them separately. In other words, every AudioManager would represent an ear in the world.

Does anyone here have an idea or example (including from multiplayer games) of how to get the particular sounds reaching the “ears” of a player in the scene?

Thanks in advance

Why do you need multiple listeners?
I am making a multiplayer game, so every player has his own game client and his own camera.

Perhaps for this you need to activate FMOD in Config.prc.
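
If it helps, the Config.prc line in question is the same setting the code snippets further down set programmatically with loadPrcFileData:

audio-library-name p3fmod_audio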

Because actually the players won’t be humans but robots… Sorry if I was not clear in my first post; I’ll explain better…

I’m creating a “game” with which AI researchers could simulate biological senses on a low budget.

For example, to simulate vision: every player, let’s call it a “creature”, has a root node called CreatureNP which has 2 children: Eye1NP and Eye2NP. Each eye NP has a Camera node and an Eye node (a class coded by me which grabs the camera’s image each frame and turns it into a 2D matrix used to feed a neural network). Why do I need 2 eyes to test vision? Because vision features like depth perception need the combination of the 2 eyes’ inputs.
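
Just to make it concrete, a rough sketch of that hierarchy in Panda3D could look like the following (the buffer size, offsets and node names are only placeholders I’m assuming for illustration; the actual matrix extraction done by the Eye class is omitted):

import direct.directbase.DirectStart

# Root node for one creature; both eyes hang under it.
creature_np = render.attachNewNode('CreatureNP')

def make_eye(name, x_offset):
    eye_np = creature_np.attachNewNode(name)
    eye_np.setPos(x_offset, 0, 0)
    # Render this eye into its own offscreen buffer instead of the main window.
    buf = base.win.makeTextureBuffer(name + '-buffer', 64, 64)
    cam_np = base.makeCamera(buf)
    cam_np.reparentTo(eye_np)
    return eye_np, buf

eye1_np, eye1_buf = make_eye('Eye1NP', -0.5)
eye2_np, eye2_buf = make_eye('Eye2NP', 0.5)

# Each frame, the texture of eye1_buf / eye2_buf can be read back and turned
# into the 2D matrix that feeds the neural network (the "Eye" class above).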

The same is true for hearing: I need 2 ears to test spatial localization of the objects emitting sounds. So I need 2 audio listeners to get the sound spectrum reaching each ear (i.e. each audio listener) and create a distinct 2D matrix for each one based on these spectra.

I could simulate every creature in a separate client to gain performance, but even so I need every client to be able to handle 2 or more listeners for the same creature (i.e. player).

Edit:
I updated the title, changing “players” to “listeners”, to avoid confusion: the same creature can have one or more audio listeners, i.e. its ears.

Ok, but how do I get the particular sound reaching an audio listener? The AudioManager and Audio3DManager classes don’t have methods to get the transformed sound (with Doppler and other effects applied).

The API in the manual is deprecated. See here: panda3d.org/reference/1.9.1 … Properties

Accordingly, here is the code from the tutorial, with reverb. It would be like this:

from panda3d.core import loadPrcFileData
# Switch to the FMOD audio backend before ShowBase is created.
loadPrcFileData("", "audio-library-name p3fmod_audio")

import direct.directbase.DirectStart
from panda3d.core import FilterProperties

mySound = loader.loadSfx("models/audio/sfx/GUI_rollover.wav")
mySound.setLoop(True)
mySound.play()

# Apply a reverb DSP filter to the whole sound manager.
fp = FilterProperties()
fp.addSfxreverb(0.6, 0.5, 0.1, 0.1, 0.1)
base.sfxManagerList[0].configureFilters(fp)

base.run()

I’m trying to understand how your suggestion applies to my context, but I feel that you misunderstood my question.

:blush:

from panda3d.core import loadPrcFileData, CollisionTraverser
# Switch to the FMOD audio backend before ShowBase is created.
loadPrcFileData("", "audio-library-name p3fmod_audio")

from direct.showbase import Audio3DManager

import direct.directbase.DirectStart

# A visible model to act as the positional sound source.
point_audio = loader.loadModel('teapot')
point_audio.reparentTo(render)

# The camera acts as the (single) listener for this Audio3DManager.
audio3d = Audio3DManager.Audio3DManager(base.sfxManagerList[0], camera)
mySound = audio3d.loadSfx('sound.wav')
mySound.setLoop(True)
mySound.play()

# The sound now follows the teapot and is attenuated with distance.
audio3d.attachSoundToObject(mySound, point_audio)

# A collision traverser must be set so that previous-frame positions (and
# therefore velocities for the Doppler effect) are tracked.
base.cTrav = CollisionTraverser()

audio3d.setSoundVelocityAuto(mySound)

base.run()

I’m kind of hearing the Doppler effect!
What is the problem?

And in general, your formulation is strange. “Exact sound”…

Ok, no problem. Now your last code is close to mine.

Let’s say we have a ball (B) with a sound attached to it (“Sine.wav”). In the same scene we have 2 audio listeners placed in different positions: the former (L1) far to the left of the ball and the latter (L2) far to the right of the ball.

Suppose that we move the ball close to L1. In the natural world, this listener will hear the ball’s sound louder than the other listener, right? Ok… When I simulate this in Panda, my physical speaker emits the sound perceived by L1 but ignores the sound perceived by L2. I would like to get the rendered sound for every listener, even if this requires getting the data directly from the sound card.

Now I understand you, after that answer.

In brief: you want to know the name of the sound being played, its volume and its pan.

However, the problem is that the relevant methods are:

getVolume() and getBalance()

They return the old data from before initialization. It looks like a bug; this should be reported to the developers.

I don’t think you need 2 listeners per ‘agent’; you get the right and left audio channels. For multiple listeners you would need multiple audio outputs. I’m not sure what kind of signal you’re after. Do you need the actual sound, or just the volume/pitch shift/delay?
Either way it might be a good idea to write your own system using collision detection, or just the distance to the sound source for each ‘ear’ of each agent.
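
A minimal sketch of that roll-your-own idea, assuming all we want per ear is a distance-based volume and an arrival delay (the constants and the positions in the example at the end are made up for illustration):

from panda3d.core import Point3

SPEED_OF_SOUND = 343.0     # metres per second
REFERENCE_DISTANCE = 1.0   # distance at which a source is heard at full volume

def ear_signal(ear_pos, source_pos):
    # ear_pos / source_pos are world-space positions of one ear and of the source.
    distance = (source_pos - ear_pos).length()
    # Simple inverse-distance attenuation, clamped to 1.0 for very close sources.
    volume = min(1.0, REFERENCE_DISTANCE / max(distance, REFERENCE_DISTANCE))
    # Arrival delay; the difference between the two ears' delays is the
    # interaural time difference the "brain" can use for localisation.
    delay = distance / SPEED_OF_SOUND
    return volume, delay

# Example: a ball 2 m to the left of a creature whose ears are 0.2 m apart.
left_volume, left_delay = ear_signal(Point3(-0.1, 0, 0), Point3(-2, 0, 0))
right_volume, right_delay = ear_signal(Point3(0.1, 0, 0), Point3(-2, 0, 0))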

Hi guys, now it’s getting better with some tips…

Hm… Why not? Spatial localization of sounds is performed mainly because of our 2 ears:

About more than one creature per simulation: no problem in simulating each creature on a separate client machine. It’s a good idea from a performance perspective.

Forgive me, I have another silly piece of advice: use a stereo file, not mono, and then two ears are not needed… :smiley:

I edited my last post with an image showing why I need 2 listeners per creature, so that the brain can compute the difference between the ears and locate the sound source.

You understand that you already have one channel quieter and the other louder. There is a difference, but how do you determine where the sound came from?

Notice the wave propagation in the image; that is how the theory works. However, the PC processor does not know about this.

Let me ask a question: even if the same mono sound reaches a listener, will the left and right channels have different spectra depending on the listener’s position? If so, this solves my problem!

You would need to ask the FMOD developers about that. However, you can record the sound output from Panda3D and do a spectral analysis of the channel signals.
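
For example, assuming the output has already been recorded to a stereo 16-bit PCM WAV file (recording.wav is just a placeholder name), something along these lines could compare the two channel spectra:

import wave
import numpy as np

with wave.open('recording.wav', 'rb') as wav:
    assert wav.getnchannels() == 2, 'expects a stereo recording'
    frames = wav.readframes(wav.getnframes())
    rate = wav.getframerate()

# Interleaved 16-bit samples -> two separate channel arrays.
samples = np.frombuffer(frames, dtype=np.int16).reshape(-1, 2)
left = samples[:, 0].astype(float)
right = samples[:, 1].astype(float)

# Magnitude spectrum of each channel; differences in level (and phase)
# between the two channels are what the "brain" would use to localise the source.
freqs = np.fft.rfftfreq(len(left), d=1.0 / rate)
left_spectrum = np.abs(np.fft.rfft(left))
right_spectrum = np.abs(np.fft.rfft(right))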

Hmm… I’ll check this… but I believe that the left and right channels have the same spectrum… Maybe I am wrong… :confused:

A strange idea just came to me: change the sound depending on where it plays. For example, add location information to the spectrum, and the bot would read that frequency range to get the data.
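
Very roughly, something like this (the reserved band and the mapping are arbitrary assumptions): the emitter maps its bearing to a tone inside a reserved band, and the bot finds the spectral peak in that band to read the bearing back.

import numpy as np

RATE = 44100
BAND_LOW, BAND_HIGH = 15000.0, 18000.0  # reserved "data" band, away from the normal sfx content

def encode_bearing(bearing_deg, duration=0.1):
    # Map 0..360 degrees onto a carrier frequency inside the reserved band.
    freq = BAND_LOW + (bearing_deg % 360.0) / 360.0 * (BAND_HIGH - BAND_LOW)
    t = np.arange(int(RATE * duration)) / RATE
    return np.sin(2.0 * np.pi * freq * t)

def decode_bearing(signal):
    # Find the strongest frequency inside the reserved band and map it back.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / RATE)
    band = (freqs >= BAND_LOW) & (freqs <= BAND_HIGH)
    peak = freqs[band][np.argmax(spectrum[band])]
    return (peak - BAND_LOW) / (BAND_HIGH - BAND_LOW) * 360.0

# Round trip: encode a bearing of 90 degrees and read it back from the signal.
print(decode_bearing(encode_bearing(90.0)))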