I’m looking for an example or tip on how to get the sound that a listener is hearing at a given moment. For example, in a multiplayer game I can create a camera and grab its output through a camera buffer to simulate a player’s vision, but I still haven’t figured out how to do the same with audio.
I’ve read that I can use an AudioManager to get positional sound from a given position in the world (panda3d.org/manual/index.php/3D_Audio), but when the sound plays it goes directly to my speaker, while I’d like to get the exact sound (including volume attenuation due to distance from the source, etc.) reaching this AudioManager AND any other AudioManagers in my world, so I could treat them separately. In other words, every AudioManager would represent an ear in the world.
Does anyone here have an idea or example (including from multiplayer games) of how to get the particular sounds reaching the “ears” of a player in the scene?
Because the players actually won’t be humans, but robots… Sorry if I was not clear in my first post; I’ll explain better.
I’m creating a “game” in which AI researchers could simulate biological senses on a low budget.
For example, to simulate vision: every player (let’s call it a “creature”) has a root node called CreatureNP with two children, Eye1NP and Eye2NP. Each eye NodePath has a Camera node and an Eye node (a class I wrote that grabs the camera’s image each frame and turns it into a 2D matrix used to feed a neural network). Why do I need two eyes to test vision? Because features like depth perception require combining the input of both eyes.
The same applies to hearing: I need two ears to test spatial localization of sound-emitting objects. So I need two audio listeners, to get the sound spectrum reaching each ear (i.e., each audio listener) and create a distinct 2D matrix for each one based on these spectrums.
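To make the two-ear requirement concrete, here is a minimal sketch (plain Python, all names hypothetical, not Panda3D API) of the two cues a pair of listeners provides: the interaural time difference (ITD) from the extra path length to the far ear, and the interaural level difference (ILD) from simple inverse-distance attenuation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at ~20 degrees C

def ear_cues(source, left_ear, right_ear):
    """Return (itd_seconds, ild_ratio) for a point source.

    itd_seconds > 0 means the sound reaches the left ear first;
    ild_ratio is the left/right amplitude ratio under a simple
    inverse-distance (1/r) attenuation model.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    d_left = dist(source, left_ear)
    d_right = dist(source, right_ear)
    itd = (d_right - d_left) / SPEED_OF_SOUND
    ild = d_right / d_left  # the closer ear is the louder one
    return itd, ild

# A source 1 m to the left of a head whose ears are 0.2 m apart:
itd, ild = ear_cues(source=(-1.0, 0.0, 0.0),
                    left_ear=(-0.1, 0.0, 0.0),
                    right_ear=(0.1, 0.0, 0.0))
# itd > 0 (left ear hears it first), ild > 1 (left ear hears it louder)
```

A brain (or neural network) that receives both listeners’ signals can invert exactly these two differentials to estimate the source direction, which is why one listener per ear matters.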
I could simulate every creature on its own client to gain performance, but even then every client would need to handle two or more listeners for the same creature (i.e., player).
Edit:
I updated the title, changing “players” to “listeners” to avoid confusion: a single creature can have one or more audio listeners, i.e., its ears.
Ok, but how do I get the particular sound reaching an audio listener? The AudioManager and Audio3DManager classes don’t have methods to get the transformed sound (with doppler and other effects applied).
Ok, no problem. Your last code is now close to mine.
Let’s say we have a ball (B) with a sound attached to it (“Sine.wav”). In the same scene we have two audio listeners placed in different positions: the former (L1) far to the left of the ball and the latter (L2) far to the right of it.
Suppose we move the ball close to L1. In the real world, this listener would hear the ball louder than the other one, right? Ok… When I simulate this in Panda, my physical speaker emits the sound perceived by L1 but ignores the sound perceived by L2. I would like to get the rendered sound for every listener, even if that requires getting data directly from the sound card.
I don’t think you need 2 listeners per ‘agent’; you’ve got the right and left audio channels. For multiple listeners you would need multiple audio outputs. I’m not sure what kind of signal you’re after: do you need the actual sound, or just the volume/pitch shift/delay?
Either way it might be a good idea to write your own system using collision detection, or just the distance to the sound source for each ‘ear’ of each agent.
Hm… Why not? Spatial localization of sounds works mainly because we have two ears:
About more than one creature per simulation: no problem, I can simulate each creature on a separate client machine. It’s a good idea from a performance perspective.
I edited my last post with an image showing why I need two listeners per creature: the brain calculates the differential between the ears to locate the sound source.
Let me ask a question: even if the same mono sound reaches a listener, will the left and right channels have different spectrums depending on the listener’s position? If so, that solves my problem!
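If the stereo output does encode listener position, the underlying mechanism is just panning: a mono sample is split into left/right channels with gains that depend on the source’s azimuth relative to the listener. A tiny constant-power pan sketch (plain Python, names hypothetical, not how Panda3D necessarily does it internally):

```python
import math

def pan_mono_sample(sample, azimuth_deg):
    """Split a mono sample into (left, right) using a constant-power
    pan law; azimuth_deg = -90 is hard left, +90 is hard right."""
    # Map [-90, +90] degrees onto [0, pi/2] for the pan law.
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return sample * math.cos(theta), sample * math.sin(theta)

# A source dead ahead splits equally between the channels...
l, r = pan_mono_sample(1.0, 0.0)
# ...while a source off to the right favours the right channel.
l2, r2 = pan_mono_sample(1.0, 60.0)
```

So yes: even a mono source yields different left/right signals once positional panning is applied, and comparing the two channels gives a (coarse) direction cue.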
A strange idea occurred to me: modify the sound depending on where it is played. For example, encode location information into the spectrum, and have the bot read that frequency range to extract the data.