multiple audio3d listeners

Hi everyone,
I have 3d audio working nicely in my game. Now I would like to make a multiplayer option. Is it possible to simply add another listener to my audio3d manager, or do I need to create a new one for any additional listeners I want to have?

I can tell you that with FMOD, only one listener is supported: setting a listener on any one audio manager will change it for all of them.
It is not exposed in Panda, but the FMOD API itself does support multiple listeners. It only outputs mono sound in that mode, though.
OpenAL may be different, I am not familiar with the internals.

I am using OpenAL, I believe. Does anyone know if this is possible with OpenAL?

After 6 years, still nobody has an answer… :cry:

My related question is here:

I do not understand why you would need to add listeners when you could add sound sources for them instead. :question:

Do you want to handle server-side sound?

If you want to have a few listeners, it’s very simple.

from panda3d.core import CollisionTraverser, TextNode

from direct.showbase import Audio3DManager
from direct.gui.OnscreenText import OnscreenText 
from direct.gui.DirectGui import *

import direct.directbase.DirectStart

base.disableMouse()
base.cTrav = CollisionTraverser()

camera.setPos(0, -80, 0)

point_audio = loader.loadModel('Res/models/sound')
point_audio.reparentTo(render)

user1 = loader.loadModel('Res/models/user1')
user1.setPos(-20, 0, 0)
user1.reparentTo(render)

user2 = loader.loadModel('Res/models/user2')
user2.setPos(20, 0, 0)
user2.reparentTo(render)

user3 = loader.loadModel('Res/models/user3')
user3.setPos(0, 0, -20)
user3.reparentTo(render)

# Possible listener targets; the manager starts listening from user1.
listener = [user1, user2, user3]

audio3d = Audio3DManager.Audio3DManager(base.sfxManagerList[0], listener[0])

textObject = OnscreenText(text = "listener: 1", pos = (0, 0.6), scale = 0.1, align=TextNode.ACenter, mayChange=1)

def itemSel(arg):
    # Retarget the existing manager's listener instead of creating a
    # new manager (a second manager would only be a local variable
    # here, and its update task would fight the old one over the
    # single hardware listener).
    textObject.setText("listener: " + str(int(arg) + 1))
    audio3d.attachListener(listener[int(arg)])

# Load a looping sound; below it is attached to the point_audio model
# so the manager updates its 3D position every frame.
mySound = audio3d.loadSfx('Res/sound/hot.ogg')
mySound.setLoop(True)
mySound.play()

audio3d.attachSoundToObject(mySound, point_audio)
audio3d.setSoundVelocityAuto(mySound)

menu = DirectOptionMenu(text="options", scale = 0.1, items = ["0","1","2"], initialitem = 0, highlightColor = (0.65,0.65,0.65,1), command = itemSel)
menu.setPos(-0.6, 0, 0.6)

base.run()

Res.zip (954 KB)

In this case, you still have a single listener, not multiple listeners hearing at the same time. The difference is just that you can change which user will hear the sound…

You are confusing this with reality, so let me surprise you: if you run several audio players or programs at once, they all take turns using the device anyway.
Likewise, in the gaps between ticks you can switch which set of ears is being recorded.
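
Here is a minimal sketch of that idea, assuming the stock Audio3DManager and placeholder user nodes (sound loading omitted): a task hands the single hardware listener around with attachListener().

from direct.showbase import Audio3DManager
import direct.directbase.DirectStart

# Placeholder listener targets; substitute your own user models.
user1 = render.attachNewNode('user1')
user2 = render.attachNewNode('user2')
targets = [user1, user2]

audio3d = Audio3DManager.Audio3DManager(base.sfxManagerList[0], targets[0])

def switchEars(task):
    # Round-robin: every half second, hand the one hardware listener
    # to the next target, the way programs take turns on a CPU.
    audio3d.attachListener(targets[int(task.time * 2) % len(targets)])
    return task.cont

taskMgr.add(switchEars, 'switch-ears')
base.run()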

Maybe this is closer to the idea.

from direct.directbase.DirectStart import *
from panda3d.core import NodePath, Camera, VBase4
from direct.showbase import Audio3DManager

base.disableMouse()

base.win.setClearColor((1, 1, 1, 1))

render2 = NodePath('render2')

# Distance from the first microphone to the second
distY = 5000

# Each camera acts as a microphone

# Microphone 1
camera.setPos(0, -30, 0)

# Microphone 2
cam2 = render2.attachNewNode(Camera('cam2'))
cam2.setPos(0, distY-30, 0)

# Sound source 1
audio3d = Audio3DManager.Audio3DManager(base.sfxManagerList[0], camera)
mySound = audio3d.loadSfx('sound.ogg')
mySound.setLoop(True)
mySound.play()

# Sound source 2
audio3d1 = Audio3DManager.Audio3DManager(base.sfxManagerList[0], cam2)
mySound1 = audio3d1.loadSfx('sound1.ogg')
mySound1.setLoop(True)
mySound1.play()

# Model as a sound source 1
user1 = loader.loadModel('Res/models/user1')
user1.reparentTo(render)
audio3d.attachSoundToObject(mySound, user1)

# Model as a sound source 2
user2 = loader.loadModel('Res/models/user2')
user2.setPos(0, distY, 0)
user2.reparentTo(render2)
audio3d1.attachSoundToObject(mySound1, user2)

dr2 = base.win.makeDisplayRegion()
dr2.setClearColor(VBase4(0, 0, 0, 1))
dr2.setClearColorActive(True)
dr2.setClearDepthActive(True)
dr2.setCamera(cam2)

screen = base.cam.node().getDisplayRegion(0)
screen2 = cam2.node().getDisplayRegion(0)

screen.setDimensions(0, 1, 0, 1)
screen2.setDimensions(0.5, 1, 0.5, 1)
 
base.cam.node().getLens().setAspectRatio(float(screen.getPixelWidth()) / float(screen.getPixelHeight()))
cam2.node().getLens().setAspectRatio(float(screen2.getPixelWidth()) / float(screen2.getPixelHeight()))

def w():
    camera.setPos(0, camera.getY()+5, 0)
    
def s():
    camera.setPos(0, camera.getY()-5, 0)
    
def q():
    cam2.setPos(0, cam2.getY()+5, 0)
    
def a():
    cam2.setPos(0, cam2.getY()-5, 0)
    
base.accept("w", w)
base.accept("s", s)

base.accept("q", q)
base.accept("a", a)

base.run()

Split.zip (204 KB)

I found out that having multiple listeners in Panda3D is possible by bypassing the default 3D audio manager and creating your own manager using the FMOD library (through Python bindings), as suggested earlier. With it you can easily create a Sound object and one or more Listener objects, pass in their respective positions and velocities, and get 3D sound reaching all listeners. However, according to the FMOD documentation (https://www.fmod.org/docs/content/generated/overview/3dsound.html), effects like doppler are disabled in this mode to avoid confusion, which makes multiple listeners less attractive (at least to me).
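
For example, a minimal two-listener sketch with pyfmodex. One caveat: I am assuming here that the binding exposes FMOD's set3DNumListeners call as a num_3d_listeners property; check your pyfmodex version, the name may differ.

import time
import pyfmodex
from pyfmodex.constants import FMOD_SOFTWARE, FMOD_LOOP_NORMAL, FMOD_3D

fmod = pyfmodex.System()
fmod.init()
## Assumed property name wrapping FMOD's set3DNumListeners.
fmod.num_3d_listeners = 2

sound = fmod.create_sound("sine.wav",
                          mode=FMOD_LOOP_NORMAL | FMOD_3D | FMOD_SOFTWARE)
channel = sound.play()
channel.min_distance = 50
channel.max_distance = 10000

## Two ears in two places; FMOD attenuates the sound against the
## nearest listener (panning and doppler are off in this mode).
fmod.listener(id=0).position = (0, 0, 0)
fmod.listener(id=1).position = (0, 100, 0)

channel.position = [0, 50, 0]  ## Halfway between the two listeners
fmod.update()
time.sleep(5)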

I uploaded an example of how to use FMOD in Python here:

Or, if you don’t care about spectrums, this is simpler:

import time
import pyfmodex
from pyfmodex.constants import FMOD_SOFTWARE, FMOD_LOOP_NORMAL, FMOD_3D

if __name__ == '__main__':

    def change_listener(listener):
        current_listener.position = listener
        fmod.update()

    ## FMOD initialization
    fmod = pyfmodex.System()
    fmod.init()

    ## Load the sound
    sound1 = fmod.create_sound("sine.wav",
                               mode=FMOD_LOOP_NORMAL | FMOD_3D | FMOD_SOFTWARE)

    ## Play the sound
    channel = sound1.play()
    channel.volume = 0.7
    channel.min_distance = 50
    channel.max_distance = 10000  ## Need this for sound fall off

    ## Create listeners positions
    listener1 = (0, 0, 0)
    listener2 = (0, 10, 0)

    ## Create a FMOD listener in the center of the scene
    current_listener = fmod.listener(id=0)
    change_listener(listener1)

    ## Walk the sound around your head
    min_x = -30
    max_x = 30
    sound_pos = (max_x, 3, 0)
    x = min_x
    inc = 1
    while True:
        if x == min_x:
            inc = 1
        elif x == max_x:
            inc = -1
        x += inc
        channel.position = [x, sound_pos[1], sound_pos[2]]
        fmod.update()
        print("Playing at %r" % str(channel.position))
        time.sleep(0.1)

Put on your headphones, run the code above, and enjoy… :wink:

PS:

In this case, you still have a single listener, not multiple listeners hearing at the same time. The difference is just that you can change which user will hear the sound…

Yes, I created the function change_listener to fill this role.

You’ve done a lot of research; this is very useful work.

However, while my example works, the problem is that the sound is output directly to the audio device, and that causes distortion. To avoid it, each source needs to be sent to a mixer first, and only then to the audio output.

But by default Panda offers no such possibility. Is it possible with your method?

Yes, it is possible. FMOD allows you to set the output of the sound: this could be a specific mixer or even a .wav file. If you don’t want to write the output to a .wav file, you can implement a DSP (a kind of callback class), pass it to FMOD, and then receive the output buffer directly and handle it yourself. Look at this: stackoverflow.com/questions/697 … t-to-files
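
For the .wav route, the trick is FMOD's WAVWRITER output mode, which must be selected before init(). Here is a rough sketch with pyfmodex, assuming your version exposes the output property and the FMOD_OUTPUTTYPE_WAVWRITER constant (both names may vary between binding versions):

import time
import pyfmodex
from pyfmodex.constants import (FMOD_SOFTWARE, FMOD_LOOP_NORMAL, FMOD_3D,
                                FMOD_OUTPUTTYPE_WAVWRITER)

fmod = pyfmodex.System()
## Route the mix to a file instead of the sound card. Must be set
## before init(); FMOD uses a default output file name unless one is
## passed through init()'s extra-driver-data (binding-dependent).
fmod.output = FMOD_OUTPUTTYPE_WAVWRITER
fmod.init()

sound = fmod.create_sound("sine.wav",
                          mode=FMOD_LOOP_NORMAL | FMOD_3D | FMOD_SOFTWARE)
channel = sound.play()

## Render a few seconds of audio into the file, then release the
## system so FMOD finalizes the .wav.
for _ in range(50):
    fmod.update()
    time.sleep(0.1)
fmod.release()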

Here is a C++ example of how to create the DSP: github.com/kengonakajima/moyai/ … m/main.cpp . You will have to work out how to adapt the code to pyfmodex, though.

There is also a twin method of getSpectrum(): it is called getWaveData(). I never tested it, but it seems to contain the sound buffer with the effects applied.