A buffer is basically any image that is being rendered, loaded, or cached. Some buffers do get rendered directly to the screen, but that's not the general case. The window buffer in Panda is by default filled with the image rendered from the primary camera. Depending on how you set it up, a buffer may also contain other 'bitplanes': typically invisible 'images' of per-pixel metadata such as depth, masking properties, etc.
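As a rough sketch of what that looks like in Panda3D (assuming you're running under ShowBase so `base` exists; names like "aux-buffer" are just placeholders):

```python
from panda3d.core import Texture, GraphicsOutput

# An offscreen buffer whose color plane fills a texture.
buf = base.win.makeTextureBuffer("aux-buffer", 512, 512)
color_tex = buf.getTexture()

# Attach a second bitplane: an invisible 'image' of per-pixel depth.
depth_tex = Texture()
buf.addRenderTexture(depth_tex, GraphicsOutput.RTMBindOrCopy,
                     GraphicsOutput.RTPDepth)

# A camera feeding the buffer, just like the primary camera feeds the window.
buf_cam = base.makeCamera(buf)
```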
Any time you refill a buffer by doing math on a particular source (e.g. a camera's view, or a blending/postprocessing operation), that's a render pass. You may be able to set up your render pipeline so a buffer is processed back into itself to save on buffer allocation, but that won't cut down on render passes so long as the processing still needs to be done.
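One concrete way to see an extra pass like that in Panda3D is FilterManager, which redirects the main camera into a texture and then runs a shader over it on a fullscreen quad. A minimal sketch (the shader filename is just a placeholder):

```python
from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)
scene_tex = Texture()
# The main camera now renders into scene_tex; the quad we get back is what
# actually reaches the window. That's one additional render pass over the texture.
quad = manager.renderSceneInto(colortex=scene_tex)
quad.setShader(Shader.load("my_filter.sha"))  # placeholder shader file
quad.setShaderInput("tex", scene_tex)
```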
You can probably change a buffer's source on the fly by associating it with a different camera, loading a different file into it, etc., but unless you're on an old graphics card short on texture memory, that may not gain you much over setting up multiple cameras with independent buffers and just disabling the cameras you aren't using.
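The multiple-cameras approach is pretty cheap to set up; a sketch of the idea (again assuming ShowBase):

```python
# Two cameras, each with its own render-to-texture buffer. Deactivate
# whichever one you don't currently need so it stops costing a render pass.
buf_a = base.win.makeTextureBuffer("buffer-a", 256, 256)
buf_b = base.win.makeTextureBuffer("buffer-b", 256, 256)
cam_a = base.makeCamera(buf_a)
cam_b = base.makeCamera(buf_b)

cam_b.node().setActive(False)   # buffer B simply stops updating

def use_camera_b():
    # Swap which view is being rendered.
    cam_a.node().setActive(False)
    cam_b.node().setActive(True)
```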
For your situation, you could in theory set up one buffer and switch it between 2 different cameras, but I don't think you could associate it with 2 cameras simultaneously and get 2 independent images out of it, barring some ugly manual futzing with target texture coordinates; at that point you'd have an easier time with 2 independent buffers anyway. And if you're going to juggle one buffer between two cameras, why not make one camera and just switch its position as needed? Likewise, you could keep one camera and switch which buffer it renders to on the fly. That might be useful if, say, you had 2 objects, one textured with renderTextureA and one with renderTextureB, and only one or the other needed to update at any given time, but that's the sort of trick I'd only expect to come up as an elegant solution to a specific problem, not as a general-case usage.
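The "one camera, just move it" variant is about as simple as it sounds; a sketch with made-up viewpoint coordinates:

```python
# One buffer, one camera; just reposition the camera between the two
# viewpoints instead of juggling buffers or camera associations.
buf = base.win.makeTextureBuffer("shared-buffer", 256, 256)
cam = base.makeCamera(buf)

view_a = (0, -20, 5)    # placeholder positions
view_b = (30, -20, 5)

def look_from_a():
    cam.setPos(*view_a)
    cam.lookAt(0, 0, 0)

def look_from_b():
    cam.setPos(*view_b)
    cam.lookAt(0, 0, 0)
```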
I recently toyed with an application that had 5 cameras rendering to 5 texture buffers, with those 5 buffers passed as textures into a shader which composited them in a particular way to fill a UV space on a card mapped to the final screen output. Other applications use multiple buffers to get different types of specialty data, e.g. rendering a scene from the perspective of a light source, tagging the textures/UVs accessed in the process, and using that data to paint a light-map/shadow-map onto the scene as rendered by a second camera. You can probably find other theory/examples on The Internet.
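The multi-buffer compositing setup looked roughly like this, stripped down (the shader file and input names are placeholders, not the actual application's):

```python
from panda3d.core import CardMaker, Shader

# Five offscreen buffers, each fed by its own camera.
bufs = [base.win.makeTextureBuffer("layer-%d" % i, 512, 512) for i in range(5)]
cams = [base.makeCamera(b) for b in bufs]

# A fullscreen card whose shader composites the five textures into the
# final image.
cm = CardMaker("composite-card")
cm.setFrameFullscreenQuad()
card = base.render2d.attachNewNode(cm.generate())
card.setShader(Shader.load("composite.sha"))    # placeholder shader
for i, b in enumerate(bufs):
    card.setShaderInput("layer%d" % i, b.getTexture())
```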