Python PandaNode.decodeFromBamStream hangs script

I am using the SDK development build Panda3D-SDK-1.10.0pre-71e18eb (32-bit Python 2.7 build) on my Windows 7 64-bit machine. I use the 32-bit version since I am aiming to write an application that works in both 32-bit and 64-bit environments from identical source.

I am trying to receive a bam stream and decode it into a node so that I can visualize it in the viewer, as shown below.

I understand what the assertion is literally saying, but I’m new to Panda and not sure exactly what it means with respect to my use of it. Does anyone know what is happening, or have any suggestions? Thanks, and much appreciated in advance!

node = PandaNode.decodeFromBamStream(inputStream)
if node is None:
    # do error things
else:
    # do things I want

Sometimes this method spams assertion failures from putil\datagramBuffer.cxx to the console, with the error messages below. Using print statements, I’ve narrowed it down to the decodeFromBamStream call as what eventually triggers the assertion failures. I also verified that inputStream is not an empty string, is not None, and actually contains data. Wrapping the call in a try/except block does not prevent the assertions from hanging the script either.

Assertion failed : _read_offset + num_bytes <= _data.size() at line 130 of c:\buildslave\sdk-windows-i386\build\panda\src\putil\datagramBuffer.cxx

If it spams long enough, it eventually mixes in this one as well.

Assertion failed: (uint64_t)num_bytes == num_bytes_64 at line 126 of c:\buildslave\sdk-windows-i386\build\panda\src\putil\datagramBuffer.cxx

It means that the stream is corrupted. Specifically, there seems to be a datagram whose header size field claims more data than is actually available in the stream.

Are you seeing this with a .bam file produced by the Panda tools, or with a custom-constructed .bam stream? What is the origin of the .bam stream? There might be a better way to do what you are trying to do.

There are two parts to my application. A server running C++ code generates the bam stream, and the script in question receives that stream over TCP/IP. Both use the same version of Panda3D. To reiterate, this pipeline does work: at times I can see the received data in the script’s viewer window without any problem. The hang happens only on some occasions (forcing me to restart the script), and I am trying to pinpoint what triggers it.

The bam stream is created with Panda’s own tools: a NodePath is encoded into an std::ostringstream, which is then broken up into packets and sent to the Python script.

bool writeBamStreamFromNodePath(NodePath &objNodePath) {
    std::ostringstream bamData;
    // ... objNodePath.write_bam_stream(bamData), then bamData.str() is packetized ...
}

The receiving script reconstructs the stream from each received packet and saves it out for decoding when it sees the last packet. There is some logic to ensure that it appends the correct packets to reconstruct the stream. Admittedly, since it is over TCP/IP, the script does not check the order of the packets, nor does it currently confirm that it starts with the first packet. I will try to add some checks to ensure it receives packets in the correct order with no duplicates.
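One way to implement such checks, as a minimal stdlib-only sketch (the sequence-number scheme here is hypothetical, not part of the actual protocol): each packet carries a sequence number, and any gap or duplicate poisons the whole stream so it can be discarded instead of being handed to the decoder.

```python
class StreamReassembler:
    """Rebuild a byte stream from sequence-numbered packets.

    Packets must arrive in order starting at sequence 0; any gap
    or duplicate marks the whole stream invalid so it can be
    discarded rather than passed to the decoder.
    """

    def __init__(self):
        self.next_seq = 0
        self.chunks = []
        self.valid = True

    def add_packet(self, seq, payload):
        if seq != self.next_seq:
            self.valid = False  # out of order, duplicated, or skipped
        else:
            self.chunks.append(payload)
            self.next_seq += 1

    def finish(self):
        # Returns the reassembled stream, or None if it was poisoned.
        return b"".join(self.chunks) if self.valid else None
```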

If the bam stream passed into decodeFromBamStream() describes an invalid node (but is not corrupted in the way we are describing, with too little data), it should return None to Python, correct? Would having too much data in the input bam stream also trigger assertions, or would it just be treated as invalid?

After adding those checks, it does seem that the packets are not arriving in the proper order, even though the server does not report any send errors. I thought ordering was guaranteed by TCP/IP. For some reason it is skipping packets, which seems to lead to having less data than expected.
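For what it’s worth, TCP does guarantee in-order delivery of bytes on a single connection, but it does not preserve the boundaries of individual send() calls, so a receiver that treats each recv() as exactly one packet can appear to "skip" or merge data. A minimal sketch of explicit message framing on the receiving side (the 4-byte big-endian length header is a hypothetical protocol choice for illustration, not the thread's actual one):

```python
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket, raising on EOF.

    recv() may return fewer bytes than requested, so we loop until
    the full n bytes have arrived.
    """
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock):
    """Receive one length-prefixed message (4-byte big-endian header)."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

With framing like this, message boundaries survive however TCP happens to segment the byte stream in transit.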

The script now discards “reconstructed” streams that were not built from the correct order of packets, so they never get decoded. I cannot seem to recreate the failure the way I saw it before (at least for now).

I am still curious about those two questions I posed in the previous reply, though. 🙂

The .bam stream is divided up into datagrams, each preceded by a 32-bit integer indicating its size. It looks like the error occurs when the end of the buffer is reached before a complete datagram can be read.
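Given that framing, a suspect buffer can be checked offline before handing it to decodeFromBamStream. A small pure-Python sketch, under the assumptions that the length prefix is a little-endian 32-bit integer, the stream starts with the 6-byte "pbj\0\n\r" magic, and 0xFFFFFFFF escapes to a 64-bit length (which is what the second assertion hints at); verify these against your Panda3D version before relying on it:

```python
import struct

def check_bam_framing(data):
    """Walk the datagram framing of a .bam byte string.

    Returns (True, datagram_count) if every length-prefixed datagram
    fits within the buffer, or (False, bad_offset) at the first header
    that claims more data than remains.
    """
    offset = 6  # skip the assumed 6-byte "pbj\0\n\r" magic at the front
    count = 0
    while offset < len(data):
        if offset + 4 > len(data):
            return (False, offset)          # truncated length prefix
        (num_bytes,) = struct.unpack_from("<I", data, offset)
        offset += 4
        if num_bytes == 0xFFFFFFFF:         # escape: 64-bit length follows
            if offset + 8 > len(data):
                return (False, offset)
            (num_bytes,) = struct.unpack_from("<Q", data, offset)
            offset += 8
        if offset + num_bytes > len(data):  # the condition the assertion checks
            return (False, offset)
        offset += num_bytes
        count += 1
    return (True, count)
```

A buffer that fails this walk is exactly the kind of input that trips the `_read_offset + num_bytes <= _data.size()` assertion.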

I’ve done .bam streaming over the network before with success, but I used Panda3D’s own networking system. The low-level BamReader actually accepts any DatagramGenerator as its datagram source; the one designed for streaming is DatagramGeneratorNet, whereas decode_from_bam_stream uses DatagramBuffer (or DatagramInputFile when reading from a file).

This is what decode_from_bam_stream actually does, more or less:

DatagramBuffer buffer(std::move(data));

// Read the BAM magic header, arguably unimportant for streaming
std::string head;
if (!buffer.read_header(head, 6)) {
  return nullptr;
}

BamReader reader(&buffer);

// Read the .bam header
if (!reader.init()) {
  return nullptr;
}

// Read the top-level PandaNode object
PT(PandaNode) node = DCAST(PandaNode, reader.read_object());

// Resolve remaining object references
if (!reader.resolve()) {
  return nullptr;
}

Here is some example code demonstrating how to do this in Python, using a DatagramGeneratorNet as the datagram source:
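A rough sketch of that approach (untested; the address, port, and timeout are placeholders, and it assumes a server pushing the NodePath’s datagrams over a TCP connection via Panda’s net layer):

```python
from panda3d.core import (BamReader, DatagramGeneratorNet,
                          QueuedConnectionManager)

manager = QueuedConnectionManager()
# Placeholder address/port/timeout; adjust for your server.
conn = manager.openTCPClientConnection("127.0.0.1", 4400, 3000)
if conn:
    # One reader thread so datagrams are collected as they arrive.
    source = DatagramGeneratorNet(manager, 1)
    source.addConnection(conn)

    # BamReader pulls datagrams straight from the network source,
    # so there is no intermediate buffer to corrupt.
    reader = BamReader(source)
    if reader.init():                # reads the .bam header
        node = reader.readObject()   # the top-level PandaNode
        reader.resolve()             # resolve remaining references
```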

If you do not want to use Panda’s networking system, then you will need to ensure that the datagrams in the buffer are complete, since DatagramBuffer is not designed to stall and wait for more data when something is missing. If you were to need that feature, though, we could consider adding it.

Thanks for the info. With more testing, I found that the server was the real culprit here. I plan to fix the behavior there, as well as keep my packet-order verification on the client side.

Thanks for the fast response and help, rdb!