FRgb is one of the many texture formats supported by Panda. A format just defines how the bytes are arranged within the texture buffer. The F is a prefix that stands for “format”, to differentiate these symbols from those for, say, wrap mode or filter type; Panda uses a similar convention for all of its enumerated types. The full set of available formats is listed in the source code in texture.h:
enum Format {
  F_depth_stencil = 1,
  F_color_index,
  F_red,
  F_green,
  F_blue,
  F_alpha,
  F_rgb,     // any suitable RGB mode, whatever the hardware prefers

  // The following request a particular number of bits for the GSG's
  // internal_format (as stored in the framebuffer), but this request
  // is not related to the pixel storage within the Texture object
  // itself, which is always get_num_components() *
  // get_component_width().
  F_rgb5,    // 5 bits per R,G,B channel
  F_rgb8,    // 8 bits per R,G,B channel
  F_rgb12,   // 12 bits per R,G,B channel
  F_rgb332,  // 3 bits per R & G, 2 bits for B

  F_rgba,    // any suitable RGBA mode, whatever the hardware prefers

  // Again, the following bitdepth requests are only for the GSG;
  // within the Texture object itself, these are all equivalent.
  F_rgbm,    // as above, but only requires 1 bit for alpha (i.e. mask)
  F_rgba4,   // 4 bits per R,G,B,A channel
  F_rgba5,   // 5 bits per R,G,B channel, 1 bit alpha
  F_rgba8,   // 8 bits per R,G,B,A channel
  F_rgba12,  // 12 bits per R,G,B,A channel

  F_luminance,
  F_luminance_alpha,     // 8 bits luminance, 8 bits alpha
  F_luminance_alphamask, // 8 bits luminance, only needs 1 bit of alpha

  F_rgba16,  // 16 bits per R,G,B,A channel
  F_rgba32,  // 32 bits per R,G,B,A channel

  F_depth_component,
  F_depth_component16,
  F_depth_component24,
  F_depth_component32,
};
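To illustrate that last point about pixel storage: the Texture object's own RAM usage depends only on the component count and component width, not on the internal_format hint passed to the GSG. A quick sketch (this is plain Python arithmetic, not a Panda API):

```python
# Illustrative sketch (not Panda3D code): the Texture object's pixel
# storage is always num_components * component_width bytes per texel,
# regardless of whether you requested F_rgb5, F_rgb8, or F_rgb12.

def texture_ram_size(width, height, num_components, component_width):
    """Bytes used by the Texture object's pixel storage."""
    return width * height * num_components * component_width

# A 256x256 three-component texture with 1-byte components:
print(texture_ram_size(256, 256, 3, 1))  # 196608
```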
Unfortunately, format FRgb (or F_rgb as it appears in C++ code) implicitly means that there are only three components, not four, so if you set a four-channel image to FRgb you will indeed get weird results.
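You can see why the results are weird with a hypothetical two-pixel buffer: if four-channel data is walked with a three-byte stride, every pixel after the first is shifted and alpha bytes leak into the color channels.

```python
# Hypothetical illustration of the component-count mismatch: two RGBA
# pixels' worth of bytes, read back with an RGB (3-byte) stride.

rgba_bytes = bytes([255, 0, 0, 128,   # pixel 0: red, half alpha
                    0, 255, 0, 128])  # pixel 1: green, half alpha

# Reading the same buffer three bytes at a time misaligns everything
# after the first pixel:
as_rgb = [tuple(rgba_bytes[i:i + 3]) for i in range(0, 6, 3)]
print(as_rgb)  # [(255, 0, 0), (128, 0, 255)]
```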
If you have a texture that contains an alpha channel with bad data (e.g. all black), and you don’t want this bad data to influence the rendering, then you have exactly two choices: (a) correct or remove the bad data, or (b) tell Panda to ignore it.
For (a), you can either correct the problem before it gets into Panda, which you say you can’t do; or you can extract the texture image into an in-memory PNMImage, remove the alpha channel, and then load it back. This operation is too slow to do every frame, but you can do it every once in a while without too much trouble.
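That round trip might look something like this (a hedged sketch, assuming you already have a loaded Texture object; Texture.store(), PNMImage.removeAlpha(), and Texture.load() are the relevant calls):

```python
# Sketch of option (a): strip the bad alpha channel by round-tripping
# the texture through an in-memory PNMImage.  Too slow for every frame,
# but fine as an occasional fixup.

def strip_alpha(tex):
    # Deferred import so this sketch can be read without Panda installed.
    from panda3d.core import PNMImage
    img = PNMImage()
    tex.store(img)     # copy the texture's pixels into the PNMImage
    img.removeAlpha()  # drop the (bad) alpha channel entirely
    tex.load(img)      # reload the now three-channel image into the texture
```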
For (b), you have lots of options, but all of them involve disabling conventional transparency. If other parts of the same texture have good data in the alpha channel, then you will have to separate the parts of your model that reference the good data from the parts of your model that reference the bad data, and enable transparency only on the parts of the model that reference the good data.
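Once the model is separated that way, option (b) comes down to setting the transparency attribute per part. A sketch, where the node names ("good_parts", "bad_parts") are assumptions about how you've split the model:

```python
# Sketch of option (b): enable transparency only on the geometry that
# references good alpha data, and force it off elsewhere.

def split_transparency(model):
    # Deferred import so this sketch can be read without Panda installed.
    from panda3d.core import TransparencyAttrib
    good = model.find("**/good_parts")   # hypothetical node names
    bad = model.find("**/bad_parts")
    good.setTransparency(TransparencyAttrib.MAlpha)   # honor good alpha
    bad.setTransparency(TransparencyAttrib.MNone, 1)  # ignore bad alpha
```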
David