Issues with the built-in shadows

Just to clarify: it's not possible to render vertices with two matrices at the same time. If you do what rdb suggested (setInstanceCount), your geometry effectively gets duplicated on the GPU, and you render one set of the vertices with your first matrix and the second set with the second matrix.
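To illustrate what instancing actually does here, a minimal GLSL sketch (the uniform name and matrix count are assumptions, not your actual shader): each instance is a full copy of the geometry, and gl_InstanceID selects which matrix that copy is transformed with — every vertex still emits exactly one output position per instance.

```glsl
#version 330

// Hypothetical uniform: one matrix per instance (e.g. two shadow views).
uniform mat4 shadowMatrices[2];

in vec4 p3d_Vertex;  // Panda3D's vertex position attribute

void main() {
    // With setInstanceCount(2), this shader runs once per vertex per
    // instance; gl_InstanceID picks the matrix for the current copy.
    gl_Position = shadowMatrices[gl_InstanceID] * p3d_Vertex;
}
```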

This is due to how the rasterizer and the general fragment pipeline work: the vertex stage must output exactly one position per vertex, which is then interpolated and used to rasterize your geometry. The rasterizer cannot rasterize geometry with two or more different matrices in a single pass, which is why your approach of writing two depths would not work at all.

You could use layered rendering / viewport arrays; however, that will be slow, since you need a geometry shader (except on AMD cards, which allow writing to gl_Layer in the vertex shader).
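For completeness, a layered-rendering sketch (uniform names assumed, two layers as an example): the geometry shader re-emits every triangle once per layer and routes it with gl_Layer, and that per-primitive duplication is exactly the overhead that makes this path slow.

```glsl
#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

// Hypothetical uniform: one shadow matrix per layer of the layered target.
uniform mat4 shadowMatrices[2];

void main() {
    // Emit each incoming triangle once per layer; the geometry-shader
    // amplification here is the main performance cost.
    for (int layer = 0; layer < 2; ++layer) {
        gl_Layer = layer;
        for (int i = 0; i < 3; ++i) {
            gl_Position = shadowMatrices[layer] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```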

As a recommendation, which might change with newer hardware and architectures:

First of all, I really do recommend using the standard depth texture generation supported by the FFP (fixed-function pipeline). It will most likely be the fastest option you can get. Writing depth to a color channel, with a blending attribute for example, will be much slower (and consume more memory).
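To make the shader side of that concrete (a minimal sketch): a depth-only pass needs no fragment outputs at all, because the rasterizer writes gl_FragCoord.z to the bound depth attachment by itself — which is why it beats packing depth into a color channel.

```glsl
#version 330

// Depth-only fragment shader: no color outputs declared. The depth value
// is written automatically by the fixed-function depth test.
void main() {
    // Intentionally empty; since we never touch gl_FragDepth,
    // early-z optimizations can also stay enabled.
}
```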

If you need to generate multiple shadow maps, I would render them to the same FBO but using different display regions (and different cameras, of course). This way you can benefit from culling, and you only have to bind one texture in your shaders. You can also render to multiple FBOs; even that will very likely be faster than layered rendering (again, I'm only speaking from my own experience here; different setups and architectures might perform totally differently).