What's the point of using floats in Python?

I just discovered that Panda3D uses 32-bit floats for most of its computations. Why is that? Panda has perfectly good 64-bit Vec3D classes, but they don't fit in anywhere? I understand some graphics cards can't handle more than 32 bits, but the software rotations/translations can! This is why I am getting rounding errors and other wobbling crap now.

Several reasons.

(1) Memory size. Vertex tables can consume a lot of memory. Say you have a scene of 10,000,000 vertices total, counting all of your environments and avatars (not unreasonable for modern scenes), each of which is format v3n3t2. That’s 8 floating-point numbers per vertex, or 320MB total with 32-bit floats. Good luck fitting all that on a 256MB card, and you haven’t even started on the textures. Make it 64-bit floats and suddenly you have to deal with 640MB.
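For the curious, the arithmetic is easy to check (the vertex count and the v3n3t2 format are just the example numbers above):

```python
# v3n3t2 = 3 position + 3 normal + 2 texcoord floats per vertex
floats_per_vertex = 3 + 3 + 2
vertices = 10_000_000

mb_32 = vertices * floats_per_vertex * 4 / 1_000_000  # float32 = 4 bytes
mb_64 = vertices * floats_per_vertex * 8 / 1_000_000  # float64 = 8 bytes
print(mb_32, mb_64)  # → 320.0 640.0
```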

(2) Bandwidth. Uploading all these vertex tables to the graphics card takes time. If you have a lot of animated characters, you have to upload a lot of vertices. Cutting the float size in half makes it upload twice as fast.

(3) Even though CPU’s can sling around 64-bit floats easily when performing transforms on the CPU, modern graphics cards prefer to perform the transforms on the card. It’s generally faster that way. But if you do it on the card, you’re limited to 32-bit floats.

(4) We could store the matrices on the nodes in 64-bit precision, and do matrix math in 64-bit. But then we’d have to downconvert when sending the matrices to the card. That might not be altogether bad. But still, the matrix will end up downconverted in the end anyway, so the extra precision doesn’t buy you all that much.

(5) Using 32-bit floats generally isn’t so bad, as long as you’re aware of their limitations and don’t overuse functions like wrtReparentTo() and nodePath.setPos(other, x, y, z). It’s kind of like working with JPEG images. As long as you don’t repeatedly load and re-save the same JPEG image, you won’t introduce too many errors. Similarly, as long as you don’t repeatedly decompose and recompose the same 32-bit matrix, you shouldn’t see too many roundoff issues.
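To make the JPEG analogy concrete, here is a small standalone sketch (plain Python; the struct round-trip emulates float32 rounding, and the repeated rotation stands in for repeatedly decomposing and recomposing a matrix):

```python
import math
import struct

def f32(x):
    # round a 64-bit Python float to the nearest 32-bit float and back
    return struct.unpack('f', struct.pack('f', x))[0]

# Rotate the point (1, 0) by one degree, 360 times, rounding every
# intermediate result to 32-bit precision -- the numerical equivalent
# of re-saving the same JPEG over and over.
theta = math.radians(1.0)
c, s = math.cos(theta), math.sin(theta)
x32, y32 = 1.0, 0.0
x64, y64 = 1.0, 0.0
for _ in range(360):
    x32, y32 = f32(c * x32 - s * y32), f32(s * x32 + c * y32)
    x64, y64 = c * x64 - s * y64, s * x64 + c * y64

# After a full revolution both should be back at (1, 0).  Each 32-bit
# rounding injects error at the ~1e-7 level, versus ~1e-16 for 64-bit,
# and those errors accumulate across the 360 steps.
err32 = math.hypot(x32 - 1.0, y32)
err64 = math.hypot(x64 - 1.0, y64)
print(err32, err64)
```

A single pass is harmless; it's the accumulation over many recompositions that shows up as wobble.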

What sort of problems are you seeing exactly?


I am working on planet rendering, so getting very close and very far is important.

I have solved the problem by using 64-bit math and then only setting Panda3D's float32 positions after the calculations are done in 64-bit land.

I am mainly concerned about the software matrix computations being in 32 bit. I keep running into 32-bit round-off problems which I know could be solved easily without much extra slowdown.

I perfectly understand why vertex data has to be 32 bit. But for positions of nodes, scales, and rotations I would want 64 bit.

Tell me more about the specific problems you are running into.


When I am at the surface of a planet I have to move in the speed range of 1e-6 units per frame.


I get around it by basically getting the camera's forward vector, adding it to a 64-bit position vector, and then setting the camera position based on that vector:

        # camera's forward direction in world space
        forward = base.camera.getQuat(render).getForward()
        # accumulate the position in 64-bit precision
        forward = Vec3D(forward[0], forward[1], forward[2])
        self.camPos += forward * speed
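Spelling the workaround out as a self-contained sketch (plain Python here; the struct round-trip stands in for the float32 truncation that happens when the value reaches the engine, and the Mover class and its names are made up for illustration, not Panda3D API). Near x = 1000 the float32 spacing is about 6.1e-5, so naively accumulating 1e-6 steps in float32 never moves at all, while the 64-bit accumulator works:

```python
import struct

def to_f32(x):
    # emulate the precision loss of storing a value as float32
    return struct.unpack('f', struct.pack('f', x))[0]

class Mover:
    """Authoritative position lives in 64-bit; float32 is output-only."""

    def __init__(self, pos):
        self.pos = list(pos)  # 64-bit Python floats

    def step(self, direction, speed):
        # all accumulation happens in 64-bit...
        self.pos = [p + d * speed for p, d in zip(self.pos, direction)]
        # ...and only the result is downconverted each frame, which is
        # where nodePath.setPos(...) would be called in Panda3D
        return tuple(to_f32(p) for p in self.pos)

m = Mover((1000.0, 0.0, 0.0))
for _ in range(1000):
    m.step((1.0, 0.0, 0.0), 1e-6)
print(m.pos[0])  # 1000 steps of 1e-6 actually moved us ~0.001 units
```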

The other problem I had is with the galaxy. I had an entire galaxy with stars and planets. Well, you could not fly around very well. If you flew too far, everything begins to wobble, and if you got close enough to a star to see the planets (speed slows down in this case), you got wobbles even on planets pretty close to the galaxy center. For that I started using Python long ints and casting them to Vec3s, but I think 64-bit float Vec3Ds would have worked if I had known about them.

treeform, I don’t think any current general purpose game engine will do what you want. It sounds like you want to represent an entire universe in absolute coordinates, from human scale up to galactic scale, and you’re pushing the bounds of numerical precision.

A slightly less clean way would be to use several scaling factors in the game, chosen based on the situation:

on a planet surface: 1.0 = 1 meter
in a planetary solar system: 1.0 = 1 km
in the galaxy: 1.0 = 1000 km

and so on. With a system like this the units would be a better match for the speed of movement anyway, since I doubt you want to move at light speed across a planet surface, or at 30 mph between star systems.
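A sketch of how that could look (the regime names and scale values are just the illustrative ones above): pick the scale from the active environment and convert positions to scene units on the way in.

```python
# meters per scene unit for each regime (values from the list above)
SCALES = {
    "surface": 1.0,          # 1.0 unit = 1 meter
    "system":  1_000.0,      # 1.0 unit = 1 km
    "galaxy":  1_000_000.0,  # 1.0 unit = 1000 km
}

def to_scene_units(pos_meters, regime):
    """Convert an absolute position in meters into a regime's units."""
    scale = SCALES[regime]
    return tuple(c / scale for c in pos_meters)

# the same point, 1500 km out, expressed in whichever regime is active:
p = (1_500_000.0, 0.0, 0.0)
print(to_scene_units(p, "system"))  # → (1500.0, 0.0, 0.0)
print(to_scene_units(p, "galaxy"))  # → (1.5, 0.0, 0.0)
```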

I’ve dealt with a somewhat similar problem in a game I’m finishing, where the fixed-point arithmetic native to the system processor simply couldn’t cope with very large environments, so I had to change my scaling factor. However, if you scale your coordinate system to match the limits of the CURRENT environment, you should be okay.

Arkaein, I used 2 scaling factors for the galaxy (galaxy, system), but still, as soon as you get 1000 units in any direction the round-off crap starts happening. It just bothers me that a 64-bit number would do, but we’re still stuck at 32.

If you may hear my humble opinion… separate your stuff.
Create 3 separate scenes: one with your galaxy, one with your solar system and one with your planet.
Just render them on top of each other. Usually the objects are far apart from each other, so there won’t be any sorting issues between the rendered results.
Switching objects between the scenes would work like the fade-LOD node, or maybe just switching them on and off between scenes.

Advantage: each scene can have its own scale factor, and in each scene you can keep the camera near the center, so fewer precision issues.
To actually make it work might require some tweaking, but it should work in theory, and it should allow distance-wise unlimited-sized worlds/space if you keep your coordinate system in a way that makes it possible to set all stuff relative to each other.

So… 3 scenes, 3 cameras, 3 scales, 3 render results on top of each other… should be fine ^^
Not that easy, but definitely better than putting everything into absolute space.

just my 2 cents…
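The numerical heart of the multi-scene trick is that each layer stores its contents camera-relative and in its own scale, so the coordinates that actually reach the card stay small. A minimal sketch in plain Python (the function name and unit choices are made up for illustration, not Panda3D API):

```python
def camera_relative(obj_pos_m, cam_pos_m, meters_per_unit):
    """An object's position in a layer's local, camera-centred units."""
    return tuple((o - c) / meters_per_unit
                 for o, c in zip(obj_pos_m, cam_pos_m))

cam  = (1.0e15, 0.0, 0.0)           # absolute camera position, meters
star = (1.0e15 + 5.0e9, 0.0, 0.0)   # a star 5 million km ahead of it

# At 1e15 meters from the origin, float32 (~7 significant digits)
# cannot even resolve kilometers -- but camera-relative, in "system
# layer" units of 1e9 m, the same star sits at a small, exact coordinate:
print(camera_relative(star, cam, 1.0e9))  # → (5.0, 0.0, 0.0)
```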

ThomasEgi, yes, this is what I am doing. My pain is that I can’t do it in just 3. I have to do it in 6 or 10.

I have been thinking about and playing around with such large scale differences as well, and had the idea (not tried it) to modify not only the position, but also the scale of objects (like planets far away). Either you define a distance (like 1’000’000) beyond which the objects get scaled down and are not moved anymore, or you could use a logarithmic approach to the scale and position.
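The first variant (scale down instead of move, beyond some distance) can be sketched like this; the trick is to keep the ratio of radius to distance, i.e. the apparent angular size, unchanged. The threshold value and names here are made up for illustration:

```python
FAR_LIMIT = 1_000_000.0  # beyond this distance, fake the depth

def depth_compress(true_dist, true_radius):
    """Pull a far object in to FAR_LIMIT and shrink it to match,
    preserving radius/distance (its apparent angular size)."""
    if true_dist <= FAR_LIMIT:
        return true_dist, true_radius      # near objects render as-is
    k = FAR_LIMIT / true_dist              # < 1 for far objects
    return FAR_LIMIT, true_radius * k

# a sun-sized star (radius 7e8 m) seen from 5e9 m away:
d, r = depth_compress(5.0e9, 7.0e8)
print(d, r)                   # the star now sits at the 1e6 limit, shrunk
print(r / d, 7.0e8 / 5.0e9)   # its angular size is essentially unchanged
```

The logarithmic variant would replace k with a log-based mapping, but the angular-size bookkeeping is the same idea.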

The actual problem with this approach would be converting the global coordinates to player coordinates first, and then doing a conversion before sending the data to the graphics card. Though this might not be that complicated if you work with a nice model-view-controller scheme…

But as long as I have not tried them, I can’t really tell whether any of these approaches actually work.

I agree with this idea… it seems fairly simple and very sensible. Problems might arise if you want to render things extremely far away from the player. At those ranges, I imagine you would only be rendering planets, stars and other things which could probably be rendered as a scaled-down, closer version, or with some other workaround.

If you have time, I would check out the source code for this project: http://www.eliteclub.org.uk/jjffe/about.htm
I don’t actually know anything about the project other than that it’s a reverse-engineered version of Frontier: First Encounters (Elite 3). I remember that game did a very good job of having everything at a realistic scale… it would take a month or so to fly across a solar system, hurtling at millions of kilometers per hour, and yet you could still land at planetside spaceports. I figure they had to use some workaround, as they probably couldn’t just render everything absolutely with 32-bit floats.

Have you checked out the Celestia space-simulator source already? They are able to render stuff from whole galaxies down to tiny details on spaceships, all totally seamless. It might not be easy to figure out what they do, but it definitely works great :slight_smile: