Is Panda right for the task?

I still don’t know what you mean precisely by a tile. For one thing, Panda doesn’t support SVG primitives directly, but you can render your SVG to an image, and then apply that image to a polygon. You can then draw that polygon whatever size you like, and that size can change every frame; there is no “resize” cost for making a smaller or bigger version of the polygon.
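
As a rough, untested sketch of that idea (the file name and sizes here are just placeholders), rendering your SVG to an image and applying it to a polygon might look something like this:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import CardMaker

base = ShowBase()

# "tile.png" stands in for your SVG rendered out to a bitmap.
tex = base.loader.loadTexture("tile.png")

cm = CardMaker("tile")
cm.setFrame(-0.5, 0.5, -0.5, 0.5)                # a unit quad
card = base.render.attachNewNode(cm.generate())
card.setTexture(tex)

# The card can be drawn at any size; changing the scale is just a transform,
# so there is no "resize" cost even if it changes every frame.
card.setScale(3.0)

base.run()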

The cost you pay is mostly in the number of different objects you have. (There might be other bottlenecks as well, but the number of objects is by far the most common.) So if all of your tiles are independently scalable, it might be expensive. But if they are all scaled as a unit, so that you can consider them just one object with a lot of detail, then it might be fast.

So, of your three choices, probably (2) is the best; but again you are not really thinking in terms of what a 3-d engine does yet.

David

Each surface is going to come from one of eight bitmaps that I also have the SVG code for, and which are just lines, beziers, and circles. They’ll have one of two background colors, for a total of 16. They’ll be sparse; that is, there will be whitespace. Each surface can occur at, say, 10-50 different sizes, which means the drawing vocabulary will be about 150-800 items.

The surfaces I experimented with were about 400x300 in size; there’s the problem.

How should I break it down?

I’d forgotten about mipmapping, so that explains some of my disbelief. Even so, with only power-of-2 sizes of the mipmaps, there’s no way resizing could be a totally free operation. Well, where does it fit in?

What?

What?

Whah?

Break what down?

Where does What fit in?

Your questions seem to be completely unrelated to your previous posts.

In order to ask a question, one must know a greater part of the answer.

I suggest you put a lot more effort into formulating exactly what you want to ask about and try again.
Either that, or you are just playing around.

Let me see if I have understood you correctly: you are constructing a surface with some 400x300 (=120,000) tiles, each of which could be any one of as many as 800 different images. Once assembled, your surface is static and unchanging, but may be panned and zoomed considerably.

What I would do, then, is render each of these images out to a separate bitmap, then use a tool like egg-texture-cards to construct a big egg file that references all of them individually.

Then pass that egg file through egg-palettize to group those ~800 images into as few individual images as possible (egg-palettize does this by packing multiple small images onto larger palette images, and then changing the model to index into those palettes). Depending on your graphics hardware and the number of pixels you need to keep in each image, you may be able to get away with very few individual images indeed.

Then, in Python code, load up your egg-texture-cards model, find all of the individual tiles within it, and assemble them according to your needs. Then call flattenStrong() on the whole mess to flatten it into as few objects as possible (presumably no more objects than your individual number of images produced by egg-palettize).
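
In very rough, untested outline (the file name tiles.egg and the my_tiles list are placeholders for whatever your own data looks like), that Python step might be something like:

from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# The model produced by egg-texture-cards (and, optionally, egg-palettize).
cards = base.loader.loadModel("tiles.egg")

surface = base.render.attachNewNode("surface")
for x, y, name in my_tiles:              # my_tiles: your own (x, y, image name) data
    card = cards.find("**/" + name)      # each card is named for its source image
    placed = card.copyTo(surface)        # instance that card into the scene
    placed.setPos(x, 0, y)

# Collapse the whole assembly into as few objects as possible.
surface.flattenStrong()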

This may or may not give you adequate performance. Whether it does or not depends largely on the number of individual images you were able to reduce it to, which is based on the maximum size you need for your individual images. If you need to be able to zoom these images to fullscreen and maintain full clarity, you may need a different trick (perhaps switching in a full-resolution image based on LOD).

Then again, you may not need to: even with 800 different full-resolution images, you might do OK with just the flattenStrong(), omitting the egg-palettize step. Again, this depends on your hardware and your performance needs.

Resizing is absolutely and completely free, in the sense that it costs the same to draw the reduced image as it costs to draw it in full size. Actually, it’s a tiny bit faster to draw it reduced, because you get better caching on your memory. This all works because your graphics card has special vector hardware to scan pixels really, really fast, and when you’re running a 3-D engine like Panda3D, it uses this hardware all the time, whether you’re drawing the image full-resolution or some other scale (or at any rotation and shear, for that matter). So it all costs the same, no matter what transform you apply. (Think about it a minute–this is an essential requirement for rendering 3-D scenes, where almost no polygons are directly facing the camera, and all of your textures will be scaled and skewed in some way or another.)

I think at this point you should just try some of this stuff out and see what you find. It will perhaps help you to understand how the world of 3-D graphics is different from what you might be used to.

David

I am a breadth-first thinker. Too bad computers are so intolerant of it.

Here’s my picture of my program.

import panda

texture0= panda.loadtexture( 'tile.png' )

#draw model
panda.lighting= ambient 100%
for y in range( 500 ):
  for x in range( 1000 ):
    place_tile( texture0, x, y, z=0 )

#create view
create_disp_region( panda.whole_window )
initial_camera( 500, 250, -100 )

#game loop
while 1:
  # UI events
  panda.get_event( )
  if mousedrag:
    move_camera( x= mousex, y= mousey )
  if mousewheel:
    move_camera( z= mousez + 10 )
  if mouse_selecting:
    outline_box( select_region, black )
  if mouse_select:
    place_camera( x= sel_centerx,
            y= sel_centery, z= sel_size )
  if mouseclick:
    reset_views( all )
    # add simple horiz. split-screen
    for i in range( num_cameras + 1 ):
      create_disp_region( horiz_section / i )
      pack( sash= enabled )
      place_camera( 500, 250, -100 )
  if doubleclick:
    obj= hit_test( )
    place_camera( center_on_object )
  # non-UI processing
  next_obj_in_queue = decide which tile to toggle
  toggle_texture_flood_color( next_obj, x, y, z= 0 )

Regarding the event matching, I’ll have to distinguish them uniquely somehow, say with a modifier key or a choice of mouse buttons. There. Now compile and run, right? Not bloody likely, but how bad is the damage?

For the ‘draw model’ section, it might be something more like this:

for obj in all_objects:
  place_tile( textures[ obj.texid ], obj.x, obj.y, z=0 )

It’s a little simpler than that. You won’t have a game loop (Panda owns that); instead, you’ll create a task that is called every frame. You aren’t responsible for issuing draw calls. Instead, you simply put everything you want to be drawn in the scene graph, and Panda will be responsible for drawing it.

All you need to do is move the camera around.
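
In very rough, untested outline (tiles.egg here is a placeholder for however you build your surface), the structure becomes something like:

from direct.showbase.ShowBase import ShowBase

class TileViewer(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Build the scene graph once; Panda draws it every frame on its own.
        self.surface = self.loader.loadModel("tiles.egg")
        self.surface.reparentTo(self.render)
        self.disableMouse()                      # take manual control of the camera
        self.camera.setPos(500, -100, 250)
        self.taskMgr.add(self.update, "update")  # run once per frame

    def update(self, task):
        # React to input here by moving the camera; no draw calls needed.
        if self.mouseWatcherNode.hasMouse():
            self.camera.setX(self.camera.getX() + self.mouseWatcherNode.getMouseX())
        return task.cont

TileViewer().run()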

David

I see. Scene graph.

So I’ll be able to change what I was calling the flood colors of my 2-D objects as, more or less, attributes of the scene graph. It still sounds more attractive to render the objects as line art rather than as textures: significantly faster and not much harder; I may need some dissuading.

Regarding textures, can I control the resampling (is that the term?) if I go that way? I understand different methods of resizing have different drawbacks (such as, say, turning a grid completely solid). Regarding the line art method, I’d like to remove the line art and just leave the background at very small result sizes; it’s just that that sounds extremely picky. I gather the engine does all the optimizing and pre-computing, in this case of the beziers and such.

And I’m still okay with having multiple cameras and changing that “IRTUA” in response to user actions, right? I’m considering inlaying one in a corner, so at least one of the cameras will need a chunk masked/clipped out.

The Bump Mapping demo is pretty impressive. Almost eerily realistic. It’s the closest thing to a terrain flyover I saw, and the closest to my project. Performance definitely does not appear to be a problem there.

You can make your tiles with line art if you insist, but this is likely to be slower than using textures (though it does all depend on a lot of factors). Certainly it is more likely to produce aliasing artifacts when you get really small.

As to the texture sampling, there are a handful of different controls you have over the texture filtering method. Generally, mipmapping takes care of the Nyquist sampling problem (such as turning a grid black); and using mipmapping in Panda is just a matter of asking for it.
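
For example, a minimal (untested) snippet, with tile.png standing in for one of your textures:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import Texture

base = ShowBase()
tex = base.loader.loadTexture("tile.png")
tex.setMinfilter(Texture.FTLinearMipmapLinear)   # trilinear mipmapping when minified
tex.setMagfilter(Texture.FTLinear)               # plain bilinear when magnified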

Removing geometry at very small sizes is easily handled by LODs; that’s another staple of 3-D rendering.
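
A hypothetical sketch of that (the model names and switch distances here are made up):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import LODNode, NodePath

base = ShowBase()

# Stand-ins for a detailed tile and a plain background-only version of it.
detailed_tile = base.loader.loadModel("tile_full.egg")
background_card = base.loader.loadModel("tile_plain.egg")

lod = NodePath(LODNode("tile_lod"))
lod.reparentTo(base.render)

lod.node().addSwitch(50, 0)        # full detail from 0 to 50 units away
detailed_tile.reparentTo(lod)
lod.node().addSwitch(1000, 50)     # just the background from 50 to 1000 units away
background_card.reparentTo(lod)

base.run()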

You can certainly have multiple cameras and render one of them inlaid in the larger window. This is the DisplayRegion metaphor in Panda.
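
A bare-bones, untested sketch of an inset view (the region coordinates and camera position are arbitrary):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import Camera

base = ShowBase()

# Carve an inset region out of the upper-right corner of the window.
inset = base.win.makeDisplayRegion(0.7, 1.0, 0.7, 1.0)
inset.setSort(20)                   # draw it after (on top of) the main region
inset.setClearDepthActive(True)

# Give the inset its own camera looking at the same scene graph.
inset_cam = base.render.attachNewNode(Camera("inset_cam"))
inset_cam.setPos(0, -50, 10)
inset.setCamera(inset_cam)

base.run()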

David

Should I use a MeshDrawer, or wait for MeshDrawer2D (that is, for its docs)? Or should I make some Geoms and a GeomNode? Or should I bite the bullet and try to use Blender? Blender really seems like overkill, since my “terrain” will be programmatic.

Also, should I use DirectObject or ShowBase? The Hello World in the Manual uses ShowBase but the samples use DirectObject. Render or Render2D?

Thanks.

Here, go through this tutorial:
discourse.panda3d.org/viewtopic.php?t=7918

I know it’s not exactly what you need, but after you complete it you will have a better overall picture of what you can do in Panda3D.

I am not doing any collision testing.