Is Panda right for the task?

Hi,

I have a project in mind and I want to know if Panda is right for it.

The main feature is the zoom capacity. The camera will be able to zoom all the way from 1x to 1000x or more. So, most of the objects most of the time will be very small or off the screen.

The objects themselves are either bitmaps or lines, and all occur in the same plane.

Thanks sincerely.

Of course Panda is capable of doing everything that has to do with visualization, but its main purpose is interactive 3D applications.
Your request sounds more like you’re looking for an SVG app, or at least something with a basic 2D specialization. That said, you’ll most probably have a hard time with Panda if you decide to use it for your project.

Try this research project:
seadragon.com/

Thanks for the reference, drozzy. I got to the part about “image tile pyramids” in Creating Content. Unfortunately, there will be a game loop: objects will be toggling color, that is, toggling between two bitmaps, but not moving.

Sorry my friend, I don’t know anything about Seadragon - I just read about it once :slight_smile:
So if it fits you - great!
If not - you gotta come up with a better solution.

My earlier attempt was in Python with a wrapper around SDL (Simple DirectMedia Layer), using a numerics package. I found I had to move most of it to C, eliminating objects that were too small or off-screen, to get the performance I was picturing.

The total number of bitmaps will be pretty small, though each will occur at different sizes for any given zoom level, times two colors each. They will only be renderings of SVGs I found on Wikipedia, but a straight blit could outrun recalculating each one, depending on the SVG core. Do you know of something closer to this niche on the optimization spectrum, if you’ll permit the inquiry, or will Panda do? Or is it too soon to be picking an engine?
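To make that concrete, here is a rough sketch of pre-rasterizing one SVG at a few widths so the draw loop only ever blits cached bitmaps. cairosvg, the zoom ladder, and the file name are stand-ins of mine, not what I actually used:

```python
# Sketch: pre-render one SVG at several pixel widths so the draw loop only
# blits cached bitmaps instead of re-rasterizing the SVG each frame.
# cairosvg, ZOOM_WIDTHS, and "symbol.svg" are illustrative stand-ins.
import cairosvg

ZOOM_WIDTHS = [16, 32, 64, 128, 256, 512]   # hypothetical zoom ladder

def build_raster_cache(svg_path):
    cache = {}
    for width in ZOOM_WIDTHS:
        # svg2png returns the PNG bytes when write_to is omitted
        cache[width] = cairosvg.svg2png(url=svg_path, output_width=width)
    return cache

rasters = build_raster_cache("symbol.svg")
```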

Forgive my ignorance, I am a newb in any kind of game development or imaging business, but how did you zoom out your images?
Or
Do you mean to tell me that you loaded 1000 full-sized images during program startup and then scaled them?

Note: Mandatory ignorance quotas strictly enforced by radar.

I tried a few different strategies. In an earlier version, I loaded the bitmaps, which were about 400x300, at start-up. Then, when the user changed the zoom factor, I scaled each of them to each of the sizes (and angles) that the new zoom factor called for, stored them, and blitted them when drawing and when the user scrolled. The interpolation wasn’t too bad at full-screen sizes. The amount of pre-computation was pretty small; I did most of it upon zoom.
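Roughly, that first strategy looked like the sketch below; pygame stands in for the SDL wrapper, and all of the names are made up:

```python
# Sketch of the "rescale on zoom, blit on scroll" strategy described above.
# pygame stands in for the SDL wrapper; the class and names are hypothetical.
import pygame

class ZoomCache:
    def __init__(self, base_surface):
        self.base = base_surface   # e.g. one of the ~400x300 source bitmaps
        self.scaled = {}           # (zoom, angle) -> pre-transformed Surface

    def rescale(self, zoom, angle=0.0):
        """Called once per object when the zoom factor changes."""
        self.scaled[(zoom, angle)] = pygame.transform.rotozoom(
            self.base, angle, zoom)

    def blit(self, screen, zoom, screen_pos, angle=0.0):
        """Called on every draw and scroll; no rescaling happens here."""
        screen.blit(self.scaled[(zoom, angle)], screen_pos)
```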

My other attempt was to just hard-code the drawing commands for each object. Scaling and rotating were consequently faster, but I just created bitmaps out of them rather than storing the lists of pixels, because flood-filling their bounding boxes at an angle was really hard to think about.

Unfortunately, in both strategies I had to discard entirely the objects that were “too big” by some threshold, even if a portion of them was on-screen. I’d prefer to just discard off-screen pixels.

I want things like favoring the background color for very small objects, possibly blending them if more than one occupies a single pixel, but still making changes to their flood colors visible. I figure a multi-way split screen would be nice for navigation, as well as a miniature view in a corner, and hit-testing for mouse rollovers and clicks.

Drawing a “zoom box” when the user dragged the mouse was pretty hard to do efficiently-- I just ended up redrawing the whole screen. I haven’t decided whether to require scrolling concurrently with drawing a zoom box.

I’m not sure where I cross the boundary of “correct” data structures, as a comp. scientist would see them, into the land of over-optimization.

Well, that’s my wish list, at least. Really, you can’t beat straight C with such specific requirements, but I want an engine. It would be nice to give all this meta-information to an engine and have it draw the SVGs with a minimum of branching and repetition.

Seadragon was definitely on the right track, aside from the zoom box and the game loop, even though it doesn’t amount to anything but color changes.

I am still not sure what you mean by “color changes” and “blending” things with the background.
Is that part of your requirements - or are you just thinking of optimizations?

How about mipmaps - just wondering if that’s something you’ve considered:
en.wikipedia.org/wiki/Mipmap

Not sure how applicable it is in your case, but at the cost of memory you can save on processing.
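In Panda, for example, I believe mipmapping is just a filter setting on the texture; a minimal sketch, with a made-up file name:

```python
# Sketch: enable trilinear mipmapping on a texture in Panda3D.
# "tile.png" is a made-up file name.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import SamplerState

base = ShowBase()
tex = base.loader.loadTexture("tile.png")
tex.setMinfilter(SamplerState.FT_linear_mipmap_linear)  # mipmaps when minified
tex.setMagfilter(SamplerState.FT_linear)                # plain bilinear when magnified
```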

Regarding not drawing images outside the screen, any game engine will do culling/clipping automatically for you, so you should be good there:
gamedev.net/reference/articl … le1212.asp
Now - I am not sure about how panda handles it.

Excuse me if I am repeating things for you…

Memory wasn’t at all an issue in the trials I ran with either strategy, as I recall; I seem to have been wasting many processor cycles instead. (Sounds like my love life.) Just from a glance, mipmapping could save a little on the zooming, but not on the scrolling. If I recall correctly, I was even storing every zoom level of the original rasters I had created so far, and the cache lookup actually ended up taking more time than just creating them anew on every zoom action.

There will only be two colors that the bitmaps will “be”, with some flexibility in exactly how they will be them. (Not to confuse us.) The color of each occurrence will be independent; I need the ability to change each one at will. The rest of the stuff is lower priority.

For the blending with the background: it may be that several items occupy the same pixel, and when one changes, the pixel should change color. The foregrounds of their SVGs shouldn’t be drawn, just the flood color, though that’s flexible too, for instance if it would make one engine or another a cinch for the job. The SVG foregrounds shouldn’t start to be drawn until the item is a few pixels big.

I wouldn’t achieve my goal if I frustrated everybody, but I do want to keep my post abstract enough that others can find it useful. That said, to remove some of the mystery: my prototype was just barely able to function, and I started the C extension from scratch, so it wasn’t robust or mature, and it was ad hoc and rickety. I’d like to drop a library into its place.

I think the key feature is that most of the time, I won’t be changing objects’ model coordinates, with some possible exceptions under user direction. Rather, the camera will only be moving.

If I started with a terrain simulator, I would add whitespace, shrink the triangles quite a bit, and change the triangles to SVGs, which are lines, Beziers, and circles only.

I checked on the number of rasters; it’s around 10,000 in a really small case. I could see that getting as high as a million, each of which would be 5-10 SVG primitives. I realize I won’t be running the program on an Xbox, and I don’t need to squeeze out every processor cycle, but most would be nice.

Are you, or were you a manager at any point in your life? :smiley:

Funny you should ask. Now as for whether it would be worth my time to give it a shot, it would be especially hard to determine that from what other people know, my being a newcomer. If the group and I had something to go on, some sort of shared experience or shared reference point, that could serve as a basis for the answer, but we don’t. Too bad there’s no such thing as time estimates anyway.

That said, I might be satisfied with a quick outline of how to get to the bottleneck. That is, what problems are pretty easy to code that are about equivalent to my end product?

Wait wait, I thought of something clever. I am always a manager.

Alright, well, forgive my irreverence.

Hm… Well either what you are trying to convey is way over my head, or it is so abstract that no solution can be proposed for it.

From the general gist of things, I think that any game or 3D engine should be more than adequate for the task of displaying a large number of textured polygons (images) on the screen.
en.wikipedia.org/wiki/List_of_game_engines

Just googled something that might be of interest to you:
[PDF] Interactive Display Of Very Large Textures

I didn’t follow all your wishes and explanations of what exactly you want to do, but basically it sounds like something comparable to Google Maps, where you can zoom around a huge 2D-like object.

If so, pretty much every engine will do the job if you organize your data correctly. I’d suggest a quadtree-based scene structure with heavy use of LOD nodes.
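For instance, here’s a rough sketch of one LOD node per object in Panda; the card textures and switch distances are made up, and a quadtree would sit above this, grouping nearby LOD nodes under shared parents:

```python
# Sketch: one LODNode per object, so the detailed card is only rendered when
# the camera is close. Names, textures, and distances are hypothetical.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import LODNode, CardMaker, NodePath

base = ShowBase()

def make_card(name, tex_path):
    cm = CardMaker(name)
    cm.setFrame(-1, 1, -1, 1)                   # a 2x2-unit quad
    card = NodePath(cm.generate())
    card.setTexture(base.loader.loadTexture(tex_path))
    return card

lod = LODNode("object-lod")
lod_np = base.render.attachNewNode(lod)

# Child order must match the order the switches are added in.
lod.addSwitch(50.0, 0.0)                        # detailed: visible 0-50 units away
make_card("detailed", "symbol_hires.png").reparentTo(lod_np)

lod.addSwitch(100000.0, 50.0)                   # coarse: visible beyond 50 units
make_card("coarse", "symbol_dot.png").reparentTo(lod_np)
```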

It’s really hard to tell what would be best for this job. It seems like it would be easy enough to just try them all, since it’s a finite task, but it sounds daunting. Maybe I would learn what to look for after a few. I started here because I’m sort of in Python “mode” with programming.

I might be hung up on the Bezier function; I didn’t see it in the Panda docs. And I didn’t see anything about split screen.

My other major hang-up, I think, is this weird trade-off that should be an engineer’s decision: how to render the individual objects. Is it faster to flood the rectangle in place on screen and then write the SVG’s pixels? Or to create a bitmap that contains both the rectangle and the SVG? If the rectangle is at an angle, that bitmap is up to 2x too large. Or should I store an array of coordinates?

Thanks for entertaining this so far.

Split screen in Panda is no problem; it’s done using display regions. I don’t know how Bezier and NURBS are related, but that should be quite possible.
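For example, here’s a bare-bones sketch of a second display region; the proportions and camera placement are arbitrary:

```python
# Sketch: split the window into two display regions, each with its own camera.
# The proportions and camera positions are arbitrary choices.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Camera

base = ShowBase()

# Squeeze the default view into the left half of the window.
base.camNode.getDisplayRegion(0).setDimensions(0.0, 0.5, 0.0, 1.0)

# Give the right half its own region and camera (e.g. a mini-map overview).
right = base.win.makeDisplayRegion(0.5, 1.0, 0.0, 1.0)
overview = base.render.attachNewNode(Camera("overview"))
overview.setPos(0, -200, 0)     # pulled far back so everything looks tiny
right.setCamera(overview)
```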

There’s a trivial, well-understood mapping from Bezier to NURBS: any Bezier can be easily converted to an equivalent NURBS, and Panda supports NURBS directly.
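For example, a cubic Bezier is just an order-4 NURBS with clamped knots and unit weights. Something along these lines should work with Panda’s NurbsCurveEvaluator (the control points here are arbitrary):

```python
# Sketch: a cubic Bezier expressed as an order-4 NURBS, evaluated with
# Panda3D's NurbsCurveEvaluator. The control points are arbitrary.
from panda3d.core import NurbsCurveEvaluator, Vec4, Point3

ev = NurbsCurveEvaluator()
ev.setOrder(4)
ev.reset(4)                                  # 4 control points
for i, (x, y, z) in enumerate([(0, 0, 0), (1, 0, 2), (3, 0, 2), (4, 0, 0)]):
    ev.setVertex(i, Vec4(x, y, z, 1.0))      # w = 1: a plain, non-rational Bezier

# A clamped knot vector [0,0,0,0, 1,1,1,1] makes this exactly the Bezier curve.
for i in range(4):
    ev.setKnot(i, 0.0)
    ev.setKnot(i + 4, 1.0)

result = ev.evaluate()
p = Point3()
result.evalPoint(0.5, p)                     # point on the curve at t = 0.5
print(p)
```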

As to your other question, it sounds like you’re thinking in terms of a 2-d engine. A 3-d engine like Panda doesn’t work with pixels; it works with triangles and meshes. You can put any texture you like on those triangles, and you get pretty much infinite scalability for free. The bottlenecks in a 3-d engine are usually not pixel processing, but rather the number of individual objects you have to deal with.
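For instance, the whole “texture on a quad” setup is only a few lines; the image name and camera distance below are made up, and the GPU does all the rescaling as the camera moves:

```python
# Sketch: put a bitmap on a flat card and let the camera do the zooming;
# no per-pixel work happens in Python. The image name is hypothetical.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import CardMaker

base = ShowBase()

cm = CardMaker("sprite")
cm.setFrame(-1, 1, -1, 1)                        # a 2x2-unit quad
card = base.render.attachNewNode(cm.generate())
card.setTexture(base.loader.loadTexture("symbol.png"))

base.disableMouse()                              # take manual control of the camera
base.camera.setPos(0, -10, 0)                    # "zooming" = moving the camera
base.run()
```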

David

Will it be faster to:

  1. Resize my original “tiles” by hand and issue their positions,
  2. Issue the original tiles directly, resulting in resizing them thousands of times, or
  3. Draw their contents as SVG primitives directly?