a drag box for selecting an area

Hello,

I’m writing a “drag box” for selecting an area. When the drag completes, I want to position the camera in the center of the box. I also want to change its Y coordinate so that the screen fills one of the dimensions of the box. I tried searching the manual & API for trans_view_to_world et al, but they are only used in shader language.

Thank you.

I am making something similar to this, although in order to use it I had to change my lens to orthographic. I suppose you could change it back after you’re done…

Basically you need to use picking to find both spots, the start and the end of the drag. Then use the average of the two points for where your camera goes, so ((x1+x2)/2, (y1+y2)/2), and use that as the lookat point too. Then set the film size, i.e. set the display area of the camera, using (x2-x1) and (y2-y1) for the area. This sets the camera up for the display size you want, and voila, you have the cam where you want it. I don’t have code right now but will hopefully be posting it tonight on snippets; I just have to fine-tune it first.
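In the meantime, a rough sketch of the idea might look like this. frameDragBox and its arguments are made-up names, and it assumes the two drag points are already in world space and lie on the XZ plane:

from panda3d.core import OrthographicLens, Point3, Vec3

def frameDragBox(p1, p2, camDistance=50):
	# p1, p2: world-space corners of the drag box (hypothetical inputs)
	center = Point3((p1[0]+p2[0])/2.0, (p1[1]+p2[1])/2.0, (p1[2]+p2[2])/2.0)

	# film size = the box extents, so the box exactly fills the view
	lens = OrthographicLens()
	lens.setFilmSize(abs(p2[0]-p1[0]), abs(p2[2]-p1[2]))
	base.cam.node().setLens(lens)

	# park the camera in front of the box center and look at it
	base.cam.setPos(render, center + Vec3(0, -camDistance, 0))
	base.cam.lookAt(render, center)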

Note that in Panda the camera is a node like anything else. If you want to move the camera, you simply set the position of the camera node. There are no explicit “view” operations in Panda.
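For example, assuming the default ShowBase setup (you also need to turn off the built-in mouse control, or it will overwrite whatever transform you set):

base.disableMouse()            # otherwise the default trackball drives the camera each frame
base.camera.setPos(0, -30, 10) # move the camera node like any other node
base.camera.lookAt(0, 0, 0)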

David

Hi all,

Well, here’s the solution I got, if anyone else is having the same problem.

		# mouse position in normalized screen coordinates (-1..1)
		self.mouse1x2=xmo=base.mouseWatcherNode.getMouseX()
		self.mouse1y2=ymo=base.mouseWatcherNode.getMouseY()
		# scale by the film size; the /2 below turns this into the offset on the film plane
		xmo=xmo*base.cam.node().getLens().getFilmSize()[0]
		ymo=ymo*base.cam.node().getLens().getFilmSize()[1]
		# camera position, and a second point along the mouse ray one focal length ahead
		x1,y1,z1= base.cam.getX(),base.cam.getY(),base.cam.getZ()
		x2,y2,z2= x1+xmo/2,y1+base.cam.node().getLens().getFocalLength(),z1+ymo/2
		# parametric intersection of that ray with the y=0 plane
		r= (0-y1)/(y2-y1)
		x3,z3= x1+r*(x2-x1),z1+r*(z2-z1)

Then, ( x3, z3 ) are the coordinates of the mouse pointer on the y=0 plane. It makes some pretty heavy assumptions, but maybe you can use it. It does use a Perspective lens. Not tested rigorously.

Any ideas on setting the Y position such that the four model corner coordinates are on the corners of the window?

It looks to me like you’ve got the right basic idea, if you don’t want to change the lens properties. Basically, compute the appropriate distance based on the camera’s field of view. But I think the right distance would be something more like:

radius / math.tan(deg2Rad(fov / 2.0))

Where fov is the camera’s field-of-view, and radius is the computed radius of your model. Use either horizontal or vertical values, but be consistent.
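A sketch of that computation, assuming a perspective lens and that you have already computed the model’s bounding radius (framingDistance and modelRadius are made-up names):

import math

def framingDistance(radius):
	# horizontal field of view of the default camera, in degrees
	fov = base.cam.node().getLens().getFov()[0]
	return radius / math.tan(math.radians(fov / 2.0))

# back the camera off along -Y by that distance (assumes the model is centered at the origin)
base.cam.setY(-framingDistance(modelRadius))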

If you are willing to change the lens properties, you can fairly easily zoom your lens to fill the model without moving the camera, using Lens.setFrustumFromCorners(). You’d have to compute the four corners in the space of the camera, using camera.getRelativePoint(). But probably you don’t want to modify the lens properties; that’s kind of a weird thing to do.
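If you did want to try it, something along these lines should be close; the corner ordering and the flag combination here are my guesses from the Lens interface, so double-check them:

from panda3d.core import Lens, LVecBase4

def zoomLensToCorners(corners):
	# corners: the four box corners as world-space points, assumed ordered UL, UR, LL, LR
	lens = base.cam.node().getLens()
	# express each corner in the camera's coordinate space
	ul, ur, ll, lr = [base.cam.getRelativePoint(render, p) for p in corners]
	# setFrustumFromCorners takes 4-component points; the flags say which lens
	# properties it may adjust (this particular combination is an assumption)
	lens.setFrustumFromCorners(
		LVecBase4(ul[0], ul[1], ul[2], 1.0),
		LVecBase4(ur[0], ur[1], ur[2], 1.0),
		LVecBase4(ll[0], ll[1], ll[2], 1.0),
		LVecBase4(lr[0], lr[1], lr[2], 1.0),
		Lens.FCOffAxis | Lens.FCAspectRatio)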

David

Ta da, like a charm.

I ended up defining ‘screen_to_yplane’ because I didn’t know about np.getRelativePoint and it looks confusing anyway. Then, with corner points p00-p11:

		# half-extents of the selection box (corner points p00 and p11)
		xposH= abs( p11[0]-p00[0] )/ 2
		yposH= abs( p11[2]-p00[2] )/ 2
		# distance at which each half-extent exactly fills half the field of view
		xlenH= xposH/ math.tan( math.radians( base.cam.node().getLens().getFov( )[ 0 ]/ 2 ) )
		ylenH= yposH/ math.tan( math.radians( base.cam.node().getLens().getFov( )[ 1 ]/ 2 ) )
		# take the nearer distance, so the box fills the screen in at least one dimension
		lenH= min( abs( xlenH ), abs( ylenH ) )
		base.cam.setY( -lenH )

Or however you’d write it in a more civilized style.

And for a grand finale of multi-posting, my textures are disappearing when the camera is less than 1 unit away. That is, the screen is going blank.

-1.16363346577
-1.04727005959
#<--- everything vanishes here
-0.942543029785
-0.848288714886
-0.763459861279
-0.687113881111
-0.763459861279
-0.848288714886
-0.942543029785
#<--- and reappears here
-1.04727005959

for these printed values of cam.getY(). Very mysterious.

That’s the near plane, which is by default set at 1 unit. You can move it smaller (to any value greater than 0, no matter how small), and allow your camera to get closer to the model without clipping; but you trade Z precision for this. See “Lenses and Field of View” in the manual for more information.
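For instance (0.1 here is just an arbitrary smaller value):

# pull the near plane in from the default 1.0; any value > 0 works, at the cost of Z precision
base.cam.node().getLens().setNear(0.1)
# or set both planes at once
base.cam.node().getLens().setNearFar(0.1, 1000.0)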

David

Oh yeah, duh. You only read about something 100 times before you start to forget it, you know.

While I’m at it, base.cam seems particularly resistant to having its aspect ratio set, as in the ‘splitScreen’ example code in the Display Regions section. I’d really like some sizers and sashes, of course, but those don’t seem to be available.

There is an automatic task that resets the aspect ratio of base.cam every time the user resizes the window. If you want to disable it, just put an explicit aspect ratio in your Config.prc file, e.g. “aspect-ratio 1.333”. Or, you could call base.ignore(‘window-event’) to remove the event handler altogether, though that handler does a few other things too, so you’d lose those as well.
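For instance:

# In your Config.prc file (the exact value doesn't matter much, just that it's explicit):
#   aspect-ratio 1.333

# Or, at runtime, stop ShowBase from listening to window-event at all
# (this also disables its other window-resize handling):
base.ignore('window-event')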

Sizers and sashes are not provided by default, but people have implemented them in Python without too much trouble in the past. They’re pretty simple things, really.

David

That’s not what I recall happening. I called the operation that sets the aspect ratio in a split screen on each of two cameras, but it only worked on one of them, the non-primary one. It came right out of the demo, and the demo looked a little disproportionate too.

cam.node().getLens().setAspectRatio( float(dr.getPixelWidth()) / float(dr.getPixelHeight()))

I certainly don’t know why a camera would fail to accept the requested aspect ratio. Can you provide a sample that demonstrates something misbehaving?

David

Yeah, sure. If you’ll forgive my being a little casual about it:

from panda3d.core import Camera, NodePath, Vec4

def makeNewDr():
	# a new DisplayRegion with its own camera
	dr2 = base.win.makeDisplayRegion( )
	dr2.setClearColor(Vec4(1, 1, 1, 1))
	dr2.setClearColorActive(True)
	dr2.setClearDepthActive(True)

	render2 = NodePath('render2')   # unused here; cam2 views the main scene graph below
	cam2 = render.attachNewNode(Camera('cam2'))
	dr2.setCamera(cam2)
	dr2.setSort( 100 )

	cam2.setPos( 7, -25, 5 )
	return cam2

def splitScreen(cam, cam2):
	dr = cam.node().getDisplayRegion(0)
	dr2 = cam2.node().getDisplayRegion(0)

	# left half of the window for cam, upper-right quadrant for cam2
	dr.setDimensions(0, .5, 0, 1)
	dr2.setDimensions(0.5, 1, .5, 1)

	#honor setAR for default base.cam?
	cam.node().getLens().setAspectRatio( float(dr.getPixelWidth()) / float(dr.getPixelHeight()))
	cam2.node().getLens().setAspectRatio( float(dr2.getPixelWidth()) / float(dr2.getPixelHeight()))

This comes right out of the manual. But when I run it, the textures on the left (cam) are about twice as thin as the textures on the right (cam2). I’ll upload a screenshot if you give me a server.

You’re setting the DisplayRegion on the right to the upper-right quadrant: (0.5, 1, .5, 1), which is a different aspect ratio than the right half of the screen. If you meant to set it to the right half of the screen it would be (0.5, 1, 0, 1).

But also, I don’t see you disabling the window-event on ShowBase. Are you sure that isn’t messing with your aspect ratio after you set it? You can always get the Camera node and check that it still has the aspect ratio you think it has.
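For example, something like:

# print what the lens actually ended up with, after a frame or two have rendered
print(base.cam.node().getLens().getAspectRatio())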

David

I wasn’t satisfied with your suggestion, I’m sheepish to admit-- I wanted a solution that didn’t mess with events to fix my AR, but I do get stubborn. To get our testable predictions straight, the left half should have the same aspect ratio as the TR quadrant. The TR quadrant should have the same AR as the original. The AR is the thing that’s going to be making your rasters look thinner or fatter. Correct?

The left half is tall and skinny. The top-right quadrant is only half as tall, so it should have twice the aspect ratio of the left half. For an 800x600 window, that works out to 400/600 (about 0.67) on the left versus 400/300 (about 1.33) in the quadrant.

Still, the sample code is computing the aspect ratio directly from the DisplayRegion’s size, so it should compute it correctly, even if the two DisplayRegions don’t compute the same result. And, in fact, this code works fine when I paste it in and run it.

So I’m left to surmise what’s going wrong in your case, and all I can do is throw out guesses.

Perhaps you can post a complete program that I can run, unchanged, that demonstrates the problem you are seeing?

David

I am starting to sound lunatic. Like a lunatic, that is. Regardless.

The square appears at half its proper width in this example.

from direct.directbase.DirectStart import *
from pandac.PandaModules import *
from panda3d.core import CardMaker
from panda3d.core import AmbientLight
from panda3d.core import NodePath

cm = CardMaker('card')
card = cm.generate()
square0path= NodePath( card )
square0path.setPos( 0, 0, 0 )
square0path.reparentTo( render )

alight = AmbientLight('alight')
alight.setColor(Vec4(1,1,1,1))
alnp = render.attachNewNode(alight)
render.setLight(alnp)

base.cam.setPos( 0, -20, 0 )
cam=base.cam
dr = cam.node().getDisplayRegion(0)
# confine base.cam to the left half of the window
dr.setDimensions(0, .5, 0, 1)
# match the lens to the region's actual pixel shape
cam.node().getLens().setAspectRatio( float(dr.getPixelWidth()) / float(dr.getPixelHeight()))

run( )

Thanks for your continuing time.

Ah, right. This happens because you are setting the aspect ratio before the first frame renders (and thus before the first window-event is processed). When the first frame renders, it throws the window-event associated with opening the window initially, and ShowBase therefore resets the aspect ratio of the default camera.

You can avoid this by either (a) using some camera other than base.cam, or (b) setting a fixed aspect ratio, as I suggested earlier, which disables this automatic feature of ShowBase. For instance, I solved the problem in your example code by replacing the startup sequence as follows:

from pandac.PandaModules import *
# this has to be loaded before DirectStart opens the window
loadPrcFileData('', 'aspect-ratio 1')

from direct.directbase.DirectStart import *
from panda3d.core import CardMaker
from panda3d.core import AmbientLight
from panda3d.core import NodePath

The particular value you specify for aspect-ratio doesn’t matter, because you are hardcoding your own value in code anyway.

David

While I’ve got your ear, the ‘extrude’ method seems to claim to map screen points to world rays, but I gave up testing it. Am I interpreting that description right?

Or,

pickerRay.setFromLens(base.camNode, mpos.getX(), mpos.getY())

seems to do most of what I want. Can I get at its guts somehow? I couldn’t find ‘setFromLens’ anywhere in the source.
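If it helps, here is roughly what I expected ‘extrude’ to do, based on my reading of the Lens reference (nearPoint/farPoint are my own names, and mpos is the mouse position from getMouse()):

from panda3d.core import Point3

# my understanding: extrude() fills in two points, on the near and far planes,
# in camera space, corresponding to a 2-d screen point in the range -1..1
nearPoint = Point3()
farPoint = Point3()
if base.camLens.extrude(mpos, nearPoint, farPoint):
	# convert the camera-space points into the world (render) frame to get the ray
	nearWorld = render.getRelativePoint(base.cam, nearPoint)
	farWorld = render.getRelativePoint(base.cam, farPoint)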