paulrhayes.com/experiments/cube-3d/ - This example shows a conventional method to rotate (or should I say scroll?) around the different faces of the cube: Front, Back, Left, Right, Top and Bottom. Let’s reparent the camera to the center of the cube and make it lookAt it. The starting camera positions, for the purposes of this topic, are based on the HPR of the center node:
Front - (0, 0, 0)
Back - (180, 0, 0)
Left - (90, 0, 0)
Right - (-90, 0, 0)
Top - (0, 90, 0)
Bottom - (0, -90, 0)
Now, imagine I wanted to scroll through the six faces with just the four arrow keys, where looking at the Left face, coming from the Top one (which in turn came from the Front, resulting in Hpr(90, 0, 90)), meant that when I was looking at the Top face (0, 90, 0) and pressed right, instead of going to (0, 90, 90), it went straight to (90, 0, 90).
Why am I asking this? Because when you’re looking at a face and you press a direction, it would seem more intuitive to go to that face instead of rotating around an axis. But does this depend on pre-set coordinates that you have to apply through if/else conditions? If you have trouble visualizing it, please tell me and I’ll make a video. Also, I know I’d probably be better off with quaternions; this is just a topic about the camera design, and whether it’s worth going after or not.
Maybe I’m asking this the wrong way… I’ll try again:
Do you know how in Solidworks you press the arrow keys to rotate the object from its current position to another, with each input staying relative to how the object faces you? That’s the effect I want for my cube. When I’m looking at the Top face and I press Right, it should go to the Right face and not rotate along the Z axis:
when the Y Axis is pointing up in Solidworks, Left and Right arrow keys change Rotation, and Up and Down arrow keys change Pitch;
when the Z Axis is pointing up in Solidworks, Left and Right arrow keys change Heading, and Up and Down arrow keys change Pitch;
when the X axis is pointing up in Solidworks, Left and Right arrow keys change Pitch, and Up and Down arrow keys change Heading.
As it seems, this would probably call for extensive if/else conditions, covering both the keys being pressed and the relative axis we want to rotate around. This is simply a design topic; for now I won’t post code, but if you could help me search for a solution, I’d be grateful.
I think (and if I understand your scenario correctly) that you could do this with relative rotations and a NodePath parented to your camera.
Specifically, your rotations seem to me to always be the same relative to the camera itself: pressing “right” always rotates you counter-clockwise around the camera’s up-axis (whichever way that may be pointing in world-space), for example.
So, what I suggest is this:
(I’m presuming that you want to animate from one orientation to the next; if not, you should be able to change the last step to simply assign the orientation rather than lerping it.)
- Create a NodePath attached to the camera; let’s call it “orientationNode”.
- When a directional key is pressed, set “orientationNode”'s H, P and R according to the desired rotation. Since “orientationNode” is directly beneath the camera in the node hierarchy, its transformation is relative to the camera’s transformation, and thus we should be able to simply use the (static) relative rotation relevant to the key pressed.
- Next, get “orientationNode”'s orientation relative to the camera’s parent.
- Finally, lerp between the camera’s current orientation and the orientation just calculated.
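If it helps to see the “relative rotation” idea outside of Panda3D, here is a plain-Python sketch (purely illustrative maths, not engine code) of composing the current orientation with a fixed camera-relative turn via quaternion multiplication:

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + bw*ax + ay*bz - az*by,
            aw*by + bw*ay + az*bx - ax*bz,
            aw*bz + bw*az + ax*by - ay*bx)

def axis_angle_quat(axis, degrees):
    """Unit quaternion for a rotation of `degrees` about a unit `axis`."""
    half = math.radians(degrees) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)

# A fixed "turn right" rotation: 90 degrees about the camera's local up-axis.
turn_right = axis_angle_quat((0, 0, 1), 90)

# Compose it with the current orientation (identity here). Which side the
# relative turn multiplies on depends on convention; for successive turns
# about the same axis, as here, the order doesn't matter.
current = (1.0, 0.0, 0.0, 0.0)
once = quat_mul(current, turn_right)
twice = quat_mul(once, turn_right)  # two right-turns: 180 degrees about up
print([round(c, 5) for c in twice])
```

The point is simply that the “turn right” quaternion never changes; only the orientation it is composed with does, which is exactly what the relative-rotation node buys you.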
Note that this is–I think–probably better done using quaternions (such as those acquired via the “getQuat” method of NodePath) than by HPR, save for the second step above, for which H, P and R are probably more convenient.
I’ll mess around with quaternions then; I think the other methods are too unnatural to be of any good. Don’t even ask what I was trying (if/else conditions… ). However, I have to say I was overwhelmed by the amount of math involved with quaternions whenever I googled them, when I had only thought of them visually. I’ve looked at the API reference, and converting from HPR and back caught my attention; I’m now experimenting to see what it means, both in vectors and in space, by trying to simulate what I want. I just have to ask this beforehand: is it better to mess with HPR and then convert to a Quat, or to simply introduce a new orientation to the Quat itself?
Hmm… While a reasonably full understanding of quaternions might be useful, I’m not sure that it’s important to your immediate problem, so don’t let the maths scare you off just yet.
Regarding HPR versus quaternions, if you take the approach that I outlined, I think that it would likely be better to use HPR only for the initial step of setting the relative rotation for the sub-node; the lerp in particular may be better handled using quaternions alone, and the final assignment can easily enough be handled via the “setQuat” method (if I recall correctly), obviating the conversion to HPR.
(As to lerping between quaternions, take a look at this thread–it’s actually fairly straightforward. But note that you should probably place the multiplication by “t” after the “(B - A)”.)
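For the curious, here is roughly what that normalised lerp (“nlerp”) looks like in plain Python. Panda3D’s own lerp intervals handle this for you, so this is purely illustrative:

```python
import math

def nlerp(a, b, t):
    """Normalised linear interpolation between unit quaternions (w, x, y, z).
    This is the A + (B - A) * t form, renormalised so the result stays a
    unit quaternion."""
    # Flip one endpoint if needed, so we interpolate along the shorter arc.
    dot = sum(ca * cb for ca, cb in zip(a, b))
    if dot < 0.0:
        b = tuple(-c for c in b)
    mixed = tuple(ca + (cb - ca) * t for ca, cb in zip(a, b))
    norm = math.sqrt(sum(c * c for c in mixed))
    return tuple(c / norm for c in mixed)

identity = (1.0, 0.0, 0.0, 0.0)
# 90 degrees about z: w = cos(45 deg), z = sin(45 deg)
quarter = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
halfway = nlerp(identity, quarter, 0.5)
# Halfway between 0 and 90 degrees about z should be about 45 degrees:
print(round(2 * math.degrees(math.acos(halfway[0]))))  # 45
```

Unlike a true slerp, nlerp is not constant-speed over the arc, but for small steps (and for the symmetric midpoint, as above) the difference is minor.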
And now I get the Quat equivalent of its coordinates through this:
self.camQuat = self.camCenter.getQuat()
Which prints this: “(0.707107 + 0.707107i + 0j + 0k)”. I checked that arccos(0.707107) ≈ 45°, which means the orientation vector is rotated by that angle, right? When I ask for the orientation vector it gives me the X coordinate of the vector part of the quaternion pair (if we think of it as (w, v)). So far, so good, but I’m having trouble visualizing how these coordinates make up the same rotation as the initial Hpr that the Quat was based on.
Also, I need to feed the LerpQuatInterval with a float, but the Quat has imaginary parts that don’t reduce to a float, given the format it was printed in. I’m trying to figure out how these work without much knowledge of quaternions, but I’d prefer to see the result of turning the camera node to the equivalent of Hpr(90, 0, 90) first, before digging into the math behind it.
I honestly think that you’re making this more difficult for yourself than it could be. ^^;
Offhand, I’m not sure of how to interpret the four-component form of a quaternion, so I’m not sure of whether your calculations there are correct, I’m afraid.
However, if you want to better visualise the quaternions, perhaps you’d find the axis-angle representation more intuitive? I believe that you can get that by calling “getAxis” and “getAngle” on your quaternion; something like this:
# Presume that we already have a Quat named "quat"
axis = quat.getAxis()
angle = quat.getAngle()
print(axis, angle)
# Prints something along the lines of "Vec3(0.707, 0.707, 0) 80", which should be
# an 80-degree rotation around the axis (0.707, 0.707, 0), if I'm correct
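That said, if the standard convention holds (w = cos(angle/2) and (x, y, z) = axis * sin(angle/2), though do double-check me on that), you can recover the axis-angle form by hand. A plain-Python sketch, not Panda3D code:

```python
import math

def quat_to_axis_angle(w, x, y, z):
    """Recover (axis, angle-in-degrees) from a unit quaternion w + xi + yj + zk,
    assuming the half-angle convention: w = cos(angle/2),
    (x, y, z) = axis * sin(angle/2)."""
    angle = 2.0 * math.degrees(math.acos(max(-1.0, min(1.0, w))))
    s = math.sqrt(max(0.0, 1.0 - w * w))
    if s < 1e-9:                      # angle ~ 0: the axis is arbitrary
        return (1.0, 0.0, 0.0), 0.0
    return (x / s, y / s, z / s), angle

# The quaternion quoted earlier in the thread:
axis, angle = quat_to_axis_angle(0.707107, 0.707107, 0.0, 0.0)
print(axis, round(angle, 3))  # roughly (1.0, 0.0, 0.0) and 90.0
```

By that reading, a w of about 0.707107 corresponds to a 90-degree rotation about the x-axis; the arccos giving 45 degrees is the half-angle, not the rotation itself.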
Regarding LerpQuatInterval, are you sure that you’re not attempting to pass your destination parameter where you’re supposed to pass the duration of the interval? The parameters to LerpQuatInterval are, I believe: NodePath on which to operate, duration of interval, destination quaternion, initial HPR, initial quaternion–note the position of the duration of the interval.
(I experimented with LerpQuatInterval, and it seems that the “initial HPR” parameter is optional: if you just have two quaternions (as I’d expect here) you can just pass in “None” for that parameter.)
self.destQuat = Quat()
self.startQuat = Quat()
# I'm using "setHpr" here just to produce arbitrary quaternions;
# you would presumably get their values from the relevant NodePaths
self.destQuat.setHpr(Vec3(0, 0, 90))
self.startQuat.setHpr(Vec3(90, 0, 0))
self.np = render.attachNewNode(PandaNode("An arbitrary NodePath"))
duration = 5.0
self.interval = LerpQuatInterval(self.np, duration, self.destQuat, None, self.startQuat)
I just wanted to mess directly with the Quat coordinates without having to resort to setting them through an Hpr vector every time, since I think it’s more limiting, especially when you have to pass it to a LerpQuatInterval. I think you have more freedom when you mess directly with the orientation through a rotation you can introduce with a function. Sometimes built-in functions do so much for us that we lose control over small details.
Hmm… I think that I may be missing something–why do you find that you end up setting the quaternions by HPR each time? In the example that I posted above I did so only so that they had some useful data (since I didn’t have any actual NodePaths from which to get them)–surely your quaternions would be getting their data from the relevant NodePaths?
I’m imagining something like this:
# Presume that you have a NodePath named "self.targetNP";
# this is the NodePath that I mentioned in the first point in the list
# in my first post above, and is attached below the camera.
# Presume too that the camera is rotating around another NodePath named "self.anchor".
# The hierarchy is then something like this:
#   (other nodes, potentially)
#     self.anchor
#       self.camera
#         self.targetNP
# When the directional key is pressed, set targetNP's HPR accordingly--I think that
# HPR is likely the simplest method here, something like this:
if key == "right":
    self.targetNP.setHpr(-90, 0, 0)
elif key == "left":
    self.targetNP.setHpr(90, 0, 0)
elif key == "up":
    self.targetNP.setHpr(0, -90, 0)
elif key == "down":
    self.targetNP.setHpr(0, 90, 0)
# (The numbers up there are untested, and so may be incorrect--especially in sign.)
# Get the current orientation
cameraQuat = self.camera.getQuat()
# Get the target orientation relative to the anchor
targetQuat = self.targetNP.getQuat(self.anchor)
duration = 5.0 # Or whatever duration you want.
# I'm storing the interval in case I want to act on it
# (such as by stopping it) at some point.
self.interval = LerpQuatInterval(self.camera, duration, targetQuat, None, cameraQuat)
Ok, the camera is looking at (lookAt) the cameraCenter node, whose HPR is (0, 90, 0), meaning that the camera is on top, looking down, with an HPR of (180, 0, 0). Since the camera itself is a new coordinate system (a new XYZ), it means that if I attach the orientation node to it, it will have an HPR of (0, 0, 0).
Since I want the cameraCenter node to go from HPR (0, 90, 0) to (90, 0, 90) (by feeding these to the Quats), I have to set the orientation node’s HPR to (90, 0, 0) when I press “left”, and pass it on to the cameraCenter set of axes as a Vec3 so that I can feed it to the destQuat. Is that it?
# Define camera relative to current position
self.camCenter = self.center.attachNewNode("cameraCenter")
self.camCenter.setHpr(0, 90, 0)
base.camera.setPos(0, 13, 0)
self.camOrient = base.camera.attachNewNode("camOrient")
def moveCamera(self, task):
    self.initQuat = Quat()
    self.destQuat = Quat()
    # Define the movement of the camera between different set angles
    if self.keyMap["left"] != 0:
        self.orientVec = self.camCenter.getRelativeVector(base.camera, self.camOrient.getHpr())
        self.camCenter.quatInterval(4, self.destQuat, None, self.initQuat, blendType="easeInOut").start()
I can’t make sense of the API documentation; I don’t know what is relative to what, nor what arguments we should pass to get the desired effect…
(I may be mistaken in what I write below; I stand for correction if this is the case. ^^; )
I think that these points are rather important, especially for what we’re doing here, so I’d like to address this first. I’m going to try to be fairly detailed in this, so my apologies for the length of this post! ^^;
As I understand it:
A node’s transformation–position, rotation, etc.–is understood to be relative to its immediate parent.
np1 = NodePath(PandaNode("node 1"))
np2 = NodePath(PandaNode("node 2"))
np2.reparentTo(np1)
# I could also have done this in one step, like so:
# np2 = np1.attachNewNode(PandaNode("node 2"))
We now have two nodes, represented by the NodePaths np1 and np2, such that np2 is a child of np1–that is, it is below np1 in the scene graph.
If we now move np1 (such as by calling “np1.setPos(x, y, z)”), np2 moves with np1. However, if we were to call np2.getPos() after moving np1, np2’s position would appear to be unchanged–because np2’s position is measured relative to np1, and that relationship hasn’t changed.
Similarly, when we set np2’s position, we are setting it relative to np1; a position of (0, 0, 0) places it at the same position as np1, while a position of (1, -5, 7) places it at a position 1 unit along np1’s x-axis, -5 units along np1’s y-axis and 7 units along np1’s z-axis.
This is an important point: in the paragraph above I referred to np1’s x-axis, etc., not the x-axis, etc. (i.e. the world-space x-axis, etc.). If, for example, np1 is rotated, its axes will not necessarily match those of the world.
However, we sometimes want to know or set a NodePath’s position, rotation, etc. relative to some node other than its immediate parent. In this case we specify the NodePath relative to which we want to operate by passing it as the first parameter to the various getPos, setPos, getQuat, etc. methods. In this case the methods should change the node’s transformation relative to the transformation of the specified NodePath, rather than to the node’s immediate parent.
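As a plain-Python toy (translation only, ignoring rotation so the arithmetic stays obvious; not real Panda3D code), the “relative to the parent” idea looks like this:

```python
class Node:
    """A toy scene-graph node: its position is stored relative to its parent."""
    def __init__(self, parent=None, local_pos=(0, 0, 0)):
        self.parent = parent
        self.local_pos = local_pos   # like setPos(): relative to the parent

    def world_pos(self):
        """Walk up the hierarchy, accumulating offsets--roughly what asking
        for the position relative to the scene root would do."""
        x, y, z = self.local_pos
        if self.parent is not None:
            px, py, pz = self.parent.world_pos()
            return (px + x, py + y, pz + z)
        return (x, y, z)

root = Node()
np1 = Node(parent=root, local_pos=(10, 0, 0))
np2 = Node(parent=np1, local_pos=(1, -5, 7))
print(np2.local_pos)    # (1, -5, 7)  -- unchanged by where np1 sits
print(np2.world_pos())  # (11, -5, 7) -- np1's position plus np2's offset
```

With rotation included the offsets would be rotated into the parent’s frame rather than simply added, but the principle--each node’s transform is expressed in its parent’s coordinate space--is the same.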
If you’re having trouble with LerpQuatInterval, then this is my understanding of the parameters:
LerpQuatInterval(np, duration, destQuat, startHPR, startQuat)
np = The NodePath to be moved–in this case, this would presumably be your camera
duration = The duration of the interval
destQuat = The final orientation–presumably relative to its parent–that “np” should have
startHPR = The HPR at which “np” should start; since we likely have quaternions, we should be able to ignore this parameter by passing in “None”
startQuat = the quaternion orientation at which “np” should start.
Hmm… I’m not entirely sure that I have this part of the scenario entirely clear in my mind: is the camera parented to “cameraCenter”? If so, do I take it that the camera has an HPR of (180, -90, 0), such that it has a world-space HPR of (180, 0, 0)? Do you have a y-up coordinate system, such that (180, 0, 0) corresponds to looking downwards?
For simplicity’s sake, what I’m inclined to suggest is this: the camera is parented to (thus, is a child of) cameraCenter. Its “default” orientation (an HPR of (0, 0, 0)) is looking down the y-axis (…or possibly the negative y-axis, I forget ^^; ), which I believe to be Panda’s default orientation. (Note that, as discussed above, this refers to cameraCenter’s y-axis, not necessarily the world y-axis.) You should be able to do this without the use of the “lookAt” method, I believe, since the camera is parented to cameraCenter, which I presume to be located at the position of the object being looked at.
To check that I’m understanding you correctly, am I correct in taking it that you mean that the orientation node will have an HPR of (0, 0, 0)? If so, then yes, that seems correct.
The first part (setting the orientation node’s HPR) seems correct to me, but not the second, if I understand you correctly: you should be able to just get the destination quaternion directly from the orientation node via the “getQuat” method, and then either feed that to a LerpQuatInterval or directly set the camera’s orientation via the “setQuat” method.
The important thing is to get the destination quaternion relative to the camera’s parent, so that it’s comparable with the camera’s orientation; this should just involve passing in the camera’s parent to getQuat, as described above, I believe.
Hey man, I have to thank you for your help; no need to apologize for long posts, they’re always welcome to me. In fact, I don’t think I’ve ever replied to you with the same in-depth exposition. But I have to say that I figured out what to do, and what was going wrong with my experiment. The thing is, I have this:
And what happens is that whenever I press the key, it counts as a hold, and so camOrient’s pitch keeps incrementing while the LerpInterval is playing, which means that by the time I press the “up” key again, it won’t go to the next natural orientation, which would be from Pitch(90) to Pitch(180). Instead, it could be 360, or 540, or whatever big number I get when I hold the key. Is there a way for a press to increment only 90, even if I’m holding the key?
Hmm… I take it that you want the system to continue to rotate if you hold down the key, as opposed to waiting for you to release the key and press it again before rotating again? If so, then you might want to incorporate a boolean flag that is set to “True” when a rotation begins, then to “False” again when it ends. You might do this by placing your interval inside a Sequence, with a function interval at the finish.
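The flag idea in isolation looks something like this (a plain-Python sketch, independent of Panda3D’s Sequence; the class and method names are just illustrative):

```python
class RotationGate:
    """Accept a new rotation only when the previous one has finished."""
    def __init__(self):
        self.rotating = False

    def try_start(self):
        if self.rotating:
            return False      # a rotation is already playing; ignore the key
        self.rotating = True  # here you would build and start the Sequence
        return True

    def on_finished(self):
        # This is what the function interval at the end of the Sequence does.
        self.rotating = False

gate = RotationGate()
print(gate.try_start())   # True  -- first press starts a rotation
print(gate.try_start())   # False -- the held key is ignored while playing
gate.on_finished()
print(gate.try_start())   # True  -- the next press works again
```

Holding the key then produces one clean 90-degree step per completed interval, rather than an accumulating pitch.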
Additionally, don’t forget to reset “self.camOrient”'s HPR to zero for its next use–otherwise your changes will indeed presumably accumulate over successive rotations: the first rotation would be as expected, the second would be the result of the first plus the intended second, and so on.
Something like this, perhaps:
if self.keyMap["up"] != 0:
    if not self.rotating:
        self.rotating = True
        self.destQuat = self.camOrient.getQuat(self.center)
        # Note that I've removed ".start()" from this next line.
        # I presume from your usage of it that "quatInterval" returns
        # an interval.
        interval = self.camCenter.quatInterval(4, self.destQuat, None, self.initQuat, blendType = "easeInOut")
        print(self.camOrient.getHpr(), self.camCenter.getHpr())
        # Note this!
        self.camOrient.setHpr(0, 0, 0)
        # Now, create and start a Sequence: this should
        # run your rotation, then, when that's done,
        # set our flag to False.
        # ("Func" comes from direct.interval.IntervalGlobal, as does Sequence.)
        self.rotationSequence = Sequence(interval, Func(setattr, self, "rotating", False))
        self.rotationSequence.start()
THANKS! I did it! After all this time… xD I’ll simplify this now; it’s quite an extensive piece of code, but it gets the job done. I added an extra check so that it only accepts input when the HPR is at 90, 180, 270, 360, 540 and so on!