Hey everyone, so I’ve really been trying to figure this out on my own, but even after reading the manual and multiple forum posts, I still can’t fully grasp all of the tools Panda3D offers to really solve this issue.
So basically, the title explains exactly what I am trying to do. Here are the complications:
1. I have my camera offset from the character (x, y, z).
2. My camera is parented to a dummy node so that the dummy node rotates Pitch and Heading based off of where the mouse dictates the rotation.
3. My mouse is always set to “base.win.movePointer(0, self.mouseCenter, self.mouseCenter)”
Now, I would have thought this would actually be fairly simple: even though my camera is pivoting everywhere, the absolute center of the screen is always (0,0). So why does this thought process not work?
I use extrude in order to get the vector from the near point of the camera to the far point, through the center. I could technically maintain a plane that always passes through the center of the screen and then use .intersectsLine, but I am not even sure why that would be necessary, because I am not sure why the extrude logic is failing me. If you know of another forum post that could explain this logic to me, I would also appreciate that! Thank you in advance!
Your call to “extrude” would, I believe, give a vector that would be appropriate to firing from the centre of the camera to the centre of the screen. However, since the character is offset from the camera, shooting in this direction from the character’s position likely won’t hit the centre of the screen.
If I’m not much mistaken, what you could do is to use “extrude”–or even just a dummy-node attached to the camera, perhaps–to define a point in 3D space that corresponds to “the centre of the screen”. You could then take the difference between this point and the position of the player-character, which should, I think, give you a vector that points from the character’s position to the indicated “centre-point”.
There is a complication if you want to hit objects closer to the camera than a singular “far-point”: due to the offset, the vector to such an object won’t, I think, be the same as the vector to the “far-point”. In that case you might want to consider casting a ray from the camera, seeing whether it hits anything, and if so using the hit-position as your “far-point” instead.
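To illustrate, here’s the geometry in plain Python, with made-up positions (no Panda3D required): the direction from the camera to the far-point and the direction from the character to that same far-point are not the same vector, because of the offset.

```python
# Plain vector math; all positions here are hypothetical examples.
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

camera_pos = (0.0, -10.0, 5.0)     # camera, offset behind and above the character
character_pos = (0.0, 0.0, 0.0)    # the character at the origin
far_point = (0.0, 990.0, 5.0)      # "centre of the screen", far down the camera's y-axis

# The direction that extruding through the screen-centre effectively gives:
camera_aim = normalize(sub(far_point, camera_pos))

# The direction that the character should actually fire in:
character_aim = normalize(sub(far_point, character_pos))

print(camera_aim)     # points straight down the camera's y-axis
print(character_aim)  # tilted slightly, because the character is offset
```

The two directions converge as the far-point gets more distant, but for nearby targets the difference matters, hence the suggestion of using the actual hit-position as the far-point.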
I think my personal complication, then, comes from detecting where the “center of the screen” actually is, in order to attach a node there and update it. So, if extrude gives me a vector from a near point to a far point through the center, how do I determine the actual 3D center, given that the way I originally got the vector was with a 2D point on my screen? Even if I use a ray, I would still have to know how to direct it at the center of the screen.
Also, interestingly enough, if I use base.camLens.extrude(LPoint2f(0, 0), nearPoint, farPoint) and print(farPoint), the far point never actually updates: it remains static no matter how I rotate my camera, which I think could be a complication. If the vector/point never changes, it won’t be useful.
The centre of the screen should always, I believe, be directly in front of the centre of the camera–that is, it’s some arbitrary distance down the camera’s y-axis.
So, you could attach a node something like this:
self.screenCentreDistance = 100 # This is an arbitrary value
# NodePath and PandaNode come from panda3d.core
self.screenCentreNP = NodePath(PandaNode("screen centre"))
# Or whatever camera you're using, if not the default one
self.screenCentreNP.reparentTo(base.camera)
# Since the node is parented below the camera,
# its location should be relative to the camera
self.screenCentreNP.setPos(0, self.screenCentreDistance, 0)
As it happens, there’s a handy helper-method for that! Specifically, the CollisionRay class has the method “setFromLens”, which allows you to pass in a lens (such as the camera’s lens) and have it align itself as appropriate.
Ahh ok, yes that certainly makes sense. However, I think I am screwing something up with my initial code that complicates this. So since I am actually updating the dummyNode that my camera is parented to, my camera position actually never updates. So in my initializations:
I would have thought, however, that base.camera.getPos() would vary with my dummyNode’s position, but it does not. It stays static at self.cameraOffset, even though the camera moves as expected when I move my character.
Now, I can easily fix this by changing
self.screenCentreNP.reparentTo(base.camera)
to
self.screenCentreNP.reparentTo(self.dummyNode)
self.screenCentreNP.setPos(self.dummyNode.getPos() + self.cameraOffset)
But I’m just noting all of this because I think I am doing something clunky in my code, and probably should fix it before moving on.
If I’m not much mistaken, it both does and it does not. In particular, I think that if you were to call “base.camera.getPos(render)”–i.e. “get the position of the camera relative to ‘render’”–you would find that the position would change.
Specifically, what’s happening is that “getPos()”, without any arguments, gives the position of the NodePath in question relative to its parent. Since you’re not moving the camera relative to its parent, that call will keep returning the same value. However, since the parent itself is being moved, the position of the camera relative to render should indeed be changing.
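To sketch the distinction with plain arithmetic (ignoring rotation, and with hypothetical numbers):

```python
# Hypothetical positions illustrating getPos() vs. getPos(render)
dummy_node_pos = (5.0, 2.0, 0.0)   # the dummy node's position in the world ("render")
camera_offset = (0.0, -10.0, 3.0)  # the camera's position relative to the dummy node

# getPos() with no argument: position relative to the parent; never changes
camera_local = camera_offset

# getPos(render): the parent's world position plus the local offset
# (a real scene-graph would also apply the parent's rotation)
camera_world = tuple(p + o for p, o in zip(dummy_node_pos, camera_offset))

print(camera_local)  # static, as observed
print(camera_world)  # changes whenever the dummy node moves
```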
Honestly, I don’t see anything in the first snippet that you just posted that seems problematic–have you tried either of the methods that I suggested? I’d expect either to work with your current code.
So here is my understanding: a collision solid is useless until it’s attached to a node. So after making my ray, I need to attach it to a node, specifically a CollisionNode. The reason I add it to a collision handler is that I want to be able to detect the nearest collision that the ray finds, in order to accurately shoot the arrow at the target. Since, like you said, if I just point at an arbitrary value out in space, the arrow will most likely fly past the target.
My issue: when I attach the node to render, I do not believe the “setFromLens” call actually applies, since parenting the node to render places it at (0, 0, 0) rather than where setFromLens positioned it originally.
Ahh, thank you for the information! I certainly appreciate it; although yes, I do think a collision ray is the best approach.
I think that you have to call “setFromLens” on each update, so that the ray is adjusted for the new position and orientation of the camera.
I’m also not sure of what parent the ray’s NodePath should have; it makes sense to have it attached to render, but I see in my own code that I seem to have attached it to the camera. The latter might therefore be something that’s worth trying, if the first point above doesn’t help!
All that said… Thinking about it, given that your ray never moves relative to the camera, is always firing down the direct centre, perhaps you can do without “setFromLens”. If you attach the ray directly to the camera and point it down the y-axis, it should, I think, remain pointing in the right direction thereafter.
Yes! That worked perfectly! So now it’s shooting directly at the center of whatever I target, without even needing to use “setFromLens”.
Now, the only catch is that this logic requires everything that I hover over to have a collider, since I get the hit position with:
self.ArrowQueue.sortEntries() # Sort so that entry 0 is the nearest hit
ArrowHit = self.ArrowQueue.getEntry(0)
self.hitPos = ArrowHit.getSurfacePoint(render)
But one issue I can see: for instance, my boxes/objects will generally have sphere colliders. So if I hover over a corner that doesn’t have a collider, and the last thing I hovered over was a wall, it will actually shoot at the wall, even though my cursor is hovering over the box, since the ray hasn’t detected the box’s collider yet.
Basically, all of my colliders will have to perfectly mimic their objects, or else I will need to add some type of catch logic: if I move my crosshairs but the detected collider position doesn’t change, then maybe I default to shooting straight at Y = 100 or some such.
I think that it’s likely pretty normal, even with targeting systems like this, for the collision geometry to not perfectly match the model. The only note that I’ll make is to suggest that you have the targeting geometry for the enemies be larger than strictly accurate, to give the player some leeway.
A simple way to provide a targeting-position in the case of no apparent geometry being under the cursor might be to attach an inverted sphere collider to the camera, with a bit-mask that has it only interact with the targeting ray. This should result in there always being a collision result to process, even if it’s just the fallback “backdrop sphere”.