I’m attempting to implement a camera that follows a target (the player), and specifically one that is not fixed absolutely to the target, but rather that follows the target’s position smoothly over time.
To some degree I have this working. However, I keep encountering terrible jitter when the camera is near the target, and perhaps especially at relatively low frame-rates (~45fps). Thus far I have not managed to entirely get rid of it. (At least, not without incurring some other issue.)
I’ve tried a variety of things: rearranging the order of operations (which did help a bit, I think), clamping the camera’s movement so that it doesn’t overshoot, re-working it to use a velocity vector, and re-working it to draw from a set of samples taken over multiple frames rather than from a single value. Thus far to little avail, I fear.
Since the actual code in question is a little complicated (involving multiple files and classes), let me describe my current approach to this in partial pseudocode:
(I’m doing so in part off the top of my head, and so may be mistaken in some of the details. However, I believe the below to be at least broadly accurate.)
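In rough outline, the camera-update logic does something like this each frame (the names below are simplified stand-ins for the real ones):

offset = targetPos - cameraPos
step = offset * cameraSpeed * dt
if step.length() > offset.length():
    # Clamp the step so that the camera doesn't shoot past the target
    step = offset
camera.setPos(cameraPos + step)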
That is to prevent the camera from shooting past the target, presumably. That is one way to do it, but to be sure that that’s not the source of your jitter, you could explicitly setPos to the target in the if case and return.
Have you tried printing out the various values in your task and seeing if you see anything odd when the jitter occurs?
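For example, something like this in the camera task (a rough sketch; I’m assuming Panda3D-style names here, so adjust to match your actual setup):

def cameraTask(self, task):
    dt = globalClock.getDt()
    diff = self.target.getPos() - self.camera.getPos()
    # Watch for sudden sign-flips or spikes in these when the jitter occurs
    print("t=%.3f  dt=%.4f  dist=%.4f" % (task.time, dt, diff.length()))
    # ... the usual camera update goes here ...
    return task.cont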
I don’t really know whether my input will help, but I previously had a similar issue, with my model shaking around when it was supposed to be stationary because the player’s rotation kept jumping back and forth between two values. My solution was to only update the rotation (or, in your case, the camera position) if there is some sort of user input.
Hmm… I would expect that to cause stuttering as the camera jumps from one integer/rounded value to the next.
The varying value (by which I’m guessing that you mean the target position; please correct me if I’m misunderstanding) moves smoothly using floating-point positions. The camera is intended to follow in the same way.
You’re pretty close to treating your camera as a sprung particle, which is not a bad way to go. (This dates back to 1989, but is still pretty common in drawing tools for smoothing out input, and I still use it fairly often: http://www.graficaobscura.com/dyna/.)
If the ‘spring’ is too strong, you get jitter. If friction’s too low, you get an orbit. As you’ve implemented it right now, you have no friction, so you’re going to orbit. (I left ‘mass’ out because it just drops out of the calculations, and I got used to writing this on much slower systems.)
diff = targetPos - cameraPos     # offset from camera to target
cameraVel += diff * spring * dt  # the spring accelerates the camera toward the target
cameraVel *= friction            # friction bleeds off energy; if dt varies a lot, you'll want friction ** dt, and set your friction constant accordingly
cameraPos += cameraVel * dt
from there, you now have ‘spring’ (usually in 0.01 … 0.1) and ‘friction’ (usually 0.1 … 0.9) to play with
other things to tweak (a sketch combining a few of these follows the list):
round the diff to zero if it’s below some threshold
round velocity to zero if it’s below some threshold
cap velocity or acceleration to maximum thresholds
cap/round in a nonuniform fashion
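here’s a quick sketch of the spring version with a few of those tweaks applied (written for a single axis to keep it short, so run it per component or swap in your vector type, and treat the constants as starting points rather than tuned values):

SPRING = 0.05      # typical range: 0.01 .. 0.1
FRICTION = 0.5     # typical range: 0.1 .. 0.9
DEAD_ZONE = 0.001  # threshold below which diff/velocity snap to zero
MAX_VEL = 50.0     # velocity cap, so a distant target can't fling the camera

def updateCamera(cameraPos, cameraVel, targetPos, dt):
    diff = targetPos - cameraPos
    if abs(diff) < DEAD_ZONE:        # round the diff to zero below the threshold
        diff = 0.0
    cameraVel += diff * SPRING * dt  # the spring pulls the camera toward the target
    cameraVel *= FRICTION            # friction bleeds off energy
    if abs(cameraVel) < DEAD_ZONE:   # round tiny velocities to zero as well
        cameraVel = 0.0
    cameraVel = max(-MAX_VEL, min(MAX_VEL, cameraVel))  # cap the velocity
    cameraPos += cameraVel * dt
    return cameraPos, cameraVel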
(disclaimer: this is a fairly different kind of easing. this eases in and eases out… my next reply will have tweaks for your lerping solution)
now, without changing the ‘shape’ of what you’re doing… which now that I look at it is equivalent to
MAX_SPEED = 1
def lerp(a, b, t):
    return a + (b - a) * t

cameraPos = lerp(targetPos, cameraPos, min(MAX_SPEED, cameraSpeed * dt))
from which I’d try tweaking MAX_SPEED, which may give a fix; offhand, I’d guess lowering it to 0.05 or so
but since you mention low frame-rates… pick an expected dt and normalize for that… though my Friday-afternoon brain is forgetting its logs and exponents, so here’s a “solid, and fast enough” approach, followed by an “even better, if I’m not making a dumb mistake” one
for _ in range(int(dt // EXPECTED_DT)):
    cameraPos = lerp(targetPos, cameraPos, min(MAX_SPEED, cameraSpeed * EXPECTED_DT))
# (note: any leftover dt % EXPECTED_DT is simply dropped here)
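and for the “even better” one: each pass of that loop keeps min(MAX_SPEED, cameraSpeed * EXPECTED_DT) of the camera-to-target offset, so a frame of length dt should keep that fraction raised to the power dt / EXPECTED_DT. A sketch (reusing the lerp and MAX_SPEED above, with EXPECTED_DT standing for whatever frame-time cameraSpeed was tuned for):

EXPECTED_DT = 1.0 / 60.0

def followCamera(cameraPos, targetPos, dt, cameraSpeed):
    # fraction of the offset that survives one EXPECTED_DT-sized step:
    keep = min(MAX_SPEED, cameraSpeed * EXPECTED_DT)
    # dt / EXPECTED_DT steps keep keep ** (dt / EXPECTED_DT) of the offset;
    # the fractional exponent also handles the leftover remainder exactly
    return lerp(targetPos, cameraPos, keep ** (dt / EXPECTED_DT))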
On another forum, it was suggested that I divide my time-step. At the time I did so, and found that it didn’t help.
But in that attempt I only applied this division to the camera’s update, not to the target’s. It then occurred to me (last night, I think) that perhaps applying it to the target as well, thus changing the relationship between camera and target over the course of the divided time-step, might be important.
I tried it just a short while ago and… it seems to have worked!
In short, what I’ve done is as follows: In the semi-pseudocode that I showed above, I’ve changed “updatePlayer” to look something like this:
newDt = dt
while newDt > someVerySmallValue:
    # Update both target and camera by a small, fixed increment
    position += velocity * someVerySmallValue
    # Keep doing so until the total delta-time for the frame is all but
    # used up
    newDt -= someVerySmallValue
# Perform the update one more time, using whatever delta-time remains
position += velocity * newDt
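In more concrete (if simplified) form, with both the target and the camera advanced inside the loop, it looks something like this. (The names are stand-ins for my actual classes, I’ve reduced things to a single axis, and a simple proportional chase stands in for my real camera update:)

FIXED_STEP = 0.005  # "someVerySmallValue"; I'm still tuning this

def updatePlayerAndCamera(targetPos, targetVel, cameraPos, cameraSpeed, dt):
    remaining = dt
    while remaining > FIXED_STEP:
        # Advance the target by the small, fixed increment...
        targetPos += targetVel * FIXED_STEP
        # ...and let the camera chase the *new* target position, so that
        # their relationship changes over the course of the divided time-step
        cameraPos += (targetPos - cameraPos) * min(1.0, cameraSpeed * FIXED_STEP)
        remaining -= FIXED_STEP
    # Perform one final update with whatever delta-time remains
    targetPos += targetVel * remaining
    cameraPos += (targetPos - cameraPos) * min(1.0, cameraSpeed * remaining)
    return targetPos, cameraPos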
My thanks to all who participated in this thread; it has been appreciated!