Collaborative Sci-Fantasy Tech Demo for an Official Panda3D Showcase

I get the same error; I wrapped that part of the program in a quick try/except – not really a solution, but at least the program doesn’t crash.

def __build_section(self, task, section, bot):
    try:
        part = section.generate(self.vertex_data)
        part.model.reparent_to(base.render)
        solidify_task = lambda task: part.solidify(task, 1.5, self.__add_primitive)
        base.task_mgr.add(solidify_task, "solidify")
        deactivation_task = lambda task: bot.set_part(None)
        base.task_mgr.add(deactivation_task, "deactivate_beam", delay=1.5)
        bot.set_part(part)
    except Exception:
        # swallow the error for now; catching Exception rather than using a
        # bare "except:" avoids also catching KeyboardInterrupt and SystemExit
        pass

I like this idea, and I think I agree it would be more straightforward to just do it in a Task update so we don’t have to worry about blending Intervals.


Ah yes, that does seem to remove the problem! Thank you! :slight_smile:

Oh wow yes, I didn’t even think of that. That does sound rather awkward! (I really just thought: “Tasks good; intervals bad.” :P)


Hmm, I don’t recall having uploaded a hangar_1_cont5.zip file here – unless you mean the one I added to the bug report on GitHub?

In any case, here is another .zip file, containing both the hangar_1_cont5.fbx as well as the corresponding starship.bam file.

ship_model.zip (421.4 KB)

You might also want to clear your model cache and/or load starship.bam like this:

        self.model = base.loader.load_model("starship.bam", noCache=True)

Probably related to using the older .fbx file, which didn’t have as many parts defined (through vertex colors) as the latest one. This would also lead to Section instances being created in code that contain no GeomPrimitives, hence the crashes you are experiencing. In an attempt to prevent such errors, I have added a line of code that checks whether the created Section objects are valid:

        # prune any invalid sections
        self.sections = [s for s in self.sections if s]
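
For what it’s worth, that pruning line relies on a Section evaluating as falsy when it’s invalid – e.g. via a __bool__ method. A minimal sketch (the Section shown here is hypothetical; the actual class in the project may determine validity differently):

```python
class Section:
    """Hypothetical stand-in: a section is "valid" if it holds any primitives."""

    def __init__(self, primitives=None):
        self.primitives = primitives or []

    def __bool__(self):
        # empty sections (no GeomPrimitives) evaluate as False
        return bool(self.primitives)


sections = [Section([object()]), Section(), Section([object(), object()])]
# prune any invalid sections
sections = [s for s in sections if s]
print(len(sections))  # → 2; the empty section is gone
```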

Having thought about it myself today, there might actually be a relatively straightforward way to do this. What I tried could be considered a kind of “avoidance” technique (there might be a better term for it – I’m no physics specialist :stuck_out_tongue: ). The basic idea is to define a buffer zone around the moving bot (simply using a radius value); when the position of an obstacle gets within this buffer area, the vector pointing from that position to the bot’s origin is added to the bot’s current speed vector, effectively pushing the bot away from the obstacle.

Here’s what I played around with:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
from random import random
import array


base = ShowBase()


Walls = []


class Wall:

    def __init__(self, normal, point):

        self.type = "wall"
        self.plane = Plane(normal, point)
        self.push_vec = normal
        Walls.append(self)
        self.model = base.loader.load_model("smiley")
        self.model.reparent_to(base.render)
        sx = -1. * (1000000. if normal.x == 0 else .2)
        sy = 1000000. if normal.y == 0 else .2
        self.model.set_scale(sx, sy, 1.)
        self.model.set_pos(point)


class Entity:

    _inst = []

    def __init__(self, pos):

        self.type = "entity"
        self.model = base.loader.load_model("smiley")
        self.model.reparent_to(base.render)
        self.radius = 5.
        self.model_inv = self.model.copy_to(self.model)
        self.model_inv.set_scale(-self.radius, self.radius, self.radius)
        self.model.set_pos(pos)
        self.speed = 3.
        quat = Quat()
        quat.set_hpr((random() * 360., 0., 0.))
        self.speed_vec = quat.xform(Vec3.forward()) * self.speed
        self._inst.append(self)
        base.task_mgr.add(self.move, "move_entity")

    def get_distance(self, other):

        if other.type == "entity":
            return self.model.get_distance(other.model)
        elif other.type == "wall":
            return other.plane.dist_to_plane(self.model.get_pos())

    def interpolate_speed(self, other, push):

        if other.type == "entity":
            other_vec = other.speed_vec  # this step might not be needed
            self.speed_vec = (self.speed_vec * .9 + other_vec * .1).normalized() * self.speed
            dist_vec = self.model.get_pos() - other.model.get_pos()
            push_vec = dist_vec.normalized() * push
            self.speed_vec += push_vec
        elif other.type == "wall":
            self.speed_vec += other.push_vec * push

        self.speed_vec.z = 0

    def move(self, task):

        others = self._inst[:]
        others.remove(self)

        for obstacle in others + Walls:
            distance = self.get_distance(obstacle)
            if distance < self.radius:
                self.interpolate_speed(obstacle, self.radius - distance)

        pos = self.model.get_pos()
        pos += self.speed_vec * globalClock.get_dt()
        pos.z = 0.
        self.model.set_pos(pos)

        return task.cont


class Simulation:

    def __init__(self):

        base.disableMouse()
        base.camera.set_z(100.)
        base.camera.set_p(-90.)

        # set up a light source
        p_light = PointLight("point_light")
        p_light.set_color((1., 1., 1., 1.))
        self.light = base.camera.attach_new_node(p_light)
        self.light.set_pos(5., -100., 7.)
        base.render.set_light(self.light)

        wall_dist = 20.

        Wall(Vec3.right(), Point3(-wall_dist, 0., 0.))
        Wall(Vec3.left(), Point3(wall_dist, 0., 0.))
        Wall(Vec3.forward(), Point3(0., -wall_dist, 0.))
        Wall(Vec3.back(), Point3(0., wall_dist, 0.))

        for i in range(6):
            pos = Point3(
                (random() * 2. - 1.) * wall_dist,
                (random() * 2. - 1.) * wall_dist,
                (random() * 2. - 1.) * wall_dist
            )
            entity = Entity(pos)


Simulation()
base.run()

The black circles represent the “buffer zone” around the moving objects, while the black lines represent the boundaries of the area the objects can move within.

To make the bot move (and rotate) more smoothly towards its target position, something I’d call “vector interpolation” could be used. This could work as follows:
cut off a small part of the bot’s current speed vector and add an equally small part of the “target vector” (normalized vector pointing from the bot to the target position) to it, keeping its length the same:

bot.speed_vec = (bot.speed_vec * .9 + target_vec * .1).normalized() * bot.speed
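
To illustrate why that final normalization is needed: the blended sum of two vectors generally has a different length than either input, so the result is rescaled back to the original speed. A plain-Python sketch (the interpolate_speed_vec helper is just for illustration):

```python
import math

def interpolate_speed_vec(speed_vec, target_vec, speed, frac=.1):
    # blend the two directions; the raw sum generally has a different
    # length, so normalize it and rescale to keep the speed constant
    blended = tuple(s * (1. - frac) + t * frac
                    for s, t in zip(speed_vec, target_vec))
    length = math.hypot(*blended)
    return tuple(c / length * speed for c in blended)

# a bot moving along +Y at speed 3, gently steered toward +X
new_vec = interpolate_speed_vec((0., 3., 0.), (1., 0., 0.), 3.)
print(math.hypot(*new_vec))  # the speed (vector length) stays 3
```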

Will try it out later.

Indeed, that’s why I mentioned aerial drones in my previous post. In fact, having two types of bots doesn’t just add more variety to the scene; the aerial bots can also give the “ground bot” that just added the final bottom plate to its section more time to move into position beneath the next section to be constructed (which is generally 4 or more sections away from its current location), by taking over the construction of the top plates.
This “taking over” would also seem to add some cooperation to the proceedings, making it all look a bit more “alive”, I think.

Of course! These kinds of things can always be fine-tuned further. :slight_smile:

That also sounds like a good idea! :slight_smile:

It’s looking great already! :+1:


This looks like an interesting approach! :slight_smile:

Indeed, I’ve been glad to see the progress that this section has made of late. :slight_smile:

I do want to see what change your “vector interpolation” produces–there’s a bit of a sharpness to changes in direction currently that I’m hoping that said interpolation will improve.

One note that I might add is that it might be worth incorporating a delta-time value here–otherwise it seems likely that the behaviour will be strongly frame-rate dependent. A simple incorporation of delta-time will likely still have some frame-rate dependence, but I would imagine less so.
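
As an aside, for per-frame blending specifically, one common way to get (near-)frame-rate independence is to derive the blend fraction from an exponential decay of delta-time rather than using a fixed per-frame fraction. A small sketch (the function names are made up for illustration):

```python
import math

def blend_factor(rate, dt):
    # exponential-decay blend fraction: applying this every frame
    # converges identically regardless of the frame-rate
    return 1. - math.exp(-rate * dt)

def simulate(fps, seconds=1., rate=5.):
    # repeatedly blend a value toward a target at the given frame-rate
    value, target = 0., 1.
    for _ in range(int(fps * seconds)):
        frac = blend_factor(rate, 1. / fps)
        value = value * (1. - frac) + target * frac
    return value

print(simulate(30.), simulate(120.))  # nearly identical despite the 4x frame-rate
```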

I spent some time thinking about the matter myself today, specifically for aerial bots, and did come up with an idea–unimplemented and untested–that might work. However, if you have it in hand, then I’m content to not duplicate work too much! (For one thing, I have other elements that I want to work on–I have some ideas for FPS weapon-models that I’m enthused to make after PAnDA!)

Thank you very much! :slight_smile:

Oh, and I don’t think that I mentioned it last time: I like the overall design for the little drones! They’re cute! :slight_smile:


Yeah, I experimented a bit with this today and it seemed to kinda work – until I disabled V-sync. Back to the drawing board! :sweat_smile:
Here’s what I have currently:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
from random import random
import array


base = ShowBase()


Obstacles = []


class Obstacle:

    _inst = []

    def __init__(self, pos):

        self.model = base.loader.load_model("smiley")
        self.model.reparent_to(base.render)
        self.model.set_pos(pos)
        Obstacles.append(self)
        self.accel = .1
        self.speed = 0.
        self.speed_max = 10.
        self.speed_unit_vec = Vec3.left()
        self.speed_vec = Vec3()
        base.task_mgr.add(self.move, "move_obstacle")

    def move(self, task):

        if self.accel > 0.:
            if self.speed < self.speed_max:
                self.speed = min(self.speed_max, self.speed + self.accel)
            else:
                self.accel *= -1.
        else:
            if self.speed > 0.:
                self.speed = max(0., self.speed + self.accel)
            else:
                self.accel *= -1.

        if self.speed == 0.:
            self.accel *= -1.
            self.speed_unit_vec *= -1.

        self.speed_vec = self.speed_unit_vec * self.speed
        pos = self.model.get_pos()
        pos += self.speed_vec * globalClock.get_dt()
        pos.z = 0.
        self.model.set_pos(pos)

        return task.cont


class Bot:

    def __init__(self, pos, target_point, target_model):

        self.model = base.loader.load_model("smiley")
        self.model.reparent_to(base.render)
        self.radius = 5.
        self.model_inv = self.model.copy_to(self.model)
        self.model_inv.set_scale(-self.radius, self.radius, self.radius)
        self.model.set_pos(pos)
        self.turn_speed = 10.
        self.speed = 0.
        self.max_speed = 5.
        self.target_point = target_point
        self.target_point_index = 0
        self.target_points = [target_point, pos]
        self.target_model = target_model
        target_vec = self.target_point - self.model.get_pos()
        self.start_dist = target_vec.length()
        self.speed_start_vec = Vec3.forward()
        self.speed_vec = Vec3.forward()
        base.task_mgr.add(self.move, "move_bot")

    def get_distance(self, obstacle):

        return self.model.get_distance(obstacle.model)

    def interpolate_speed(self, obstacle, push):

        dist_vec = self.model.get_pos() - obstacle.model.get_pos()
        push_vec = dist_vec.normalized() * push
        self.speed_vec += push_vec
        speed = self.speed_vec.length()
        self.speed_vec.normalize()
        self.speed_vec *= speed
        self.speed_vec.z = 0

    def move(self, task):

        target_vec = self.target_point - self.model.get_pos()
        tmp_vec = Vec3(target_vec)
        dist = min(self.start_dist, target_vec.length())
        target_vec.normalize()
        target_vec *= self.start_dist - dist
        self.speed = min(self.max_speed, dist * 10.)

        if self.speed_vec.normalized().dot(tmp_vec.normalized()) > 0.:
            self.speed_vec = self.speed_vec * .995 + target_vec * .005
            self.speed_vec.normalize()
            self.speed_vec *= self.speed
        else:
            self.speed_vec = (self.speed_vec * .99 + tmp_vec * .01).normalized() * self.speed
#            print("Course corrected!")

        for obstacle in Obstacles:
            distance = self.get_distance(obstacle)
            if distance < self.radius:
                self.interpolate_speed(obstacle, self.radius - distance)

        pos = self.model.get_pos()
        pos += self.speed_vec * globalClock.get_dt()
        pos.z = 0.
        self.model.set_pos(pos)

        if (self.target_point - pos).length() < .2:
            self.speed_start_vec *= -1.
            self.speed_vec = Vec3(self.speed_start_vec)
            self.speed = 0.
            self.target_point_index = 1 - self.target_point_index
            self.target_point = self.target_points[self.target_point_index]
            target_vec = self.target_point - self.model.get_pos()
            self.start_dist = target_vec.length()
            self.target_model.set_pos(self.target_point)
#            print("Switched target; self.start_dist:", self.start_dist)

        return task.cont


class Simulation:

    def __init__(self):

        base.disableMouse()
        base.camera.set_z(100.)
        base.camera.set_p(-90.)

        # set up a light source
        p_light = PointLight("point_light")
        p_light.set_color((1., 1., 1., 1.))
        self.light = base.camera.attach_new_node(p_light)
        self.light.set_pos(5., -100., 7.)
        base.render.set_light(self.light)

        Obstacle(Point3(0., -10., 0.))
        Obstacle(Point3(0., 0., 0.))
        Obstacle(Point3(0., 10., 0.))

        target_point = Point3(0., 20., 0.)
        target_model = base.loader.load_model("smiley")
        target_model.reparent_to(base.render)
        target_model.set_pos(target_point)
        target_model.set_color(1., 0., 0., 1.)
        Bot(Point3(-20., -15., 0.), target_point, target_model)


Simulation()
base.run()

As the main object (bot) travels from its starting position to the target location, it does seem to describe a rather smooth curve (thanks to the aforementioned “vector interpolation”). When it gets interrupted by the moving obstacles (these would be the other bots, busy generating plates), it attempts to move out of the way, sometimes leading it very far away from its intended path.

If you let it run long enough, you’ll probably see some very weird behaviour, but that’s to be expected from such amateurish physics code. :grin: This is really not my cup of tea at all. :stuck_out_tongue:
Although I’m surprised it even works this well, this is probably something better handled by a dedicated physics engine. So I’m going to hold off on integrating this into the main code until Simulan comes up with that perfect Bullet-based solution :wink: .

That’s OK, there’s still plenty of other things that we can work on first – and I’m glad that this demo project is proving so inspiring! :slight_smile:

By the way, is there already a GitHub repository for the demo? We should probably discuss who will set it up, how the rest of us will contribute, and so on.

Thank you! :slight_smile: The model is still very basic; I’ll see how far I can take it.


Ah, that’s a pity! Well, these things can be tricky, I do think!

Honestly, I’m not convinced that this requires a physics engine–just a bit of maths, much as you’re currently doing. This is essentially a version of steering behaviours, I think.

I have some changes that I think might improve the bot’s behaviour, if you’re interested?

[edit] Actually, they’re really minor–one of the bigger changes turned out to be no change at all! So, in short, I found that it behaved better with a radius of 10, and a factor of 0.01 multiplied into the calculation of “push_vec” in “interpolate_speed”. These things just smooth it out a little, I find.

[edit 2]

How long does it take? I’m interested to see what’s happening!

Although I agree that it might simply take a bit of balancing to get better results, the specific values you propose seem to enable the main bot to more or less ignore and run through the moving obstacles. The purpose of the push_vec is precisely to ensure that the bots do not crash into each other :wink: (akin to how the CollisionHandlerPusher deals with collisions, I imagine).

It might depend on framerate, but for me, after 3 or 4 times going back and forth between target points, the bot has to compensate for avoiding the obstacles to such a degree that it almost completely moves out of view – and when it eventually does reach its destination, it runs straight past it and needs to turn around once more to finally get close enough. The latter could likely be solved by simply increasing the acceptable final distance, although that should not be made too big either.

EDIT
Another idea that comes to mind is to define different, adjacent “lanes” for each bot to safely travel along. The lanes would be parallel to the starship’s main axis and lie outside of the bots’ “working area”. This approach might have its own challenges though (e.g. how to compute the “working area”) and might look a bit artificial as well. Still, perhaps a viable alternative.

Ah that didn’t happen in my case, at least as far as I saw. Still, I think that a bit more balancing is likely to help a fair bit!

Interesting… I’m not in a good position to check right now, but I thought that I had it running for at least that many iterations, and had no such problem.

I wonder: at what frame-rate does it run on your computer? Could that be having an effect…?

This is actually pretty similar to my idea for the aerial drones, save that in that case I was envisaging the “lanes” being generated on the fly for each drone as it was tasked to move to a new spot.

So indeed, I think that something like that could work!

Or perhaps for the ground-based drones, something along the lines of a nav-mesh or nav-grid, with pathfinding to determine a route?
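
For the nav-grid variant, a breadth-first search over an occupancy grid would be about the simplest possible pathfinder (a real implementation would more likely use A* over the actual working area; the grid below is made up for illustration):

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (True = blocked);
    returns the shortest list of (x, y) cells from start to goal, or None."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # walk the chain of predecessors back to the start
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < len(grid[0]) and 0 <= ny < len(grid)
                    and not grid[ny][nx] and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    return None

# a 5x3 grid with a wall in the middle column (one gap in the bottom row)
grid = [[False, False, True, False, False],
        [False, False, True, False, False],
        [False, False, False, False, False]]
path = find_path(grid, (0, 0), (4, 0))
print(path)  # a 9-cell route that detours around the wall
```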

At 60 fps. When I disable V-sync, the bot runs in pretty much a straight line from its start position to its end position (which isn’t very interesting to look at), while the obstacles just wiggle a bit in place. So everything happens too fast then, although I am making use of delta-time (but apparently in the wrong way).

That could also be a solution should all else fail, indeed.

I’ve solved it “a bit” with something simple. The bots still jump quickly; I just want to post a quick update before I show my position-smoothing ideas.

class BuilderBot:

    def __init__(self, model, beam):

        self.model = model
        self.beam = beam.copy_to(self.model)
        beam.set_pos(0., 0., 2.3)
        beam.set_sy(.1)

    def set_part(self, part):

        if part:
            x, y, z = part.center
            print(part.center)
            # position the bot high for top plates, low for bottom plates
            self.model.set_pos(x, y, 19 if z >= 6 else 2)
            dist = (part.center - self.beam.get_pos(base.render)).length()
            self.beam.set_sy(dist)
            self.beam.look_at(base.render, part.center)
        else:
            self.beam.set_sy(.1)


Ah, I see.

Hmm… I wonder. I see that your “interpolate_speed” method does something odd with “speed_vec”: it gets the length of it, then normalises it, then applies the speed to it again. Did you perhaps intend to clamp to a maximum speed there, as I think that you do elsewhere? Perhaps that’s the source of the problem: the interpolation method pushing the speed too high.

As to delta-times, I think that there are a few places in which it would likely be called for–but I’m not familiar enough with what’s going on in this code to be confident of quite where without sitting and studying it in more detail.

Ah-ha. Thanks :slight_smile: I do not think this will end up requiring a physics-engine-based solution. Though I could do that, with some physics-engine constraints, e.g. a torque-around-hull-center constraint.

My natural inclination is to keep hammering on the position of the bots relative to their plate center position, as I roughly demonstrated recently. We can smooth the positions by adding increments to them with some basic logic to keep the motion path smooth. I’m trying to avoid unnecessary complexity, as I’m also working on the space station and docking physics too. I think it’s technically possible to do these position changes with Intervals, Task, Bullet, or some combination of all these (not that I will).

I’ve only had about an hour total to study the latest generator code myself; I do think that delta-time applications will end up being pretty straightforward.


I have created smooth motion paths for the bots at a basic level. The increments are additive on X, Y, and Z, and are therefore a bit ‘straight’. This could be modified or randomized. The increments have a delta-time from globalClock applied to them.

I have also added a “hover animation” for the else: condition of the BuilderBot class. This is meant to be more ‘cute’ or ‘organic’ than anything, and it can be taken out or modified.

These increment applications aren’t perfect yet, but there’s a lot of flexibility there to create more compelling motion paths. There’s always the option of doing part of the ‘bot dance’ with Intervals, e.g. for placing a special plate, or to send all the bots back to a “home position”.

Maybe for more realism we should have bots fly back to some home position immediately after finishing their share of plates.

One easy way to see what’s going on with this script is to change the “(0, abs(int(diff_z)))” line to

for x in range(0, abs(int(diff_z * 3))):

^ this will make the bots reach their ultimate Z-height much faster, which actually looks okay, I think. But their movements are quite fast this way.

Here’s an example of that line in effect:
high_z_procbots

The BuilderBot code from main.py:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
from direct.stdpy import threading2
from random import random
import array
import time
from random import randint as r

...
class BuilderBot:

    def __init__(self, model, beam):

        self.model = model
        self.beam = beam.copy_to(self.model)
        beam.set_pos(0., 0., 2.3)
        beam.set_sy(.1)

    def set_part(self, part):

        if part:
            x, y, z = part.center
            init_pos = self.model.get_pos()
            
            if part.center[2] != 0:
            
                def get_to_z_18():
                    diff_z = init_pos[2] - z
                    print(diff_z, ' = diff-z')
                    for x in range(0, abs(int(diff_z))):
                        dt = globalClock.get_dt()
                        # use a larger increment the further away the target height is;
                        # the original chained comparison "> 3 < 5" did not do what
                        # was intended, and left p_inc unbound for diff_z == 3
                        if abs(diff_z) < 3:
                            p_inc = Vec3(10 * dt, 2 * dt, 10 * dt)
                        elif abs(diff_z) < 5:
                            p_inc = Vec3(60 * dt, 12 * dt, 60 * dt)
                        else:
                            p_inc = Vec3(80 * dt, 20 * dt, 80 * dt)
                        time.sleep(0.02)
                        pos = self.model.get_pos()
                        if pos[0] < 15:
                            self.model.set_pos(pos[0] + p_inc[0], pos[1] + p_inc[1], pos[2] + p_inc[2])
                        elif pos[2] < 18:
                            # an exact float comparison with 15 would almost never
                            # be true, so treat x >= 15 as "arrived"
                            self.model.set_pos(pos[0], pos[1], pos[2] + (8 * dt))
                        
                        dist = (part.center - self.beam.get_pos(base.render)).length()
                        self.beam.set_sy(dist)
                        self.beam.look_at(base.render, part.center)
                
                threading2._start_new_thread(get_to_z_18, ())

        else:
            self.beam.set_sy(.1)
            
            def idle_hover():
                for x in range(0, 40):
                    time.sleep(0.05)
                    pos = self.model.get_pos()
                    p_inc = Vec3(10 * globalClock.get_dt(), 10 * globalClock.get_dt(), 10 * globalClock.get_dt())
                    ran_sign = r(-1, 1)
                    self.model.set_pos(pos[0] + (p_inc[0] * ran_sign), pos[1] + (p_inc[1] * ran_sign), pos[2])
                
            threading2._start_new_thread(idle_hover, ())
            

It’s hard to see the smooth motions in this gif, but they are indeed smooth motions when the bots are translating in the running demo.

motion_control_mod_3

I’d be happy to host/pay for the upgraded 2 GB team plan. The 2 GB one would allow “Code Ownership” settings (I imagine we’ll stick with BSD-3 as an overall license to fit Panda). I’m also not against you or Thaumaturge hosting it.

This path-avoidance script is actually kinda cool! It’s pretty hard to solve for this kinda stuff in 4 dimensions (3D + time) without faking it, especially when the bots are materializing a starship. :slight_smile:


Those motion paths look like a good start, I think! :slight_smile:

I have no argument with you hosting it, should you be so inclined!


Thanks!

I’d still prefer a different type of bot (quadcopter) to take over from the ground bots to generate the upper plates. And here’s a model I made today:

builder_copter.zip (1.5 MB)

Hope you like it. :slight_smile:

When I interpolate the speed vector, I replace a small fraction of it with an equally small fraction of the target vector. In general, the sum of these vectors does not have the same length as the original speed vector, so I first have to normalize that sum and multiply it by the speed value to restore this length.
Anyway, the problem is fixed – see below. :slight_smile:

Thanks! :slight_smile: Something to play around with.

It seems a bit jittery at the moment, but it’s certainly a good idea to have an “idle” behaviour for the bots!

Maybe for more realism we should have bots fly back to some home position immediately after finishing their share of plates.

Sounds like a good idea.

You’re certainly welcome to be the maintainer – thanks in advance! :slight_smile:

Thank you! Then you will be happy to hear that I managed to get it to work with delta-time as intended! :slight_smile:
The motion of the obstacles is fixed by also multiplying their acceleration with delta-time (this was obvious: an acceleration is expressed per time-unit squared, so delta-time has to be applied twice – once for the speed and once for the acceleration).
The movement of the bot is controlled by vector-interpolation; this means that a fraction of the speed vector is replaced with the same fraction of the target vector every frame. To make this framerate-independent, I had to multiply this fraction with delta-time as well. That was less obvious.
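
For the acceleration case, the effect of applying delta-time twice can be checked with a tiny stand-alone simulation (no Panda3D; the travel function is made up for illustration): integrating speed += accel * dt and pos += speed * dt per frame gives nearly the same distance at any frame-rate, close to the analytic 0.5 * a * t².

```python
def travel(fps, accel=10., seconds=1.):
    # per-frame integration: delta-time is applied twice for the
    # acceleration, once into the speed and once into the position
    dt = 1. / fps
    speed = pos = 0.
    for _ in range(int(fps * seconds)):
        speed += accel * dt
        pos += speed * dt
    return pos

print(travel(30.), travel(120.))  # both close to 0.5 * 10 * 1**2 = 5.0
```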
Here is the fixed version:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
from random import random
import array

load_prc_file_data('', 'sync-video #f')


base = ShowBase()


Obstacles = []


class Obstacle:

    _inst = []

    def __init__(self, pos):

        self.model = base.loader.load_model("smiley")
        self.model.reparent_to(base.render)
        self.model.set_pos(pos)
        Obstacles.append(self)
        self.accel = 10.
        self.speed = 0.
        self.speed_max = 10.
        self.speed_unit_vec = Vec3.left()
        self.speed_vec = Vec3()
        base.task_mgr.add(self.move, "move_obstacle")

    def move(self, task):

        dt = globalClock.get_dt()

        if self.accel > 0.:
            if self.speed < self.speed_max:
                self.speed = min(self.speed_max, self.speed + self.accel * dt)
            else:
                self.accel *= -1.
        else:
            if self.speed > 0.:
                self.speed = max(0., self.speed + self.accel * dt)
            else:
                self.accel *= -1.

        if self.speed == 0.:
            self.accel *= -1.
            self.speed_unit_vec *= -1.

        self.speed_vec = self.speed_unit_vec * self.speed * dt
        pos = self.model.get_pos()
        pos += self.speed_vec
        pos.z = 0.
        self.model.set_pos(pos)

        return task.cont


class Bot:

    def __init__(self, pos, target_point, target_model):

        self.model = base.loader.load_model("smiley")
        self.model.reparent_to(base.render)
        self.radius = 5.
        self.model_inv = self.model.copy_to(self.model)
        self.model_inv.set_scale(-self.radius, self.radius, self.radius)
        self.model.set_pos(pos)
        self.turn_speed = 10.
        self.speed = 0.
        self.max_speed = 5.
        self.target_point = target_point
        self.target_point_index = 0
        self.target_points = [target_point, pos]
        self.target_model = target_model
        target_vec = self.target_point - self.model.get_pos()
        self.start_dist = target_vec.length()
        self.speed_start_vec = Vec3.forward()
        self.speed_vec = Vec3.forward()
        base.task_mgr.add(self.move, "move_bot")

    def get_distance(self, obstacle):

        return self.model.get_distance(obstacle.model)

    def push_back(self, obstacle, push):

        dist_vec = self.model.get_pos() - obstacle.model.get_pos()
        push_vec = dist_vec.normalized() * push
        self.speed_vec += push_vec
        speed = self.speed_vec.length()
        self.speed_vec.normalize()
        self.speed_vec *= speed
        self.speed_vec.z = 0

    def move(self, task):

        dt = globalClock.get_dt()
        target_vec = self.target_point - self.model.get_pos()
        dist = min(self.start_dist, target_vec.length())
        target_vec.normalize()
        dist_vec = Vec3(target_vec)
        target_vec *= self.start_dist - dist
        self.speed = min(self.max_speed, dist * 10.)

        if self.speed_vec.normalized().dot(dist_vec) > 0.:
            frac = .35 * dt
        else:
            frac = .95 * dt
            target_vec = dist_vec * 100.
#            print("Course corrected!")

        # to interpolate the speed vector, it is shortened by a small fraction,
        # while that same fraction of the target vector is added to it;
        # this generally changes the length of the speed vector, so to preserve
        # its length (the speed), it is normalized and then multiplied with the
        # current speed value
        self.speed_vec = self.speed_vec * (1. - frac) + target_vec * frac
        self.speed_vec.normalize()
        self.speed_vec *= self.speed

        for obstacle in Obstacles:
            distance = self.get_distance(obstacle)
            if distance < self.radius:
                self.push_back(obstacle, self.radius - distance)

        pos = self.model.get_pos()
        pos += self.speed_vec * dt
        pos.z = 0.
        self.model.set_pos(pos)

        if (self.target_point - pos).length() < .2:
            self.speed_start_vec *= -1.
            self.speed_vec = Vec3(self.speed_start_vec)
            self.speed = 0.
            self.target_point_index = 1 - self.target_point_index
            self.target_point = self.target_points[self.target_point_index]
            target_vec = self.target_point - self.model.get_pos()
            self.start_dist = target_vec.length()
            self.target_model.set_pos(self.target_point)
#            print("Switched target; self.start_dist:", self.start_dist)

        return task.cont


class Simulation:

    def __init__(self):

        base.disableMouse()
        base.camera.set_z(100.)
        base.camera.set_p(-90.)

        # set up a light source
        p_light = PointLight("point_light")
        p_light.set_color((1., 1., 1., 1.))
        self.light = base.camera.attach_new_node(p_light)
        self.light.set_pos(5., -100., 7.)
        base.render.set_light(self.light)

        Obstacle(Point3(0., -10., 0.))
        Obstacle(Point3(0., 0., 0.))
        Obstacle(Point3(0., 10., 0.))

        target_point = Point3(0., 20., 0.)
        target_model = base.loader.load_model("smiley")
        target_model.reparent_to(base.render)
        target_model.set_pos(target_point)
        target_model.set_color(1., 0., 0., 1.)
        Bot(Point3(-20., -15., 0.), target_point, target_model)


Simulation()
base.run()

There is still a slight difference in behaviour when colliding with obstacles, but it seems similar enough, so I’m quite happy with it.
Note that I renamed interpolate_speed to push_back, which is more accurate (I originally placed the interpolation code in that method, but it has since been moved to the move method, so the previous name no longer applied).
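For reference, here is roughly what that push_back behaviour boils down to, as a self-contained 2D sketch (plain (x, y) tuples stand in for Panda3D Vec3s, and the exact scaling of the push is an assumption on my part, not taken from the actual code):

```python
import math

def push_back(bot_pos, speed_vec, speed, obstacle_pos, radius):
    """Simplified stand-in for the Bot's push_back method: nudge the
    speed vector away from an obstacle inside the bot's buffer zone."""
    dx = bot_pos[0] - obstacle_pos[0]
    dy = bot_pos[1] - obstacle_pos[1]
    distance = math.hypot(dx, dy)
    if distance >= radius or distance == 0.:
        # outside the buffer zone (or exactly on the obstacle); nothing to do
        return speed_vec
    # vector pointing from the obstacle towards the bot, scaled by how
    # deeply the bot has entered the buffer zone
    penetration = radius - distance
    vx = speed_vec[0] + dx / distance * penetration
    vy = speed_vec[1] + dy / distance * penetration
    # preserve the bot's speed by re-normalizing the result
    length = math.hypot(vx, vy)
    return (vx / length * speed, vy / length * speed)
```

The key point is the final re-normalization: the push changes the direction of travel, but not how fast the bot moves.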


That’s looking like a decent little drone! :slight_smile:

That is looking improved, I do think! :slight_smile:

And well done on finding those various delta-time locations, by the way! :slight_smile:

I did see cases of the bot wandering off quite some distance for some reason, however. :/ It eventually came back, but its disappearance was quite unexpected.

Looking at the behaviour, I wonder whether it’s the interpolation element being too slow in some cases, resulting in broad “orbits”. Perhaps the interpolation might be made sharper the further the bot’s heading deviates from the target vector, or something like that?
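Just to illustrate what I mean, something like this might work (purely hypothetical, with plain (x, y) tuples; `base_rate` corresponds to the constant .95 used in the current move code):

```python
import math

def angle_scaled_frac(speed_vec, target_vec, dt, base_rate=.95):
    """Hypothetical tweak: scale the interpolation fraction by how far
    the current heading deviates from the target direction."""
    dot = speed_vec[0] * target_vec[0] + speed_vec[1] * target_vec[1]
    len_s = math.hypot(*speed_vec)
    len_t = math.hypot(*target_vec)
    if len_s == 0. or len_t == 0.:
        return min(1., base_rate * dt)
    cos_angle = max(-1., min(1., dot / (len_s * len_t)))
    # ranges from base_rate * dt when aligned (cos_angle == 1.) up to
    # 3 * base_rate * dt when facing the opposite way (cos_angle == -1.)
    return min(1., base_rate * dt * (2. - cos_angle))
```

That way the bot turns hardest when it is heading away from its target, which might tighten up those broad orbits.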

On another note, thinking of the proposed first-person segment, I’ve realised that I have a partial first-person-shooter implementation that I’m happy to donate a version of. It’s incomplete, and it has bugs, and changes may well be called for as we determine what precisely we want for the section in question. However, it may at least provide a first pass at the thing.

If it’s desired, once we have that GitHub repository ready I can put together a version to this end and upload it.

(There are changes that I’ll likely make from my current version. For one thing, the game that was intended from it had a Heretic-like “powered up secondary form for each weapon” mechanic, which would be overkill in a segment like this, I feel.)

As it stands, and if I recall correctly, the code currently has the following:

  • Basic collision
    • Using Panda’s built-in collision system
  • Simple movement
    • Changes in height, but no jumping or falling
  • Basic enemies
    • Having basic “advance to weapons range and fire” logic, and simple obstacle-avoidance
  • A weapon system
    • I’ll likely replace my current set of weapons, starting off with two in the uploaded version, I think.

It’s really no problem, happy to help. I’ll get the repository together soon.

It’s basic for sure. I can’t think of a way to integrate your vector interpolation into my time-sleep thread idea quite yet. But these are plenty promising as sketches, I think. We have enough at this point to throw together the beginnings of the demo project in a shared repo.

This sounds neat! Also looking forward to your weapon model contributions. The robotic panda is great. I’d maybe like to make a slightly higher-poly version, even an automatic-subdivider / LOD sort of thing for it – just an idea.

Thank you! I appreciate that! :slight_smile:

Hmm… One generally aims to keep the number of vertices down for game-dev… ^^;

Are there contours that look overly angular, to you? I’d rather approach the matter by targeting such contours than by making the whole thing higher-poly, I think.

(I could see adding a few more subdivisions around the central hole, for one.)

You two should have invitations pending to join P3D-Space-Tech-Demo on GitHub. :slight_smile: There’s nothing there yet, but I plan to put together a few of the most recent changes we’ve made with the original, updated, hangar environment as a base for Section 1.

I of course agree!

This is mostly my concern. I am also interested in having different quality settings, like Low, Medium, High, and Ultra for the demo.

Got it, and joined, I believe! Thank you! :slight_smile:

Fair enough! It is indeed something that I can polish, I daresay!

Honestly, I’m not sure that vertex-count is something to change here. The count should be low enough for most systems, I suspect, and I’m not convinced that an overall increase will result in all that visible a difference.

Instead, I suspect that things like texture size, buffer-size, and post-processing effects are likely to be more effective here.
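For what it’s worth, a quality-settings scheme along those lines could be as simple as a table of presets that the demo reads at startup (all names and values here are placeholders I made up, not anything from the demo code):

```python
# Placeholder presets scaling the knobs mentioned above: texture
# resolution, offscreen-buffer size, multisampling and whether
# post-processing effects are enabled at all.
QUALITY_PRESETS = {
    "low":    {"texture_scale": .25, "buffer_scale": .5,  "msaa": 0, "post_fx": False},
    "medium": {"texture_scale": .5,  "buffer_scale": .75, "msaa": 0, "post_fx": False},
    "high":   {"texture_scale": 1.,  "buffer_scale": 1.,  "msaa": 2, "post_fx": True},
    "ultra":  {"texture_scale": 1.,  "buffer_scale": 1.,  "msaa": 4, "post_fx": True},
}

def get_preset(name):
    # fall back to "medium" for unknown preset names
    return QUALITY_PRESETS.get(name.lower(), QUALITY_PRESETS["medium"])
```

The setup code would then just look up the chosen preset and apply the values to whatever texture-loading, buffer-creation and filter-setup code we end up with.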
