If you wanted to have something like an in-game node editor in your game, where things are connected the way nodes are connected in Blender's node editor, but you used OnscreenImage, how would you do that? I'm not sure where to start, because I don't know whether getting events from certain pixels of an image is a thing.
I don’t think that you can get events from certain pixels of an image, but you could potentially get events from entire objects, and then compare the mouse-position in those events to your pixels.
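For instance, a minimal hit-test along those lines might look something like this (the helper name and rectangle parameters are mine, not Panda3D API; in practice the mouse position would come from base.mouseWatcherNode.getMouse(), with the x-coordinate scaled by the aspect ratio when the image lives under aspect2d):

```python
# Hypothetical helper (not Panda3D API): check whether a mouse position, in
# the same 2-D coordinate space as the image, falls within the rectangle
# covered by an OnscreenImage centred at (image_x, image_z) with the given
# half-width and half-height (its scale, for the default unit card).
def mouse_over_image(mouse_x, mouse_z, image_x, image_z, half_w, half_h):
    return (abs(mouse_x - image_x) <= half_w and
            abs(mouse_z - image_z) <= half_h)
```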
Still, what I’d suggest is this:
- Associate the appropriate events with your various objects.
- If you want just a part of an object to be used for dragging, then have that be a separate object parented to the main object.
- Have those events call some custom code in your program that sets and clears variables for the current “drag” operation.
- Have a task running in the background that, when a “drag” operation is ongoing, sets the position of the dragged object in accordance with the mouse-position and the above variables.
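In outline, the bookkeeping described above might look something like this (the DragState class is a hypothetical sketch, not Panda3D API; the Panda3D wiring would be accept/bind for the events and a task for the per-frame update):

```python
# Hypothetical bookkeeping for a drag operation; plain Python, with the
# Panda3D-specific wiring left as comments.
class DragState:
    def __init__(self):
        self.dragged = None       # the node currently being dragged, if any
        self.offset = (0.0, 0.0)  # grab-point offset from the node's origin

    def start(self, node, node_pos, mouse_pos):
        # Call this from the object's mouse-down event.
        self.dragged = node
        self.offset = (node_pos[0] - mouse_pos[0],
                       node_pos[1] - mouse_pos[1])

    def stop(self):
        # Call this from a global mouse-up event.
        self.dragged = None

    def update(self, mouse_pos):
        # Call this every frame from a task; returns the position to apply
        # to the dragged node, or None when no drag is in progress.
        if self.dragged is None:
            return None
        return (mouse_pos[0] + self.offset[0],
                mouse_pos[1] + self.offset[1])
```

In Panda3D, the task would read base.mouseWatcherNode.getMouse() each frame and, whenever update returns a position, apply it to the dragged node with setPos.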
You are overcomplicating the logic; everything is easier if you use DirectFrame. Here is an example of how it interacts with events; I think you can easily adapt it to your goals.
```python
from direct.showbase.ShowBase import ShowBase
from direct.gui.DirectGui import DirectFrame, DGG
from panda3d.core import NodePath, TextNode
import random


def create_button(pos, function):
    # Called whenever the mouse moves within the frame.
    def enter(event):
        function()

    # Called on a left-button press over the frame.
    def press(event):
        label.setText("You won!")

    b = DirectFrame(pos=pos, frameColor=(1, 1, 1, 1), state=DGG.NORMAL)
    b['frameSize'] = (-0.35, 0.35, -0.076, 0.076)
    b.bind(DGG.WITHIN, enter)
    b.bind(DGG.B1PRESS, press)
    b.reparentTo(aspect2d)

    # A simple text label, parented to the frame.
    label = TextNode('')
    label.setText("Start game")
    label.setAlign(TextNode.ACenter)
    label.setTextColor(0, 0, 0, 1)
    node = NodePath(label)
    node.setScale(0.1)
    node.setZ(-0.03)
    node.reparentTo(b)
    return b


class MyApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.button = create_button(pos=(0, 0, 0), function=self.game)

    def game(self):
        # Move the button to a random position within the visible area.
        x = random.uniform(base.getAspectRatio() - 0.35,
                           -base.getAspectRatio() + 0.35)
        y = random.uniform(1 - 0.076, -1 + 0.076)
        self.button.setPos(x, 0, y)


app = MyApp()
app.run()
```
Eh, DirectFrame does provide some conveniences, but I think that the logic would still be much the same as with an OnscreenImage: assign events, use some variables to note what object is being dragged, and use a task to update their position during a drag.
The only real difference would be how the events were assigned: OnscreenImage is a DirectObject, so one would presumably use the “accept” method to assign an event, while DirectFrame is a DirectGUI object, and so one would presumably use the “bind” method to assign an event.
And the DirectFrame need only cover the area that generates the event: since it is only there to receive events, the visuals can come from a model, or even from generated vector geometry, placed over the same area.