Hi guys, I got a quick question I could not solve myself.
How is the “pos” argument calculated? Does it consider the image size?
I am creating a class to manipulate images better. There is one thing I am having problems with: how is the “pos” argument calculated? As far as I can tell, the middle of the screen is (0, 0) and the values are in the interval [-1, 1].
As my screen size is configurable in a tuple named WorldSize, here is the formula (not the code) I came up with:
screenX = posX / (WorldSizeX / 2.0) - 1.0
screenY = posY / (WorldSizeY / 2.0) - 1.0
Where posX and posY are the screen position coordinates.
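For what it's worth, that mapping can be sketched as a small function. This is just the math above, assuming `WorldSize` is a `(width, height)` tuple in pixels; note that in render2d the y axis grows upward, so if your `posY` is measured from the top of the window you may need to negate `screenY`:

```python
def pixel_to_render2d(posX, posY, WorldSize):
    # Map a pixel coordinate to render2d's [-1, 1] range.
    # (0, 0) pixels -> (-1, -1); center of window -> (0, 0).
    screenX = posX / (WorldSize[0] / 2.0) - 1.0
    screenY = posY / (WorldSize[1] / 2.0) - 1.0
    return screenX, screenY
```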
What I wish to achieve is to give a screen pixel (x, y) and have the image rendered from that point on. My formula seems to be OK, yet the image is not where I thought it would be, nor can I tell where the rendering starts… Any clues?
The formula you describe is accurate for things parented to render2d. It does not apply to things parented to aspect2d or to render, of course.
This describes the (0, 0) point of any card or model parented to render2d. Where, precisely, the image falls in relation to its own (0, 0) point depends on the way you construct the card. The default CardMaker constructs a card with the (0, 0) point at the lower-left corner.
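To make that concrete, here is a hypothetical helper (not Panda3D API, just the arithmetic) that computes the render2d rectangle a card covers, assuming the default CardMaker unit frame with (0, 0) at the lower-left corner:

```python
def card_bounds(posX, posY, scaleX=1.0, scaleY=1.0):
    # With the default CardMaker frame, the card's own (0, 0) is its
    # lower-left corner, so the card extends up and to the right of
    # the position you set on the node.
    left, bottom = posX, posY
    right, top = posX + scaleX, posY + scaleY
    return left, right, bottom, top
```

So a unit card placed at (-1, -1) with scale 2 fills the whole render2d screen.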
Wow, thanks a lot, really fast answer, I will redo the math then.
Now, on my quick test, I realized it scales the image to fit the screen. Is there any way to get the image dimensions or to tell Panda not to scale it?
Thanks in advance.
EDIT: solved it using:
imagePNM = PNMImage(file)
self.xSize = imagePNM.getXSize()
self.ySize = imagePNM.getYSize()
file is a string with the filename.
Now I just have to adjust the scale; still, I wish there were an easier way.
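In case it helps anyone else, the scale adjustment can be reduced to one function. This is a sketch assuming a unit-sized card parented to render2d, whose full width spans 2 units across the window, so one pixel corresponds to 2/window units:

```python
def pixel_scale(imgX, imgY, winX, winY):
    # Scale factors that make a unit card under render2d show an
    # image at its native pixel size (imgX, imgY), given the window
    # size (winX, winY) in pixels.
    return 2.0 * imgX / winX, 2.0 * imgY / winY
```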
For the record, you can also query the original texture size with texture.getOrigFileXSize() and texture.getOrigFileYSize(). But you still have to do the math yourself.