Performance info on Panda3D and Python on the server side?

Hi guys.
Well, I am using Panda3D again for college work (this is my fourth time using it), and I also use it a lot for learning and for-fun programming.

My main problem is that right now I am working on a client/server application that I need to finish by December.

The client is nearly done when it comes to graphics, UI and user interaction. What is missing is the communication with the server, which is a very critical part.

I was planning on writing the server side in C++, since one of the requirements is that one instance of the game (a 2-on-2 match) should consume less than 1% of the server's CPU (a Core 2 Duo at 3 GHz per core).

Has anyone been in a similar scenario who could tell me whether this is achievable, or give me some input on the topic?

In my completely personal opinion, Python is a fine match for client-server applications. If you're concerned about performance, I'd perhaps look into Stackless Python - I've heard it gives a decent speed boost.

I honestly think that Python will do fine. However, should you choose to use something else:
/hate-me-for-saying-this
I would use Java over C++; much of your fan base is going to be far more knowledgeable about Java than about C++. This is, of course, under the assumption that your project is either (a) open source or (b) user-plugin-enabled.

Otherwise I'd use C++. In all honesty, though, I always come to the conclusion (in my own MMORPG-type client-server architecture) that you will run out of bandwidth before you run out of computing power. This is all really only my opinion, so no getting mad at me if you think I'm completely wrong - I've never actually completed a large-scale test with any of my ideas/code.

Hope this helps you,
~powerpup118

Can't really, since I am using some of Panda3D's features on the server as well (collisions, for instance).

I tried writing some quick Python code just to accept connections. It is freakishly fast to develop compared to C++, but it leaves my processor at 50% (which means it is consuming a core all by itself).
It's probably something to do with tasks. I tried using a doMethodLater task too, with no success.
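
For reference, a minimal sketch of roughly that setup, using Panda3D's QueuedConnectionManager networking classes with a headless window-type none config (the class name and port are just illustrative; note that doMethodLater only throttles the polling task, not the main frame loop itself):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import (loadPrcFileData, QueuedConnectionManager,
                          QueuedConnectionListener, QueuedConnectionReader,
                          ConnectionWriter, PointerToConnection, NetAddress)

loadPrcFileData("", "window-type none")  # headless: no window, no rendering

class GameServer(ShowBase):
    def __init__(self, port=9099):
        ShowBase.__init__(self)
        self.manager = QueuedConnectionManager()
        self.listener = QueuedConnectionListener(self.manager, 0)
        self.reader = QueuedConnectionReader(self.manager, 0)
        self.writer = ConnectionWriter(self.manager, 0)
        self.connections = []
        rendezvous = self.manager.openTCPServerRendezvous(port, 100)
        self.listener.addConnection(rendezvous)
        # Poll for new connections 20 times per second instead of every frame.
        self.taskMgr.doMethodLater(1.0 / 20, self.poll, "poll")

    def poll(self, task):
        # Accept any pending connections queued by the listener.
        while self.listener.newConnectionAvailable():
            rendezvous = PointerToConnection()
            address = NetAddress()
            new_connection = PointerToConnection()
            if self.listener.getNewConnection(rendezvous, address, new_connection):
                connection = new_connection.p()
                self.connections.append(connection)
                self.reader.addConnection(connection)
        return task.again  # repeat after the delay rather than every frame

GameServer().run()
```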

If you want to be really smart about it, write the same example in both C++ and Python, and compare performance :wink:

Also, what is wrong with it consuming a core? Panda is probably running at max speed. Do you have v-sync on or off? What frame rate are you getting on your server?

I'd be willing to bet that it's extremely high, and I can assure you that 50% of your CPU is nowhere near the per-client cost.

If you want to test more accurately, you need to find the frame rate of your server (try printing “globalClock.getAverageFrameRate()”)
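
Something like this, as a rough sketch (assuming a running ShowBase and the standard ClockObject API):

```python
from panda3d.core import ClockObject

def report_fps(task):
    # Average frame rate over roughly the last second
    print(ClockObject.getGlobalClock().getAverageFrameRate())
    return task.again

# Schedule it once a second from your server, e.g.:
# base.taskMgr.doMethodLater(1.0, report_fps, "report-fps")
```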

Otherwise, turn v-sync on and I'd be willing to bet that you'll get around 1% CPU usage or so.

Measuring raw CPU usage is a poor way to test server performance. You should measure on a per-client basis instead: run the server with no clients and then with a large number of clients (30, for instance), and determine how much CPU and memory usage grow as the client count grows.
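
A rough sketch of that kind of measurement, using the third-party psutil library (the pid and the client count of 30 are placeholders):

```python
import psutil

def sample(proc, seconds=5.0):
    """Return (average CPU %, RSS bytes) for the process over an interval."""
    proc.cpu_percent(None)           # first call just primes the counter
    cpu = proc.cpu_percent(seconds)  # blocks for `seconds`, then averages
    rss = proc.memory_info().rss
    return cpu, rss

server = psutil.Process(SERVER_PID)  # SERVER_PID: your server's pid (placeholder)
idle_cpu, idle_rss = sample(server)
# ... now connect your 30 test clients ...
load_cpu, load_rss = sample(server)
print("per-client CPU %:", (load_cpu - idle_cpu) / 30)
print("per-client RSS bytes:", (load_rss - idle_rss) / 30)
```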

Let me know what the frame rate is and I’ll help you understand the issue more accurately,

~powerpup118

If the server creates a window, the CPU usage is about 2-3% and it runs at 60 fps. If I remove the window creation, it goes to 50% (which is 100% of a single core, since the program is not multi-threaded).

This is probably because the task loop runs as fast as it can when there is no window to sync to. I don't really know for sure, but right now I am not using tasks at all; I just receive/process information and sleep for the rest of each tick (1/20 of a second minus the time spent reading the information). With this method I barely use any CPU.
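
For what it's worth, an alternative to the hand-rolled sleep loop is to cap the frame loop itself; a minimal sketch assuming Panda3D's clock-mode and clock-frame-rate PRC variables:

```python
from panda3d.core import loadPrcFileData

# Run headless, and limit the main loop to 20 frames per second so the
# task manager sleeps between frames instead of spinning a full core.
loadPrcFileData("", "window-type none")
loadPrcFileData("", "clock-mode limited")
loadPrcFileData("", "clock-frame-rate 20")

from direct.showbase.ShowBase import ShowBase

ShowBase().run()
```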

I am going to use Python in the end; if performance becomes a problem, I will recompile with Cython. I took a look at it today and found it very interesting.