using GPU to reduce CPU workload without CUDA?

i’m wondering if it’s possible to use the GPU to do some floating-point calculations, using a shader as input and a framebuffer as output, without a CUDA GPU. if that could be done, programs could be faster, right?
i read something about GPU collision detection, which is a good thing, but what about less specialised tasks?

You can certainly use the GPU for calculations. There are certain advantages and disadvantages:

Advantages:

  • Freaking fast
  • Everything is parallel

Disadvantages:

  • Everything is parallel
  • Copying data between the CPU and the GPU is very slow, because it stalls the pipeline

So the main issue is that even if the computation on the GPU is much faster, copying the data back to the CPU will be very slow, because it forces a synchronization.

So this is mainly applicable to computations whose results are only needed on the GPU. GPU FFT is a good example: it computes a water heightmap on the GPU, which is then used for rendering water. That heightmap is never needed on the CPU, so it never has to be copied back.

If you are doing some offline calculation though, without actual rendering, you have to weigh whether it will pay off. If you have a huge amount of data which only needs very little processing, it will be slow, because you spend all the time copying data. If you have a moderate dataset and do a lot of computation on it, then the GPU solution might outperform the CPU solution.
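A rough way to reason about that tradeoff is to compare transfer time against compute time. Here's a back-of-envelope sketch; the throughput constants (`PCIE_GBPS`, `GPU_FLOPS`, `CPU_FLOPS`) are illustrative assumptions, not benchmarks of any real hardware:

```python
# Back-of-envelope check: is the GPU worth it for a given workload?
# All throughput numbers are illustrative assumptions, not measurements.

PCIE_GBPS = 8e9    # assumed effective CPU<->GPU transfer rate, bytes/s
GPU_FLOPS = 1e12   # assumed usable GPU throughput, FLOP/s
CPU_FLOPS = 5e10   # assumed usable CPU throughput, FLOP/s

def gpu_pays_off(data_bytes, flops_per_byte):
    """True if GPU compute plus upload+download beats CPU compute."""
    flops = data_bytes * flops_per_byte
    gpu_time = 2 * data_bytes / PCIE_GBPS + flops / GPU_FLOPS
    cpu_time = flops / CPU_FLOPS
    return gpu_time < cpu_time

# Huge data, almost no math per byte: dominated by copying.
print(gpu_pays_off(1e9, 1))      # → False
# Moderate data, heavy math per byte: compute dominates, GPU wins.
print(gpu_pays_off(1e8, 1000))   # → True
```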

So, in case you want to do some GPU calculations, you should have a look at compute shaders.

You would basically pass your dataset as an input, and also attach an arbitrary texture that the shader writes its results to (with imageStore()). Then you extract the data from the GSG (graphics state guardian) with:


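A minimal sketch of the whole round trip, assuming Panda3D 1.9's compute-shader support (`Shader.make_compute`, `ComputeNode`) and `GraphicsEngine.extract_texture_data()` for the readback. The shader and texture sizes are just illustrative:

```python
# Hypothetical compute shader: squares each texel of a float texture in place.
SQUARE_SHADER = """
#version 430
layout (local_size_x = 16, local_size_y = 16) in;
layout (rgba32f) uniform image2D dataTex;

void main() {
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    vec4 v = imageLoad(dataTex, coord);
    imageStore(dataTex, coord, v * v);  // write the result back into the texture
}
"""

def run_on_gpu():
    # Requires a running Panda3D app with an OpenGL 4.3 capable context.
    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import Shader, ComputeNode, Texture

    base = ShowBase()

    # Texture the shader reads and writes via imageLoad/imageStore.
    tex = Texture("data")
    tex.setup_2d_texture(256, 256, Texture.T_float, Texture.F_rgba32)

    # Wrap the dispatch in a ComputeNode so it runs as part of the frame.
    shader = Shader.make_compute(Shader.SL_GLSL, SQUARE_SHADER)
    node = ComputeNode("square")
    node.add_dispatch(256 // 16, 256 // 16, 1)  # work groups covering the texture
    path = base.render.attach_new_node(node)
    path.set_shader(shader)
    path.set_shader_input("dataTex", tex)

    base.graphicsEngine.render_frame()

    # This is the slow part: it forces a sync and copies the texture back to RAM.
    base.graphicsEngine.extract_texture_data(tex, base.win.get_gsg())
    return tex.get_ram_image()  # raw float data, now accessible on the CPU
```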
Now you’d have the data from your compute shader in a texture, and can do whatever you want with it.

thank you very much. didn’t know Panda 1.9 has access to compute shaders, that’s one more tool besides vertex and fragment shaders.