dirus at March 2nd, 2011 12:09 — #1
While writing a design doc for an MORPG with FPS-style combat, I started thinking about collision detection on the server side, specifically tracking bullets.
I remember when I first started playing FPSes, 32 players on a server was more or less the max. IIRC that was largely because the servers of the time couldn't handle the stress of more players without significant lag; they simply couldn't keep up with every bullet flying around. I know Planetside had a 10k player max, and the same goes for Battleground Europe. The bullets also only flew so far before they just disappeared.
So I started wondering: since AMD & Nvidia are now really starting to push GPUs toward general-purpose computing (GPGPU), would it be useful to try to harness the power of a GPU to do just collision detection? I know there are costs associated with transferring data across the PCIe bus and back, and in many cases that cost negates any benefit or even makes things worse. However, I honestly don't have any serious expertise when it comes to coding for GPUs themselves.
Ultimately, what I'd like is for the CPU to send the position/velocity info of all the moving objects (bullets, players, vehicles, etc.) to the GPU and have it run all the collision detection work. If there is actually a collision, the GPU would report it back to the CPU, which could then do any required calculations.
One of the main things that got me thinking about collision detection in the first place is that I think it would be awesome to have a mass battle with 1000 or so players where not only can you get hit by stray bullets (most FPSes do that already), but the projectiles also interact with each other. For example, player 1 throws a knife at player 2, but while it's in flight a stray bullet from player 3's gun hits the knife and knocks it off course. Or the ability to shoot RPGs out of the air, etc.
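For scale, the naive version of the work being proposed here is an all-pairs test over every moving object each server tick. A minimal Python sketch of that batch (the function name and the sphere-shaped hit volumes are assumptions for illustration, not anything from a real engine):

```python
def brute_force_collisions(objects, dt):
    """Naive O(n^2) pass over moving spheres: the kind of batch work the
    post proposes shipping to a GPU each tick. Each object is a tuple
    (x, y, z, vx, vy, vz, radius)."""
    # Advance every object by one tick.
    moved = [(x + vx * dt, y + vy * dt, z + vz * dt, r)
             for (x, y, z, vx, vy, vz, r) in objects]
    hits = []
    # Test every pair; overlapping spheres are reported back to the CPU.
    for i in range(len(moved)):
        for j in range(i + 1, len(moved)):
            xi, yi, zi, ri = moved[i]
            xj, yj, zj, rj = moved[j]
            d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
            if d2 <= (ri + rj) ** 2:
                hits.append((i, j))
    return hits
```

The O(n^2) inner loop is exactly the part that looks GPU-friendly at first glance, which is what the rest of the thread debates.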
tobeythorn at March 2nd, 2011 12:56 — #2
Latency and bandwidth have traditionally been the barriers to server-side physics, but it has some potentially great benefits. One is that rather than having many client computers redundantly computing physics and having to deal with synchronization, all the physics would be done once, in one place. That in turn would mean each client could use its resources to draw prettier graphics or compute better sound.
Perhaps if a powerful server-side GPU could cut physics computation time enough to offset the internet latency, then server-side physics could be feasible. That would be cool.
As for shooting knives out of the air with bullets... lol... Unless you are Neo in The Matrix, that's not very likely, and because of the small time steps that would be necessary for such physics, it would be extremely computationally expensive.
dirus at March 2nd, 2011 13:30 — #3
I realize that shooting a knife out of the air would be extremely computationally expensive; hence why I was pondering the idea of using the GPU to do such calculations. Would enough GPU horsepower make such a thing possible?
tottel at March 2nd, 2011 13:46 — #4
Apart from being expensive, is it even possible to do in real life?
smile_ at March 2nd, 2011 15:02 — #5
Shooting a knife doesn't really require extremely small time steps. You can do some form of 4D (swept) collision detection with the usual time interval.
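To make the "4D" idea concrete: treat each projectile as a moving point, work in the relative frame, and solve for the time of closest approach inside the step, so a fast bullet can't tunnel past the knife between ticks. A hedged sketch (the function and its point-vs-point simplification are illustrative, not from any particular engine):

```python
def swept_hit(p1, v1, p2, v2, radius, dt):
    """Continuous ('4D') collision test between two moving points over one
    time step: view object 2 from object 1's frame, find the time of
    closest approach of the relative position, and check the separation
    there. Positions/velocities are 3-tuples; radius is the combined
    hit radius."""
    # Relative position and velocity (object 2 as seen from object 1).
    rp = [p2[i] - p1[i] for i in range(3)]
    rv = [v2[i] - v1[i] for i in range(3)]
    vv = sum(c * c for c in rv)
    if vv == 0.0:
        t = 0.0  # no relative motion; separation is constant
    else:
        # Time minimizing |rp + t*rv|, clamped to this time step.
        t = max(0.0, min(dt, -sum(rp[i] * rv[i] for i in range(3)) / vv))
    closest = [rp[i] + t * rv[i] for i in range(3)]
    return sum(c * c for c in closest) <= radius * radius
```

A discrete per-tick test would miss a bullet that crosses the knife's position mid-step; the clamped closest-approach test catches it with the same tick rate.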
dirus at March 2nd, 2011 15:10 — #6
I realize the knife/bullet example is a little insane in terms of the chances of it happening.
However, shooting down something like a missile or rocket is not nearly as insane; in fact, the US Army has the C-RAM project. So for the purposes of this topic, let's focus on that.
I am more or less interested in whether or not it would make sense to try to harness one or more GPUs for collision calculations server side, or whether the latency costs associated with something like that would negate any benefit.
I'm looking to offload server-side bottlenecks when there are thousands of objects flying around at once.
alphadog at March 2nd, 2011 15:59 — #7
Well, GPUs are great for parallelizable tasks. Grossly speaking, only some parts of the typical collision detection scheme are. But that doesn't mean the GPU can't be used; there are lots of "hybrid" approaches out there, e.g.: http://sglab.kaist.ac.kr/HPCCD/
nick at March 3rd, 2011 07:02 — #8
I am more or less interested in whether or not it would make sense to try to harness one or more GPUs for collision calculations server side, or whether the latency costs associated with something like that would negate any benefit. I'm looking to offload server-side bottlenecks when there are thousands of objects flying around at once.
You won't get much, if any, benefit from using a GPU. They are horrendously slow at processing a single thread (by about a factor of 100) and compensate by processing thousands of threads. This works great for graphics, since there are thousands if not millions of pixels and the communication goes one way (it's OK for the first pixel to appear on screen tens of milliseconds later).
But when you have a feedback loop, like with physics, that's really high latency, and it's not easy to find thousands of independent work items. In fact, even with thousands of objects flying around it's a lot smarter to sort them spatially before doing any expensive collision detection, instead of brute-force processing everything. So what you need is fast single-threaded processing, as provided by the CPU.
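One common way to do the spatial sorting nick describes is a uniform grid broad phase: bin objects into cells so only objects in the same or adjacent cells ever reach the expensive narrow-phase test. A rough Python sketch (the cell scheme and names are assumptions for illustration, not nick's specific method):

```python
from collections import defaultdict
from itertools import combinations

def grid_broad_phase(positions, cell_size):
    """Bin point objects into a uniform grid; only objects sharing a cell
    or sitting in adjacent cells become candidate pairs, instead of
    brute-forcing all O(n^2) pairs."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(idx)
    candidates = set()
    for (cx, cy, cz), members in grid.items():
        # Pairs within the same cell.
        candidates.update(combinations(members, 2))
        # Pairs with neighbouring cells, visited one-sided to avoid duplicates.
        for dx in (0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    if (dx, dy, dz) <= (0, 0, 0):
                        continue
                    for a in members:
                        for b in grid.get((cx + dx, cy + dy, cz + dz), []):
                            candidates.add((min(a, b), max(a, b)))
    return candidates
```

With thousands of bullets the candidate set is usually tiny compared to all pairs, which is the point nick is making: the pruning is cheap, sequential, branchy work that favors the CPU.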
That doesn't mean you can't do compute-intensive work. The latest Intel CPUs have 256-bit vector operations, four cores, and Hyper-Threading. For the majority of workloads, a modern CPU offers a well-balanced mix of ILP, TLP, and DLP.
And it's only going to get better. Fused multiply-add instructions are already in the AVX specification, and gather/scatter instructions would dramatically speed up parallel indirect addressing. In the meantime, GPUs are struggling to fight Amdahl's Law and are forced to spend more die space on latency optimizations, limiting their compute density. So both are converging toward a device that is both latency-optimized and throughput-optimized. The GPU is worthless on its own, so in the long run the CPU will prevail.
blaxill at March 3rd, 2011 10:52 — #9
I disagree. In a straight port, collision detection doesn't map well to the GPU architecture, but switching from an incremental sweep-and-prune method to sorting and sweeping each axis in parallel has been shown to be very fast and scalable. I remember reading a paper on it recently (it must have been written no earlier than '08), but I can't find it. I think this is the method being implemented in Bullet. (About halfway through the presentation.)
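For reference, here is a single-threaded, single-axis sketch of the sort-and-sweep idea (the GPU variants blaxill mentions parallelize the sort and run a sweep per axis, then intersect the results; this simplified version is illustrative only):

```python
def sweep_and_prune_x(boxes):
    """Sort-and-sweep on one axis: sort intervals by min-x, then sweep
    left to right keeping an 'active' list of intervals that have not
    yet ended. Each box is (min_x, max_x, id); only intervals that
    overlap on this axis become candidate pairs."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    active, pairs = [], []
    for i in order:
        min_x = boxes[i][0]
        # Drop intervals that ended before this one starts.
        active = [j for j in active if boxes[j][1] >= min_x]
        for j in active:
            pairs.append((boxes[j][2], boxes[i][2]))
        active.append(i)
    return pairs
```

The sort is the GPU-friendly part (parallel radix/merge sorts are well studied); the sweep itself is what the parallel formulations restructure.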
system at March 7th, 2011 21:28 — #10
I'm not an expert in this field, but
what does the GPU have to do with collision math, when all collisions are calculated before anything is drawn?
Ultimately, you don't even need a video card to do collision detection.
alphadog at March 7th, 2011 21:53 — #11
enjoycrf, please don't ever change... I enjoycrf your posts.
system at March 7th, 2011 22:10 — #12
thank you very much
rouncer at March 8th, 2011 00:34 — #13
You can handle collision detection server side, for sure; the question is just whether the server can take the whole game at once, like you said.
nick at March 9th, 2011 07:59 — #14
I disagree. In a straight port, collision detection doesn't map well to the GPU architecture, but switching from an incremental sweep-and-prune method to sorting and sweeping each axis in parallel has been shown to be very fast and scalable.
Making an algorithm more suited to parallelization doesn't necessarily mean making it truly fast. Most 'brute force' approaches are quite wasteful. I don't have much experience with collision detection, but I found this to be true for the majority of GPGPU applications.
Note, for instance, that a 106-watt GeForce GTS 450 peaks at 601 SP GFLOPS, while a 95-watt i7-2600 can deliver over 218 SP GFLOPS, and this will increase to 435 SP GFLOPS with the addition of FMA instruction support. So there really isn't much room to make your algorithm more suited to the GPU. The CPU can handle a much larger range of workloads efficiently.
I don't think GPGPU has much of a future.
system at March 9th, 2011 16:53 — #15
Would it be possible to write a game that could be served from multiple servers?
I'm getting this idea from the load balancers one would use for the web:
that way you would have one server act as a manager and additional ones to split the calculations.
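As a toy illustration of that manager/worker idea, a manager process could route objects to physics servers by world region, so each worker only simulates its own slice. Everything here (names, the x-only partitioning) is invented for the sketch:

```python
def shard_for(position, world_min_x, region_size, shards):
    """Toy spatial sharding: carve the world into strips along x and hand
    each strip to a physics server, so a manager can route each object's
    update to the shard that owns its area. Purely illustrative."""
    region = int((position[0] - world_min_x) // region_size)
    return shards[region % len(shards)]
```

The hard part this glosses over is objects near strip boundaries (a bullet crossing between regions needs handing off, or both shards must see it), which is exactly the synchronization cost tobeythorn mentioned earlier.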
Also, I agree that before crunching any numbers it's best to organize them by visible areas;
any bullet, for instance, has a range.
And yes, obviously one can shoot a knife out of the air;
it's just that with everything dropping the frame rate,
people are more concerned with thinking of ways to do less.