We've seen some insane leaps of realism in 3D games in the past few years. IMO the biggest recent breakthrough in 3D hardware is programmable shading, and for 3D engines it's real-time lighting/shadows. We've seen Doom 3 and the mind-blowing Unreal 3 engine. Normal, displacement & parallax mapping, soft shadows, spherical harmonics, HDR & what not?
What do you guys think is the next big thing in 3D hardware & engines?
Apart from more and more GPU horsepower and longer, more complex shaders, what kind of hardware capabilities do you guys wish for?
If you were to design a 3D engine that gives Unreal 3 a run for its money, what techniques would you incorporate (assuming you got hardware powerful enough)?
The ability to store arbitrary data on the graphics card and read/write it in shaders using some kind of generalized data stream. This would open up possibilities for GPU-dynamic simulations of water waves, particle systems, other things that can currently only be done in a limited and hacky way, and only on cards that have vertex shader texture lookups.
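Something like this, as a rough sketch: a 1D water-wave simulation written as a stateless kernel over two ping-ponged buffers, which is exactly the read/write access pattern you'd want the hardware to support natively (today you have to fake it with render-to-texture and vertex-texture fetch). All the numbers here are made up for illustration.

```python
# Toy "generalized data stream" simulation: discrete 1D wave equation,
# ping-ponging between two state buffers, one independent kernel per cell.

def wave_kernel(prev, curr, i, c=0.4):
    """Next height at cell i from the two previous states (clamped borders)."""
    left = curr[i - 1] if i > 0 else curr[i]
    right = curr[i + 1] if i < len(curr) - 1 else curr[i]
    return 2 * curr[i] - prev[i] + c * (left - 2 * curr[i] + right)

def step(prev, curr):
    """One pass: every output cell is computed independently, so the loop
    maps directly onto one 'shader' invocation per cell."""
    return [wave_kernel(prev, curr, i) for i in range(len(curr))]

# Drop a 'raindrop' in the middle and run a few ping-pong passes.
n = 16
prev = [0.0] * n
curr = [0.0] * n
curr[n // 2] = 1.0
for _ in range(5):
    prev, curr = curr, step(prev, curr)
```

On real hardware each `step` would be one full-screen pass writing into the buffer that was read last frame.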
Dedicated ray tracing hardware.
AFAIK Unreal 3 isn't due out for another 2 years because it simply won't run acceptably on current consumer hardware, so it's not exactly a good example of "recent" graphics developments; if anything, it's what you should be looking at for future developments.
To Baldurk: It's true that games using Unreal 3 aren't going to show up for the next 2 years, but we've seen that engine in action. The NV40 can run it, just not exactly in real time. We all knew about the Doom 3 engine a long time back; the tech had been there for years, but the game came out just a few months ago.
What I wanted to know is the ultimate next-gen thing in hardware & software. Maybe we'll end up rewriting our engines for hardware that doesn't do rasterization at all, like the real-time ray tracer NomadRock & Anubis were talking about: http://www.saarcor.de
(BTW, I'm getting into a project to develop an FPS using the Unreal 3 engine. We're gonna get that baby in our hands pretty soon. Boy, I can't wait to see that beast in action & code for it. :rolleyes:)
You don't need to wait for Unreal 3 to see exactly the same technology used in games. The technology they have is trivial to implement; what's not trivial is implementing it so that it runs fast, which is essential in games.
If you ask me now, I'd say the next big leap (similar to normal mapping) is this time on the geometry side: increasing the geometric complexity of games. Then again, if you ask me tomorrow, I may already have changed my mind.
Maybe the ability to generate *extra* vertices from inside the programmable pipeline is going to come next?
Yes, dynamic tessellation is definitely on its way. Very useful stuff.
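To make the idea concrete, here's a tiny sketch of what "generating extra vertices in the pipeline" amounts to: one level of 1-to-4 midpoint subdivision per triangle, done here on the CPU. Real hardware would run this per primitive and pick the subdivision level from a level-of-detail metric; the numbers below are just for illustration.

```python
# Minimal dynamic-tessellation sketch: split each triangle into four
# by inserting its three edge midpoints as new vertices.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tri):
    """Split one triangle (three 3D points) into four via edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(tris, levels):
    """Apply 'levels' rounds of subdivision: triangle count grows 4x per round."""
    for _ in range(levels):
        tris = [t for tri in tris for t in subdivide(tri)]
    return tris

base = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
refined = tessellate(base, 3)   # 3 levels -> 4**3 = 64 triangles
```

The win is bandwidth: the app uploads one coarse triangle and the hardware amplifies it, instead of streaming all 64 over the bus.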
In my opinion, the next big thing in computer graphics will be real-time, interactive, global illumination. Sure, Unreal Engine 3 has spherical harmonics for global illumination, but it's rather limited in what it can simulate, and it's only real-time for static objects.
So in my opinion, real-time, interactive, global illumination will be the next 'big' thing in computer graphics; or at least I hope so.
If we are going as far as real-time GI, then I would say the next step is total photon simulation. A scene is "rendered" by a server that constantly bounces photons around, and each client need only record the photons that hit its "film". That way we get very scalable networked games. It would of course require sending each frame over the network, so the client wouldn't need much of a rendering card at all, but it would scale well with the number of viewers.
Also, quit using polygonal rendering entirely and specify everything in molecules. That of course requires entirely different modeling techniques, most likely a combination of evolutionary and realistic ones. E.g. if you wanted a monster, you might start with your monster catalog and breed, selecting for the large horns and lighter green skin tone you're looking for. Or you could start with a carving tool and literally shave away material from a block of simulated stone or whatnot.
Of course this is not the _next step_, but it will forever be my rendering goal.
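The server/film split can be sketched in a few lines. This is purely a toy with made-up geometry (a point light over a floor plane, a 2x2 "film" patch owned by one client): the server does all the bouncing, the client only counts the photons that land on its film.

```python
# Toy "photon server": emit photons from a light at (0, 5, 0), bounce them
# off the y=0 floor, and record only the hits inside one client's film patch.
import random

random.seed(1)

def emit_photon():
    """Photon fired straight down with random jitter in x and z."""
    dx, dz = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    return (0.0, 5.0, 0.0), (dx, -1.0, dz)

def bounce_on_floor(pos, dirn):
    """Intersect the y=0 plane and reflect into a random upward direction
    (a crude stand-in for a diffuse bounce)."""
    t = -pos[1] / dirn[1]                      # dirn[1] < 0, so t > 0
    hit = (pos[0] + t * dirn[0], 0.0, pos[2] + t * dirn[2])
    out = (random.uniform(-1, 1), random.uniform(0.1, 1), random.uniform(-1, 1))
    return hit, out

film_hits = 0
for _ in range(1000):
    pos, dirn = emit_photon()
    hit, out = bounce_on_floor(pos, dirn)
    # Client "film": a 2x2 patch of floor around the origin.
    if abs(hit[0]) < 1.0 and abs(hit[2]) < 1.0:
        film_hits += 1
```

Each additional client just adds another cheap hit test on the same photon stream, which is where the claimed scalability with viewer count comes from.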
NomadRock, that sounds dangerous. What if they escape? huh? huh? What then?
One of the next big things will be the moment when people officially realise that no matter how many shaders you throw at something, there is still no way to get a nice, dynamic, realistic scene overall. The fact is that rasterizers simply don't scale in any form to global illumination (which would solve all the problems we have today).
That will be the moment where some ray-tracing hardware has to prove its power. We'll see when that moment comes, and whether the HW will be ready.
I definitely hope so.
Ray tracing doesn't help much with global illumination as opposed to rasterisation: you still have the same limitations. It's just 'easier' with ray tracing because most of the information you need is already processed at a per-fragment level.
Oh, and it is currently possible to do real-time photon mapping (note I did not say interactive), however the lighting may take a few seconds to catch up to the current state of everything, e.g. shadows lagging (this can be a matter of seconds!) behind the object. It all just depends on the photon map density... but anyway, that's another story for another day.
P.S. IMO, the only advantage of ray tracing is pixel-perfect rendering of mathematically representable objects (e.g. spheres, quadrics, super quadrics [see dev shot], etc.)
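The "pixel-perfect" point is worth making concrete: a ray tracer intersects the actual sphere equation rather than a triangulated approximation, so the silhouette is exact at any zoom. A minimal ray/sphere test, straight from the quadratic formula (sphere centered at the origin, unit-length ray direction assumed):

```python
# Analytic ray/sphere intersection: solve |origin + t*direction|^2 = r^2 for t.
import math

def ray_sphere(origin, direction, radius):
    """Return the nearest positive hit distance t, or None on a miss.
    Assumes 'direction' is unit length, so the quadratic's a-term is 1."""
    b = 2.0 * sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Ray from z = -5 straight at a unit sphere: exact hit at t = 4.
t = ray_sphere((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), 1.0)
```

No amount of tessellation gives you that exactness; a rasterizer always sees the polygonal hull.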
Uhm, every full GI solution uses ray tracing as a backend, photon mapping included (radiosity is not a full solution).
And that HW real-time photon mapper is a full ray tracer in shaders, nothing else. You don't seem to know much about realistic image synthesis and GI...
With rasterizers you have one GI limitation: it's virtually impossible (unless you emulate ray-tracing logic).
With ray tracing you have no such limitation.
Games will not see a major benefit from the next generation of graphics technology.
Sure, there will be advancements that games can take advantage of, most likely dynamic tessellation and the unification of pixel and vertex shaders.
The real advancement for graphics technology will come in a different form this time around. We will see generalization of the GPU to the point of being used for generic stream processing, GPU multitasking/threading, and a major change in the way applications interface with this hardware.
Microsoft has discussed Direct3D's future move towards the "Windows Graphics Foundation", which will mean the GPU becomes a common application resource used by multiple applications simultaneously. Widespread PCI Express will mean new possibilities in real-time rendering, but it will mainly enable superior use of available RAM for graphical user interfaces.
Microsoft needs to catch up to Apple and its Quartz rendering engine. Microsoft wants to do it better, and will use DirectX development to drive ATI and NVIDIA development to enable superior-looking desktop applications.
The most interesting thing to me will be the ability to multitask on the GPU: different rendering "threads", not just one big render queue like today.
So you could render onto a TV texture at, say, 25fps, but still render the game at the fastest possible (remaining) speed, etc.
We will see that: automatic scheduling and dispatching of tasks over the different general-purpose pipelines. It will be fun.
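A throwaway sketch of that scheduling idea, with invented numbers: a fixed-rate task (the 25fps TV texture) gets a slice whenever its frame is due, and every leftover slice goes to the main game task.

```python
# Toy GPU scheduler: 5ms slices over 1 simulated second, shared between
# a fixed-rate render task and a run-as-fast-as-possible one.
class RenderTask:
    def __init__(self, name, interval_ms):
        self.name, self.interval_ms = name, interval_ms
        self.next_due = 0
        self.frames = 0

    def run(self, now_ms):
        self.frames += 1
        self.next_due = now_ms + self.interval_ms

tv = RenderTask("tv_texture", 40)      # 25fps -> one frame every 40ms
game = RenderTask("game", 0)           # as fast as possible

for now in range(0, 1000, 5):          # 200 slices of 5ms each
    if now >= tv.next_due:
        tv.run(now)                    # fixed-rate task has priority
    else:
        game.run(now)                  # leftovers go to the game
```

The point of doing this in hardware/driver rather than in the app is that tasks from different processes could share the pipelines the same way.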
davepermen, it seems you've misunderstood what I said.
What I meant is that in terms of rendering, ray tracing does not give you much of an advantage when dealing with global illumination, as opposed to polygon rasterisation. And I'm well aware that most full GI implementations rely on ray tracing.
I don't think there'll be dedicated ray-tracing HW that replaces current rasterizing HW; rather, existing rasterizing HW will simply evolve to the point of generality that allows natural computational assistance, for instance for full-scene ray tracing (we've already seen ray tracing done in special cases on current rasterizing HW). There was also a paper about using the GPU for photon mapping. We now have floating-point textures, longer shaders, dynamic branching, etc., which makes rasterizing HW much more general-purpose than it was in the SM 1.1 days.
Real-time REYES rasterization!!! Probably easier to accomplish.
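For anyone who hasn't met REYES: it dices surfaces into sub-pixel "micropolygons", shades the grid vertices, then samples. A toy dicer for a rectangular patch, choosing the dice rate from the patch's on-screen size (the one-micropolygon-per-pixel target and all numbers are illustrative):

```python
# REYES-style dicing sketch: pick a micropolygon grid resolution so that
# each micropolygon covers roughly 'shading_rate' pixels on screen.
import math

def dice(width_px, height_px, shading_rate=1.0):
    """Grid resolution for a patch covering width_px x height_px pixels."""
    nu = max(1, math.ceil(width_px / shading_rate))
    nv = max(1, math.ceil(height_px / shading_rate))
    return nu, nv

def micropolygon_count(width_px, height_px, shading_rate=1.0):
    nu, nv = dice(width_px, height_px, shading_rate)
    return nu * nv

# A patch covering 32x20 pixels at shading rate 1 -> 640 micropolygons;
# halving the rate quadruples the shading work.
n1 = micropolygon_count(32, 20, 1.0)
n2 = micropolygon_count(32, 20, 0.5)
```

The appeal for hardware is that shading cost tracks screen coverage, not source geometry complexity, so distant detail gets cheap automatically.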
Reading this, I started thinking about how advanced graphics could make applications more useful and productive, rather than just giving us another reason to upgrade hardware.
I suppose with GPU utilisation we could have true zooming interfaces and more 3D hints, like shadows, which help usability. Any other ideas? I for one want a particle engine in my word processor, so when I hit the delete key the whole freaking world knows what happened to that letter.
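Since we're dreaming: the delete-key fireworks are about twenty lines. A minimal particle burst with made-up constants, the kind of toy system a GPU-composited desktop could run per keystroke:

```python
# Toy particle burst: spawn particles at the deleted letter's position,
# integrate gravity, cull the dead ones.
import random

random.seed(7)

def burst(x, y, count=50):
    """Spawn 'count' particles with random upward-ish velocities."""
    return [{"pos": [x, y],
             "vel": [random.uniform(-1, 1), random.uniform(-2, 0)],
             "life": 1.0} for _ in range(count)]

def update(particles, dt=0.1, gravity=9.8):
    """One Euler step; particles fade and are culled when life runs out."""
    alive = []
    for p in particles:
        p["vel"][1] += gravity * dt
        p["pos"][0] += p["vel"][0] * dt
        p["pos"][1] += p["vel"][1] * dt
        p["life"] -= dt
        if p["life"] > 0:
            alive.append(p)
    return alive

ps = burst(100.0, 200.0)
for _ in range(5):
    ps = update(ps)
```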