I know almost everything about OpenGL and real-time 3D graphics, and I am proud of that, but sometimes there are a few wee bits that I feel are missing.
So what better place than here to bring it up? Well, OK, I admit there are better places, but this is my place, and here I reign supreme.
1. Fix the damn z buffer
This problem is a two-parter; it's simple but annoying.
a) A 64-bit z buffer. It would reduce distant z-fighting; that isn't too much to ask for, is it?
b) Currently, graphics accelerators calculate the z value by interpolating the vertices' z values. This is fast and works well 99.99% of the time, but on the flip side it is the reason z-fighting appears in the first place.
Instead, use a standard plane equation for the z values. This would force the z values into submission and, in the process, reduce near-plane z-fighting.
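A minimal sketch of what I mean by the plane-equation approach (plain C++, not any real hardware's implementation): derive a*x + b*y + c*z = d from the triangle's three vertices and solve for z at each pixel. Since every coplanar triangle produces the same equation, coplanar surfaces can no longer disagree about depth.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: build a plane equation a*x + b*y + c*z = d from a
// triangle, then evaluate z directly at a pixel centre instead of edge-
// interpolating per-vertex z values.
struct Vec3 { double x, y, z; };
struct Plane { double a, b, c, d; };

Plane planeFromTriangle(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    // Plane normal = (p1 - p0) x (p2 - p0)
    Vec3 u{p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    Vec3 v{p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    double a = u.y * v.z - u.z * v.y;
    double b = u.z * v.x - u.x * v.z;
    double c = u.x * v.y - u.y * v.x;
    double d = a * p0.x + b * p0.y + c * p0.z;
    return {a, b, c, d};
}

// Solve a*x + b*y + c*z = d for z at pixel (x, y).
double planeZ(const Plane& pl, double x, double y) {
    return (pl.d - pl.a * x - pl.b * y) / pl.c;
}
```

Any two triangles lying in the same plane evaluate to bit-identical z at a shared pixel, which is exactly what per-triangle interpolation cannot guarantee.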
2. Virtual texturing.
Or, as John Carmack put it, full virtualisation of texture memory. It would fix the damn batch problem and give all graphics artists more freedom to work with, all in one swift blow.
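The core of the idea is one level of indirection, sketched below in plain C++ (the page size and the class are my own invention, not any shipping API): a huge virtual texture is split into fixed-size pages, and a page table maps each virtual page to a slot in a small physical cache, or reports a miss that the engine would service by streaming the page in.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical page-table indirection at the heart of virtual texturing.
constexpr uint32_t kPageSize = 128;        // texels per page side (assumption)

struct PhysicalSlot { uint32_t slotX, slotY; };  // slot in the cache texture

class PageTable {
public:
    void mapPage(uint32_t pageX, uint32_t pageY, PhysicalSlot slot) {
        table_[key(pageX, pageY)] = slot;
    }
    // Translate a virtual texel address to a cache-texture address;
    // returns false on a page fault (page not resident).
    bool translate(uint32_t vx, uint32_t vy,
                   uint32_t& px, uint32_t& py) const {
        auto it = table_.find(key(vx / kPageSize, vy / kPageSize));
        if (it == table_.end()) return false;  // fault: request stream-in
        px = it->second.slotX * kPageSize + vx % kPageSize;
        py = it->second.slotY * kPageSize + vy % kPageSize;
        return true;
    }
private:
    static uint64_t key(uint32_t x, uint32_t y) {
        return (uint64_t(x) << 32) | y;
    }
    std::unordered_map<uint64_t, PhysicalSlot> table_;
};
```

Because every texture lives in one virtual address space, materials no longer force state changes between draws, which is where the batch win comes from.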
3. Increased parallelism.
Real-time computer graphics is the champion of parallelism, but it is also its strongest opponent.
Everyone in the business knows that computer games rarely benefit much from multiple processors; this is because the render pipeline is horribly linear.
I would love to see a parallel-capable graphics API and graphics accelerator, but I just don't see that happening in the near future. Perhaps in 3-5 years, but not before that.
I know how it could be done, and I know the benefits of doing so, but they more or less laughed at me for suggesting anything like that on the OpenGL forums.
I know I am right on this one, so give me my damn parallelism before I hurt someone.
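One shape such an API could take, sketched here with plain C++ threads (everything below is invented for illustration; no current GL call works like this): each worker thread records its slice of the scene into a private command list, and only the final, ordered submission is serialised.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical parallel command recording: per-thread lists, one submit.
struct DrawCmd { int meshId; };

std::vector<DrawCmd> recordSlice(const std::vector<int>& meshes,
                                 size_t begin, size_t end) {
    std::vector<DrawCmd> list;
    for (size_t i = begin; i < end; ++i)
        list.push_back({meshes[i]});   // pure CPU work, no shared state
    return list;
}

std::vector<DrawCmd> recordParallel(const std::vector<int>& meshes,
                                    unsigned threads) {
    std::vector<std::vector<DrawCmd>> slices(threads);
    std::vector<std::thread> pool;
    size_t chunk = (meshes.size() + threads - 1) / threads;
    for (unsigned t = 0; t < threads; ++t) {
        size_t b = std::min(meshes.size(), size_t(t) * chunk);
        size_t e = std::min(meshes.size(), b + chunk);
        pool.emplace_back([&, t, b, e] {
            slices[t] = recordSlice(meshes, b, e);
        });
    }
    for (auto& th : pool) th.join();
    std::vector<DrawCmd> merged;       // single, ordered submission point
    for (auto& s : slices)
        merged.insert(merged.end(), s.begin(), s.end());
    return merged;
}
```

The recording scales with core count while the driver still sees one deterministic stream, which is the part the "the pipeline is linear" objection misses.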
4. Layered rendering.
I know, this one is a bit of a stretch, but it would make everything look so much nicer.
Let me explain what I mean.
In layered rendering you have special color and z buffers with 8 or more layers in them.
When a pixel is written to the buffer, a layer is chosen for it: if it is opaque, it is always clamped to the deepest layer; if it is transparent, it gets a layer in front of that, depending a little on depth.
If a pixel needs to occupy the space between layers (since they all have a z value), it pushes, and merges if needed, all other pixels out of the way.
If you understand what this means in terms of what it can do, you will also understand that it is somewhat of a wet dream for game makers.
But I admit, it is a bit of a stretch.
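The insertion rule above can be sketched for a single pixel in plain C++ (this is my own toy model, not real hardware): fragments stay sorted front to back, an opaque fragment culls everything behind it, and when the pixel runs out of layers the two deepest are blended together to make room.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical one-pixel layered buffer with up to kLayers fragments.
constexpr size_t kLayers = 8;

struct Fragment { float z; float rgba[4]; bool opaque; };

struct Pixel {
    std::vector<Fragment> layers;  // sorted by z, nearest first

    void insert(Fragment f) {
        // A fragment behind an existing opaque layer is fully occluded.
        for (const Fragment& l : layers)
            if (l.opaque && l.z <= f.z) return;
        // Insert at its depth, keeping front-to-back order.
        auto pos = std::lower_bound(layers.begin(), layers.end(), f,
            [](const Fragment& a, const Fragment& b) { return a.z < b.z; });
        pos = layers.insert(pos, f);
        // An opaque fragment is always the deepest layer: cull what's behind.
        if (f.opaque) layers.erase(pos + 1, layers.end());
        // Pixel full? Merge the two deepest layers with "over" compositing.
        while (layers.size() > kLayers) {
            Fragment& a = layers[layers.size() - 2];
            const Fragment& b = layers.back();
            float t = 1.0f - a.rgba[3];
            for (int i = 0; i < 4; ++i) a.rgba[i] += t * b.rgba[i];
            a.opaque = a.opaque || b.opaque;
            layers.pop_back();
        }
    }
};
```

Resolving the final color is then just compositing the surviving layers front to back, which is why order-independent transparency falls out of this for free.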
5. Raytracing.
It's time for a change, it's time to bring out the heavy artillery (and not just because I did that in the army): it's time to introduce raytracing as a realistic addition to real-time computer graphics. It can be done, it's relatively easy to implement in shader programs, and it will kick ass.
Not to mention the relative jump in visual quality games would get.
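To show why this maps so well onto shader programs, here is the core primitive test written shader-style in plain C++ (on the GPU the same few lines of arithmetic would run once per pixel; the function names are mine):

```cpp
#include <cassert>
#include <cmath>

// Shader-style ray/sphere intersection: a handful of dot products and one
// square root per test, which is exactly the kind of arithmetic fragment
// shaders are built for.
struct V3 { float x, y, z; };

float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Returns the nearest hit distance along the ray, or -1 on a miss.
// dir is assumed to be normalised.
float raySphere(V3 origin, V3 dir, V3 center, float radius) {
    V3 oc = sub(origin, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;          // ray misses the sphere
    float t = -b - std::sqrt(disc);         // nearest of the two roots
    return t >= 0.0f ? t : -1.0f;
}
```

Every pixel runs the same small kernel independently, so the technique is embarrassingly parallel, which is also why it suits graphics hardware.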
Well, that is that. If you know a member of the ARB or the core Nvidia dev team, let them know about this, please.