A bit more than a year ago I posted a sort of wish list for OpenGL, or computer graphics in general. A lot has happened since then, but then again not enough, so here is version 2.0 of that list; some things are off it and some things are new.
1. Z-buffer improvements.
Nothing has happened in this area besides one thing: Nvidia presented a sort of tip at this year's GDC for normalizing the otherwise logarithmically spaced z-buffer into something more linearly spaced. I haven't tested it yet, but I will.
According to them it's sort of a hack that has a few problems, so I still really wish we would get a 64-bit z-buffer, which would cause a shift that I will be talking about in point 2.
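To see why the standard z-buffer needs fixing at all, here is a minimal numeric sketch of how perspective depth is distributed (the near/far plane values are illustrative, not from the post):

```python
def depth_value(z_eye, n, f):
    # Standard perspective depth: maps eye-space z in [n, f] to [0, 1]
    # hyperbolically, so almost all precision clusters at the near plane.
    return (f / (f - n)) * (1.0 - n / z_eye)

n, f = 0.1, 1000.0
# A point halfway through the view volume in eye space...
d_mid = depth_value((n + f) / 2.0, n, f)
# ...already lands around 0.9999: the far half of the scene is squeezed
# into a tiny sliver of the representable depth range.
```

This is the distribution that linearization tricks (or simply much wider depth formats) try to repair.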
2. 64 or 128 bit floats.
Although we barely have 32-bit float textures, I really do feel we need to expand on this, at least internally in the shader units. It would definitely help with things like blurring shaders and extreme multisampling shaders.
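A quick sketch of the problem 32-bit floats cause for exactly those accumulation-heavy shaders; the sample count and value are illustrative, and float32 behaviour is emulated here with the standard `struct` round-trip trick:

```python
import struct

def to_f32(x):
    # Round a Python float (64-bit) down to 32-bit float precision.
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 100,000 small samples, as a big blur kernel or a
# multisample resolve might do.
acc32, acc64 = 0.0, 0.0
for _ in range(100_000):
    acc32 = to_f32(acc32 + to_f32(0.1))
    acc64 += 0.1

# The 32-bit accumulator visibly drifts from the true sum of 10000,
# while the 64-bit one stays correct to many decimal places.
```

With 64-bit (or wider) accumulation inside the shader units, this class of error largely disappears.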
3. Virtual texturing.
Now this is one of the areas where the most progress has been made: the new texture arrays and texture buffer objects really open up texturing. With these there is not really a limit on how many textures you can have; I think the limit is now about 2048, where it used to be 4 just a few months ago.
But I still feel more can be done. Longs Peak will take a step forward, but I want all limitations removed.
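One way virtual texturing can sidestep texture-count limits entirely is an indirection step per sample: a huge virtual texture is split into pages, and each lookup first finds the right page. A minimal sketch of that address computation (page size and virtual texture size are assumptions for illustration, not from the post):

```python
PAGE = 128  # page size in texels (illustrative)

def page_lookup(u, v, vt_size=4096):
    # Map a virtual texel coordinate (u, v in [0, 1)) to a page index
    # plus an in-page offset -- the indirection a virtual-texturing
    # scheme performs before the actual texel fetch.
    x, y = int(u * vt_size), int(v * vt_size)
    pages_per_row = vt_size // PAGE
    page = (y // PAGE) * pages_per_row + (x // PAGE)
    return page, (x % PAGE, y % PAGE)
```

Only the pages actually touched need to be resident, so the apparent texture budget becomes effectively unlimited.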
4. Blend shader.
This is something that came out of the many conversations I had with various industry people on opengl.org.
Basically, it is a fourth shader stage that comes after the fragment shader.
It takes the various bits of data generated by the fragment shader, plus whatever is in the current fragment of the render target, and outputs the result to that fragment; no texture reads are needed besides the render target. Most often it would be just a simple pass-through shader.
There are many new and cool things you could do with this, like the layered rendering I was talking about last time; coincidentally, that is why it's not on the list this time.
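The idea above can be sketched as a plain function over source and destination fragments; these function names and the RGBA-tuple convention are mine, purely for illustration:

```python
def blend_shader(src, dst):
    # Hypothetical fourth shader stage: combines the fragment shader's
    # output (src) with whatever the render target already holds (dst).
    # The common case is a plain pass-through.
    return src

def alpha_blend(src, dst):
    # The same programmable stage could express classic alpha blending
    # (src and dst are RGBA tuples, alpha in src[3]).
    a = src[3]
    return tuple(s * a + d * (1.0 - a) for s, d in zip(src[:3], dst[:3])) + (1.0,)
```

The point is that anything the fixed-function blend unit does today becomes just one possible program among many.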
5. Texture sampling.
I am actually pretty tired of mipmapping; it bugs me that the best hardware-supported texture sampling method is actually a hack. I really want some kind of high-quality, projection-centric Gaussian texture sampling, or at least something better than what we have now.
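For the flavor of what a Gaussian sampler would compute per lookup, here is a small sketch of building a normalized Gaussian weight grid over nearby texels (radius and sigma are illustrative parameters, not anything the hardware defines):

```python
import math

def gaussian_weights(radius, sigma):
    # Build a (2*radius+1)^2 grid of Gaussian weights and normalize it
    # so the weights sum to 1. A sampler could weight nearby texels
    # like this instead of relying on precomputed mip levels.
    w = {}
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            g = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
            w[(dx, dy)] = g
            total += g
    return {k: v / total for k, v in w.items()}
```

A projection-centric version would additionally stretch this kernel along the texture's projected footprint on screen, which is exactly what mipmapping only approximates.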
6. Raytracing.
Now I can't stress this enough: raytracing will truly bring a revolution in graphics. Well, OK, perhaps not a revolution, just another evolution, but a great one all the same.
And it wouldn't be that hard to implement. For one, it could be done in a shader: just place all the polygons to test in a texture buffer object and start testing. That wouldn't be all that efficient, though, so the next step would be some kind of specialized circuitry to make it faster.
I would suggest some kind of modified memory: a grid processor array of SPPUs (Single Purpose Processing Units). Each one contains the plane equations for one, or at most a few, polygons, plus a small and super-simple set of gates that tests its polygon against the incoming ray and returns the distance and polygon ID to a special router on the chip, which decides which polygon IDs to return to the card based on distance.
Done like that, you could literally fit thousands of these processors on a simple little chip, cutting the ray-test time down from something unmanageable to something that more resembles a texture lookup.
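Functionally, each SPPU plus the router amounts to a ray-versus-plane test followed by a nearest-hit reduction. A minimal software sketch of that behaviour (function names and the plane encoding are my assumptions; the real point is that the hardware would run the per-plane tests in parallel rather than in a loop):

```python
def ray_plane_t(origin, direction, plane):
    # One SPPU's job: plane stored as (nx, ny, nz, d) with n.p + d = 0;
    # returns the ray parameter t of the hit, or None if the ray is
    # parallel to the plane or the hit lies behind the origin.
    n = plane[:3]
    denom = sum(n[i] * direction[i] for i in range(3))
    if abs(denom) < 1e-9:
        return None
    t = -(sum(n[i] * origin[i] for i in range(3)) + plane[3]) / denom
    return t if t >= 0.0 else None

def nearest_hit(origin, direction, planes):
    # The "router": gather every unit's (distance, id) pair and keep
    # only the closest hit to hand back to the card.
    best = None
    for pid, pl in enumerate(planes):
        t = ray_plane_t(origin, direction, pl)
        if t is not None and (best is None or t < best[0]):
            best = (t, pid)
    return best
```

In hardware, the loop in `nearest_hit` becomes thousands of units answering at once, which is where the texture-lookup-like latency would come from.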
And it's not just graphics that will be affected by this; GPGPU physics will benefit too, and many more things besides. I couldn't really say what, because I don't know; it's all new to everybody.
Well, that's all for now; perhaps I will revisit this in a year or so.