OpenGL wish list 3.0

It’s been a while, and no, it’s not because of the computer; it’s because of time, and a lack of things to write about. Not this time, though: this is something I have wanted to write about all year.
In the beginning I was just collecting my thoughts, though I really should have written them down somewhere; then as the months progressed I had less and less time to think about it.
However, a few things lately caught my eye that made me rethink.

1. A while back Nvidia acquired Ageia, the physics chip maker. At first I thought, “great, now they will add a physics co-processor to every graphics card”.
No, they added it through CUDA, and I didn’t get it: although the GPU can do physics well, any use of the GPU cuts into your frame rate, and that is generally considered bad.
In essence, simulating more objects means you can render fewer objects.

2. Just recently Nvidia also acquired a company called RayScale, which specializes in a hybrid raster/raytrace renderer.
Not that such a renderer is anything new; LightWave has used one for years now. But for Nvidia to be interested makes me interested, as this is something I have been talking about for some time now.
I can only assume that this too will be implemented using CUDA, which made me wonder how and why.

3. And finally, the Inquirer leaked the numbers for the next set of Nvidia GFX cards; never mind the change in the numbering scheme, which suggests it’s not really just another GeForce.
If you look at the numbers you will see it has a pretty solid 602MHz core and 240 streaming processors clocked at 1296MHz. That’s a lot, and compared to earlier versions it starts smelling a lot more like the Cell chip to me.
Also, there is no mention of new features: no blend shader, no tessellator, nothing. There is even a question mark over whether it supports DX10.1, which it should, BTW; Nvidia is not that careless.

So it finally dawned on me what they were doing. Remember all those articles about Nvidia doing a CPU? Well, it’s not that far from the truth.
The next-gen GFX cards are going to be 100% GPGPU: no leftover fixed-function graphics components, just a massive number of sub-processors.
Thus all graphics will run in a low-level variant of CUDA, and the entire feature set is now software based.

It is in this light that I now write a partially abbreviated wish list; just read the previous ones to understand the reasoning behind it.

1. Double precision (64bit colors)
This affects not only internal sampling but hopefully also the z-buffer.
I have heard some rumblings that the new cards might support this.
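To see why more depth precision would matter, here is a little sketch (my own numbers, not how any particular GPU does it) of how a standard 32-bit float depth buffer squeezes distant surfaces together, while 64-bit keeps them apart:

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

def window_depth(z, near, far):
    """Standard perspective depth mapping of z in [near, far] to [0, 1]."""
    return (far / (far - near)) * (1.0 - near / z)

near, far = 0.1, 1000.0
d1 = window_depth(900.0, near, far)    # two surfaces 1mm apart,
d2 = window_depth(900.001, near, far)  # 900m from the camera

# At 64 bits the two depths are still distinct; after rounding to
# 32 bits they are separated by at most one float32 step near 1.0,
# which is where z-fighting comes from.
print(d2 - d1)
print(abs(to_f32(d2) - to_f32(d1)))
```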

2. Virtual texturing and enhanced texture sampling.
It’s still an issue, as explained in the last wish list, but I want two new add-ons.
a) Megatexture sampling: works like mipmapping but adds additional full-res layers on top of the base layer; however, they are tiled, so that the first mega map is tiled 2 times over each UV axis, the next one 4 times, and so on.
b) Sample region: lets you use only a small part of a texture in the same way you would a normal texture.
Both could be done with something I will mention below.
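Both add-ons boil down to simple UV arithmetic; here is a minimal sketch of the two remappings (the function names and layout are my own assumptions, not any actual API):

```python
def mega_uv(u, v, layer):
    """UV for a megatexture layer: layer 0 is the base, layer n is
    tiled 2**n times over each UV axis (per the scheme above)."""
    scale = 2 ** layer
    return (u * scale) % 1.0, (v * scale) % 1.0

def region_uv(u, v, origin, size):
    """Sample region: remap normalized UVs into a sub-rectangle of the
    texture, so a small part of it behaves like a full texture."""
    return (origin[0] + (u % 1.0) * size[0],
            origin[1] + (v % 1.0) * size[1])

# Layer 1 tiles twice per axis, so u = 0.75 lands halfway into tile 2.
print(mega_uv(0.75, 0.25, 1))
# A quarter-size region starting at (0.5, 0.5) of the atlas.
print(region_uv(0.5, 0.5, (0.5, 0.5), (0.25, 0.25)))
```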

3. Blend shader, texture shader, custom shader
Blend shaders are still very much needed, and so would texture sampling shaders be, as I like to put my own touch on texture sampling: instead of the usual trilinear 4+4 pattern, why not a 9+4 pattern, or a 16+0+4?
We all witnessed what happened when we went from unfiltered to linear to bilinear to trilinear, so I see much potential in using some extra cycles for this, especially as games get older and we have enough spare juice to run them at 1000 FPS.
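To make the “4 taps vs 9 taps” idea concrete, here is a CPU sketch of the usual 2x2 bilinear footprint next to a 9-tap, tent-weighted 3x3 one. The 3x3 filter is my own invention, purely to illustrate what a sampling shader could compute:

```python
def texel(img, x, y):
    """Clamp-to-edge texel fetch from a 2D list-of-lists image."""
    h, w = len(img), len(img[0])
    return img[max(0, min(h - 1, y))][max(0, min(w - 1, x))]

def sample_2x2(img, u, v):
    """The usual bilinear footprint: 4 taps, weights from the fractions."""
    x, y = int(u), int(v)
    fx, fy = u - x, v - y
    return (texel(img, x, y) * (1 - fx) * (1 - fy) +
            texel(img, x + 1, y) * fx * (1 - fy) +
            texel(img, x, y + 1) * (1 - fx) * fy +
            texel(img, x + 1, y + 1) * fx * fy)

def sample_3x3(img, u, v):
    """A 9-tap footprint: tent weights over a 3x3 neighbourhood,
    normalized so the weights always sum to one."""
    x, y = round(u), round(v)
    total, weight_sum = 0.0, 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            w = (max(0.0, 1.5 - abs(u - (x + dx))) *
                 max(0.0, 1.5 - abs(v - (y + dy))))
            total += texel(img, x + dx, y + dy) * w
            weight_sum += w
    return total / weight_sum

# Sanity check: any sane filter must reproduce a constant image exactly.
flat = [[0.5] * 4 for _ in range(4)]
print(sample_2x2(flat, 1.5, 1.5), sample_3x3(flat, 1.5, 1.5))
```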

And while we are adding shaders, let’s add a custom shader.
As all shader types now run on the same sub-processors, with the same shader code and so on, this would be pretty easy: just define the inputs and outputs, then stick it in the pipeline or use it as a shader function.
In fact, an adjustable pipeline could be beneficial when you do not need certain stages, like the vertex and geometry shaders when you are doing full-screen quads in screen-space rendering.
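A rough sketch of what an adjustable pipeline could look like from the software side, assuming the stages really are just interchangeable functions (every name here is hypothetical):

```python
def vertex_stage(data):
    """Stand-in vertex shader: just tags the data it processed."""
    return data + ["vertex"]

def geometry_stage(data):
    """Stand-in geometry shader."""
    return data + ["geometry"]

def fragment_stage(data):
    """Stand-in fragment shader."""
    return data + ["fragment"]

def run_pipeline(stages, data):
    """Run whatever stages are plugged in, in order."""
    for stage in stages:
        data = stage(data)
    return data

# Full pipeline for normal geometry...
full = run_pipeline([vertex_stage, geometry_stage, fragment_stage], [])
# ...and a trimmed one for a full-screen quad, skipping the stages
# that do nothing useful in screen space.
quad = run_pipeline([fragment_stage], [])
print(full, quad)
```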

4. Object shader
This shader would handle some simple rendering tasks like instancing and vertex skinning; not the actual computation itself, but it would initiate rendering of polygons from VBOs (or completely new ones, for that matter), adjust buffer objects and other render data, switch texture targets, render targets, shaders, and so on.
If you wanted, it could even run the actual render thread.
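In spirit, the object shader would be a small program that emits render commands rather than shading anything itself. A rough CPU-side sketch of the idea, with entirely made-up command names:

```python
def object_shader(instances):
    """Emit a command stream: bind state once, then issue one draw per
    instance. A real object shader would run on the GPU; this only
    illustrates the shape of the work."""
    commands = [("bind_texture", "diffuse"), ("bind_shader", "skinning")]
    for inst in instances:
        commands.append(("draw_vbo", inst["vbo"], inst["transform"]))
    return commands

def execute(commands):
    """Stand-in render thread: count the draw calls it would issue."""
    return sum(1 for cmd in commands if cmd[0] == "draw_vbo")

# One object-shader invocation instancing the same VBO twice.
stream = object_shader([{"vbo": 1, "transform": "A"},
                        {"vbo": 1, "transform": "B"}])
print(execute(stream))
```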

5. Raytracing
Still important; it’s literally the next big thing in graphics, and it looks closer, much closer than ever, now that Nvidia has acquired that raytracing company.
Frankly I don’t care about the speed, as I know it won’t be any good for some time; I just want to use it.
And it could be just the thing to fix and augment deferred rendering.
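As a taste of what a raytracing hook would expose, here is the classic ray-sphere intersection test, the kind of primitive any raytracer (hybrid or not) is built on; just a sketch, not anyone’s actual API:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.
    Assumes `direction` is normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# Unit sphere at z = 5, ray straight down the z axis: hits at t = 4.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
# The same ray offset sideways misses.
print(ray_sphere((0, 2, 0), (0, 0, 1), (0, 0, 5), 1.0))
```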

Stuff that didn’t make the list – and probably never will
Parallel multi-viewport rendering: it would be useful when rendering depth cubemaps for lights, but it is inherently complicated. Maybe it could be done with the custom shaders, but it wouldn’t be any faster than doing it manually.
Layered buffers: useful for the transparency problem deferred rendering has, but they would increase the memory usage of the rendering buffers from about 80MB to around 640MB (and that probably disregards the extra buffers needed to fix the blending problems). Besides, adding in the transparent surfaces with regular forward rendering afterwards wouldn’t be much of a problem if you keep the number of transparent surfaces down.
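The 80MB to 640MB jump is easy to reproduce with back-of-the-envelope numbers. Assuming, say, a 1280x1024 G-buffer with four RGBA32F render targets and eight transparency layers (my assumptions, one plausible setup among many):

```python
width, height = 1280, 1024
targets = 4           # e.g. position, normal, albedo, material
bytes_per_texel = 16  # RGBA, 32-bit float per channel
layers = 8            # depth-peeled layers for transparency

single = width * height * targets * bytes_per_texel
layered = single * layers
print(single // 2**20, "MB ->", layered // 2**20, "MB")  # 80 MB -> 640 MB
```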

Either way, that’s it for this time. I will come back with another wish list in a year or so; basically, whenever I know more and have something new to add.
