
Running OpenGL 3.0 – part 1

Welcome to a series I have planned on OpenGL 3.0, specifically the forward compatible mode, or in layman’s terms, using OpenGL without all the stuff that got cut.
It’s quite a different beast to work with. Even I have problems getting my head around it, since so many things that used to go on every other line have to be done differently; I still find myself adding a glTranslatef(…. only to suddenly stop and think “oh, right, that doesn’t work anymore”.

It’s also a bit harder because you have to use shaders or nothing gets drawn, and you can’t just use a VBO like before; instead you have to bind it to an input variable in the shader.
In short, everything is different, but I will try to guide you through the basics. Look at it from the bright side: you now have no more excuse not to code the way it should be done.
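
As a taste of what that looks like, here is a minimal sketch of feeding a VBO to a shader input; the vertices array, the program handle and the "in_position" attribute name are made-up placeholders, not code from this series.

// minimal sketch (assumed names): a VBO is tied to a generic vertex attribute,
// not to glVertexPointer, when the deprecated paths are gone
GLfloat vertices[] = { 0.0f, 0.0f, 0.0f,  1.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f };

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// "program" is a previously linked shader program, "in_position" a vertex shader input
GLint loc = glGetAttribLocation(program, "in_position");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, 0);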

So what got cut? Well, quite a bit. Take a look at http://www.opengl.org/documentation/specs/ , download the 3.0 specification and check the section called “The Deprecation Model”, but here are some highlights.

Vertex array objects

With OpenGL 3 came a few cool features. Most of them are pretty small and quick to learn, and VAOs are among these. I didn’t really want to write a full fledged tutorial about them before, but seeing as I’m changing the format to promote shorter, more article-like posts, it seemed like as good a place as any to start.

There hasn’t been that much written about the vertex_array_object extension, and there is some confusion about where and how to use it.
Simply put, it couldn’t be simpler: it’s used to make the daily life of using VBOs easier and the code prettier. Previously you had to call at least these three functions for every buffer just to set up rendering (a short sketch follows the list below).

glEnableClientState
glBindBuffer
glTexCoordPointer
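
A minimal sketch of the idea, assuming a compatibility context where the classic client arrays are still available and a buffer object named vbo already exists: the setup calls are recorded into a VAO once, and at draw time a single bind replays them.

// record the setup once
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);           // vbo is assumed to exist already
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, 0);

glBindVertexArray(0);

// at draw time the three calls collapse into a single bind
glBindVertexArray(vao);
// ...draw...
glBindVertexArray(0);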

PCF

In the previous tutorial we explored depth shadow maps in all their aliasing glory.
I showed you how to fix the steep-angle artifacts by using the diffuse term, and the shadow popping by adding a bias to the shadows.
In this lesson we are going to implement something called Percentage Closer Filtering, or PCF as it is more commonly called; the name doesn’t exactly reveal what it does.
It is in fact a way to multi-sample the shadows, and it is pretty simple, so simple that I am not going to supply you with anything other than the new shader file; the rest is exactly the same as tutorial 03a.

1. Offset the first two values of the texture coordinate by a really small amount.
2. Make a shadow test against those coordinates.
3. Repeat 1 and 2 as many times as needed.
4. Take the collective result from the shadow tests and divide it by the number of samples you just did.

Let’s demonstrate this with some code.
First we set up some variables and load the other textures besides the shadow map.
Since we are going to do 25 samples we want to eliminate as much as possible from the loop; that is why I added the inverted bias to gl_TexCoord[2].z instead of to the shadow value.


float blur_spread[5];
blur_spread[0] = -0.003;
blur_spread[1] = -0.001;
blur_spread[2] = 0.000;
blur_spread[3] = 0.001;
blur_spread[4] = 0.003;

float samples=1.0/25.0;

gl_TexCoord[2] = gl_TexCoord[2]/gl_TexCoord[2].w;
gl_TexCoord[2]=(gl_TexCoord[2]+ 1.0) * 0.5;

gl_TexCoord[2].z -=0.005;
vec4 base = texture2D(texunit0, gl_TexCoord[0].xy);
vec3 norm = texture2D(texunit1, gl_TexCoord[0].xy).xyz*2.0-1.0;

float shade=0.0;
float shadow=0.0;

int x=0;
int y=0;

Next we do the sampling. Note that because a division is more costly than an add, we pre-divided the samples, so instead of adding one for each sample we add 0.04; this saves us the trouble of having to divide at runtime.


for(x=0;x<5;x++)
{
    for(y=0;y<5;y++)
    {
        // texture2D returns a vec4, so take the depth component
        shadow = texture2D(texunit2, gl_TexCoord[2].xy + vec2(blur_spread[x], blur_spread[y])).z;
        if(shadow > gl_TexCoord[2].z)
            shade += samples;
    }
}

Great, now all we have to do is include the rest and we are done.


norm = normalize(gl_NormalMatrix * norm);
float fresnel =max((norm.z-0.6)*-1.0,0.0);
float diffuse = max(dot(lightVec, norm),0.0);
float specular = max(dot(reflect(lightVec,norm), viewVec), 0.0)*1.7;
specular=pow(specular,8.0);
shade*=diffuse;
gl_FragColor = (base* shade)+(vec4(0.0,0.1,0.3,0.0)*fresnel)+
(vec4(0.5,0.5,0.4,0.0)*specular*shade);

You can now begin to play around with the numbers. Increasing the values in blur_spread will make it a bit more blurry, but then other artifacts show up; these can be fixed by fiddling with the bias parameters, but ultimately PCF is just a hack and won’t work every time.
I originally planned to include a version that changed the blurriness according to the distance from the shadow casting surface. It looked pretty good, but unfortunately not good enough in all situations. It was done by getting the distance between the fragment and the shadow map depth value and using that to manipulate the values in blur_spread.
It would have worked if it weren’t for those meddling kids (read: artifacts).
Test and see if you can do it.

Download the new shader file for this tutorial; to get the rest, download the previous tutorial.

Depth Shadow maps

Shadows are always nice, but they can sometimes be a hassle to implement. I am going to show you the simplest way to implement them, a method called projected depth shadow maps.
I am also going to bring up a way to create some nice shadow and light effects.

But first some theory. What we are going to do is:
1. Render the depth to a texture; I will use an FBO here since they are nice and easy to use.
2. Compare this map with the distance from the light source to what I am going to render.
3. Finally, mask all the artifacts, and believe me, there are plenty of them.

Now the real problem here is the texture projection, but fortunately OpenGL and GLSL have some tricks up their sleeves.
First you make two matrices for the light, one projection and one modelview matrix; we do the same with the rendering (camera) matrices to ensure we get the same result every time.
Next we multiply the light projection and light modelview matrices into a temp matrix.
Then we combine that matrix with the inverted camera matrix and call the result the texture matrix.

When we do the final rendering we then need to feed OpenGL the new texture matrix. Normally this is done with glTexGen, but we are going to take a little shortcut by uploading it directly with glLoadMatrixf.
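
As a compact preview of what the full code further down does (using the two helper functions and the matrix arrays introduced below), the whole thing boils down to:

// combine the light matrices, cancel out the camera, and upload the result
Combine_Matrix4(lightViewMatrix, lightProjectionMatrix, tempa);
Inverse_Matrix4(cameraViewMatrix, inverted);
Combine_Matrix4(inverted, tempa, textureMatrix);

glMatrixMode(GL_TEXTURE);      // the glTexGen shortcut
glLoadMatrixf(textureMatrix);
glMatrixMode(GL_MODELVIEW);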

Then in the shader we use a simple little bit of code to build the new texture coordinates
gl_TexCoord[2] = gl_TextureMatrix[0]*gl_ModelViewMatrix*gl_Vertex; // in the vertex shader

gl_TexCoord[2] = gl_TexCoord[2]/gl_TexCoord[2].w; // fragment shader
gl_TexCoord[2]=(gl_TexCoord[2]+ 1.0) * 0.5;

Now you only need to use this texture coordinate, then compare the resulting texture lookup with the third component (z) of the texture coordinate to determine whether the fragment is in shadow or not.
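
In shader terms the test boils down to something like this sketch (the real fragment shader appears further down):

float mapDepth = texture2D(texunit2, gl_TexCoord[2].xy).z;        // depth stored in the shadow map
float lit = (mapDepth + 0.005) < gl_TexCoord[2].z ? 0.0 : 1.0;    // 0.005 is the bias added later; 0.0 = shadowed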

Sounds simple, so let’s get cracking.

First of all, this lesson is based on lesson 2c plus the FBO parts of lesson 1, so all the helper funcs are already there; beyond that we need two more functions.


void Combine_Matrix4(float MatrixA[16], float MatrixB[16], float *retM)
{
    float NewMatrix[16];
    int i;

    for(i = 0; i < 4; i++){ //Cycle through each vector of first matrix.
        NewMatrix[i*4]   = MatrixA[i*4] * MatrixB[0] + MatrixA[i*4+1] * MatrixB[4] + MatrixA[i*4+2] * MatrixB[8]  + MatrixA[i*4+3] * MatrixB[12];
        NewMatrix[i*4+1] = MatrixA[i*4] * MatrixB[1] + MatrixA[i*4+1] * MatrixB[5] + MatrixA[i*4+2] * MatrixB[9]  + MatrixA[i*4+3] * MatrixB[13];
        NewMatrix[i*4+2] = MatrixA[i*4] * MatrixB[2] + MatrixA[i*4+1] * MatrixB[6] + MatrixA[i*4+2] * MatrixB[10] + MatrixA[i*4+3] * MatrixB[14];
        NewMatrix[i*4+3] = MatrixA[i*4] * MatrixB[3] + MatrixA[i*4+1] * MatrixB[7] + MatrixA[i*4+2] * MatrixB[11] + MatrixA[i*4+3] * MatrixB[15];
    }
    memcpy(retM, NewMatrix, 64); // 16 floats = 64 bytes
}

void Inverse_Matrix4(float m[16], float *ret)
{
    float inv[16]; // the inverse will go here

    // this only works for rigid-body matrices (rotation + translation):
    // transpose the upper 3x3 rotation part...
    inv[0] = m[0];
    inv[1] = m[4];
    inv[2] = m[8];
    inv[4] = m[1];
    inv[5] = m[5];
    inv[6] = m[9];
    inv[8] = m[2];
    inv[9] = m[6];
    inv[10] = m[10];

    // ...and rotate the negated translation by it
    inv[12] = inv[0]*-m[12] + inv[4]*-m[13] + inv[8]*-m[14];
    inv[13] = inv[1]*-m[12] + inv[5]*-m[13] + inv[9]*-m[14];
    inv[14] = inv[2]*-m[12] + inv[6]*-m[13] + inv[10]*-m[14];

    inv[3] = 0.0f;
    inv[7] = 0.0f;
    inv[11] = 0.0f;
    inv[15] = 1.0f;

    memcpy(ret, inv, 64); // 16 floats = 64 bytes
}

These will help with some of the matrix math; once you start making more advanced stuff you should put all of this in a proper matrix class.

Next we set up the fbo


glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 2048, 2048, 0,
GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_DEPTH_ATTACHMENT_EXT,
GL_TEXTURE_2D, depth_tex, 0);

glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,GL_RGBA,2048, 2048);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,
GL_RENDERBUFFER_EXT, color_rb);

Then in the update function we need to build all five matrices; these matrices are really just float[16] arrays.


glPushMatrix();

glLoadIdentity();
gluPerspective(45.0f, (float)800/600, 10.0f, 1000.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, cameraProjectionMatrix);

glLoadIdentity();
gluLookAt(0, 17, 46,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
glRotatef(angle,0.0f,1.0f,0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, cameraViewMatrix);

glLoadIdentity();
gluPerspective(65.0f, 1.0f, 25.0f, 200.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, lightProjectionMatrix);

glLoadIdentity();
gluLookAt( lpos[0], lpos[1], lpos[2],
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
glRotatef(angle,0.0f,1.0f,0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, lightViewMatrix);

glPopMatrix();

float tempa[16];
float inverted[16];
Inverse_Matrix4(cameraViewMatrix,inverted);

Combine_Matrix4(lightViewMatrix,lightProjectionMatrix, tempa);
Combine_Matrix4(inverted,tempa, textureMatrix);

Notice that we invert the cameraViewMatrix and combine it with the combined light matrix. We do this to cancel out the camera movement; it moves the texture matrix from world space to camera space, and it’s what allows us to move things about without everything getting all “funky”.
Next we have the RenderPlane function; I won’t explain it, but it draws a single quad, super simple.
Then we start rendering, first the light pass. It’s simple enough, but instead of the usual glTranslatef and glRotatef we load our premade matrices.


glViewport (0, 0, 2048, 2048);
glClearDepth (1.0f);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
checkfbo();
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity ();

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(lightProjectionMatrix);

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(lightViewMatrix);

glColor3f(1,1,1);

cv90_render();

Then we set up a few things


glViewport (0, 0, 800, 600);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glClearDepth (1.0f);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity ();

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(cameraProjectionMatrix);

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(textureMatrix);

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(cameraViewMatrix);
glColor3f(1,1,1);

if (useshader) glUseProgramObjectARB(ProgramObject);

sendUniform1i("texunit0", 0);
sendUniform1i("texunit1", 1);
sendUniform1i("texunit2", 2);
sendUniform3f("lpos", lpos);

Now comes the fun bit. Just note that we are not doing anything funny here; it’s a simple setup of multitexturing and then the rendering.


glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,tex1);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,tex2);

glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,depth_tex);

cv90_render();

//switch to the ground texture
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D,tex3);
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D,tex4);

RenderPlane();

glActiveTextureARB(GL_TEXTURE2);
glDisable(GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);

if (useshader) glUseProgramObjectARB(0);

glFlush ();

Ok on to the shader parts.

Vertex shader


uniform sampler2D texunit0;
uniform sampler2D texunit1;
uniform vec3 lpos;

varying vec4 pos;
varying vec3 normal;
varying vec3 lightVec;
varying vec3 viewVec;

void main( void )
{
pos= gl_ModelViewProjectionMatrix * gl_Vertex;
normal = normalize(gl_NormalMatrix * gl_Normal);
lightVec = normalize(lpos - pos.xyz);
viewVec = vec3 (normalize(- (gl_ModelViewProjectionMatrix *gl_Vertex)));

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord1;
gl_TexCoord[2] = gl_TextureMatrix[0]*gl_ModelViewMatrix*gl_Vertex;
}

Fragment shader


uniform sampler2D texunit0;
uniform sampler2D texunit1;
uniform sampler2D texunit2;
uniform vec3 lpos;

varying vec4 pos;
varying vec3 normal;
varying vec3 lightVec;
varying vec3 viewVec;

void main( void )
{
gl_TexCoord[2] = gl_TexCoord[2]/gl_TexCoord[2].w;

gl_TexCoord[2]=(gl_TexCoord[2]+ 1.0) * 0.5;

vec4 base = texture2D(texunit0, gl_TexCoord[0].xy);
vec3 norm = texture2D(texunit1, gl_TexCoord[0].xy).xyz*2.0-1.0;
vec4 shadow = texture2D(texunit2,gl_TexCoord[2].xy);

norm = normalize(gl_NormalMatrix * norm);

float fresnel =max((norm.z-0.6)*-1.0,0.0);

float diffuse = max(dot(lightVec, norm),0.0);

float specular = max(dot(reflect(lightVec,norm), viewVec), 0.0)*1.7;

specular=pow(specular,8.0);

float shade = 1.0;

if((shadow.z + 0.005) < gl_TexCoord[2].z)
shade = 0.0;
else
shade = diffuse;

gl_FragColor = (base* shade)+ (vec4(0.0,0.1,0.3,0.0)*fresnel) +(vec4(0.5,0.5,0.4,0.0)*specular*shade);
}

And that is it. Shadow mapping will produce a lot of artifacts; one of them can be hidden by always multiplying the shadow by the diffuse term, as we did above.
I will continue to clear out some of them in coming tuts.

Download this tutorial for MSVCPP 6.0
Note: this does not include the updated code, so you need to change it before using.

Per pixel lighting

Today we will talk about per pixel lighting, or fragment lighting as it is more correctly called today.
Specifically we are going to add diffuse, specular and Fresnel reflection. Now, it’s not normal to add the Fresnel term, but I think it’s interesting to learn and here it looks kinda good too.
This is mostly a shader tutorial, since the only C++ code change from 2b is that I now load a normal texture instead of the ambocc texture (that one is baked into the diffuse texture map using Photoshop). The other thing I changed is that I now push the light position to the shader using the sendUniform3f command (lpos in the shaders). You can do it any way you want, and it is preferable to use the built-in constants available; I didn’t, but that’s only because I didn’t want to write more code.

Now, to do fragment lighting you need some stuff from the vertex shader, specifically the position of the fragment, the vector to the viewer and the vector to the light. I have included a normal too, but since we are going to use normal mapping it will not be needed.

uniform sampler2D texunit0;
uniform sampler2D texunit1;
uniform vec3 lpos;

varying vec4 pos;
varying vec3 normal;
varying vec3 lightVec;
varying vec3 viewVec;

void main( void )
{
pos= gl_ModelViewProjectionMatrix * gl_Vertex;
normal = normalize(gl_NormalMatrix * gl_Normal);
lightVec = normalize(lpos - pos.xyz);
viewVec = vec3 (normalize(- (gl_ModelViewProjectionMatrix *gl_Vertex)));

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord1;
}

Now to the fragment shader. It’s a bit complex and has many parts, so I will split it up a little.
First the start bit; it’s identical to the vertex shader’s start bit.

uniform sampler2D texunit0;
uniform sampler2D texunit1;
uniform vec3 lpos;

varying vec4 pos;
varying vec3 normal;
varying vec3 lightVec;
varying vec3 viewVec;

void main( void )
{

Next we read from the textures. Please note what we are doing to the normal texture: a normal needs to be able to hold negative values, while texture values are stored in the 0..1 range, so we just multiply by two and subtract one. Simple.

vec4 base = texture2D(texunit0, gl_TexCoord[0].xy);
vec3 norm = texture2D(texunit1, gl_TexCoord[0].xy).xyz*2.0-1.0;

Next we need to treat this normal the same way as we did the vertex normal in the vertex shader, so that as the tank rotates so does the normal.

norm = normalize(gl_NormalMatrix * norm);

After this we can begin to compute the different lighting components. First we do the Fresnel term; the Fresnel term is a value that basically controls reflections at grazing angles of incidence, and it looks great on round shiny objects.

float fresnel = max((norm.z-0.4)*-1.0, 0.0);

The diffuse term is simple: it is just the dot product of the light vector and the normal.

float diffuse = dot(-lightVec, norm);

Specular is a bit more complicated, and there is no single correct way of doing it; specular highlights are basically a blurred reflection of the light source.

float specular = max(dot(reflect(-lightVec,norm), viewVec), 0.0)*1.1;
specular=pow(specular,8.0);

and finally we bring it all together

gl_FragColor = (base * diffuse) +(vec4(0.0,0.5,1,0.0)*fresnel) +(vec4(1.0,1.0,0.8,0.0)*specular);

}

Download this tutorial for MSVCPP 6.0

Textures and shaders

This tutorial will be short; basically we are going to add texturing to the shader equation.
So what do we need for this one?
First we need to be able to send uniforms to the shaders. Uniforms are constants that you set, and they can control many different aspects of the shader; however, unlike the varying variables, they do not interpolate across the polygons.
In this tut we will use them to send the texture unit indices for the samplers, which is sort of stupid since it’s just a number like 0 or 1, but it has to be done.
I will use this function for the job; I have included several more of these for other types of variables in the code.

void sendUniform1i(char var[60], int v)
{
GLint loc = glGetUniformLocationARB(ProgramObject, var);
if (loc==-1) return; // can't find variable

glUniform1iARB(loc, v);
}

Then we need to load textures. Fortunately I added a texture loading function; it’s a nifty little thing (ok, it’s a jumbled mess of crappy code, but it does work, most of the time).
Anyway, any texture loading function will do, it’s no biggie. I load the textures like this.

tex1=LoadGLTextures("cv90base",2);
tex2=LoadGLTextures("cv90ambientocc",2);

The “2” tells the loader to use trilinear filtering with anisotropic filtering.
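
The internals of LoadGLTextures aren’t shown here, but as a hedged sketch, that mode roughly corresponds to texture parameters like these (the anisotropy constants come from GL_EXT_texture_filter_anisotropic):

glBindTexture(GL_TEXTURE_2D, tex1);

// trilinear: linear filtering within and between mipmap levels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// anisotropic filtering, using the maximum level the hardware supports
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);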

Next up is the multitexturing code; it’s part of the rendering code, so let’s just see it all at once.


if (useshader) glUseProgramObjectARB(ProgramObject);
sendUniform1i("texunit0", 0);
sendUniform1i("texunit1", 1);

glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,tex1);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,tex2);

cv90_render();

glActiveTextureARB(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
glActiveTextureARB(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);
if (useshader) glUseProgramObjectARB(0);

Finally the shader code itself (we only change the fragment shader code)

uniform sampler2D texunit0;
uniform sampler2D texunit1;
varying vec4 pos;

void main( void )
{

vec4 base = texture2D(texunit0, gl_TexCoord[0].xy);
vec4 ambi = texture2D(texunit1, gl_TexCoord[0].xy);

gl_FragColor = base * ambi;
}

This shader reads from both textures and then multiplies them together, simple as always. The next one will be more complicated as we add per pixel lighting, wohoo.
Download this tutorial for MSVCPP 6.0

Shaders

Yes, shading, the holy grail of real time 3D computer graphics. Except they are not actually called shaders; programs is the correct term, as in vertex programs instead of vertex shaders and fragment programs instead of pixel shaders, and FYI I am not gonna care about the correct terminology in this tut.
Now we are going to take a first step in learning how to use GLSL shaders. We will base this tut on tut 1 with all the FBO stuff removed, then we are gonna add five functions to help us (oops, I lied, I mean they are going to do all the work for us).
The first function replaces the old draw_cube() function. We don’t really need it, since in this tut we could still just use the cube, but we are going to need it in the various other shader tuts.
What does it do, you ask? Well, its name is cv90_render() and it resides in a file called cv90.cpp, and it basically contains a pretty decent mesh of a Swedish built CV9040c. That’s right, it’s a tank (well, technically a cross between an APC and a light tank). This particular 3d model was made for FHS in a project called Foreign ground, which is set in Liberia; hence, in the later tutorials where we add texturing, you will see it has the traditional UN white colors instead of the slightly cooler M90 pattern.
What are the contents then? Well, first you have six huge arrays containing all the data, which I exported using a program called crossroads3d, old but still useful.
The other part is this function to render it all; it’s pretty straightforward.


void cv90_render(void)
{
    int faces = sizeof(cv90_face)/sizeof(long);
    int i = 0, p = 1;

    while(i < faces)
    {
        glBegin(GL_POLYGON);
        while(p)
        {
            if(cv90_face[i] == -1) p = 0;  // -1 marks the end of a polygon
            else
            {
                glTexCoord2f(cv90_uv[cv90_uvface[i]].x, cv90_uv[cv90_uvface[i]].y);
                glNormal3f(cv90_normal[cv90_nface[i]].x, cv90_normal[cv90_nface[i]].y,
                           cv90_normal[cv90_nface[i]].z);
                glVertex3f(cv90_vertex[cv90_face[i]].x, cv90_vertex[cv90_face[i]].y,
                           cv90_vertex[cv90_face[i]].z);
            }
            i++;
        }
        p = 1;
        glEnd();
    }
}

Then we need a few variables to store our shaders in


GLhandleARB ProgramObject;
GLhandleARB VertexShaderObject;
GLhandleARB FragmentShaderObject;
char* VertexShaderSource;
char* FragmentShaderSource;
unsigned int useshader;

The second one is just a general purpose function for getting a file’s length; it’s used in the following two functions.

unsigned long getFileLength(ifstream& file)
{
    if(!file.good()) return 0;

    file.seekg(0, ios::end);
    unsigned long len = file.tellg();
    file.seekg(0, ios::beg);   // rewind so the caller can read from the start

    return len;
}

The third and fourth functions load the vertex and fragment program source code from a file. They are both identical save for the different variables used, so I am only gonna show loadVShade; you can do loadFShade yourself (or peek at the sketch just after it).

void loadVShade(char filename[160])
{
ifstream file;
file.open(filename, ios::in);
if(!file) {useshader=0; return;}

unsigned long len = getFileLength(file);
if (len==0) {useshader=0; return;}
VertexShaderSource = new char[len+1];

if (VertexShaderSource == 0) {useshader=0; return;}
VertexShaderSource[len] = 0;

unsigned int i=0;
while (file.good())
{
VertexShaderSource[i++] = file.get();
if (i>len) i=len;
}
i--;
VertexShaderSource[i] = 0;
file.close();
return;
}
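
For reference, a sketch of what the mirrored loadFShade looks like, simply swapping in FragmentShaderSource:

void loadFShade(char filename[160])
{
    ifstream file;
    file.open(filename, ios::in);
    if(!file) {useshader=0; return;}

    unsigned long len = getFileLength(file);
    if (len==0) {useshader=0; return;}
    FragmentShaderSource = new char[len+1];

    if (FragmentShaderSource == 0) {useshader=0; return;}
    FragmentShaderSource[len] = 0;

    unsigned int i=0;
    while (file.good())
    {
        FragmentShaderSource[i++] = file.get();
        if (i>len) i=len;
    }
    i--;
    FragmentShaderSource[i] = 0;
    file.close();
    return;
}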

And now to the most important one, so important that I have to explain it in segments.

First create all the shader objects we need + other variables

void compileShaders(void)
{
int compiled = 0;
int linked = 0;
char str[4096];

useshader=1;

ProgramObject = glCreateProgramObjectARB();
VertexShaderObject = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
FragmentShaderObject = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);

Next, transfer the shader source to the shader objects. After this is done we don’t need the source anymore, so let’s delete the memory containing it.

glShaderSourceARB(VertexShaderObject, 1, (const char **)&VertexShaderSource, NULL);
glShaderSourceARB(FragmentShaderObject, 1, (const char **)&FragmentShaderSource, NULL);

delete[] VertexShaderSource;
delete[] FragmentShaderSource;

This step compiles each shader source independently and then checks for compilation errors

glCompileShaderARB(VertexShaderObject);
glGetObjectParameterivARB(VertexShaderObject,
GL_OBJECT_COMPILE_STATUS_ARB, &compiled);

if (!compiled)
{
glGetInfoLogARB( VertexShaderObject, sizeof(str), NULL, str );
MessageBox( NULL, str, "vertex Shader Compile Error", MB_OK|MB_ICONEXCLAMATION );
useshader=0;
return;
}

glCompileShaderARB(FragmentShaderObject);
glGetObjectParameterivARB(FragmentShaderObject,
GL_OBJECT_COMPILE_STATUS_ARB, &compiled);

if (!compiled)
{
glGetInfoLogARB( FragmentShaderObject, sizeof(str), NULL, str );
MessageBox( NULL, str, "Fragment Shader Compile Error",
MB_OK|MB_ICONEXCLAMATION );
useshader=0;
return;
}

When it’s all compiled we attach the shader objects to the program object; the program object is what we actually call when we want to “bind” a shader.
In this step we can also delete the shader objects; we wouldn’t want to hog all that memory anyway, not after that half-meg array the cv90_render() function got.

glAttachObjectARB(ProgramObject,VertexShaderObject);
glAttachObjectARB(ProgramObject,FragmentShaderObject);

glDeleteObjectARB(VertexShaderObject);
glDeleteObjectARB(FragmentShaderObject);

Finally we link the program object and check whether something went wrong; if not, we have a shader ready for use.

glLinkProgramARB(ProgramObject);
glGetObjectParameterivARB(ProgramObject, GL_OBJECT_LINK_STATUS_ARB, &linked);
if (!linked)
{
MessageBox (HWND_DESKTOP, "can't link shaders", "Error",
MB_OK | MB_ICONEXCLAMATION);
useshader=0;
return;
}

return;
}

So after all that the end is rather anticlimactic. Well, I said the functions were gonna take care of it for us, so in the Initialize function just add these lines to load and compile the shaders.

loadVShade("glsl.vert");
loadFShade("glsl.frag");
compileShaders();

and to render we use this

if (useshader) glUseProgramObjectARB(ProgramObject);
cv90_render();
if (useshader) glUseProgramObjectARB(0);

There is nothing more to it. Oops, I lied again: we need the glsl.vert and glsl.frag files.


// glsl.vert
varying vec4 pos;

void main( void )
{
pos=gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}

// glsl.frag
varying vec4 pos;

void main( void )
{
gl_FragColor = vec4(sin(pos.x*10.0),cos(pos.y*10.0),sin(pos.z*10.0),
0)+vec4(gl_TexCoord[0].xy,0,0);
//gl_FragColor = vec4(sin(gl_TexCoord[0].x*2.0),cos(gl_TexCoord[0].y*2.0),0,0);
}

I have included two patterns, the default striped pattern and one based on the UV coordinates; just comment and uncomment the two gl_FragColor lines in glsl.frag to switch between them.
Now, these shaders are not that advanced, that’s for another tut to teach. For now, and as always, check the source code for more info and comments, and just plainly monkey around with it and see if you can make something cool.
Download this tutorial for MSVCPP 6.0

FBO feedback buffer

This is a continuation of the first FBO tutorial. This time we will use two of them, one for rendering and the other as a feedback buffer, which, if you have seen a disco music video from the seventies, you will know what I mean.
Basically a feedback buffer allows you to overdraw this frame’s rendering on top of all the previous frames’ renderings using alpha blending.
Now, normally one would think, why not do it directly into the back buffer? Well, that’s because you can’t be sure the previous rendering is still there after swapping the buffers, and if you’re triple buffering you’re in trouble.
A feedback FBO is a comfortable way of getting around that, since data is not shuffled around in it when you swap the buffers.
This is a neat effect, but because of all the low precision errors we get we will actually start using a little bit of HDR rendering. It’s not a lot, but it will remove those artifacts; it’s not full HDR rendering though, that is a whole other tutorial.

So what do you need to do then?
1. In the init function, double up on the number of frame buffers with a simple copy-paste and just add a “2” after each FBO variable.
And because we don’t want any precision down-sampling errors, we set the internal format of the FBO#2 color buffer texture to GL_RGBA16F_ARB instead of GL_RGBA; this will make the blur look as smooth as ever.

2. Just render to FBO#1 as you did in the first tutorial.

3. Move the contents of FBO#1 to FBO#2 with alpha blending, like this

glViewport (0, 0, 512, 512);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb2);
checkfbo();
glClear (GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
ViewOrtho(512,512);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, color_tex);

glBegin(GL_QUADS);
glColor4f(1.0f, 1.0f, 1.0f, 0.1f);
glTexCoord2f(0,1);
glVertex3f(0,0,0);
glTexCoord2f(0,0);
glVertex3f(0,512,0);
glTexCoord2f(1,0);
glVertex3f(512,512,0);
glTexCoord2f(1,1);
glVertex3f(512,0,0);
glEnd();

glDisable(GL_BLEND);
ViewPerspective();

Now, about ViewOrtho() and ViewPerspective(): you should already be familiar with them if you did the NeHe tutorials. They enable and disable orthographic rendering, very handy indeed.
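
They are included with the tutorial code, but for reference, a typical NeHe-style implementation looks roughly like this:

void ViewOrtho(int width, int height)      // switch to a pixel-aligned orthographic view
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, width, height, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void ViewPerspective(void)                 // restore the saved perspective matrices
{
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}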

4. Finally, render FBO#2 to the frame buffer using mostly the same code, minus the blending part. Note that I also draw a cube this time, and that the full screen quad has no blue component; this is because I want the cube to be visible.

glViewport (0, 0, (GLsizei)(g_window->init.width), (GLsizei)(g_window->init.height));
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
checkfbo();
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
ViewOrtho(512,512);

glBindTexture(GL_TEXTURE_2D, color_tex2);

glBegin(GL_QUADS);
glColor4f(1.0f, 1.0f, 0.0f, 0.1f);
glTexCoord2f(0,1);
glVertex3f(0,0,0);
glTexCoord2f(0,0);
glVertex3f(0,512,0);
glTexCoord2f(1,0);
glVertex3f(512,512,0);
glTexCoord2f(1,1);
glVertex3f(512,0,0);
glEnd();

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);

glLoadIdentity ();
glTranslatef (6.0f, -4.0f, -16.0f);
glRotatef(angle,0.0f,1.0f,0.0f);
glRotatef(angle,1.0f,0.0f,0.0f);
glRotatef(angle,0.0f,0.0f,1.0f);
glColor3f(1,1,1);
drawBox();

ViewPerspective();
glDisable(GL_TEXTURE_2D);

There are many other uses for FBOs: using them for post processing or advanced blending is not unheard of, multi-pass rendering likes them, and for HDR they are almost a must.
The double FBO method I use here is a good way of doing things: first render to buffer 1 and then post-process into the second one. It’s fairly fast and leaves few artifacts, if your GPU can take it, that is. There are a few new extensions that deal directly with transferring and blending data between two FBOs, but I won’t include them here, not until we get GL 2.1 where these and the FBO extension are merged.
Sit tight, because the next one is about GLSL shaders, and as always check the source for more info and comments on the code.
Download this tutorial for MSVCPP 6.0

Simple FBO rendering

Frame buffer objects are something new to OpenGL; they didn’t make it in time for 2.0, but they will probably be in 2.1, so it’s likely that they will change a little. This tut should still remain valid for the most part, though.
Now, if you don’t know what frame buffer objects are, I will tell you. The frame buffer is where you do all of your rendering; it is the memory area where the actual pixel data is stored. Previously you had only one (four actually, but the other ones are rarely used), and if you wanted more you had to use p-buffers, but p-buffers are definitely problematic. I’m not gonna go into details, but my advice is to stay as far away from them as possible.

This is where Frame Buffer Objects come into the picture. An FBO is a custom memory area that you can render into; it provides a quick and easy render-to-texture solution where you can render directly to a depth texture or use odd formats and sizes.
FBOs also make High Dynamic Range (HDR) rendering practical, as you only have to create a texture that uses such a format, which is as easy as typing GL_RGBA16F_ARB.
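
As a hedged one-liner example (the 512x512 size and the GL_FLOAT data type are just placeholders), the only change from a regular color buffer texture is the internal format:

// half-float color buffer texture for HDR rendering
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 512, 512, 0, GL_RGBA, GL_FLOAT, NULL);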

Now on to the code bits.
The stuff I am gonna ignore in this tutorial is the drawBox() func, the IsExtensionSupported() func, the checkfbo() func and all the extension init code; they are pretty much self explanatory and dull as dry bread. If you want to know what’s in them, you will have to download the code and take a look.

So the first thing we have to do is initialize the frame buffer and the associated texture and render buffer, then bind the frame buffer so that we can work with it.
fb, color_tex and depth_rb are just unsigned ints, nothing complicated.

// generate names for the frame buffer, colorbuffer and depthbuffer
glGenFramebuffersEXT(1, &fb);
glGenTextures(1, &color_tex);
glGenRenderbuffersEXT(1, &depth_rb);

//switch to our fbo so we can bind stuff to it
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

Then bind and create the texture color_tex, the same way as with regular texturing but with a NULL or 0 instead of the data

//create the colorbuffer texture and attach it to the frame buffer
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
GL_RGBA, GL_INT, NULL);

Now attach the texture to the frame buffer as the first color attachment; if you need to, in conjunction with MRT, it’s possible to have more than one color buffer (see the sketch below).

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);
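
A hedged sketch of what that would look like; second_color_tex is a hypothetical extra texture created the same way as color_tex, and the draw-buffer call comes from GL_ARB_draw_buffers:

// attach a second color texture to the next color attachment point
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
GL_TEXTURE_2D, second_color_tex, 0);

// tell GL to render into both attachments at once
GLenum buffers[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffersARB(2, buffers);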

Do the same thing with the render buffer and bind it to the depth attachment.
A render buffer is basically the same as the texture above, except that it can’t be used anywhere else, so it’s primarily used as a throwaway depth buffer.
// create a render buffer as our depth buffer and attach it
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);

// Go back to regular frame buffer rendering
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

And that’s it. Use the checkfbo() function to see if you were successful in creating the FBO; if it passes, you can now start using it.

So we have now managed to create it, but in order to render to this frame buffer you need to do two things: first make sure the color buffer texture (or any other texture bound to the FBO) is not bound at the moment, then switch to FBO rendering.
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

Simple. And to use the texture you do the reverse.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);

And that’s all there is to it. Naturally the render loop is a bit more complicated, so I am gonna include the tutorial’s full draw func just to show you how it’s done. Note that you have to call glClear two times; this is because the clear only works on the currently bound frame buffer and not all of them at once. You also need to set the viewport to the size of the current framebuffer or things will look odd.
// FBO render pass
glViewport (0, 0, 512, 512); // set the current viewport to the FBO size
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor (1.0f, 0.0f, 0.0f, 0.5f);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity ();
glTranslatef (0.0f, 0.0f, -6.0f);
glRotatef(angle,0.0f,1.0f,0.0f);
glRotatef(angle,1.0f,0.0f,0.0f);
glRotatef(angle,0.0f,0.0f,1.0f);
glColor3f(1,1,0);
drawBox();

// Framebuffer render pass

glEnable(GL_TEXTURE_2D);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);

glClearColor (0.0f, 0.0f, 0.0f, 0.5f);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glViewport (0, 0, (GLsizei)(g_window->init.width), (GLsizei)(g_window->init.height));
glLoadIdentity ();
glTranslatef (0.0f, 0.0f, -6.0f);
glRotatef(angle,0.0f,1.0f,0.0f);
glRotatef(angle,1.0f,0.0f,0.0f);
glRotatef(angle,0.0f,0.0f,1.0f);
glColor3f(1,1,1);
drawBox();
glDisable(GL_TEXTURE_2D);
glFlush ();
That’s all for this time; make sure you download the tutorial and look at the code to understand the context a bit better.
Download this tutorial for MSVCPP 6.0

Base code

Now I want to get one thing straight before I start explaining things: this is not a full sized tutorial, it’s just a small entry of misc comments about the changes to the NeHe base code, and it does not contain all the changes, like variables and such.
Also, this tut is mainly for win32 systems; my guess is that if you have another system you might just use this tut as inspiration.
There are also no comments in the code below, so download the base code from the bottom of the page and look in that.

Ok, let’s begin.
The first change I made was that I added not one but four timing systems; the timing system is what makes the animations smooth and dictates the speed of things.
I have included the following timing modes:

Full delta time: runs as fast as it can and reports the delta time to the update function; it gives max performance and works well for most things except physics.

Full delta time paused: as above, but includes a sleep call to try to cap the framerate, since it’s useless to run at a higher framerate than the monitor refresh rate; this also saves a lot of CPU time.

Set delta time: runs the update function at a set framerate; this saves a little on performance and works well with physics.

Set delta time paused: again as above, but also caps the actual rendered framerate with the use of the sleep call.

Here is the code itself


// timing functions
tickCount = GetTickCount ();
window.deltaTime = ((float)(tickCount - window.lastTickCount))/1000;
window.lastTickCount = tickCount;

switch(timingFormula)
{
case TIMER_FULLDT:
Update (window.deltaTime);
Draw ();
SwapBuffers (window.hDC);
break;

case TIMER_SETDT:
dtTemp+=window.deltaTime;

while (dtTemp>window.frameRate)
{
Update (window.frameRate);
dtTemp-=window.frameRate;
}

Draw ();
SwapBuffers (window.hDC);
break;

case TIMER_SETDT_PAUSED:
dtTemp+=window.deltaTime;

while (dtTemp>window.frameRate)
{
Update (window.frameRate);
dtTemp-=window.frameRate;
}
Draw ();
SwapBuffers (window.hDC);
timeSpent=(float)(GetTickCount()-window.lastTickCount)/1000;

if (timeSpent < window.frameRate)
{ Sleep((unsigned long)((window.frameRate-timeSpent)*900)); }
break;

case TIMER_FULLDT_PAUSED:
Update (window.deltaTime);
Draw ();
SwapBuffers (window.hDC);
timeSpent=(float)(GetTickCount()-window.lastTickCount)/1000;
if (timeSpent < window.frameRate)
{ Sleep((unsigned long)((window.frameRate-timeSpent)*900)); }
break;

default:
Update (window.deltaTime);
Draw ();
SwapBuffers (window.hDC);
break;
}

A bit much perhaps, but in essence it’s all about supplying the right values to the update function.
This is incidentally the only thing I changed in the interface to the actual application (the init, draw and update functions): it now sends the delta time in seconds instead of milliseconds.
It’s better that way; it’s clearer in how to use it and it is an SI (Système International) standard.

Using set delta time you have the possibility to do some cool time effects. Normally the update function updates at the same rate as is given to it, but if you change the value it gets to, let’s say, a tenth of the update rate, you get a kind of bullet time effect; this is something you can play around with later on.
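
As a hedged example of that (timeScale is a made-up variable, not part of the base code), with one of the set-delta-time modes you would simply scale what the update function receives:

float timeScale = 0.1f;                       // 1.0 = normal speed, 0.1 = "bullet time"
Update (window.frameRate * timeScale);        // the world now advances at a tenth of real time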

Now for the second thing I changed.
The original base code could switch from windowed to fullscreen at the push of a button, which is good, but unfortunately it was done in such a way that the render context had to be released; this means that all the nice little textures you have loaded disappear.
The way I do it means that the render context does not get released, but this was only possible by sacrificing one thing: the border around the window. That’s ok, I don’t need it, and it actually made the window smaller than intended.

To change this, first make sure the two top lines in the CreateWindowGL function look like this.

DWORD windowStyle = WS_POPUPWINDOW;
DWORD windowExtendedStyle = WS_EX_APPWINDOW;

Next, in WindowProc, replace everything between case WM_TOGGLEFULLSCREEN: and the following break; with this.

g_createFullScreen = (g_createFullScreen == TRUE) ? FALSE : TRUE;

if (g_createFullScreen == TRUE)
{
    ShowCursor (FALSE);
    SetWindowLong(hWnd, GWL_EXSTYLE, WS_EX_TOPMOST);
    ChangeScreenResolution (window->init.width, window->init.height,
                            window->init.bitsPerPixel);
    SetWindowPos(hWnd, HWND_TOPMOST, 0, 0, window->init.width,
                 window->init.height, SWP_NOZORDER);
}
else
{
    ShowCursor (TRUE);
    SetWindowLong(hWnd, GWL_EXSTYLE, WS_EX_APPWINDOW);
    ChangeDisplaySettings (NULL, 0);
    SetWindowPos(hWnd, HWND_TOPMOST, window->x, window->y,
                 window->init.width, window->init.height, SWP_NOZORDER);
}

This code will toggle between fullscreen and windowed without releasing the render context.
It will also make sure the window is topmost, hides the mouse cursor, and is always positioned at the upper left corner when in fullscreen mode.
As a bonus, the code will also restore the window as it was when switching back from fullscreen.

Download the MSVCPP 6.0 base code here.
