Every few months I need to relearn how to properly set up a GLSL shader with a vertex array object. This time I was running into a strange bug where my normals were being used as vertex positions. I had a simple pass-through vertex shader that looked something like:
#version 330 core
in vec3 position;
in vec3 normal;
out vec3 frag_normal;
void main()
{
  gl_Position = vec4(position, 1.0);
  frag_normal = normal;
}
And on the CPU side I was setting up my vertex attributes with:
glBindBuffer(GL_ARRAY_BUFFER,position_buffer_object);
glVertexAttribPointer(0,3, ... );
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,normal_buffer_object);
glVertexAttribPointer(1,3, ... );
glEnableVertexAttribArray(1);
My folly was assuming that, because I'd declared position and normal in that order in the vertex shader, they'd be bound to attribute ids 0 and 1 respectively. Not so! After some head-slamming I thought to query these ids using:
glGetAttribLocation(program_id,"position");
glGetAttribLocation(program_id,"normal");
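One thing worth knowing when doing this: glGetAttribLocation only gives meaningful answers after the program has been linked, and it returns -1 if the named attribute isn't an active input (for example, if the compiler optimized it away). A minimal sanity check, assuming the usual GL headers and <cstdio> are included, might look like:
GLint position_loc = glGetAttribLocation(program_id,"position");
GLint normal_loc = glGetAttribLocation(program_id,"normal");
if(position_loc == -1 || normal_loc == -1)
{
  // -1 means the attribute isn't an active input of the linked program
  fprintf(stderr,"Error: could not find attribute location\n");
}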
In my case the query revealed that, for whatever reason, position was bound to 1 and normal was bound to 0. Of course I then tried hacks to swap them back into my assumed order, or I could have simply hard-coded the reversed ids. But there appear to be two correct options for fixing this problem:
The obvious one is to use glGetAttribLocation when creating the vertex array object:
glBindBuffer(GL_ARRAY_BUFFER,position_buffer_object);
glVertexAttribPointer(glGetAttribLocation(program_id,"position"),3, ... );
glEnableVertexAttribArray(glGetAttribLocation(program_id,"position"));
...
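Spelled out for both attributes, that setup might look something like the following sketch. The GL_FLOAT, GL_FALSE, 0, 0 arguments assume tightly packed float data, and the location variables are just for illustration:
GLint position_loc = glGetAttribLocation(program_id,"position");
glBindBuffer(GL_ARRAY_BUFFER,position_buffer_object);
glVertexAttribPointer(position_loc,3,GL_FLOAT,GL_FALSE,0,0);
glEnableVertexAttribArray(position_loc);
GLint normal_loc = glGetAttribLocation(program_id,"normal");
glBindBuffer(GL_ARRAY_BUFFER,normal_buffer_object);
glVertexAttribPointer(normal_loc,3,GL_FLOAT,GL_FALSE,0,0);
glEnableVertexAttribArray(normal_loc);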
I was a little bothered that this solution requires knowing which shader is going to be in use at the time the vertex array object is created.
The opposite solution is to commit to a fixed layout when writing the shader, so the CPU side can safely assume it:
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
Now I can be sure that using ids 0 and 1 will correctly bind to position and normal respectively.
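For completeness, the matching CPU-side setup can then hard-code those locations. The following sketch again assumes tightly packed float data in the same buffer objects as above:
// position is guaranteed by the layout qualifier to be at location 0
glBindBuffer(GL_ARRAY_BUFFER,position_buffer_object);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0);
glEnableVertexAttribArray(0);
// normal is guaranteed to be at location 1
glBindBuffer(GL_ARRAY_BUFFER,normal_buffer_object);
glVertexAttribPointer(1,3,GL_FLOAT,GL_FALSE,0,0);
glEnableVertexAttribArray(1);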