So I understand that for best results one uses a height (or bump) map and a normal map together.
And I also understand that one can calculate a normal map from a height map using a Sobel operator.
But out of curiosity I wonder how the first games that supported bump mapping did it.
I found this fragment shader code that seems to do it, but I am not sure: OpenGL Bump Map -- Texture artifacts ?
If the code above does exactly what I am looking for, can someone explain the line
vec3 normalDirection = normalize(tangenteSpace*(texture2D( bump_tex, f_texcoord ).rgb*2.0 - 1.0));
How can you get a normal from a bump value?
I understand what tangent space is mathematically, but how does it work here?
Is tangenteSpace a vec4 or a matrix?
So basically my question is: how was bump mapping done without a normal map in the old days?
Bump mapping simulates surface displacement, i.e. the shader pretends that the pixel is further back along the surface normal than it actually is. It then compares that depth to the neighbouring pixels on the map, and so can establish a gradient: if we know the difference in height, and the distance in texels to the left/right/up/down neighbours, then we can compute the slope in each direction. We can then use those slopes to calculate a new normal vector for that pixel. This involves several texture samples per pixel, combined in the Sobel matrix you mentioned, and is indeed how it was done "back in the day".
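To make the "slopes from neighbouring texels" idea concrete, here is a minimal Python sketch of that per-pixel computation. This is illustrative, not the actual shader code: the function name is made up, and it uses a simple central-difference kernel, whereas a full Sobel operator just weights in more neighbours.

```python
import math

def normal_from_height(height, x, y, strength=1.0):
    """Approximate the surface normal at texel (x, y) of a height map
    using central differences, as old-style per-pixel bump mapping did."""
    # Slope along x: height difference between the right and left neighbours.
    dx = (height[y][x + 1] - height[y][x - 1]) * strength
    # Slope along y: height difference between the lower and upper neighbours.
    dy = (height[y + 1][x] - height[y - 1][x]) * strength
    # The cross product of the two tangent directions (1, 0, dx) and
    # (0, 1, dy) gives an unnormalized normal of (-dx, -dy, 1).
    nx, ny, nz = -dx, -dy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

On a flat height map this returns the straight-up normal (0, 0, 1); on a ramp rising toward +x it tilts the normal back toward -x, which is exactly the effect the lighting needs.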
For this reason it is expensive and suboptimal, and with blurred height maps it can lead to artifacts like the ones you saw.
Normal mapping, on the other hand, uses a normal map, pre-baked from a bump or displacement map using exactly the same method as above. The difference is that by doing it in advance, you save at least 4-8 texture samples per pixel, in addition to not having to compute the slopes per pixel. Avoiding that work far outweighs the cost of the per-pixel matrix transformation from tangent to view space, making lighting much more efficient. This is how it is done on today's hardware, though tomorrow's graphics engines will likely use ray tracing instead.
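As for the line you quoted: a normal map stores each component of a unit normal remapped from [-1, 1] into the [0, 1] range a texture can hold, so .rgb * 2.0 - 1.0 undoes that packing, and tangenteSpace is a 3x3 (TBN) matrix whose columns are the tangent, bitangent and normal vectors, rotating the decoded tangent-space normal into the space used for lighting. A rough Python sketch of the same arithmetic (names are illustrative, not from the real shader):

```python
import math

def decode_and_transform(texel_rgb, tbn):
    """Mimics: normalize(tangenteSpace * (texture2D(...).rgb * 2.0 - 1.0))."""
    # Undo the [0, 1] -> [-1, 1] packing used to store normals in a texture.
    n = [c * 2.0 - 1.0 for c in texel_rgb]
    # Multiply by the 3x3 TBN matrix (`tbn` is a list of matrix rows).
    out = [sum(tbn[r][c] * n[c] for c in range(3)) for r in range(3)]
    # Renormalize: texture filtering and quantization denormalize the vector.
    length = math.sqrt(sum(v * v for v in out))
    return [v / length for v in out]
```

Note that the common flat-normal texel color (0.5, 0.5, 1.0) decodes to (0, 0, 1), the "straight out of the surface" direction in tangent space.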