Back in the day when I started working with OpenGL, the NeHe tutorials were the main learning material. They made stuff look simple and following them was a fairly quick way to get results. For a good part that was due to the simplicity of OpenGL's immediate mode and fixed pipeline. There was minimal setup code and a very direct way to specify what to draw. The setup consisted of tying an OpenGL context to a window (or whatever the OS of choice uses for displaying graphics, a chore later covered by a myriad of 3rd party libraries), turning desired options on, and setting the projection matrix (kind of like a camera: it defines which part of the game world is visible and how distance is perceived). When it came to drawing stuff, you'd simply specify which kind of polygons you were going to draw and then feed the GPU the polygon vertex positions. Optionally you'd specify which texture to use (or no texture) and which color to apply.
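To illustrate, here's a minimal sketch of that kind of immediate-mode drawing; it assumes the window and context setup is already done, and the triangle data is just made up for the example:

```cpp
// Immediate mode, GL 1.x style: the CPU feeds every vertex, every frame.
#include <GL/gl.h>

void draw_triangle()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);            // which kind of polygons follow
    glColor3f(1.0f, 0.0f, 0.0f);      // which color to apply
    glVertex3f(-1.0f, -1.0f, -5.0f);  // feed vertex positions one by one
    glVertex3f( 1.0f, -1.0f, -5.0f);
    glVertex3f( 0.0f,  1.0f, -5.0f);
    glEnd();                          // done, GL draws the triangle
}
```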
That's all good when you are dealing with simple scenes and don't need that much performance. Feeding vertex data every frame (immediate mode) is inefficient because the GPU can't do much but wait for the CPU to communicate all the data. The alternative approach is retained mode, where a list of vertex data, much like a texture image, is uploaded once and later just referred to. In this mode draw calls boil down to telling the GPU which vertex list ID to crunch, and the GPU can get to work immediately. Additionally, retained mode goes hand in hand with shaders, which are the opposite of the fixed pipeline. The problem with the fixed pipeline is that it's a "one size fits all" kind of solution where unneeded features can only be disabled. Shaders, on the other hand, can be tailored much closer to the application's needs and can provide features beyond those supported by the fixed pipeline. For those and probably other valid reasons, immediate mode and the fixed pipeline were declared deprecated in OpenGL 3.0 and removed from the core profile in the versions that followed.
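In retained mode the per-frame part really is that small. A sketch, assuming a shader program and a vertex array object (the names shader_program and triangle_vao are made up for illustration) were already uploaded and configured during setup:

```cpp
// Retained mode draw call: all the data already lives on the GPU,
// so drawing is just "use this program, crunch this vertex list".
glUseProgram(shader_program);
glBindVertexArray(triangle_vao);
glDrawArrays(GL_TRIANGLES, 0, 3);  // 3 vertices, starting at index 0
glBindVertexArray(0);
```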
But in order to get any graphical result from shaders and retained mode there is a lot of stuff to set up (a rough code sketch of these steps follows the list):
- Load shader source code and compile it. One would think OpenGL doesn't deal with raw source code, but that's the main way to get a shader onto the GPU. Of course, the shader source may not be valid and compilation can fail, so the application has to check whether this step was successful.
- Attach and link shaders into a single program. In the previous step you just loaded and compiled the various shader types (geometry, vertex, fragment), but to make them useful you have to tie them into a single unit simply called a "program". This can fail too (for example, if the output of one stage doesn't match the input of the next), so the application has to check this as well.
- Collect location IDs of shader attributes and uniform variables.
- Make vertex data buffers.
- Make "array objects". They hold information about how to feed vertex buffers to shaders, which bytes go to which attribute. This step won't fail but provides a lot of room for error.
- Turn on options just like in the fixed pipeline (depth test, alpha blending, face culling).
- There is a projection matrix calculation step too, like in the fixed pipeline API, but matrix management is very different. You can't use OpenGL's functions for building and switching matrices; you have to do it on your own (or use a 3rd party library).
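Condensed into code, the whole list looks roughly like this. It's a sketch under GL 3.x assumptions, not the engine's actual code; the attribute and uniform names ("position", "u_projection") and the GLEW loader header are just illustrative choices:

```cpp
// Rough sketch of the retained mode / shader setup steps listed above.
#include <GL/glew.h>   // or any other loader that exposes GL 3.x functions
#include <cstdio>

// 1. Load shader source and compile it, checking for failure.
GLuint compile_shader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

// 2. Attach and link shaders into a single program, checking again.
GLuint link_program(GLuint vertex_shader, GLuint fragment_shader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertex_shader);
    glAttachShader(program, fragment_shader);
    glLinkProgram(program);

    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), nullptr, log);
        std::fprintf(stderr, "program link failed: %s\n", log);
    }
    return program;
}

void setup(GLuint program)
{
    // 3. Collect locations of attributes and uniforms declared in the shaders.
    GLuint position_loc   = static_cast<GLuint>(glGetAttribLocation(program, "position"));
    GLint  projection_loc = glGetUniformLocation(program, "u_projection");

    // 4. Make a vertex data buffer and upload the vertices once.
    const float vertices[] = { -1.f, -1.f, 0.f,   1.f, -1.f, 0.f,   0.f, 1.f, 0.f };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // 5. Make a vertex array object: which bytes of the buffer feed which attribute.
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glEnableVertexAttribArray(position_loc);
    glVertexAttribPointer(position_loc, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
    glBindVertexArray(0);

    // 6. Turn on fixed-pipeline-like options.
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // 7. The projection matrix is built on the CPU (your own code or a
    //    3rd party library) and handed to the shader as a uniform.
    float projection[16] = { /* filled in by your own matrix code */ };
    glUseProgram(program);
    glUniformMatrix4fv(projection_loc, 1, GL_FALSE, projection);
}
```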
It's exciting and all that, but before I bring in all those goodies I'll take a pause from engine making in order to do something cool in the game making department. After that I'll pack a release and then refactor the graphics engine a bit to make scene building easier. And then the visible improvements will ensue.