- GPU is specialized hardware for parallelizing computations across many pixels (and vertices) at once
- switching shaders is the most expensive operation in OpenGL after compiling shaders
- vertex shader
  - inputs: vertices plus a matrix encoding where the camera is, which lets us transform each vertex position based on the camera
  - output: transformed vertices with associated data (color, etc.)
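The vertex stage boils down to a matrix-vector multiply per vertex. A minimal sketch, where the orthographic projection matrix and the coordinate range are made up purely for illustration:

```python
# Sketch of the vertex stage: transform a vertex by a camera (projection)
# matrix. The matrix here is a hypothetical orthographic projection.

def mat_vec_mul(m, v):
    """Multiply a 4x4 matrix (row-major, list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Orthographic projection mapping x, y in [0, 100] to clip space [-1, 1].
ortho = [
    [2 / 100, 0,       0, -1],
    [0,       2 / 100, 0, -1],
    [0,       0,       1,  0],
    [0,       0,       0,  1],
]

vertex = [50, 25, 0, 1]            # position in "world" units, w = 1
clip = mat_vec_mul(ortho, vertex)
print(clip)                        # [0.0, -0.5, 0.0, 1.0]
```

A real vertex shader runs this (in GLSL) once per vertex, in parallel, with the camera matrix passed in as a uniform.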
- rasterization
  - inputs: the transformed vertices (output of the vertex shader)
  - interpolates values for the pixels between vertices
  - outputs: fragments and associated (varying) data
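The interpolation step above is typically done with barycentric weights: a fragment inside a triangle gets a blend of the three vertices' values. A small sketch (the triangle and corner colors are invented for illustration):

```python
# Sketch of rasterization interpolation: a fragment inside a triangle gets
# values blended from the three vertices using barycentric weights.

def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1 - w0 - w1

def interpolate(p, tri, values):
    """Blend per-vertex values at point p inside the triangle."""
    w = barycentric(p, *tri)
    return tuple(sum(wi * v[i] for wi, v in zip(w, values))
                 for i in range(len(values[0])))

tri = [(0, 0), (4, 0), (0, 4)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # red, green, blue corners
print(interpolate((1, 1), tri, colors))     # (0.5, 0.25, 0.25)
```

These blended values are exactly the "varying" attributes the fragment shader later reads.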
- fragment shader
  - runs all fragments through the shader in parallel (using the varying attributes produced by interpolation in the rasterization step)
  - output: rendered scene
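The fragment stage can be pictured as the same small function applied to every fragment; a GPU runs these in parallel, but a simple map captures the data flow. The shader body here (darkening the color) is a made-up example:

```python
# Sketch of the fragment stage: one shader function runs per fragment,
# reading the varyings interpolated by the rasterizer. A real GPU runs
# these in parallel; a list comprehension stands in for that here.

def fragment_shader(varying):
    """Hypothetical shader: darken the interpolated color by 50%."""
    r, g, b = varying["color"]
    return (r * 0.5, g * 0.5, b * 0.5)

fragments = [
    {"color": (1.0, 0.0, 0.0)},
    {"color": (0.0, 1.0, 0.0)},
]
rendered = [fragment_shader(f) for f in fragments]
print(rendered)  # [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0)]
```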
- everything is triangles (with some exceptions, e.g. lines with fill-outline)
- a GL vertex buffer is a list of all the triangles you want to render: groups of 3 elements, each a vertex with coordinates and associated data - drawing straight from it is called un-indexed rendering
- un-indexed rendering processes a vertex multiple times when it belongs to multiple triangles - this is inefficient
- indexed rendering de-dups the vertex buffer and uses it in conjunction with an index/element buffer to draw
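The difference is easy to see with two triangles sharing an edge (a quad). A sketch in plain lists:

```python
# Two triangles sharing an edge (a quad). Un-indexed rendering duplicates
# the shared vertices; indexed rendering stores each vertex once plus a
# buffer of small indices.

quad = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Un-indexed: 6 vertex entries; the shared corners (0,0) and (1,1) appear twice.
unindexed = [quad[0], quad[1], quad[2],   # triangle 1
             quad[0], quad[2], quad[3]]   # triangle 2

# Indexed: 4 unique vertices + 6 indices into the vertex buffer.
vertex_buffer = quad
index_buffer = [0, 1, 2, 0, 2, 3]

print(len(unindexed))       # 6 vertices processed without indexing
print(len(vertex_buffer))   # 4 vertices processed with indexing
```

With indexing, each shared vertex runs through the vertex shader once instead of once per triangle, which is why Mapbox GL prefers it.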
- Mapbox GL uses indexed rendering most of the time
- vertex array objects (VAOs) capture the buffer bindings and vertex attribute layout so they can be rebound with a single call
- the depth buffer records how far away each element is from the camera, and whether it's visible given opaque elements that are closer
- opaque things are rendered front-to-back with respect to the camera; translucent elements are rendered back-to-front so their pixels can be blended
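The two rules above can be sketched for a single pixel; the colors, depths, and blend factor here are invented for illustration:

```python
# Single-pixel sketch of depth testing plus blending order. Opaque fragments
# drawn front-to-back: the depth test rejects anything behind what is already
# there. Translucent fragments are then blended back-to-front over the result.

depth = float("inf")      # depth buffer: distance of the closest opaque fragment
color = (0.0, 0.0, 0.0)   # framebuffer pixel, starts black

def draw_opaque(frag_color, frag_depth):
    global depth, color
    if frag_depth < depth:        # depth test passes: fragment is closer
        depth = frag_depth
        color = frag_color

def draw_translucent(frag_color, alpha):
    global color
    # classic "over" blend: src * alpha + dst * (1 - alpha)
    color = tuple(a * alpha + b * (1 - alpha) for a, b in zip(frag_color, color))

draw_opaque((1.0, 0.0, 0.0), 2.0)       # near red surface
draw_opaque((0.0, 1.0, 0.0), 5.0)       # farther green surface: rejected
draw_translucent((1.0, 1.0, 1.0), 0.5)  # 50% white glass blended on top
print(color)  # (1.0, 0.5, 0.5)
```

Front-to-back opaque order means occluded fragments fail the depth test early and cost nothing; translucent order matters because "over" blending is not commutative.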
- can't use depth testing for transparent pixels
- the stencil buffer tells the GPU not to render fragments that fall in the tile-buffer zone (tile clipping)
- most hardware has only 8 bits of stencil buffer, so only 256 distinct values are supported
- it would be preferable to clip geometries up front and not use the stencil buffer at all, to avoid switching shaders
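The stencil-based clipping above can be sketched for a single row of pixels; the tile layout and IDs are invented for illustration:

```python
# Single-row sketch of stencil-based tile clipping: each tile's clip mask
# writes a tile ID (an 8-bit value, hence the 256-value limit) into the
# stencil buffer, and a fragment is kept only where the stencil matches
# its own tile's ID.

WIDTH = 8
stencil = [0] * WIDTH

def write_stencil(tile_id, x0, x1):
    """Mark pixels x0..x1-1 as belonging to tile `tile_id`."""
    assert 0 <= tile_id < 256, "8-bit stencil supports only 256 distinct values"
    for x in range(x0, x1):
        stencil[x] = tile_id

def stencil_test(tile_id, x):
    """A fragment passes only inside its own tile's footprint."""
    return stencil[x] == tile_id

write_stencil(1, 0, 4)   # tile 1 owns the left half
write_stencil(2, 4, 8)   # tile 2 owns the right half

# A tile-1 fragment spilling into tile 2's area is clipped:
print(stencil_test(1, 2))  # True  (inside tile 1)
print(stencil_test(1, 6))  # False (clipped by the stencil)
```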