OpenGL basic knowledge summary


Graphics rendering pipeline

In OpenGL everything is in 3D space, while the screen and window are 2D arrays of pixels, so a large part of OpenGL's work consists of transforming 3D coordinates into 2D pixels that fit on your screen. The process of converting 3D coordinates into 2D pixels is managed by the OpenGL graphics pipeline (in practice, a chain of stages that raw vertex data passes through, being transformed along the way, before it finally appears on the screen). The graphics rendering pipeline can be divided into two main parts: the first transforms your 3D coordinates into 2D coordinates, and the second turns those 2D coordinates into actual colored pixels.
The graphics rendering pipeline takes a set of 3D coordinates and converts them into colored 2D pixels on your screen. It can be divided into several stages, each of which takes the output of the previous stage as its input. All of these stages are highly specialized (each has one specific function) and can easily be executed in parallel. Because of this parallel execution, most of today's graphics cards have thousands of small processing cores, each running its own small program on the GPU for a given stage of the pipeline, so your data is processed quickly. These small programs are called shaders.

  • Vertex shader: takes a single vertex as input. Its main purpose is to transform 3D coordinates into a different kind of 3D coordinates (explained later), and it also lets us do some basic processing on vertex attributes. A minimal vertex/fragment shader pair is sketched right after this list.
  • Primitive assembly: takes all the vertices output by the vertex shader (or a single vertex in the case of GL_POINTS) as input and assembles them into the specified primitive shape; in this section, the primitive is a triangle.
  • Geometry shader: takes the collection of vertices that form a primitive as input and can generate other shapes by emitting new vertices to form new (or different) primitives. In the example, it generates a second triangle.
  • Rasterization stage: maps the primitives to the corresponding pixels on the final screen and produces fragments for the fragment shader to use. Clipping is performed before the fragment shader runs; it discards all fragments outside your view, which improves efficiency.
  • Fragment shader: its main purpose is to calculate the final color of a pixel, and this is where all of OpenGL's advanced effects are produced. The fragment shader usually has access to data about the 3D scene (such as lighting, shadows and the color of the light), which it can use to calculate the final pixel color.
  • Alpha test and blending: after all the corresponding color values have been determined, the final object passes through one last stage, the alpha test and blending stage. This stage checks the depth (and stencil) value of each fragment (discussed later) and uses it to determine whether the fragment lies in front of or behind other objects, discarding it if necessary. It also checks the alpha value (which defines an object's transparency) and blends objects accordingly. So even though a pixel's output color was calculated in the fragment shader, the final pixel color can still be completely different when rendering multiple triangles.
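To make the shader stages above a bit more concrete, here is a minimal sketch of a vertex and fragment shader pair, written as C++ raw string literals as is common in OpenGL samples. These shaders are illustrative only and are not the 7.3.camera shaders used by the program at the end of this article:

const char* vertexShaderSource = R"(
#version 330 core
layout (location = 0) in vec3 aPos;   // per-vertex position attribute
void main()
{
    // The vertex shader's job: output a clip-space position for this vertex
    gl_Position = vec4(aPos, 1.0);
}
)";

const char* fragmentShaderSource = R"(
#version 330 core
out vec4 FragColor;
void main()
{
    // The fragment shader's job: output the final color of this fragment
    FragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
)";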

The relationship among VAO, VBO and EBO
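A rough sketch of how the three objects relate, using hypothetical data for a rectangle drawn as two indexed triangles (the VAO records the attribute configuration and the buffer bindings, the VBO holds the raw vertex data, and the EBO holds the indices used for indexed drawing):

float rectVertices[] = {
     0.5f,  0.5f, 0.0f,   // top right
     0.5f, -0.5f, 0.0f,   // bottom right
    -0.5f, -0.5f, 0.0f,   // bottom left
    -0.5f,  0.5f, 0.0f    // top left
};
unsigned int rectIndices[] = { 0, 1, 3,   1, 2, 3 };

unsigned int VAO, VBO, EBO;
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);

glBindVertexArray(VAO);                       // start recording state into the VAO
glBindBuffer(GL_ARRAY_BUFFER, VBO);           // the VBO stores the raw vertex data
glBufferData(GL_ARRAY_BUFFER, sizeof(rectVertices), rectVertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);   // the EBO stores indices into the VBO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(rectIndices), rectIndices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

// Later, drawing only needs the VAO; the VBO/EBO bindings are remembered by it.
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);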

Vertex attribute format

// Attribute at location 2: two floats per vertex, 8-float stride, starting 6 floats into each vertex
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(2);
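The 8-float stride and 6-float offset above imply an interleaved vertex layout. Assuming the common 3-position / 3-color / 2-texture-coordinate layout (an assumption, since the vertex data itself is not shown here), the full attribute setup would look like this:

// Assumed layout per vertex: x, y, z,  r, g, b,  s, t  (8 floats)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);                   // position
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float))); // color
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float))); // texture coordinates
glEnableVertexAttribArray(2);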

Coordinate system

OpenGL expects all visible vertices to be in normalized device coordinates (NDC) after each run of the vertex shader. That is, the x, y and z coordinates of every vertex should lie between -1.0 and 1.0; vertices outside this range will not be visible. We usually define a range of coordinates ourselves and then transform these coordinates into normalized device coordinates in the vertex shader. The normalized device coordinates are then passed to the rasterizer, which turns them into 2D coordinates or pixels on your screen.
Transforming coordinates into normalized device coordinates and then into screen coordinates is usually done step by step, much like a pipeline: an object's vertices are transformed through several coordinate systems before finally being converted to screen coordinates. The advantage of transforming an object's coordinates through several intermediate coordinate systems is that some operations are more convenient or easier in a particular coordinate system, as will soon become apparent. There are five coordinate systems that are most important to us.
The following figure shows the whole process and what each transformation process has done:

(Figure: coordinate_systems)

  1. Local coordinates are the coordinates of the object relative to its local origin; they are the coordinates the object starts in.
  2. The next step is to transform the local coordinates into world-space coordinates, which live in a larger space. These coordinates are relative to the global origin of the world, and the object is placed relative to that origin together with all other objects.
  3. Next, we transform the world coordinates into view-space coordinates, so that each coordinate is seen from the point of view of the camera or observer.
  4. Once the coordinates are in view space, we project them to clip coordinates. Clip coordinates are processed into the -1.0 to 1.0 range and determine which vertices will end up on the screen.
  5. Finally, we transform the clip coordinates into screen coordinates in a process called the viewport transform, which maps coordinates from the -1.0 to 1.0 range to the range defined by the glViewport function. The resulting coordinates are then sent to the rasterizer and turned into fragments.

Orthographic projection and perspective projection



The frustum above defines the visible coordinates and is specified by a width, a height, and a near and far plane. Any coordinate in front of the near plane or behind the far plane is clipped away. The orthographic frustum maps all coordinates inside the frustum directly to normalized device coordinates, because the w component of each vector is left unchanged; if the w component equals 1.0, perspective division does not change the coordinate.
Each component of a vertex coordinate is divided by its w component, so the further a vertex is from the viewer, the smaller its coordinates become. This is another reason the w component is so important: it is what makes perspective projection possible.
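As a small sketch of the two projection types and of the perspective division step, using GLM (the same library the sample code below uses; the parameter values here are only illustrative):

// Requires <glm/glm.hpp> and <glm/gtc/matrix_transform.hpp>
// Orthographic: left, right, bottom, top, near, far; w stays 1.0, so there is no foreshortening
glm::mat4 ortho = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.1f, 100.0f);

// Perspective: field of view, aspect ratio, near, far; w ends up holding the (positive) view-space depth
glm::mat4 proj = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

// Perspective division (performed automatically by OpenGL after the vertex shader)
glm::vec4 clip = proj * glm::vec4(1.0f, 1.0f, -5.0f, 1.0f);
glm::vec3 ndc  = glm::vec3(clip) / clip.w;   // now in the -1.0 .. 1.0 range if the point is visible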

Combine transforms

Each of the steps above creates a transformation matrix: the model matrix, the view matrix and the projection matrix. A vertex coordinate is transformed to clip coordinates as follows:
V_{clip} = M_{projection} * M_{view} * M_{model} * V_{local}
Note that the order of the matrix multiplication is reversed (remember that we read matrix multiplication from right to left). The resulting vertex should be assigned to gl_Position in the vertex shader; OpenGL will then automatically perform perspective division and clipping.
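In GLM this corresponds to the multiplication below (a sketch; the sample program at the end of the article sends the three matrices to the shader separately and multiplies them there, assigning the result to gl_Position):

// Example values; the sample program below defines cameraPos/cameraFront/cameraUp globally
glm::vec3 cameraPos(0.0f, 0.0f, 3.0f), cameraFront(0.0f, 0.0f, -1.0f), cameraUp(0.0f, 1.0f, 0.0f);
glm::vec3 localPos(0.5f, 0.0f, 0.0f);                 // an example local-space vertex

glm::mat4 model      = glm::mat4(1.0f);               // identity: the object sits at the world origin
glm::mat4 view       = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

// Read right to left: local -> world -> view -> clip
glm::vec4 clipPos = projection * view * model * glm::vec4(localPos, 1.0f);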

Right hand coordinate system

By convention, OpenGL uses a right-handed coordinate system. In short, the positive x-axis points to your right, the positive y-axis points up, and the positive z-axis points backwards, towards you. Imagine your screen at the center of the three axes: the positive z-axis comes out through your screen towards you. The coordinate system is drawn as follows:

To understand why it is called the right-hand coordinate system, follow the steps below:

  • Extend your right arm along the positive y axis, pointing up.
  • Point your thumb to the right.
  • The index finger points up.
  • The middle finger is bent down 90 degrees.

Camera

When we talk about camera/view space, we mean the coordinates of all vertices in the scene as seen from the camera's point of view, with the camera as the origin of the scene: the view matrix transforms all world coordinates into view coordinates relative to the camera's position and direction. To define a camera we need its position in world space, the direction it is looking in, a vector pointing to its right and a vector pointing upwards from it. Careful readers may have noticed that we are actually creating a coordinate system with three mutually perpendicular unit axes and the camera's position as the origin.
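A sketch of how those three axes can be derived with GLM; the direction vector points from the target towards the camera (it is the positive z-axis of camera space), and the target position here is just an example:

glm::vec3 cameraPos       = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraTarget    = glm::vec3(0.0f, 0.0f, 0.0f);                          // example: look at the origin
glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget);             // +z axis of camera space
glm::vec3 worldUp         = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 cameraRight     = glm::normalize(glm::cross(worldUp, cameraDirection)); // +x axis
glm::vec3 cameraUp        = glm::cross(cameraDirection, cameraRight);             // +y axis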

LookAt matrix

One of the advantages of using matrices is that if you define a coordinate space with three mutually perpendicular (or at least linearly independent) axes, you can use those three axes together with a translation vector to build a matrix, and multiplying any vector by this matrix transforms it into that coordinate space. This is exactly what the LookAt matrix does. Now that we have three mutually perpendicular axes and a position coordinate defining the camera space, we can create our own LookAt matrix:
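Written out (following the standard definition used by LearnOpenGL), the LookAt matrix is:

LookAt = \begin{bmatrix} R_x & R_y & R_z & 0 \\ U_x & U_y & U_z & 0 \\ D_x & D_y & D_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 & -P_x \\ 0 & 1 & 0 & -P_y \\ 0 & 0 & 1 & -P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}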

where R is the right vector, U is the up vector, D is the direction vector and P is the camera's position vector. Note that the position vector is negated, because we ultimately want to translate the world in the direction opposite to our own movement. Using this LookAt matrix as the view matrix efficiently transforms all world coordinates into the view space we just defined. The LookAt matrix does exactly what its name says: it creates a view matrix that looks at a given target.

Fortunately, GLM already provides this support. All we have to do is specify a camera position, a target position and a vector representing up in world space (the up vector we used to calculate the right vector). GLM then creates the LookAt matrix, which we can use as our view matrix.
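A minimal sketch of that call (the same call appears in the render loop of the sample program below):

glm::mat4 view = glm::lookAt(
        glm::vec3(0.0f, 0.0f, 3.0f),   // camera position in world space
        glm::vec3(0.0f, 0.0f, 0.0f),   // target position to look at
        glm::vec3(0.0f, 1.0f, 0.0f));  // up vector in world space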

Euler angle

Euler angles are three values that can represent any rotation in 3D space; they were introduced by Leonhard Euler in the 18th century. There are three Euler angles: pitch (looking up or down), yaw (looking left or right) and roll (tilting sideways).
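From yaw and pitch a camera direction vector can be computed; this sketch mirrors the computation done in the mouse_callback function of the sample code below, with example starting values:

float yaw = -90.0f, pitch = 0.0f;   // example values; the sample code stores these as globals
glm::vec3 direction;
direction.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
direction.y = sin(glm::radians(pitch));
direction.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
glm::vec3 cameraFront = glm::normalize(direction);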

Glossary list

  • OpenGL: a formal specification for a graphic API that defines function layout and output
  • GLAD: an extended loading library for loading and setting all OpenGL function pointers so that we can use all (Modern) opengl functions.
  • Viewport: the 2D window region that we render to
  • Graphics Pipeline: the whole process of a vertex before it is rendered as a pixel
  • Shader: a small program running on the graphics card, many stages of the graphics pipeline can use custom shaders to replace the original function
  • Normalized Device Coordinates (NDC): the coordinate system a vertex ends up in after clipping and perspective division have been applied to its clip-space coordinates; vertices whose x, y and z lie between -1.0 and 1.0 in NDC are kept and visible, all others are discarded
  • Vertex Buffer Object (VBO): a buffer object that calls the graphics memory and stores all vertex data for use by the graphics card
  • Vertex Array Object (VAO): stores buffer and vertex attribute States
  • Element Buffer Object (EBO): a buffer object that stores an index for indexed drawing
  • Uniform: a special type of GLSL variable that is global (every shader in a shader program can access it) and only needs to be set once.
  • Texture: a special type of image wrapped around an object, giving it fine visual detail.
  • Texture Wrapping: defines the mode that specifies how OpenGL samples a texture when its texture coordinates fall outside the range (0, 1)
  • Texture Filtering: defines the mode that specifies how OpenGL samples a texture when there are several texels to choose from, which usually happens when the texture is magnified
  • Mipmaps: stored, progressively smaller versions of a texture; the appropriately sized version is chosen based on the distance to the viewer.
  • stb_image.h: image loading library.
  • Texture units: allow more than one texture to be used on a single object by binding multiple textures, each to a different texture unit.
  • Vector: a mathematical entity that defines a direction and/or position in space.
  • Matrix: a rectangular array of mathematical expressions.
  • GLM: a math library for OpenGL.
  • Local space: the initial space of an object. All coordinates are relative to the origin of the object.
  • World space: all coordinates are relative to the global origin.
  • View space: all coordinates are viewed from the camera's perspective.
  • Clip space: all coordinates as viewed from the camera's perspective, but with projection applied. This is the space vertex coordinates should end up in as the output of the vertex shader; OpenGL does the rest (clipping and perspective division).
  • Screen space: all coordinates are viewed from the screen perspective. The coordinates range from 0 to the width / height of the screen.
  • LookAt matrix: a special type of view matrix that creates a coordinate system in which all coordinates are rotated and translated so that the user is looking at a given target from a given position.
  • Euler angles: defined as yaw, pitch and roll, which allow us to construct any 3D direction vector from these three values.

Sample code interpretation

This is a program that uses the mouse, keyboard and scroll wheel to move around in three-dimensional space and look at ten boxes, with an explanation attached as comments in the code.

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include "../../util/stb_image.h"

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

#include "../../util/shader_s.h"

#include <iostream>

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);
void processInput(GLFWwindow *window);

// Screen size
const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;

// camera
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f); // A vector pointing to the camera position in world space
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f); // Camera direction, pointing to the negative Z axis
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f); // Up vector of camera

bool firstMouse = true;
float yaw = -90.0f;	// yaw is initialized to -90.0 degrees since a yaw of 0.0 results in a direction vector pointing to the right so we initially rotate a bit to the left
float pitch = 0.0f; // Pitch angle
float lastX = 800.0f / 2.0;
float lastY = 600.0 / 2.0;
float fov = 45.0f; // Field of view

// Timing: deltaTime stores the time it took to render the previous frame. We multiply every movement speed by deltaTime; if the previous frame took longer to render, deltaTime is larger, so this frame's movement is proportionally larger to balance out the rendering time
float deltaTime = 0.0f;	// time between current frame and last frame
float lastFrame = 0.0f;

int main()
{
    // glfw: initialize configuration
    // ------------------------------
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); // Main version
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); // Minor version
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // Use core mode

#ifdef __APPLE__  //Apple system use
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
#endif

    // glfw create window
    // --------------------
    GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
    if (window == NULL)
    {
        std::cout << "Failed to create GLFW window" << std::endl;
        glfwTerminate();
        return -1;
    }
    // After creating the window, we can inform GLFW to set the context of our window as the main context of the current thread.
    glfwMakeContextCurrent(window);
    // Tell GLFW we want framebuffer_size_callback called whenever the window is resized; it is also called when the window is first displayed. On retina displays, width and height will be significantly higher than the original input values.
    glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
    glfwSetCursorPosCallback(window, mouse_callback); // Mouse position callback
    glfwSetScrollCallback(window, scroll_callback); // Mouse wheel callback

    // Tell GLFW to capture the mouse
    glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

    // glad: load all OpenGL function pointers, so that the same functions can be used across platforms
    // ---------------------------------------
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
    {
        std::cout << "Failed to initialize GLAD" << std::endl;
        return -1;
    }

    // Configure global OpenGl state
    // -----------------------------
    glEnable(GL_DEPTH_TEST); // Enable the depth test, so the Z buffer controls which 3D fragments are visible and which are occluded

    // Build the compiled shader program, where the custom shader class is used
    // ------------------------------------
    Shader ourShader("shader/7.3.camera.vs", "shader/7.3.camera.fs");

    // Set vertex array (and buffers) and configure vertex properties
    // ------------------------------------------------------------------
    float vertices[] = {
            // Vertex coordinates x,y,z, texture coordinates x1,y1
            -0.5f, -0.5f, -0.5f, 0.0f, 0.0f,
            0.5f, -0.5f, -0.5f, 1.0f, 0.0f,
            0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
            0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
            -0.5f, 0.5f, -0.5f, 0.0f, 1.0f,
            -0.5f, -0.5f, -0.5f, 0.0f, 0.0f,

            -0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
            0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
            0.5f, 0.5f, 0.5f, 1.0f, 1.0f,
            0.5f, 0.5f, 0.5f, 1.0f, 1.0f,
            -0.5f, 0.5f, 0.5f, 0.0f, 1.0f,
            -0.5f, -0.5f, 0.5f, 0.0f, 0.0f,

            -0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
            -0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
            -0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
            -0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
            -0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
            -0.5f, 0.5f, 0.5f, 1.0f, 0.0f,

            0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
            0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
            0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
            0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
            0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
            0.5f, 0.5f, 0.5f, 1.0f, 0.0f,

            -0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
            0.5f, -0.5f, -0.5f, 1.0f, 1.0f,
            0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
            0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
            -0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
            -0.5f, -0.5f, -0.5f, 0.0f, 1.0f,

            -0.5f, 0.5f, -0.5f, 0.0f, 1.0f,
            0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
            0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
            0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
            -0.5f, 0.5f, 0.5f, 0.0f, 0.0f,
            -0.5f, 0.5f, -0.5f, 0.0f, 1.0f
    };
    // Coordinates of our cube in world space
    glm::vec3 cubePositions[] = {
            glm::vec3( 0.0f, 0.0f, 0.0f),
            glm::vec3( 2.0f, 5.0f, -15.0f),
            glm::vec3(-1.5f, -2.2f, -2.5f),
            glm::vec3(-3.8f, -2.0f, -12.3f),
            glm::vec3( 2.4f, -0.4f, -3.5f),
            glm::vec3(-1.7f, 3.0f, -7.5f),
            glm::vec3( 1.3f, -2.0f, -2.5f),
            glm::vec3( 1.5f, 2.0f, -2.5f),
            glm::vec3( 1.5f, 0.2f, -1.5f),
            glm::vec3(-1.3f, 1.0f, -1.5f)
    };
    unsigned int VBO, VAO; // Define the vertex buffer object and vertex array object. An EBO (element/index buffer object) is not needed here; it is useful when, for example, drawing a rectangle as two triangles with an index-defined drawing order
    glGenVertexArrays(1, &VAO); // Use function to generate VAO, the first is the generated quantity, and the second is the object index
    glGenBuffers(1, &VBO); // Using functions to generate VBO

    // To use a VAO, all you have to do is bind it with glBindVertexArray. After binding, we bind and configure the corresponding VBO and attribute pointers, and then unbind the VAO for later use. When we want to draw an object, we simply bind the VAO with the desired configuration before drawing.
    glBindVertexArray(VAO);

    glBindBuffer(GL_ARRAY_BUFFER, VBO); // Use glBindBuffer to bind the newly created buffer to the GL_ARRAY_BUFFER target
    // From this moment on, any buffer call on the GL_ARRAY_BUFFER target configures the currently bound buffer (VBO). glBufferData then copies the previously defined vertex data into the buffer's memory.
    // The cube's position data will not change and stays the same for every render call, so the best usage type is GL_STATIC_DRAW. If the data in a buffer were to change frequently, GL_DYNAMIC_DRAW or GL_STREAM_DRAW would be used instead, so that the graphics card places the data in memory that can be written at high speed.
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Set the vertex attribute pointers
    // Position attribute: the first parameter is the location value from the vertex shader's layout, the second is the size of the vertex attribute (3), the third is the data type, the fourth is whether to normalize, the fifth is the stride (the spacing between consecutive vertex attribute sets), and the last is the offset of the position data within the buffer
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0); // Enable the vertex attribute, passing its location value; vertex attributes are disabled by default
    // Texture coordinate attribute (2 components)
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(1);


    // Load and create a texture
    // -------------------------
    unsigned int texture1, texture2; // Textures are referenced using ID
    // texture 1
    // ---------
    glGenTextures(1, &texture1); // The first parameter is the number of textures to generate; they are stored in the unsigned int array passed as the second parameter (here a single unsigned int)
    glBindTexture(GL_TEXTURE_2D, texture1); // Bind it to GL_TEXTURE_2D; from now on any texture call on this target configures texture1
    // Set the texture wrapping parameters: how to sample the texture outside its boundaries
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    // Set the texture filtering parameters: how to sample when the texture is magnified or minified
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Load the image, create the texture and generate mipmaps
    int width, height, nrChannels;
    stbi_set_flip_vertically_on_load(true); // tell stb_image.h flips around the y axis when loading textures
    unsigned char *data = stbi_load("resources/container.jpg", &width, &height, &nrChannels, 0);
    if (data)
    {
        // Use the previously loaded image data to generate a texture. The first parameter specifies the texture target; setting it to GL_TEXTURE_2D means the texture is generated on whatever is currently bound to that target.
        // The second parameter specifies the mipmap level, in case you want to set each mipmap level manually. We use 0 here, the base level.
        // The third parameter tells OpenGL in which format we want to store the texture. Our image has only RGB values, so we store the texture as RGB too.
        // The fourth and fifth parameters set the width and height of the resulting texture.
        // The next parameter should always be 0 (a legacy argument).
        // The seventh and eighth parameters define the format and data type of the source image. We loaded the image as RGB values stored as chars (bytes), so we pass the corresponding values.
        // The last parameter is the actual image data. Once glTexImage2D is called, the currently bound texture object has the texture image attached, but only for the base mipmap level.
        // To use mipmaps, we would either set all the levels manually (incrementing the second parameter) or simply call glGenerateMipmap after creating the texture, which automatically generates all the required mipmaps for the currently bound texture.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else
    {
        std::cout << "Failed to load texture" << std::endl;
    }
    stbi_image_free(data); // It is a good habit to free the image memory after generating the texture and its mipmaps.
    // texture 2
    // ---------
    glGenTextures(1, &texture2);
    glBindTexture(GL_TEXTURE_2D, texture2);
    // set the texture wrapping parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    // set texture filtering parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // load image, create texture and generate mipmaps
    data = stbi_load("resources/awesomeface.png", &width, &height, &nrChannels, 0);
    if (data)
    {
        // note that awesomeface.png has transparency and thus an alpha channel, so make sure to tell OpenGL the data format is GL_RGBA
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else
    {
        std::cout << "Failed to load texture" << std::endl;
    }
    stbi_image_free(data);

    // Tell OpenGL which texture unit each shader sampler belongs to by setting each sampler with glUniform1i (here via the Shader helper). This only has to be done once, so it goes before the render loop.
    // -------------------------------------------------------------------------------------------
    ourShader.use(); // Don't forget to activate/use the shader before setting uniforms!
    ourShader.setInt("texture1", 0); // sampler texture1 uses texture unit GL_TEXTURE0
    ourShader.setInt("texture2", 1); // sampler texture2 uses texture unit GL_TEXTURE1


    // render loop
    // -----------
    while (!glfwWindowShouldClose(window))
    {
        // Per frame time logic
        // --------------------
        float currentFrame = glfwGetTime();
        deltaTime = currentFrame - lastFrame;
        lastFrame = currentFrame;

        // input
        // -----
        processInput(window);

        // render
        // ------
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear color and depth buffer with corresponding color

        // Bind the textures to their corresponding texture units
        // With glUniform1i we can assign a location value to a texture sampler so that we can use several textures in one fragment shader. A texture's location value is commonly called a texture unit.
        // The default texture unit of a texture is 0, which is also the default active texture unit, which is why we did not assign a location value in earlier parts of the tutorial. The point of texture units is to let us use more than one texture in a shader.
        // By assigning texture units to the samplers we can bind several textures at once, as long as we first activate the corresponding texture unit. Just like glBindTexture, texture units are activated with glActiveTexture, passing the texture unit we want to use.
        // After activating a texture unit, the next glBindTexture call binds the texture to the currently active unit. GL_TEXTURE0 is always active by default, which is why we did not need to activate any texture unit when using glBindTexture in earlier examples.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture2);

        // Make sure to activate the shader before calling any glUniform
        ourShader.use();

        // Transfer the projection matrix to the shader (note that in this case it may change every frame)
        glm::mat4 projection = glm::perspective(glm::radians(fov), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
        ourShader.setMat4("projection", projection);

        // camera/view transformation
        glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
        ourShader.setMat4("view", view);

        // render boxes
        // Any subsequent vertex attribute calls were stored in this VAO, so the vertex attribute pointers only had to be configured once; when drawing an object, we simply bind the corresponding VAO.
        // This makes switching between different vertex data and attribute configurations as easy as binding a different VAO; all the state we just set is stored inside it.
        // A vertex array object stores the following:
        // - calls to glEnableVertexAttribArray and glDisableVertexAttribArray,
        // - vertex attribute configurations set via glVertexAttribPointer,
        // - the vertex buffer objects associated with vertex attributes by those glVertexAttribPointer calls.
        glBindVertexArray(VAO); //When we want to draw an object, we just need to bind VAO to the desired setting before drawing the object
        for (unsigned int i = 0; i < 10; i++)
        {
            // Calculate the model matrix for each object before drawing and pass it to the shader
            glm::mat4 model = glm::mat4(1.0f); // First, make sure to initialize it to the identity matrix
            model = glm::translate(model, cubePositions[i]);
            float angle = 20.0f * i;
            model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
            ourShader.setMat4("model", model);

            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

        // glfw: double buffering used, so exchange buffers and get IO events (keys pressed/released, mouse moved etc.)
        // -------------------------------------------------------------------------------
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    // optional: de-allocate all resources once they've outlived their purpose:
    // ------------------------------------------------------------------------
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);

    // glfw: terminate, clearing all previously allocated GLFW resources.
    // ------------------------------------------------------------------
    glfwTerminate();
    return 0;
}

// process all input: query GLFW whether relevant keys are pressed/released this frame and react accordingly
// ---------------------------------------------------------------------------------------------------------
void processInput(GLFWwindow *window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
        glfwSetWindowShouldClose(window, true);

// float cameraSpeed = 2.5 * deltaTime;
    float cameraSpeed = 10 * deltaTime;
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraSpeed * cameraFront;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        cameraPos -= cameraSpeed * cameraFront;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
}

// glfw: whenever the window size changed (by OS or user resize) this callback function executes
// ---------------------------------------------------------------------------------------------
void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    // make sure the viewport matches the new window dimensions; note that width and 
    // height will be significantly larger than specified on retina displays.
    glViewport(0, 0, width, height);
}

// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }

    float xoffset = xpos - lastX;
    float yoffset = lastY - ypos; // Reversed, since y-coordinates go from bottom to top
    lastX = xpos;
    lastY = ypos;

    float sensitivity = 0.1f; // Adjust according to your preference
    xoffset *= sensitivity;
    yoffset *= sensitivity;

    yaw += xoffset;
    pitch += yoffset;

    // make sure that when pitch is out of bounds, screen doesn't get flipped
    if (pitch > 89.0f)
        pitch = 89.0f;
    if (pitch < -89.0f)
        pitch = -89.0f;

    glm::vec3 front;
    front.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
    front.y = sin(glm::radians(pitch));
    front.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
    cameraFront = glm::normalize(front);
}

// glfw: whenever the mouse scroll wheel scrolls, this callback is called
// ----------------------------------------------------------------------
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    fov -= (float)yoffset;
    if (fov < 1.0f)
        fov = 1.0f;
    if (fov > 45.0f)
        fov = 45.0f;
}

Resource reference

LearnOpenGL

