3D visualization system of subway station based on HTML5 + WebGL


With the rise of information technology, terms such as "Industrial Internet", "Internet of Things", and "visualization" have become familiar, and everyday activities like transportation and travel can increasingly be expressed through it. Traditional visual monitoring is generally built on Web SCADA front-end technology and limited to 2D. This system uses Hightopo's HT for Web products to build a lightweight 3D visualization scene that presents a realistic front view of a subway station, including real-time train operation, passengers boarding and alighting, video surveillance, smoke alarms, elevator operation, and more, helping us understand the current state of the station at a glance.

To help users browse the subway station more intuitively, the system provides three interaction modes:

  • First-person mode: movement similar to a pedestrian or vehicle walking through the scene, controlled with the keyboard and mouse.
  • Automatic patrol mode: the user needs no input; the scene automatically moves forward and backward to patrol the station.
  • Mouse mode: left-click and drag to rotate the scene, right-click and drag to pan it.

This article describes how the visualization scene is built, how the animations are implemented, how the interaction modes work, and how the main features are realized, to help you understand how to use HT to build a simple subway station visualization.

Preview address: 3D visualization system of subway station based on HTML5 WebGL http://www.hightopo.com/demo/ht-subway/

Interface introduction and effect preview

Metro operation effect

As the train travels from outside the station to the inside, it gradually fades into view and its speed gradually decreases.

Roaming effect

The above shows the roaming effect of automatic patrol: the scene rotates and moves forward automatically.

Monitoring device interaction effect

Clicking a piece of monitoring equipment in the scene displays its current operating status, operating data, and other information.

Scene construction

Most models in the system are built in 3ds Max, which exports obj and mtl files. In HT, any complex model in the 3D scene can be generated by parsing these obj and mtl files. For simple models, however, drawing them directly with HT is lighter-weight than loading an obj model, so most simple models use HT for Web's lightweight HTML5/WebGL modeling scheme. The parsing code is as follows:

```javascript
// obj file address and mtl file address, respectively
ht.Default.loadObj('obj/metro.obj', 'obj/metro.mtl', {
    // whether to center the model; defaults to false, and when true the
    // model's content is moved so that it is centered on its position
    center: true,
    // rotation parameter, in the format [rx, ry, rz]
    r3: [0, -Math.PI / 2, 0],
    // scale parameter, in the format [sx, sy, sz]
    s3: [0.15, 0.15, 0.15],
    finishFunc: function(modelMap, array, rawS3) {
        if (modelMap) {
            // register a model named 'metro'
            ht.Default.setShape3dModel('metro', array);
        }
    }
});
```

After loading the obj model, a model named metro is registered. After that, if you want to use the model, you can use the following code:

```javascript
var node = new ht.Node();
node.s('shape3d', 'metro'); // point the node's style at the registered model
dataModel.add(node);        // add the node to the scene's data model
```

The code above creates a new node and, by setting the shape3d style property to 'metro', binds the registered model to the node; the metro train model then appears in the scene.

Animation code analysis

Subway animation code analysis

The subway movement in the scene is driven by the scheduling mechanism provided by HT; for specific usage, see the HT for Web Scheduling Manual. Scheduling invokes a callback at a fixed interval. The first parameter of the callback is a data element, i.e. a model node in the 3D scene, so we can check whether it is the metro node we created before doing anything else. The scene simulates two trains, one on the left track and one on the right, which run alternately. The 3D scene uses a coordinate system with x, y, and z axes, so moving the train means changing its position in that coordinate system. The subway's coordinates are as follows:

The figure above shows the train's position in the 3D coordinate system. To move the train, we only need to translate it along the red arrow in the figure, i.e. along the x-axis, by repeatedly calling the setX method. As it moves, the getSpeedByX and getOpacityByX methods return the train's current speed and opacity based on its x position. This is realized by the following key code:

```javascript
let metroTask = {
    interval: 50, // the callback fires every 50 milliseconds
    action: (data) => { // the callback function mentioned above
        // only handle the metro train node that is currently running
        if (data === currentMetro) {
            // current x-axis position and travel direction of the train
            let currentX = data.getX(),
                direction = data.a('direction');
            // current speed, derived from the x-axis position
            let speed = this.getSpeedByX(currentX);
            // current opacity, derived from the x-axis position
            let opacity = this.getOpacityByX(currentX);
            // the train only moves within a fixed range of the x-axis
            if (Math.abs(currentX) <= 5000) {
                // apply the current opacity
                opacity !== 1 ? currentMetro.s({
                    'shape3d.transparent': true,
                    'shape3d.opacity': opacity
                }) : currentMetro.s({
                    'shape3d.transparent': false
                });
                // advance the train along the x-axis
                data.setX(currentX + direction * speed);
                // speed 0 means the train has stopped at the platform, so play the door animation
                if (speed === 0) this.doorAnimation(currentMetro, direction);
            }
            // the right-bound train reached the end of the section: switch trains
            if (currentX > 5000 && direction === 1) {
                currentMetro = leftMetro;
                currentMetro.setX(5000);
            }
            // the left-bound train reached the end of the section: switch trains
            if (currentX < -5000 && direction === -1) {
                currentMetro = rightMetro;
                currentMetro.setX(-5000);
            }
        }
    }
};
```
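The getSpeedByX and getOpacityByX implementations are not shown in the article. The following is a self-contained sketch with assumed linear ramps: the boundary of 5000 matches the range check in the scheduling code, but the stop-zone width, top speed, and fade distance are invented here purely for illustration.

```javascript
// Hypothetical helpers: the real curves are not shown in the article.
// Assumption: the train runs between x = -5000 and x = 5000, stops in a
// small platform zone around x = 0, and fades out near the section ends.
var MAX_X = 5000;     // travel boundary (matches the range check above)
var STOP_ZONE = 100;  // assumed half-width of the platform stop zone
var MAX_SPEED = 40;   // assumed top speed, in scene units per tick

function getSpeedByX(x) {
    var absX = Math.abs(x);
    if (absX <= STOP_ZONE) return 0; // stopped at the platform
    // speed ramps linearly from 0 at the stop zone to MAX_SPEED at the boundary
    var t = (absX - STOP_ZONE) / (MAX_X - STOP_ZONE);
    return Math.min(MAX_SPEED, t * MAX_SPEED);
}

function getOpacityByX(x) {
    var absX = Math.abs(x);
    var FADE_START = 4000; // assumed: fully opaque while |x| < 4000
    if (absX <= FADE_START) return 1;
    // fade out linearly toward the section ends
    return Math.max(0, 1 - (absX - FADE_START) / (MAX_X - FADE_START));
}
```

The real curves may well differ; the only properties the animation depends on are that the speed drops to 0 at the platform and the opacity drops to 0 at the section ends.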

From the code above, the forward animation is produced mainly by modifying the train's x-axis position. Because the train must stay within a fixed section, the boundary has to be checked, and to simulate a realistic effect, the current speed and opacity are derived from the train's current position. The flow chart is as follows:

The figure above shows the process of the train entering the station. After the train has stopped and the doors have closed, it needs to depart. To do this, we simply move the train's x position off zero so that the next scheduling tick computes a non-zero speed. Part of the implementation:

```javascript
currentMetro.setX(direction * 10); // nudge the departing train off x = 0
```

After this code runs, the metroTask scheduler's next call to getSpeedByX returns a non-zero speed, so the departure animation continues: the speed goes from slow to fast, and the train gradually fades out. The following is the execution flow of the door-opening animation:
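The doorAnimation implementation itself is not included in the article. Purely as an illustration of a time-based door slide (every name and value below is an assumption, not HT API), the per-frame door offset could be computed by a pure easing function:

```javascript
// Hypothetical: the door's sideways offset at progress t in [0, 1].
// DOOR_WIDTH is an assumed travel distance; easeOutQuad gives a
// decelerating slide that looks more natural than a linear one.
var DOOR_WIDTH = 2; // assumed door travel distance, in scene units

function easeOutQuad(t) {
    return 1 - (1 - t) * (1 - t);
}

function doorOffsetAt(t, opening) {
    t = Math.min(1, Math.max(0, t)); // clamp progress to [0, 1]
    var progress = opening ? easeOutQuad(t) : 1 - easeOutQuad(t);
    return DOOR_WIDTH * progress;
}
```

A per-frame animation would then sample doorOffsetAt with increasing t and apply the offset to the door node's position.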

Implementation and analysis of automatic patrol code

Automatic patrol is implemented by modifying the eye and center values of the 3D scene. HT provides two methods, rotate and walk, to control view rotation and view movement. In non-first-person mode, rotation is centered on center, i.e. the view orbits around the center point; in first-person mode, rotation is centered on eye, i.e. the view direction rotates around the eye. The walk method changes eye and center together, moving both by the same offset along the vector between them. In this system I did not use the built-in rotate method, because it jumps to the target angle immediately without a smooth transition. Instead, I re-implemented the rotation by continuously modifying the value of center; the process is shown in the following figure:

Some implementation codes are as follows:

```javascript
rotateStep() {
    // auxiliary point C in the figure above
    let fromCenter = this.fromCenter;
    // target point B in the figure above
    let toCenter = this.toCenter;
    // angle to rotate per frame
    let rotateValue = this.rotateFrame || Math.PI / 180;
    // direction vector from auxiliary point C to B
    let centerVector = new ht.Math.Vector2(toCenter.x - fromCenter.x, toCenter.y - fromCenter.y);
    let centerVectorLength = centerVector.length();
    // how far the rotation has progressed at this frame
    let rotatePercent = rotateValue * this.stepNum / this.curRotateVal;
    if (rotatePercent >= 1) {
        rotatePercent = 1;
        this.stepNum = -2;
    }
    let newLength = rotatePercent * centerVectorLength;
    // scale the vector to the current length, then translate it back to C
    let newCenterVector = centerVector.setLength(newLength).add(fromCenter);
    // the center's position at this point of the rotation
    let newCenterPosition = [newCenterVector.x, this.personHeight, newCenterVector.y];
    // apply the new center to the 3d view
    this.g3d.setCenter(newCenterPosition);
}
```

The code above implements the view rotation in the scene; the rotation speed can be controlled by changing the value of rotateValue.
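The walk behaviour described earlier in this section, moving eye and center by the same offset so the viewing direction stays fixed, can be sketched as plain math, independent of the HT API (the function name and array layout here are chosen for the example):

```javascript
// Pure-math sketch of "walk": move eye and center by `step` along the
// normalized eye -> center direction, leaving the viewing direction unchanged.
// eye and center are [x, y, z] arrays.
function walk(eye, center, step) {
    var dx = center[0] - eye[0],
        dy = center[1] - eye[1],
        dz = center[2] - eye[2];
    var len = Math.sqrt(dx * dx + dy * dy + dz * dz);
    // unit direction vector from eye to center
    var ux = dx / len, uy = dy / len, uz = dz / len;
    return {
        eye: [eye[0] + ux * step, eye[1] + uy * step, eye[2] + uz * step],
        center: [center[0] + ux * step, center[1] + uy * step, center[2] + uz * step]
    };
}
```

Because both points receive the same offset, the vector between them, and therefore the view direction, is preserved while the camera advances.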

Elevator animation code analysis

In the scene, the elevator is an obj model. A 3D model is composed of elementary triangle faces: a rectangle can be built from two triangles, a cube from six faces or twelve triangles, and so on; more complex models are simply made of many small triangles. A 3D model definition is therefore a description of all the triangles that make up the model. Each triangle consists of three vertices, and each vertex is a three-dimensional coordinate (x, y, z). In HT, a flat array called vs records all the vertex coordinates of the triangle faces, so to make the elevator run, we only need to shift every vertex coordinate in the elevator's direction of travel. The key pseudocode follows:

```javascript
// vs is the flat array of vertex coordinates of all triangles in the elevator model.
// The elevator in the scene moves diagonally toward the upper right, so only
// the x and y coordinate values need to change.
// xStep / yStep is the distance the elevator moves per tick.
setInterval(() => {
    // i += 3 because vs is laid out as x, y, z, x, y, z, ...
    for (let i = 0, l = vs.length; i < l; i += 3) {
        // next x coordinate of this vertex
        let nextX = vs[i] - xStep;
        // next y coordinate of this vertex
        let nextY = vs[i + 1] + yStep;
        // wrap around at the model's +-0.5 bounds so the motion loops
        vs[i] = nextX < -0.5 ? 0.5 - (Math.abs(nextX) - 0.5) : nextX;
        vs[i + 1] = nextY > 0.5 ? -0.5 + (Math.abs(nextY) - 0.5) : nextY;
    }
}, 50); // tick interval; the original value is not shown in the source
```
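The wrap-around logic in the pseudocode above can be factored into a pure helper and exercised on its own. The ±0.5 bounds come from the pseudocode; the helper name and the step values used below are chosen for the example:

```javascript
// Move one vertex one step up-and-left along the belt direction, wrapping at
// the model's +-0.5 bounds, mirroring the elevator pseudocode above.
// Returns the new [x, y] pair for the vertex.
function stepVertex(x, y, xStep, yStep) {
    var nextX = x - xStep;
    var nextY = y + yStep;
    return [
        // once past -0.5, re-enter from the +0.5 side by the overshoot amount
        nextX < -0.5 ? 0.5 - (Math.abs(nextX) - 0.5) : nextX,
        // once past +0.5, re-enter from the -0.5 side by the overshoot amount
        nextY > 0.5 ? -0.5 + (Math.abs(nextY) - 0.5) : nextY
    ];
}
```

Applying stepVertex to every (x, y) pair in vs each tick reproduces the looping motion of the elevator steps.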

The animation of elevator movement is shown as follows:

Display and introduction of monitoring function

Video surveillance

Clicking a camera in the scene displays that camera's monitoring feed at the top right, as shown below:

Smoke alarm monitoring

The smoke alarm changes the color of its model according to the real-time status value pushed from the back end; red indicates an alarm state. The effect is shown below:

TV train arrival time monitoring

Real subway stations have dedicated screens showing the arrival schedule of the next train, and the system simulates this effect too. However, the screen model is a temporary placeholder, and the times are not yet connected to live data. The effect is shown below:

Scene monitoring interaction

Interaction in the 3D scene is relatively simple: it mainly consists of clicking a camera to display the 2D monitoring panel. In the 2D interface, the main interaction is switching between the three mutually exclusive interaction modes. The 3D interaction event registration code is:

```javascript
g3d.mi((e) => {
    // the 2d view and 2d data model (assumed to be kept on `this`)
    let { g2d, dm2d } = this;
    // click on a primitive in the 3d scene
    if (e.kind === 'clickData') {
        // data is the clicked primitive
        let data = e.data;
        // shape3d type of the clicked primitive
        let shape3d = data.s('shape3d');
        // is the clicked primitive a camera?
        if (shape3d && shape3d.indexOf('Camera') > 0) {
            let cameraPanel = dm2d.getDataByTag('cameraPanel');
            // toggle the camera 2d panel
            g2d.isVisible(cameraPanel) ? cameraPanel.s('2d.visible', false) : cameraPanel.s('2d.visible', true);
        }
    }
    // click on the 3d scene background
    if (e.kind === 'clickBackground') {
        let cameraPanel = dm2d.getDataByTag('cameraPanel');
        // hide the camera 2d panel
        g2d.isVisible(cameraPanel) && cameraPanel.s('2d.visible', false);
    }
});
```


The Industrial Internet connects people, data, and machines, and this 3D subway station visualization system is a good demonstration of it: HT's lightweight rendering, data visualization, machine visualization, and asset management help us monitor more effectively. Through various sensing devices, the Internet of Things collects in real time the information needed to monitor, connect, and interact with objects and processes, and combining it with HT shows off the strengths of visualization. The subway station can also be combined with VR: at science and technology exhibitions we see all kinds of VR scene operations, and HT supports VR devices too, so you can put on a headset and roam the station as if you were really there. Because the scene itself is lightweight, the VR experience stays smooth, and users do not feel dizzy. The system also runs on mobile, as shown in the screenshot below:

Screenshot of program operation:

Tags: Javascript html5 Attribute Mobile

Posted on Fri, 08 Nov 2019 00:17:43 -0800 by moomsdad