Visual odometry: optical flow

Optical flow is a method for describing the motion of pixels across images over time.

It is built on the grayscale-constancy assumption: the same spatial point keeps the same gray value across the frames in which it is observed,

$$I(x_1, y_1, t_1) = I(x_2, y_2, t_2) = I(x_3, y_3, t_3)$$
Lucas-Kanade optical flow
The gray level of a pixel at position $(x, y)$ at time $t$ can be written as a function of position and time:

$$I(x, y, t)$$
Based on the assumption that the gray level is constant:
$$I(x+dx, y+dy, t+dt) = I(x, y, t)$$
Expanding the left-hand side in a Taylor series and keeping only the first-order terms:
$$I(x+dx, y+dy, t+dt) \approx I(x, y, t) + \frac{\partial \mathbf{I}}{\partial x}dx + \frac{\partial \mathbf{I}}{\partial y}dy + \frac{\partial \mathbf{I}}{\partial t}dt$$
Thus:
$$\begin{aligned} \frac{\partial \mathbf{I}}{\partial x}dx + \frac{\partial \mathbf{I}}{\partial y}dy + \frac{\partial \mathbf{I}}{\partial t}dt &= 0 \\ \frac{\partial \mathbf{I}}{\partial x}\frac{dx}{dt} + \frac{\partial \mathbf{I}}{\partial y}\frac{dy}{dt} &= -\frac{\partial \mathbf{I}}{\partial t} \end{aligned}$$
Here $dx/dt$ is the pixel's velocity along the $x$-axis and $dy/dt$ its velocity along the $y$-axis; denote them $u$ and $v$. Likewise, $\partial \mathbf{I}/\partial x$ is the image gradient along the $x$ direction and $\partial \mathbf{I}/\partial y$ the gradient along the $y$ direction; denote them $\mathbf{I}_x$ and $\mathbf{I}_y$. In compact matrix form:

$$\begin{bmatrix} \mathbf{I}_x & \mathbf{I}_y \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\mathbf{I}_t$$
This is a single equation in the two unknowns $u$ and $v$, so one pixel alone is not enough. Consider a window of size $w \times w$, which contains $w^2$ pixels, and assume all pixels inside it share the same motion; this gives $w^2$ equations in total:

$$\begin{bmatrix} \mathbf{I}_x & \mathbf{I}_y \end{bmatrix}_k \begin{bmatrix} u \\ v \end{bmatrix} = -\mathbf{I}_{tk}, \quad k = 1, \dots, w^2$$
Denote

$$\mathbf{A} = \begin{bmatrix} \begin{bmatrix} \mathbf{I}_x & \mathbf{I}_y \end{bmatrix}_1 \\ \vdots \\ \begin{bmatrix} \mathbf{I}_x & \mathbf{I}_y \end{bmatrix}_k \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} \mathbf{I}_{t1} \\ \vdots \\ \mathbf{I}_{tk} \end{bmatrix}$$
The whole system can then be written as:

$$\mathbf{A} \begin{bmatrix} u \\ v \end{bmatrix} = -\mathbf{b}$$
This overdetermined system generally has no exact solution, so we take the least-squares optimum, obtained via the pseudo-inverse:

$$\begin{bmatrix} u \\ v \end{bmatrix}^{*} = -(\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{b}$$
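The following is a minimal sketch of this solve, not the tracking code used later in this post: it accumulates the entries of $\mathbf{A}^T\mathbf{A}$ and $\mathbf{A}^T\mathbf{b}$ over a single window and inverts the resulting 2x2 system in closed form. The function name lkSingleWindow, the window half-size r, and the assumption that both frames are CV_32F grayscale images with (x, y) at least r pixels from the border are all illustrative:

#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Estimate (u, v) for the w x w window centered at (x, y), with w = 2r+1.
cv::Point2f lkSingleWindow( const cv::Mat& prev, const cv::Mat& next, int x, int y, int r = 2 )
{
    cv::Mat Ix, Iy;
    cv::Sobel( prev, Ix, CV_32F, 1, 0, 3, 1.0/8 );  // I_x: image gradient along x
    cv::Sobel( prev, Iy, CV_32F, 0, 1, 3, 1.0/8 );  // I_y: image gradient along y
    cv::Mat It = next - prev;                       // I_t: temporal gradient

    // accumulate the entries of A^T A and A^T b over the window
    double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
    for ( int dy = -r; dy <= r; dy++ )
        for ( int dx = -r; dx <= r; dx++ )
        {
            float ix = Ix.at<float>( y+dy, x+dx );
            float iy = Iy.at<float>( y+dy, x+dx );
            float it = It.at<float>( y+dy, x+dx );
            a11 += ix*ix; a12 += ix*iy; a22 += iy*iy;
            b1  += ix*it; b2  += iy*it;
        }
    double det = a11*a22 - a12*a12;
    if ( std::fabs( det ) < 1e-9 )      // A^T A singular: gradient too uniform, flow ambiguous
        return cv::Point2f( 0, 0 );
    // [u v]^T = -(A^T A)^{-1} A^T b, with the 2x2 inverse written out explicitly
    return cv::Point2f( -( a22*b1 - a12*b2 ) / det,
                        -( a11*b2 - a12*b1 ) / det );
}

In practice the gradients would be computed once per frame pair rather than per window, and cv::calcOpticalFlowPyrLK (used below) adds image pyramids and iterative refinement on top of this idea to handle larger motions.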
In practice, LK optical flow is usually used to track the motion of corner points.

TUM public dataset

Decompress the dataset:

tar -zxvf data.tar.gz


1. rgb.txt and depth.txt record the capture time and the corresponding file name of each image.
2. The rgb/ and depth/ directories store the captured png image files. Color images are 8-bit, 3-channel; depth images are 16-bit, single-channel. Each file is named after its capture time.
3. groundtruth.txt contains the camera poses recorded by the external motion-capture system (a small parsing sketch follows the associate.py usage below), one per line in the format:

$$(\text{time}, t_x, t_y, t_z, q_x, q_y, q_z, q_w)$$

4. associate.py matches each depth image with a color image.
Usage:

python associate.py rgb.txt depth.txt > associate.txt
# if numpy is missing: sudo apt-get install python-numpy

This generates associate.txt.
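As mentioned in item 3 above, groundtruth.txt stores one pose per line. Here is a minimal parsing sketch (my own illustration, not one of the dataset tools; the hard-coded file path and the '#' header-comment convention are assumptions):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::ifstream fin( "groundtruth.txt" );   // path is an assumption
    std::string line;
    while ( std::getline( fin, line ) )
    {
        if ( line.empty() || line[0] == '#' ) // skip header comment lines
            continue;
        std::istringstream iss( line );
        double time, tx, ty, tz, qx, qy, qz, qw;
        if ( iss >> time >> tx >> ty >> tz >> qx >> qy >> qz >> qw )
            std::cout << time << ": t = (" << tx << ", " << ty << ", " << tz
                      << "), q = (" << qx << ", " << qy << ", " << qz << ", " << qw << ")\n";
    }
    return 0;
}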

useLK.cpp content

#include <iostream>
#include <fstream>
#include <list>
#include <vector>
#include <chrono>
using namespace std; 

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/video/tracking.hpp>

int main( int argc, char** argv )
{
    if ( argc != 2 )
    {
        cout<<"usage: useLK path_to_dataset"<<endl;
        return 1;
    }
    string path_to_dataset = argv[1];
    string associate_file = path_to_dataset + "/associate.txt";
    
    ifstream fin( associate_file );
    if ( !fin ) 
    {
        cerr<<"I cann't find associate.txt!"<<endl;
        return 1;
    }
    
    string rgb_file, depth_file, time_rgb, time_depth;
    list< cv::Point2f > keypoints;      // use a std::list so points that fail tracking can be erased cheaply
    cv::Mat color, depth, last_color;
    
    for ( int index=0; index<100; index++ )
    {
        fin>>time_rgb>>rgb_file>>time_depth>>depth_file;
        color = cv::imread( path_to_dataset+"/"+rgb_file );
        depth = cv::imread( path_to_dataset+"/"+depth_file, -1 );
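        // the -1 flag (cv::IMREAD_UNCHANGED) keeps the depth image as 16-bit single-channel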
        if ( color.data==nullptr || depth.data==nullptr )
            continue;   // skip frames that failed to load (checked before they are used)
        if ( keypoints.empty() )
        {
            // Extract FAST feature points from the first valid frame
            vector<cv::KeyPoint> kps;
            cv::Ptr<cv::FastFeatureDetector> detector = cv::FastFeatureDetector::create();
            detector->detect( color, kps );
            for ( auto kp:kps )
                keypoints.push_back( kp.pt );
            last_color = color;
            continue;
        }
        // Tracking feature points with LK for other frames
        vector<cv::Point2f> next_keypoints; 
        vector<cv::Point2f> prev_keypoints;
        for ( auto kp:keypoints )
            prev_keypoints.push_back(kp);
        vector<unsigned char> status;
        vector<float> error; 
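        // calcOpticalFlowPyrLK fills status (1 = tracked successfully) and a per-point error measure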
        chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
        cv::calcOpticalFlowPyrLK( last_color, color, prev_keypoints, next_keypoints, status, error );
        chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
        chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>( t2-t1 );
        cout<<"LK Flow use time: "<<time_used.count()<<" seconds."<<endl;
        // Remove the points whose tracking failed
        int i=0; 
        for ( auto iter=keypoints.begin(); iter!=keypoints.end(); i++)
        {
            if ( status[i] == 0 )
            {
                iter = keypoints.erase(iter);
                continue;
            }
            *iter = next_keypoints[i];
            iter++;
        }
        cout<<"tracked keypoints: "<<keypoints.size()<<endl;
        if (keypoints.size() == 0)
        {
            cout<<"all keypoints are lost."<<endl;
            break; 
        }
        // Draw keypoints
        cv::Mat img_show = color.clone();
        for ( auto kp:keypoints )
            cv::circle(img_show, kp, 10, cv::Scalar(0, 240, 0), 1);
        cv::imshow("corners", img_show);
        cv::waitKey(0);
        last_color = color;
    }
    return 0;
}

CMakeLists.txt content

cmake_minimum_required( VERSION 2.8 )
project( useLK )

set( CMAKE_BUILD_TYPE Release )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )

find_package( OpenCV REQUIRED )
include_directories( ${OpenCV_INCLUDE_DIRS} )

add_executable( useLK useLK.cpp )
target_link_libraries( useLK ${OpenCV_LIBS} )

Terminal:

cd slambook/ch8/LKFlow
mkdir build
cd build
cmake ..
make
./useLK ../../data
# specify the location of the dataset here


The gradual loss of feature points over time can be seen clearly in the terminal output.
