Running FlowNetPytorch with a pre-trained model (Ubuntu 18.04, CUDA 10.1, cuDNN 7.6.4)

I won't cover the definition of optical flow here; if you're not familiar with it, it's easy to look up.

There are many traditional (non-deep-learning) methods for optical flow extraction, such as those built into OpenCV. FlowNet is a deep network that currently exists in v1, v2, and v3 variants. The original author's GitHub repo has been kept up to date and provides a Docker version, but I couldn't get the Docker images configured, so I found a PyTorch implementation online instead. This post shares that process.

The first step is to configure PyTorch; there are many tutorials online for that. I'll just emphasize that you should install the components in the correct order, not randomly.

Download the PyTorch implementation of the code, linked at Here

#1 Enter the pytorch virtual environment and make sure CUDA, cuDNN, torch, etc. are installed, with versions that are compatible with each other. Without a GPU this guide won't work as-is; it can be done on CPU, but a lot would need changing.

conda activate pytorch

#Test it

python
import torch
print(torch.cuda.is_available())

#If the result is True, congratulations; you're basically ready to go
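A slightly more detailed environment check can be sketched like this (a sketch only: `cuda_summary` is a hypothetical helper, and it degrades gracefully if torch isn't importable):

```python
# Environment sanity check: reports torch/CUDA status without crashing
# when torch is missing from the environment.
import importlib.util

def cuda_summary():
    """Return a one-line description of torch/CUDA availability."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return "torch %s installed, but CUDA is not available" % torch.__version__
    return "torch %s, CUDA ok: %s" % (torch.__version__,
                                      torch.cuda.get_device_name(0))

print(cuda_summary())
```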

#2 Check requirements.txt in the folder you just downloaded to see whether your virtual environment has all the listed packages

conda list

#or with pip

pip list

#Once these are all installed, enter the following command to test whether your machine has all the required packages

python main.py -h

#If a package is missing, or the installed version is wrong, install or upgrade it as the error message suggests

#3 Prepare the data and the pre-trained model: create your own folders and put the data and the model in them
/home/flownet/FlowNetPytorch-master/data
#Fill in the data path. Note that there is a problem with the image naming the author's help text describes

#    img_pairs = []
#    for ext in args.img_exts:
#        test_files = data_dir.files('*1.{}'.format(ext))
#        for file in test_files:
#            img_pair = file.parent / (file.namebase[:-1] + '2.{}'.format(ext))
#            if img_pair.isfile():
#                img_pairs.append([file, img_pair])

#From this snippet in run_inference.py you can see that paired images must end with 1 and 2, not 0 and 1, so you need to rename your images accordingly

/home/flownet/FlowNetPytorch-master/pretrained/flownetc_EPE1_766.tar
#The model path must be written out in full. Download the pre-trained model from the original author's Google Drive (a VPN may be needed, otherwise it's very slow). Training yourself requires downloading the dataset, which may be even slower (contact me if you need it mirrored to Baidu Cloud). Do not decompress the downloaded file

#4 Run the batch-inference help to check what inputs are needed

python run_inference.py -h

The results of the run are as follows:

PyTorch FlowNet inference on a folder of img pairs

positional arguments:
  DIR                   path to images folder, image names must match
                        '[name]0.[ext]' and '[name]1.[ext]' ##Careful: this help text is wrong; names must end in 1 and 2, not 0 and 1##
  PTH                   path to pre-trained model

optional arguments:
  -h, --help            show this help message and exit
  --output DIR, -o DIR  path to output folder. If not set, will be created in
                        data folder (default: None)
  --output-value {raw,vis,both}, -v {raw,vis,both}
                        which value to output, between raw input (as a npy
                        file) and color vizualisation (as an image file). If
                        not set, will output both (default: both)
  --div-flow DIV_FLOW   value by which flow will be divided. overwritten if
                        stored in pretrained file (default: 20)
  --img-exts [EXT [EXT ...]]
                        images extensions to glob (default: ['png', 'jpg',
                        'bmp', 'ppm'])
  --max_flow MAX_FLOW   max flow value. Flow map color is saturated above this
                        value. If not set, will use flow map's max value
                        (default: None)
  --upsampling {nearest,bilinear}, -u {nearest,bilinear}
                        if not set, will output FlowNet raw input,which is 4
                        times downsampled. If set, will output full resolution
                        flow map, with selected upsampling (default: None) ##Suggested settings, output full size flow##
  --bidirectional       if set, will output invert flow (from 1 to 0) along
                        with regular flow (default: False) ##If set, outputs flow in both directions; use as needed##

#5 Having read the help, run the batch test directly

python run_inference.py -u bilinear /home/flownet/FlowNetPytorch-master/data /home/flownet/FlowNetPytorch-master/pretrained/flownetc_EPE1_766.tar

#6 The run looks like this (an example)

=> will save raw output and RGB visualization
=> fetching img pairs in '/home/flownet/FlowNetPytorch-master/data'       ##Image folder found
=> will save everything to /home/flownet/FlowNetPytorch-master/data/flow  ##Default location for the results
18 samples found                                                ##Number of image pairs found. At first, following the naming in -h, nothing would run; checking the source showed the help text is wrong!!!
=> using pre-trained model 'flownetc'                                            ##Pre-trained model loaded; can be swapped
100%|███████████| 18/18 [00:06<00:00,  2.80it/s]#Fast

#7 Check the results in the output directory
#Both the raw flow and the visualization are output; look at the visualization to check, and take the raw flow for further use

#This completes the whole process of reproducing FlowNet in PyTorch and testing it in batches. If you want to train yourself, you need to download the original author's Flying Chairs dataset; if you're interested in the original author's code and paper and want to study and reproduce them, click Here

#Next I want to get the flownet3 source code at Here running. If you've already done this, guidance and discussion are welcome


Posted on Tue, 11 Feb 2020 19:56:33 -0800 by Gibbs