Use your own data

Contents

  • Create a project with your own data
  • Extract training frames
  • Label the key parts in the training frames
  • Check the labels
  • Create a training set
  • Train the network
  • Evaluate the network
  • Analyze a new video
  • Create an automatically labeled video
  • Plot trajectories

(the following steps are optional)

  • Extract outlier frames
  • Refine labels
  • Merge datasets
  • Retrain the network

Create a project with your own data

First, the initial directory layout of Test-wumingna-DLC is as follows: the folder contains only one subfolder, videos, and the videos folder contains only the single video file to be used for training.
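Sketched as a tree:

Test-wumingna-DLC/
└── videos/
    └── mytest.mp4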

Activate the environment and import the required libraries

#Activate the conda environment
source activate deeplabcut-py36
#Enter the Python interactive environment
python
#Import the required libraries
import deeplabcut
import tensorflow as tf
import os
from pathlib import Path

Create a new project

# Custom project name
task = 'Test-wumingna-DLC'
#Custom experimenter name
experimenter = 'wumingna'
#Video path
video = ['/home/wumingna/Test-wumingna-DLC/videos/mytest.mp4']
#Create the new project; the function returns the path to the project's config.yaml
path_config_file = deeplabcut.create_new_project(task, experimenter, video, working_directory='/home/wumingna/Test-wumingna-DLC/videos', copy_videos=True)

The file structure is as follows:
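As a sketch, create_new_project typically produces a layout like the following (the project folder name includes the creation date, so the exact name will differ):

Test-wumingna-DLC-wumingna-2020-03-13/
├── config.yaml
├── dlc-models/
├── labeled-data/
├── training-datasets/
└── videos/
    └── mytest.mp4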

Extract training frames

The key to a successful feature detector is selecting diverse frames that are representative of the behavior to be labeled. This function evenly samples N frames from a specific video (or folder) when algo == 'uniform'. Note: if the behavior is sparsely distributed, consider using kmeans and/or selecting frames manually. Also make sure to sample data from different (behavioral) sessions and from different animals if these vary substantially, so that the trained feature detector is invariant to them. Individual images should not be too large (i.e., < 850 x 850 pixels). Although this can also be handled later, it is recommended to crop the frames and remove unnecessary parts as much as possible. Always check the cropped output; if you are satisfied with the result, continue to labeling.

#path_config_file (assigned above when the project was created) points to the project's config.yaml
#There are two options when extracting training frames; the general form is:
#deeplabcut.extract_frames(path_config_file, 'automatic'/'manual', 'uniform'/'kmeans', crop=True/False, userfeedback=True/False)
#(1) Extract training frames automatically
deeplabcut.extract_frames(path_config_file)  #Automatic extraction
#(2) Extract training frames manually; select "No" when asked "Do you want to crop the frames?", otherwise the images cannot be captured.
deeplabcut.extract_frames(path_config_file,'manual') #Manual extraction

'uniform' samples uniformly in time, 'kmeans' samples based on visual appearance, and manual selection is also possible. 'uniform' works best when posture changes throughout the video in a time-independent manner. However, some behaviors may be sparse, for example a very fast reach-and-grab; in that case the user should choose kmeans, which selects different frames based on visual information: the function downsamples the video, clusters the frames, and then selects frames from different clusters. This process ensures that the selected frames look different, which is usually desirable. However, for large and long videos this code is slow due to its computational complexity.
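For example, a sketch of kmeans-based automatic extraction using the general form shown above (parameter values are illustrative):

#Cluster frames by visual appearance and sample across clusters
deeplabcut.extract_frames(path_config_file, 'automatic', 'kmeans', crop=False, userfeedback=False)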

After executing the automatic extraction command, a window pops up; enter yes to randomly extract frames from the video. If there are multiple videos, the frames are stored in multiple folders named after the video file names.

The file structure is as follows (the extracted frames are saved as images):

Label the key parts in the training frames

deeplabcut.label_frames(path_config_file)

Left-click to move, right-click to place a label. After labeling, click "Save". Afterwards, CollectedData_wumingna.csv and CollectedData_wumingna.h5 are generated under labeled-data/mytest/. Note: the list under "select a bodypart to label" is determined by the bodyparts entry in the config.yaml file; you can change the number of labels yourself.
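If you want to check the configured body parts from Python before opening the GUI, a minimal sketch (assumes the PyYAML package is available):

import yaml

#Read the project configuration and print the body parts shown in the labeling GUI
with open(path_config_file) as f:
    cfg = yaml.safe_load(f)
print(cfg['bodyparts'])  #edit the bodyparts list in config.yaml to change the labels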

Check the labels

deeplabcut.check_labels(path_config_file)

The file directory is as follows: a new mytest_labeled folder is added to save the labeled pictures. (The images saved in the mytest and mytest_labeled folders under labeled-data are the same; the only difference is whether they carry labels or not.)

Create a training set

deeplabcut.create_training_dataset(path_config_file)
# You can create multiple training subsets (1 by default) with the optional parameter num_shuffles=n, and you can use the net_type parameter to select a different network (ResNet-50 by default), e.g.:
# deeplabcut.create_training_dataset(path_config_file, num_shuffles=n, net_type='resnet_101')

This function splits the labeled dataset and creates a training set and a test set. The subdirectory iteration-# of the training-datasets directory stores the dataset and meta information, where # is the value of the iteration variable stored in the project configuration file. If you want to benchmark the performance of DeepLabCut, you can create multiple training subsets by passing an integer value to the num_shuffles parameter.

Each iteration creates a .mat file, which contains the image addresses and target poses, and a .pickle file, which contains meta information about the training dataset. This step also creates directories for the models, including two subdirectories called test and train under dlc-models. Each subdirectory has a configuration file named pose_cfg.yaml. If necessary, you can edit pose_cfg.yaml before starting training.

After execution, the file structure is as follows:

Train the network

deeplabcut.train_network(path_config_file, shuffle=1, displayiters=300, saveiters=10)

# This function has many optional parameters. After learning the basic operation, you can gradually learn how to use the others.

#The complete signature is as follows:
train_network(config_path, shuffle=1, trainingsetindex=0, gputouse=None,
              max_snapshots_to_keep=5, displayiters=1000, saveiters=20000,
              maxiters=200000)
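For example, a call that sets a few of these explicitly (values are illustrative, not recommendations):

#Train shuffle 1 on GPU 0, print progress every 1000 iterations,
#save a snapshot every 20000, and stop after 200000 iterations
deeplabcut.train_network(path_config_file, shuffle=1, gputouse=0, displayiters=1000, saveiters=20000, maxiters=200000)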

After about 30 minutes of training, press Ctrl+C to stop training.

The new file structure is as follows:

Evaluate the network

#Set plotting to True to plot the manual and predicted labels on all test and training images
deeplabcut.evaluate_network(path_config_file, plotting=True)

New file structure:

Analyze a new video (here the original video is analyzed again)

deeplabcut.analyze_videos(path_config_file, video, videotype='.mp4')

The labels are stored in a multi-index pandas array, which contains the network name, body part name, (x, y) label positions (in pixels), and the likelihood for each body part in each frame.
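To inspect the predictions from Python, a minimal sketch (the .h5 file name below is hypothetical; use the file that analyze_videos actually wrote next to the video, which is named after the video and the trained network):

import pandas as pd

#Hypothetical result file name; replace with the .h5 produced by analyze_videos
results_file = '/home/wumingna/Test-wumingna-DLC/videos/mytestDLC.h5'
df = pd.read_hdf(results_file)
print(df.head())          #one row per frame
print(df.columns.levels)  #column levels: scorer, bodyparts, coords (x, y, likelihood)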

New file:

Create an automatically labeled video

This function creates an .mp4 video with the predicted labels. The video is saved in the same directory as the unlabeled video.

deeplabcut.create_labeled_video(path_config_file, video, draw_skeleton=True)

To plot the trajectories:

deeplabcut.plot_trajectories(path_config_file, video)

New file:

If you are satisfied with the results, you can stop here; if not, proceed with the following steps:

Extract outlier frames

deeplabcut.extract_outlier_frames(path_config_file, video)

New directory:

Refine the labels manually: this step lets the user correct the labels in the extracted frames

deeplabcut.refine_labels(path_config_file)

Merge datasets:

deeplabcut.merge_datasets(path_config_file)

Create a new iteration of the training dataset, check it, and train... (the remaining steps repeat the ones above, training on the new dataset.)
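Put together, the refinement loop sketched from the steps above (these are the documented calls from this tutorial; repeat until the labels look good):

deeplabcut.extract_outlier_frames(path_config_file, video)  #pick frames the network struggled with
deeplabcut.refine_labels(path_config_file)                  #correct the labels in the GUI
deeplabcut.merge_datasets(path_config_file)                 #merge the corrections into the dataset
deeplabcut.create_training_dataset(path_config_file)        #new iteration of the training set
deeplabcut.train_network(path_config_file)                  #retrain on the merged data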
