A Working Example

This example shows:

  1. how to run the program on a video

  2. how to set the important parameters for detection, view fixing, and post-processing

  3. how to show the results

The detection part

The detection part starts by loading a pretrained YOLOv4 network. The training should be done separately, before running the tracking program.

The library that loads the structure of YOLO is part of the project (https://github.com/Tianxiaomo/pytorch-YOLOv4)

Using another trained detection model is also possible; it just needs to be loaded within the class YoloDetector after changing the following paths in the config file:

  1. The model file directory

  2. The directory of the YOLO network configuration file

  3. A new file with the extension .names containing the class names

The training can be done in any library you want (TensorFlow, Darknet). However, the result should be in .pth format only.
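
For example, pointing the config at a custom-trained model could look like the sketch below. The paths are placeholders; the key names are taken from the Detection section of the config file listed further down.

```python
import offlinemot

# Load the default config, then override the model-related entries
# (placeholder paths -- replace them with your own files).
cfg = offlinemot.config.configs()
cfg['model_name'] = 'path/to/custom_model.pth'        # trained weights (.pth)
cfg['model_config'] = 'path/to/yolov4-custom.cfg'     # YOLO network structure
cfg['classes_file_name'] = 'path/to/custom.names'     # one class name per line

cfg.write('custom.ini')  # save the edited configuration
```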

First, install the library simply by running:

[1]:
!pip install -U offlinemot
Collecting offlinemot
  Downloading offlinemot-1.0.3.tar.gz (28.1 MB)
     ---------------------------------------- 28.1/28.1 MB 10.7 MB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: numpy in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from offlinemot) (1.18.5)
Requirement already satisfied: opencv-contrib-python in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from offlinemot) (4.5.3.56)
Requirement already satisfied: scikit-image in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from offlinemot) (0.18.1)
Requirement already satisfied: torch in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from offlinemot) (1.10.2)
Requirement already satisfied: scipy in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from offlinemot) (1.4.1)
Requirement already satisfied: gdown in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from offlinemot) (4.3.1)
Requirement already satisfied: tqdm in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from gdown->offlinemot) (4.62.3)
Requirement already satisfied: requests[socks] in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from gdown->offlinemot) (2.25.1)
Requirement already satisfied: filelock in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from gdown->offlinemot) (3.6.0)
Requirement already satisfied: beautifulsoup4 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from gdown->offlinemot) (4.8.1)
Requirement already satisfied: six in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from gdown->offlinemot) (1.14.0)
Requirement already satisfied: tifffile>=2019.7.26 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from scikit-image->offlinemot) (2021.4.8)
Requirement already satisfied: PyWavelets>=1.1.1 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from scikit-image->offlinemot) (1.1.1)
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,>=4.3.0 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from scikit-image->offlinemot) (8.2.0)
Requirement already satisfied: networkx>=2.0 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from scikit-image->offlinemot) (2.5.1)
Requirement already satisfied: imageio>=2.3.0 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from scikit-image->offlinemot) (2.9.0)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from scikit-image->offlinemot) (3.4.2)
Requirement already satisfied: typing-extensions in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from torch->offlinemot) (3.10.0.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->offlinemot) (1.1.0)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->offlinemot) (2.8.1)
Requirement already satisfied: cycler>=0.10 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->offlinemot) (0.10.0)
Requirement already satisfied: pyparsing>=2.2.1 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->offlinemot) (2.4.7)
Requirement already satisfied: decorator<5,>=4.3 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from networkx>=2.0->scikit-image->offlinemot) (4.4.1)
Requirement already satisfied: soupsieve>=1.2 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from beautifulsoup4->gdown->offlinemot) (1.9.5)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from requests[socks]->gdown->offlinemot) (1.26.3)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from requests[socks]->gdown->offlinemot) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from requests[socks]->gdown->offlinemot) (4.0.0)
Requirement already satisfied: idna<3,>=2.5 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from requests[socks]->gdown->offlinemot) (2.10)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from requests[socks]->gdown->offlinemot) (1.7.1)
Requirement already satisfied: colorama in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from tqdm->gdown->offlinemot) (0.4.4)
Requirement already satisfied: setuptools in c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->offlinemot) (59.5.0)
Building wheels for collected packages: offlinemot
  Building wheel for offlinemot (setup.py): started
  Building wheel for offlinemot (setup.py): finished with status 'done'
  Created wheel for offlinemot: filename=offlinemot-1.0.3-py3-none-any.whl size=28144517 sha256=1bf6a028ccdc3857436c18197616d3086a37831681d2f967db9e335f31559740
  Stored in directory: c:\users\yasin\appdata\local\pip\cache\wheels\8b\fd\9b\f9493bf735bb3b1a5574b65eaee317ea6036e880bf2b9e47b5
Successfully built offlinemot
Installing collected packages: offlinemot
Successfully installed offlinemot-1.0.3
WARNING: Ignoring invalid distribution -pencv-python (c:\users\yasin\appdata\local\programs\python\python37\lib\site-packages)
WARNING: You are using pip version 22.0.3; however, version 22.0.4 is available.
You should consider upgrading via the 'C:\Users\yasin\AppData\Local\Programs\Python\Python37\python.exe -m pip install --upgrade pip' command.
[2]:
import offlinemot
from offlinemot.detection import YoloDetector

The class YoloDetector takes several input parameters: the YOLO configuration file, the model file (in PyTorch format), a flag for whether to use the GPU or the CPU, and lastly a file containing the list of object names.

If no custom model is provided, the default model is loaded (as shown below). The example model is downloaded on first load, which may take some time depending on your internet speed.

[3]:
cfg = offlinemot.config.configs()
detector  = YoloDetector(cfg)

Detection is only performed every N frames, where N is set in the configs class instance. The following sections give a detailed description of all its parameters and their effects.
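
The detect-every-N idea can be pictured with a toy schedule (an illustration only, not the library's actual loop): the detector runs on every Nth frame, and the cheaper tracker fills the frames in between.

```python
# Which step runs on each frame for detect_every_N = 3:
# frame 0, 3, 6, ... get a full detection pass, the rest are tracked.
detect_every_N = 3
schedule = ["detect" if i % detect_every_N == 0 else "track"
            for i in range(7)]
print(schedule)
# → ['detect', 'track', 'track', 'detect', 'track', 'track', 'detect']
```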

Note

Some experimenting with the parameter values may be needed for different types of videos. Once a working set of values is found, all similar videos can use the same set.

Config file content

To edit and display all these parameters, the following can be run:

import offlinemot
cfg = offlinemot.config.configs()

cfg['detect_every_N'] = 3 #changing some value

cfg.print_summary() #showing the most important parameters and sections names

cfg.print_section('Detection') # showing the detection parameters

cfg.write('new.ini')

This allows changes to all the parameters to be tested and saved.

Detection and general configuration parameters

Other parameters influencing the detection and the general settings are:

### general parameters
draw = True
detect_every_N = 1
missing_thresh = 0.8
use_cuda = False
resize_scale = 0.4
colors_map = [ (0,255,0), # ped
               (255,0,0), # cyclist
               (0, 0, 0)] # cars

### Detection parameters
cwd = os.path.dirname(os.path.realpath(__file__))
model_name       = os.path.join(cwd,'model','Yolov4_epoch300.pth')
model_config     = os.path.join(cwd,'model','yolov4-obj.cfg')
classes_file_name= os.path.join(cwd,'model','obj.names')

detect_thresh = 0.3 #Yolo detection
# distance to the nearst match between detection and tracking
# output in pixels
dist_thresh = 25
size_thresh = 25
detect_scale = 4.0
  • The draw flag determines whether to draw the results or not.

  • The detect_every_N variable determines the detection frequency. If high accuracy is needed, a value of 1 is optimal; bigger values are less accurate but faster to run.

  • The missing_thresh parameter determines when to delete an object that keeps failing tracking and detection. It should be between 0 and 1, where 0 means never delete and 1 means delete on the first failure. A value of 0.9 means delete the object if 10% of its results have failed, and keep it otherwise.

  • The use_cuda flag selects whether detection runs on the GPU or the CPU.

  • The resize_scale is only for display (if the draw flag is True), determining how much the image should be rescaled.

  • The colors_map list holds the color codes for the different detection outputs. The first element corresponds to the class with id=1 from the YOLO network.

  • The cwd is the current directory of the config file.

  • The model_name is the path of the trained model file.

  • The model_config sets the parameters of the YOLO network; if a different structure is trained, a different file should be given here.

  • The classes_file_name is a text file containing the names of the classes predicted by the detection network.

  • The detect_thresh puts a threshold on the YOLO probability output for each detected class. The lower this value, the more detections it will give, but with less accuracy.

  • The dist_thresh is for matching each detection with an already detected object. It shouldn't be too big, because it represents the minimum distance to match; otherwise, false matches will occur.

  • The size_thresh is another threshold, but for the change in width and height between a detection and a nearby object. It is used because the same object, seen from a bird's-eye view, should keep the same dimensions.

  • The detect_scale is for detecting smaller objects. Sometimes the drone is too high and the objects are small, so zooming in is needed to detect them properly. One solution is to detect at two levels: the full-scale image in the ordinary case, and smaller proposed cropped areas. This parameter controls how small these areas are; higher values make the areas bigger.
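
How dist_thresh and size_thresh could gate a match between a detection and a tracked object can be sketched with a toy check. This is an illustration of the idea only, not the library's actual matching code; the box format and the centre-distance metric are assumptions.

```python
import math

def matches(track_box, det_box, dist_thresh=25, size_thresh=25):
    """Boxes are (x, y, w, h). A detection matches a tracked object
    when its centre is close enough (dist_thresh, in pixels) and its
    width/height change is small enough (size_thresh)."""
    tx, ty, tw, th = track_box
    dx, dy, dw, dh = det_box
    centre_dist = math.hypot((tx + tw / 2) - (dx + dw / 2),
                             (ty + th / 2) - (dy + dh / 2))
    size_diff = abs(tw - dw) + abs(th - dh)
    return centre_dist <= dist_thresh and size_diff <= size_thresh

print(matches((100, 100, 40, 80), (110, 105, 42, 78)))  # → True  (nearby, similar size)
print(matches((100, 100, 40, 80), (300, 300, 40, 80)))  # → False (too far away)
```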

Fixing and smoothing

In the configs class, the following are the parameters for the view-fixing part. The first boolean determines whether to do the fixing or not. It slows down the processing, so if the video is stationary and noise-free, there is no need for it.

### fix view parameters
do_fix = False
fixing_dilation = 13
min_matches     = 15

The smoothing part is also included in the same file, as follows. The first boolean is whether to do the smoothing or not; the other two are parameters of the Savitzky-Golay filter from the scipy library.

The last boolean flag, save_out_video, will save a new mp4 file in the same folder as the video, with the same name prefixed with output_, if set to True. This runs only with offlinemot.show_results.show_result().

### Smoothing for post processing
do_smooth   = True
window_size = 7
polydegree  = 3
save_out_video = False
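
The two numeric parameters presumably map onto the arguments of scipy's savgol_filter (window_size to window_length, polydegree to polyorder); this mapping is an assumption. A minimal sketch of smoothing a noisy 1-D coordinate track under that assumption:

```python
import numpy as np
from scipy.signal import savgol_filter

# A noisy synthetic 1-D track: a sine curve plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 100)
noisy = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# Savitzky-Golay smoothing with the config values above.
smooth = savgol_filter(noisy, window_length=7, polyorder=3)
print(smooth.shape == noisy.shape)  # → True
```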

Other steps (along with smoothing) are included in the postprocess.py file, namely the orientation calculation and the interpolation of missing positions in the tracks. The orientation is calculated from the trajectory itself: the next point in the trajectory determines the current point's heading.
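
The heading computation described above can be sketched as follows. This is an illustrative reimplementation of the idea, not the library's exact code; handling of the last point (reusing the previous heading) is an assumption.

```python
import math

def headings(points):
    """Heading (in degrees) at each trajectory point, taken from the
    direction towards the next point; the last point reuses the
    previous heading."""
    angles = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angles.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    if angles:
        angles.append(angles[-1])
    return angles

track = [(0, 0), (1, 0), (2, 1), (2, 2)]
print(headings(track))  # approximately [0.0, 45.0, 90.0, 90.0]
```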

Background subtraction parameters

Many parameters are available for background subtraction, as follows:

### background subtractor parameters
bgs_history = 5
bgs_threshold = 50
bgs_shadows = True
bgs_learning = 0.5
bgs_erosion_size = 3
bgs_min_area = 300
bgs_broder_margin =  0.45    # bigger would give boxes near the detected boxes with yolo

The most important of them are:

  • The bgs_history determines how many frames are used to calculate the background.

  • The bgs_threshold is related to the sensitivity of the subtraction; lower values give more sensitivity to movements.

  • The bgs_erosion_size is the size of the mask used to perform erosion on the foreground; higher values give thinner objects.

  • The bgs_min_area determines the minimum area in pixels that will be considered an object; anything smaller is deleted.

  • The bgs_broder_margin determines how much overlap is allowed between already detected objects and newly found foreground. The number is a percentage of the detected object's dimensions within which new objects are allowed. For example, a value of 0.5 means everywhere is allowed, because half of the dimensions on all sides of the objects covers the whole objects' area.

Filtering parameters

Additionally, the configs class has parameters that determine when to delete the detected objects.

### Filtering Objects:
min_history = 100
overlap_thresh = 0.7
  • The min_history is the minimum number of frames after which objects start to be tested for the percentage of correct tracking (with missing_thresh).

  • The overlap_thresh is the minimum overlapping-area percentage between confirmed objects at which one of them is deleted.
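
The overlap test can be sketched with a small helper. This is an illustration only: the box format and the choice of normalizing by the smaller box's area are assumptions, and the library may define the percentage differently.

```python
def overlap_ratio(a, b):
    """Fraction of the smaller box's area covered by the intersection.
    Boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    smaller = min(aw * ah, bw * bh)
    return inter / smaller if smaller else 0.0

a = (0, 0, 100, 100)
b = (10, 10, 80, 80)        # fully inside a
print(overlap_ratio(a, b))  # → 1.0, above overlap_thresh=0.7, so one box would go
```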

How to run

To run for a new video, after setting the parameters, the command will be:

[6]:
offlinemot.core.extract_paths() # input your video path here (empty means the example video)
Used Parameters
====================
Groupus:
General parameters
Background subtractor
Fix view
Detection
Filtering
Smoothing
====================
General Parameters:
====================
draw:True
detect_every_n:1
missing_thresh:0.8
use_cuda:False
resize_scale:0.4
colors_map:[(0,255,0), (255,0,0), (0, 0, 0)]
WARNING:root:Some objects are detected but not moving or seen before
WARNING:root:Some objects are detected but not moving or seen before
WARNING:root:Some objects are detected but not moving or seen before

and so on..

To stop the program at any point, simply press ESC.

How to show the result

After running the program on your video, a text file will be saved in the same directory as the input video, with the same name as well.

Warning

If there is any text file with the same name as the video, it will be overwritten without warning.

If you want to show the result with angles and smoothing for the sample video above, you can run the command:

[7]:
import offlinemot
offlinemot.show_results.show_result(None,config=cfg) # put the path of the video here (including extension, e.g. mp4, avi ..)
# None for the sample video

After running this command with your video path and extension (empty for the example), a new video annotated with the tracked objects will be written, if that is set in the cfg class.

Output structure

In the output text file, each line is structured as follows:

Frame_id   [top_left_x   top_left_y   width   height]   class_id   track_id   angle

An example of one line:

39   [3748,   964,   169,   73]   2   5   138
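
One possible way to parse such a line back into numbers is shown below; the exact file format is assumed from the example above.

```python
import re

# A single output line: frame id, bounding box, class id, track id, angle.
line = "39   [3748,   964,   169,   73]   2   5   138"

# Pull out all integers in order; the layout fixes their meaning.
nums = [int(n) for n in re.findall(r"-?\d+", line)]
frame_id, box = nums[0], nums[1:5]
class_id, track_id, angle = nums[5], nums[6], nums[7]
print(frame_id, box, class_id, track_id, angle)
# → 39 [3748, 964, 169, 73] 2 5 138
```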