In addition, PaddleDetection provides the PP-Tracking real-time multi-object tracking system. PP-Tracking is the first open-source real-time multi-object tracking system based on the PaddlePaddle deep learning framework, offering rich models, broad applicability and efficient deployment.
PP-Tracking supports two paradigms: single-camera tracking (MOT) and multi-camera tracking (MTMCT). Aimed at the difficulties and pain points of real-world business, PP-Tracking provides MOT functions and applications such as pedestrian tracking, vehicle tracking, multi-class tracking, small-object tracking, traffic statistics and multi-camera tracking. Deployment is supported through both an API and a GUI, in Python and C++, on platforms including Linux and NVIDIA Jetson.
PP-Tracking also provides a public AI Studio project tutorial; please refer to this tutorial.
JDE results on the MOT-16 training set:

backbone | input shape | MOTA | IDF1 | IDS | FP | FN | FPS | download | config |
---|---|---|---|---|---|---|---|---|---|
DarkNet53 | 1088x608 | 72.0 | 66.9 | 1397 | 7274 | 22209 | - | model | config |
DarkNet53 | 864x480 | 69.1 | 64.7 | 1539 | 7544 | 25046 | - | model | config |
DarkNet53 | 576x320 | 63.7 | 64.4 | 1310 | 6782 | 31964 | - | model | config |
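For reading the tables: MOTA combines the three error counts shown here. Under the standard CLEAR MOT definition (not specific to this repo), with GT the total number of ground-truth boxes in the evaluation set,

$$\text{MOTA} = 1 - \frac{\text{FN} + \text{FP} + \text{IDS}}{\text{GT}}$$

so higher MOTA and IDF1 are better, while lower IDS, FP and FN are better.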
JDE results on the MOT-16 test set (rows marked "paper" are the results reported in the JDE paper):

backbone | input shape | MOTA | IDF1 | IDS | FP | FN | FPS | download | config |
---|---|---|---|---|---|---|---|---|---|
DarkNet53(paper) | 1088x608 | 64.4 | 55.8 | 1544 | - | - | - | - | - |
DarkNet53 | 1088x608 | 64.6 | 58.5 | 1864 | 10550 | 52088 | - | model | config |
DarkNet53(paper) | 864x480 | 62.1 | 56.9 | 1608 | - | - | - | - | - |
DarkNet53 | 864x480 | 63.2 | 57.7 | 1966 | 10070 | 55081 | - | model | config |
DarkNet53 | 576x320 | 59.1 | 56.4 | 1911 | 10923 | 61789 | - | model | config |
Train JDE on 8 GPUs with the following command:

```bash
python -m paddle.distributed.launch --log_dir=./jde_darknet53_30e_1088x608/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml
```
Evaluate the tracking performance of JDE on the val dataset on a single GPU with the following commands:

```bash
# use weights released in the PaddleDetection model zoo
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams

# use a checkpoint saved during training
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=output/jde_darknet53_30e_1088x608/model_final.pdparams
```
Notes:
The default evaluation dataset is the MOT-16 Train Set. If you want to change the evaluation dataset, please refer to the following code and modify `configs/datasets/mot.yml`:

```yaml
EvalMOTDataset:
  !MOTImageFolder
    dataset_dir: dataset/mot
    data_root: MOT17/images/train
    keep_ori_im: False # set True to save visualization images or video
```

Tracking results are saved in `{output_dir}/mot_results/`, with one txt file per sequence. Each line of a txt file is `frame,id,x1,y1,w,h,score,-1,-1,-1`. You can set `{output_dir}` with `--output_dir`.
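To illustrate the txt format above, here is a minimal sketch (not part of the repo; the file path is hypothetical) for loading one per-sequence result file into per-frame results:

```python
# Minimal sketch: group lines of `frame,id,x1,y1,w,h,score,-1,-1,-1` by frame.
# The path below is hypothetical; point it at a file under {output_dir}/mot_results/.
from collections import defaultdict

def load_mot_results(txt_path):
    per_frame = defaultdict(list)
    with open(txt_path) as f:
        for line in f:
            if not line.strip():
                continue
            frame, tid, x1, y1, w, h, score = line.split(",")[:7]
            per_frame[int(frame)].append({
                "id": int(tid),
                "box": (float(x1), float(y1), float(w), float(h)),  # x1, y1, width, height
                "score": float(score),
            })
    return per_frame

results = load_mot_results("output/mot_results/MOT16-02.txt")  # hypothetical path
print(len(results), "frames,", sum(len(v) for v in results.values()), "boxes")
```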
Run inference on a video on a single GPU with the following command:

```bash
# inference on a video, saving the visualized result video
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams --video_file={your video name}.mp4 --save_videos
```
Notes:
Please make sure ffmpeg is installed first; on Linux (Ubuntu) you can install it with:

```bash
apt-get update && apt-get install -y ffmpeg
```

Export the model for deployment:

```bash
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/jde/jde_darknet53_30e_1088x608.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams
```

Then run the exported model with the Python deployment script:

```bash
python deploy/pptracking/python/mot_jde_infer.py --model_dir=output_inference/jde_darknet53_30e_1088x608 --video_file={your video name}.mp4 --device=GPU --save_mot_txts
```
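If you need to process many videos, a small sketch (the videos/ folder layout is an assumption; the flags are the ones shown above) that loops the deployment script over a folder:

```python
# Sketch: batch-run the deployment script above over a folder of videos.
# The videos/ folder is hypothetical; the flags match the command shown above.
import pathlib
import subprocess

MODEL_DIR = "output_inference/jde_darknet53_30e_1088x608"

for video in sorted(pathlib.Path("videos").glob("*.mp4")):
    subprocess.run([
        "python", "deploy/pptracking/python/mot_jde_infer.py",
        f"--model_dir={MODEL_DIR}",
        f"--video_file={video}",
        "--device=GPU",
        "--save_mot_txts",
    ], check=True)  # stop on the first failure
```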
Notes:
Add `--save_mot_txts` to save the txt result files, or `--save_images` to save the visualization images. Each line of the txt result file is `frame,id,x1,y1,w,h,score,-1,-1,-1`.
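As one use of the saved txt files, a rough sketch (file name hypothetical) of counting how many distinct track IDs appear in a result file, e.g. for simple flow counting:

```python
# Rough sketch: count distinct track IDs (column 2) in a saved result txt.
def count_track_ids(txt_path):
    with open(txt_path) as f:
        return len({line.split(",")[1] for line in f if line.strip()})

print(count_track_ids("output/mot_results/your_video.txt"))  # hypothetical path
```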
Citation:

```
@article{wang2019towards,
  title={Towards Real-Time Multi-Object Tracking},
  author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
  journal={arXiv preprint arXiv:1909.12605},
  year={2019}
}
```