High-speed Jittor implementation of the code accompanying the paper "Deep Hough Transform for Semantic Line Detection" (ECCV 2020, TPAMI 2021). arXiv:2003.04676 | Online Demo | Project page | New dataset | Line Annotator

Network inference FPS and speedup ratio (without post-processing):
| GPU | Tesla V100 (16G PCI-E) | | | Tesla V100 | | | RTX TITAN | | |
|---|---|---|---|---|---|---|---|---|---|
| Batch size | bs=1 | bs=4 | bs=8 | bs=1 | bs=4 | bs=8 | bs=1 | bs=4 | bs=8 |
| Jittor | 89 | 115 | 120 | 88 | 108 | 113 | 27 | 74 | 106 |
| PyTorch | 38 | 75 | 82 | 10 | 34 | 53 | 9 | 15 | 34 |
| Speedup | 2.34 | 1.53 | 1.46 | 8.80 | 3.18 | 2.13 | 3.00 | 4.93 | 3.12 |
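These numbers time the forward pass only. A minimal sketch of how such a measurement can be taken is shown below; the repository's own benchmark.py is the reference script, and the helper name here is purely illustrative.

```python
# Minimal FPS-measurement sketch (illustrative; see benchmark.py for the
# script actually used to produce the table above).
import time
import jittor as jt

def measure_fps(model, batch_size=1, iters=100, size=400):
    jt.flags.use_cuda = 1                       # run on the GPU
    x = jt.random((batch_size, 3, size, size))  # dummy input batch
    for _ in range(10):                         # warm-up iterations
        model(x)
    jt.sync_all(True)                           # wait for queued GPU work
    start = time.time()
    for _ in range(iters):
        model(x)
    jt.sync_all(True)                           # include all kernels in the timing
    return iters * batch_size / (time.time() - start)
```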
Requirements:
- jittor
- numpy
- scipy
- opencv-python
- scikit-image
- pytorch (>= 1.0, <= 1.3)
- tqdm
- yml (PyYAML)
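The PyYAML dependency is used to read the repository's config.yml. A minimal reading sketch (no assumptions are made about the actual keys inside the file):

```python
# Minimal sketch: reading config.yml with PyYAML.
import yaml

with open('config.yml') as f:
    cfg = yaml.safe_load(f)  # parsed into nested dicts/lists
print(cfg)
```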
Pretrained models (based on ResNet50-FPN): http://data.kaizhao.net/projects/deep-hough-transform/dht_r50_fpn_sel-c9a29d40.pth (SEL dataset) and http://data.kaizhao.net/projects/deep-hough-transform/dht_r50_nkl_d97b97138.pth (NKL dataset, used in the online demo)
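A hedged sketch of loading one of these checkpoints into the Jittor model; the import path and constructor arguments below are placeholders, so check the model/ package in this repository for the actual class name and signature.

```python
# Hypothetical loading sketch; the class name, module path and constructor
# arguments are assumptions, not the repository's confirmed API.
import jittor as jt
from model.network import Net   # hypothetical import path

jt.flags.use_cuda = 1
model = Net(numAngle=100, numRho=100)    # hypothetical constructor arguments
model.load('dht_r50_nkl_d97b97138.pth')  # Jittor modules can load .pth checkpoints
model.eval()
```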
Download the original SEL dataset from here and extract it to the data/ directory. After that, the directory structure should look like:
data
├── ICCV2017_JTLEE_gtlines_all
├── ICCV2017_JTLEE_gt_pri_lines_for_test
├── ICCV2017_JTLEE_images
├── prepare_data_JTLEE.py
├── Readme.txt
├── test_idx_1716.txt
└── train_idx_1716.txt
Then run the Python script to generate the parametric-space labels.
cd deep-hough-transform
python data/prepare_data_JTLEE.py --root './data/ICCV2017_JTLEE_images/' --label './data/ICCV2017_JTLEE_gtlines_all' --save-dir './data/training/JTLEE_resize_100_100/' --list './data/training/JTLEE.lst' --prefix 'JTLEE_resize_100_100' --fixsize 400 --numangle 100 --numrho 100
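For intuition, the parametric-space label maps each annotated line to one bin of a numangle x numrho Hough grid centred on the image. The sketch below illustrates the idea only; it is not the exact quantisation used in prepare_data_JTLEE.py.

```python
# Illustrative only: map a line segment (x1, y1)-(x2, y2) in an H x W image
# to an (angle, rho) bin of a numangle x numrho parametric grid.
import numpy as np

def line_to_hough_bin(x1, y1, x2, y2, H, W, numangle=100, numrho=100):
    theta = np.arctan2(y2 - y1, x2 - x1) % np.pi  # line orientation in [0, pi)
    cx, cy = (W - 1) / 2.0, (H - 1) / 2.0         # image centre
    nx, ny = np.sin(theta), -np.cos(theta)        # unit normal of the line
    rho = (x1 - cx) * nx + (y1 - cy) * ny         # signed distance centre -> line
    itheta = np.pi / numangle                     # angle bin width
    irho = np.sqrt(H * H + W * W) / numrho        # rho bin width
    angle_idx = int(theta / itheta) % numangle
    rho_idx = int(np.clip(round(rho / irho) + numrho // 2, 0, numrho - 1))
    return angle_idx, rho_idx
```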
For the NKL dataset, download the dataset, put it under the data/ directory, and then run the Python script to generate the parametric-space labels.
cd deep-hough-transform
python data/prepare_data_NKL.py --root './data/NKL' --label './data/NKL' --save-dir './data/training/NKL_resize_100_100' --fixsize 400
Generate visualization results and save the detected line coordinates to .npy files.
CUDA_VISIBLE_DEVICES=0 python forward.py --model (your_best_model.pth) --tmp (your_result_save_dir)
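The saved coordinates can then be drawn back onto the source image, for example with OpenCV. The file names and the (x1, y1, x2, y2) row format below are assumptions for illustration; adjust them to whatever forward.py actually writes.

```python
# Illustrative visualisation of saved line coordinates; the .npy layout
# assumed here (one x1, y1, x2, y2 row per line) may differ from forward.py's output.
import cv2
import numpy as np

img = cv2.imread('example.jpg')
lines = np.load('example.npy', allow_pickle=True)
for x1, y1, x2, y2 in lines:
    cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
cv2.imwrite('example_vis.jpg', img)
```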
If our method/dataset is useful to your research, please consider citing us:
@article{hu2020jittor,
  title={Jittor: a novel deep learning framework with meta-operators and unified graph execution},
  author={Hu, Shi-Min and Liang, Dun and Yang, Guo-Ye and Yang, Guo-Wei and Zhou, Wen-Yang},
  journal={Science China Information Sciences},
  volume={63},
  number={222103},
  pages={1--21},
  year={2020}
}
@article{zhao2021deep,
  author={Kai Zhao and Qi Han and Chang-Bin Zhang and Jun Xu and Ming-Ming Cheng},
  title={Deep Hough Transform for Semantic Line Detection},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2021},
  doi={10.1109/TPAMI.2021.3077129}
}
@inproceedings{eccv2020line,
  title={Deep Hough Transform for Semantic Line Detection},
  author={Qi Han and Kai Zhao and Jun Xu and Ming-Ming Cheng},
  booktitle={ECCV},
  pages={750--766},
  year={2020}
}
This project is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC 3.0) license; only non-commercial usage is allowed. For commercial usage, please contact us.