# TPH-YOLOv5
This repo is the implementation of ["TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-Captured Scenarios"](https://openaccess.thecvf.com/content/ICCV2021W/VisDrone/html/Zhu_TPH-YOLOv5_Improved_YOLOv5_Based_on_Transformer_Prediction_Head_for_Object_ICCVW_2021_paper.html) and ["TPH-YOLOv5++: Boosting Object Detection on Drone-Captured Scenarios with Cross-Layer Asymmetric Transformer"](https://www.mdpi.com/2072-4292/15/6/1687).
On [VisDrone Challenge 2021](http://aiskyeye.com/), TPH-YOLOv5 won 4th place, achieving results comparable to the 1st-place model.
![image](result.png)
See [VisDrone-DET2021: The Vision Meets Drone Object Detection Challenge Results](https://openaccess.thecvf.com/content/ICCV2021W/VisDrone/html/Cao_VisDrone-DET2021_The_Vision_Meets_Drone_Object_Detection_Challenge_Results_ICCVW_2021_paper.html) for more information. TPH-YOLOv5++, the improved version, significantly improves inference efficiency and reduces computational cost while maintaining detection performance compared to TPH-YOLOv5.
# Install
```bash
$ git clone https://github.com/cv516Buaa/tph-yolov5
$ cd tph-yolov5
$ pip install -r requirements.txt
```
# Convert labels
VisDrone2YOLO_lable.py converts VisDrone annotations to YOLO labels.
Set the path of the VisDrone dataset in VisDrone2YOLO_lable.py first.
```bash
$ python VisDrone2YOLO_lable.py
```
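For reference, the conversion works roughly as sketched below: each VisDrone annotation row stores an absolute `left,top,width,height` pixel box plus a score and category, while YOLO labels use zero-based class ids and box centers/sizes normalized by the image dimensions. This is a minimal sketch under assumed directory layout and class filtering, not the repo's script itself.

```python
# A minimal sketch of the VisDrone -> YOLO conversion (hypothetical paths;
# VisDrone2YOLO_lable.py is the script actually shipped with this repo).
from pathlib import Path

from PIL import Image

def visdrone_to_yolo(anno_dir: str, img_dir: str, label_dir: str) -> None:
    Path(label_dir).mkdir(parents=True, exist_ok=True)
    for anno in Path(anno_dir).glob("*.txt"):
        with Image.open(Path(img_dir) / f"{anno.stem}.jpg") as img:
            w, h = img.size  # image size, needed to normalize the boxes
        lines = []
        for row in anno.read_text().strip().splitlines():
            # VisDrone row: left,top,width,height,score,category,truncation,occlusion
            left, top, bw, bh, score, cat = (int(v) for v in row.split(",")[:6])
            if score == 0 or not 1 <= cat <= 10:
                continue  # skip ignored regions (score 0) and non-standard classes
            xc, yc = (left + bw / 2) / w, (top + bh / 2) / h
            # YOLO row: class x_center y_center width height, all normalized
            lines.append(f"{cat - 1} {xc:.6f} {yc:.6f} {bw / w:.6f} {bh / h:.6f}")
        (Path(label_dir) / anno.name).write_text("\n".join(lines) + "\n")

visdrone_to_yolo("VisDrone2019-DET-train/annotations",
                 "VisDrone2019-DET-train/images",
                 "VisDrone2019-DET-train/labels")
```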
# Inference
* `Datasets`: [VisDrone](http://aiskyeye.com/download/object-detection-2/), [UAVDT](https://sites.google.com/view/grli-uavdt/%E9%A6%96%E9%A1%B5)
* `Weights` (PyTorch v1.10):
  * `yolov5l-xs-1.pt`: [Baidu Drive (pw: vibe)](https://pan.baidu.com/s/1APETgMoeCOvZi1GsBZERrg) | [Google Drive](https://drive.google.com/file/d/1nGeKl3qOa26v3haGSDmLjeA0cjDD9p61/view?usp=sharing)
  * `yolov5l-xs-2.pt`: [Baidu Drive (pw: vffz)](https://pan.baidu.com/s/19S84EevP86yJIvnv9KYXDA) | [Google Drive](https://drive.google.com/file/d/1VmORvxNtvMVMvmY7cCwvp0BoL6L3RGiq/view?usp=sharing)
val.py runs inference on VisDrone2019-DET-val using weights trained with TPH-YOLOv5.
(We provide two weights trained from two different models based on YOLOv5l.)
```bash
# use --weights ./weights/yolov5l-xs-2.pt to evaluate the second model
$ python val.py --weights ./weights/yolov5l-xs-1.pt --img 1996 --data ./data/VisDrone.yaml \
    --augment --save-txt --save-conf --task val --batch-size 8 --verbose --name v5l-xs
```
![image](./images/result_in_VisDrone.png)
Inference on UAVDT is similar; the results of TPH-YOLOv5++ on UAVDT are as follows:
![image](./images/result_in_UAVDT.png)
# Ensemble
If you run inference on a dataset with different models, you can ensemble the results by weighted boxes fusion using wbf.py.
Set the image path and txt path in wbf.py first.
```bash
$ python wbf.py
```
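For intuition, below is a minimal, illustrative sketch of the fusion step using the [Weighted-Boxes-Fusion](https://github.com/ZFTurbo/Weighted-Boxes-Fusion) package linked under References (`pip install ensemble-boxes`): unlike NMS, which keeps only the highest-scoring box in each cluster, WBF averages overlapping boxes from all models, weighted by confidence. The boxes, scores, and model weights here are made up; wbf.py reads the real predictions from the saved txt files.

```python
# Illustrative weighted-boxes-fusion call with the ensemble-boxes package;
# all numbers below are made up for demonstration.
from ensemble_boxes import weighted_boxes_fusion

# Per-model predictions for one image; boxes are [x1, y1, x2, y2]
# normalized to [0, 1].
boxes_list = [
    [[0.10, 0.10, 0.30, 0.40], [0.50, 0.55, 0.70, 0.90]],  # model 1
    [[0.11, 0.12, 0.31, 0.38]],                            # model 2
]
scores_list = [[0.90, 0.65], [0.80]]
labels_list = [[0, 1], [0]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[2, 1],     # trust the first model twice as much
    iou_thr=0.55,       # boxes overlapping above this IoU are fused
    skip_box_thr=0.01,  # discard boxes below this confidence first
)
print(boxes, scores, labels)  # fused boxes, averaged scores, labels
```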
# Train
train.py allows you to train a new model from scratch.
```bash
$ python train.py --img 1536 --adam --batch 4 --epochs 80 --data ./data/VisDrone.yaml --weights yolov5l.pt --hyp data/hyps/hyp.VisDrone.yaml --cfg models/yolov5l-xs-tph.yaml --name v5l-xs-tph
$ python train.py --img 1536 --adam --batch 4 --epochs 80 --data ./data/VisDrone.yaml --weights yolov5l.pt --hyp data/hyps/hyp.VisDrone.yaml --cfg models/yolov5l-tph-plus.yaml --name v5l-tph-plus
```
![image](train.png)
# Description of TPH-YOLOv5, TPH-YOLOv5++ and citations
- https://arxiv.org/abs/2108.11539
- https://openaccess.thecvf.com/content/ICCV2021W/VisDrone/html/Zhu_TPH-YOLOv5_Improved_YOLOv5_Based_on_Transformer_Prediction_Head_for_Object_ICCVW_2021_paper.html
- https://www.mdpi.com/2072-4292/15/6/1687
If you have any questions, please contact us by email at lyushuchang@buaa.edu.cn or liubinghao@buaa.edu.cn.
If you find this code useful, please cite:
```
@InProceedings{Zhu_2021_ICCV,
    author    = {Zhu, Xingkui and Lyu, Shuchang and Wang, Xu and Zhao, Qi},
    title     = {TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-Captured Scenarios},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {2778-2788}
}
@Article{rs15061687,
    AUTHOR = {Zhao, Qi and Liu, Binghao and Lyu, Shuchang and Wang, Chunlei and Zhang, Hong},
    TITLE = {TPH-YOLOv5++: Boosting Object Detection on Drone-Captured Scenarios with Cross-Layer Asymmetric Transformer},
    JOURNAL = {Remote Sensing},
    VOLUME = {15},
    YEAR = {2023},
    NUMBER = {6},
    ARTICLE-NUMBER = {1687},
    URL = {https://www.mdpi.com/2072-4292/15/6/1687},
    ISSN = {2072-4292},
    DOI = {10.3390/rs15061687}
}
```
# References
Thanks for their great works:
* [ultralytics/yolov5](https://github.com/ultralytics/yolov5)
* [SwinTransformer](https://github.com/microsoft/Swin-Transformer)
* [WBF](https://github.com/ZFTurbo/Weighted-Boxes-Fusion)