PyCharm reformat (#4209)

* PyCharm reformat

* YAML reformat

* Markdown reformat
Glenn Jocher 2021-07-28 23:35:14 +02:00 committed by GitHub
parent 750465edae
commit b60b62e874
38 changed files with 647 additions and 604 deletions


@@ -7,21 +7,24 @@ assignees: ''

---

Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following,
otherwise it is non-actionable, and we cannot help you:

- **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo
- **Common dataset**: coco.yaml or coco128.yaml
- **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments

If this is a custom dataset/training question you **must include** your `train*.jpg`, `val*.jpg` and `results.png`
figures, or we cannot help you. You can generate these with `utils.plot_results()`.
## 🐛 Bug

A clear and concise description of what the bug is.
## To Reproduce (REQUIRED)

Input:

```
import torch
@@ -30,6 +33,7 @@ c = a / 0
```

Output:

```
Traceback (most recent call last):
  File "/Users/glennjocher/opt/anaconda3/envs/env1/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
@@ -39,17 +43,17 @@ Traceback (most recent call last):
RuntimeError: ZeroDivisionError
```
## Expected behavior

A clear and concise description of what you expected to happen.

## Environment

If applicable, add screenshots to help explain your problem.

- OS: [e.g. Ubuntu]
- GPU [e.g. 2080 Ti]

## Additional context

Add any other context about the problem here.


@@ -13,7 +13,8 @@ assignees: ''

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem?
e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->

## Pitch


@@ -9,5 +9,4 @@ assignees: ''

## ❔Question

## Additional context


@@ -8,32 +8,44 @@ We love your input! We want to make contributing to YOLOv5 as easy and transpare

- Proposing a new feature
- Becoming a maintainer

YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be
helping push the frontiers of what's possible in AI 😃!
## Submitting a Pull Request (PR) 🛠️

Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:

### 1. Select File to Update

Select `requirements.txt` to update by clicking on it in GitHub.

<p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
### 2. Click 'Edit this file'

The button is in the top-right corner.

<p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
### 3. Make Changes

Change the `matplotlib` version from `3.2.2` to `3.3`.

<p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
### 4. Preview Changes and Submit PR

Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!

<p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
### PR recommendations

To allow your work to be integrated as seamlessly as possible, we advise you to:

- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an
  automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may
  be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature'
  with the name of your local branch:
```bash
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
@@ -41,30 +53,42 @@ git checkout feature  # <----- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f
```
- ✅ Verify all Continuous Integration (CI) **checks are passing**.
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
  but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ -Bruce Lee
## Submitting a Bug Report 🐛

If you spot a problem with YOLOv5 please submit a Bug Report!

For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few
short guidelines below to help users provide what we need in order to get started.

When asking a question, people will be better able to provide help if you provide **code** that they can easily
understand and use to **reproduce** the problem. This is referred to by community members as creating
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
the problem should be:
* ✅ **Minimal** Use as little code as possible that still produces the same problem
* ✅ **Complete** Provide **all** parts someone else needs to reproduce your problem in the question itself
* ✅ **Reproducible** Test the code you're about to provide to make sure it reproduces the problem
In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
should be:

* ✅ **Current** Verify that your code is up-to-date with current
  GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new
  copy to ensure your problem has not already been resolved by previous commits.
* ✅ **Unmodified** Your problem must be reproducible without any modifications to the codebase in this
  repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the
🐛 **Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
understand and diagnose your problem.
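As a toy illustration of the three properties above (a hypothetical snippet, not code from the YOLOv5 repository), a division-by-zero bug could be reported like this:

```python
# Hypothetical minimal reproducible example for a division-by-zero bug.
# Minimal: only the lines needed to trigger the failure.
# Complete: every input is defined inside the snippet itself.
# Reproducible: running it always raises the same exception.

def normalize(values, total):
    return [v / total for v in values]

try:
    normalize([1, 2, 3], total=0)  # an empty dataset would make total 0
    print("no error")
except ZeroDivisionError as e:
    print(f"Reproduced: {type(e).__name__}")  # → Reproduced: ZeroDivisionError
```

Anyone can paste this into a fresh interpreter and see the same traceback, which is exactly what makes a report actionable.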
## License

By contributing, you agree that your contributions will be licensed under
the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/)


@@ -52,31 +52,33 @@ YOLOv5 🚀 is a family of object detection architectures and models pretrained

</div>

## <div align="center">Documentation</div>

See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.

## <div align="center">Quick Start Examples</div>
<details open>
<summary>Install</summary>

[**Python>=3.6.0**](https://www.python.org/) is required with all
[requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
<!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->

```bash
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
```

</details>
<details open>
<summary>Inference</summary>

Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
@@ -85,7 +87,7 @@ import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # or yolov5m, yolov5l, yolov5x, custom

# Images
img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

# Inference
results = model(img)
@@ -101,7 +103,9 @@ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
```

</details>
<details>
<summary>Inference with detect.py</summary>

`detect.py` runs inference on a variety of sources, downloading models automatically from
the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
$ python detect.py --source 0  # webcam
                     file.jpg  # image
@@ -117,13 +121,18 @@ $ python detect.py --source 0  # webcam
```

</details>
<details>
<summary>Training</summary>

Run commands below to reproduce results
on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on
first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the
largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).

```bash
$ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
                                         yolov5m                                40
                                         yolov5l                                24
                                         yolov5x                                16
```

<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">

</details>
@@ -132,7 +141,8 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size

<summary>Tutorials</summary>

* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)&nbsp; 🚀 RECOMMENDED
* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)&nbsp; ☘️
  RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)&nbsp; 🌟 NEW
* [Supervisely Ecosystem](https://github.com/ultralytics/yolov5/issues/2518)&nbsp; 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
@@ -147,10 +157,11 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size

</details>

## <div align="center">Environments and Integrations</div>

Get started in seconds with our verified environments and integrations,
including [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) for automatic YOLOv5 experiment
logging. Click each icon below for details.

<div align="center">
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">
@@ -173,7 +184,6 @@ Get started in seconds with our verified environments and integrations, includin
</a>
</div>

## <div align="center">Compete and Win</div>

We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!

@@ -183,7 +193,6 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi
<img width="850" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-export-competition.png"></a>
</p>

## <div align="center">Why YOLOv5</div>

<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313216-f0a5e100-9af5-11eb-8445-c682b60da2e3.png"></p>
@@ -195,11 +204,13 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi

<details>
<summary>Figure Notes (click to expand)</summary>

* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size
  32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by
  `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`

</details>

### Pretrained Checkpoints
@@ -222,23 +233,29 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi

<details>
<summary>Table Notes (click to expand)</summary>

* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results
  denote val2017 accuracy.
* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP**
  by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a
  GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and
  includes FP16 inference, postprocessing and NMS. **Reproduce speed**
  by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45 --half`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale
  augmentation. **Reproduce TTA** by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`

</details>
## <div align="center">Contribute</div>

We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see
our [Contributing Guide](CONTRIBUTING.md) to get started.

## <div align="center">Contact</div>

For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business or
professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).

<br>


@@ -15,7 +15,7 @@ test: Argoverse-1.1/images/test/  # test images (optional) https://eval.ai/web/c

# Classes
nc: 8  # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign']  # class names

# Download script/URL (optional) ---------------------------------------------------------------------------------------
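The YAML changes in this commit only tighten the bracket padding of flow-style lists; both spellings parse to the identical Python list. A quick sketch verifying this with the Argoverse classes (assuming PyYAML, which YOLOv5's requirements.txt already includes):

```python
import yaml  # PyYAML, already a YOLOv5 dependency

old = "names: [ 'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign' ]"
new = "names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign']"

# The reformat changed only whitespace inside the brackets; parsing is unchanged.
assert yaml.safe_load(old) == yaml.safe_load(new)

# Sanity check mirroring these dataset files: nc must equal len(names).
names = yaml.safe_load(new)["names"]
assert len(names) == 8  # matches 'nc: 8' above
```

The same check applies to every dataset YAML touched below, so the reformat is purely cosmetic.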


@@ -27,7 +27,7 @@ test:  # test images (optional) 1276 images

# Classes
nc: 1  # number of classes
names: ['wheat_head']  # class names

# Download script/URL (optional) ---------------------------------------------------------------------------------------


@@ -15,7 +15,7 @@ test:  # test images (optional)

# Classes
nc: 365  # number of classes
names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
        'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
        'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
        'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
@@ -55,7 +55,7 @@ names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Gl
        'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
        'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
        'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
        'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']

# Download script/URL (optional) ---------------------------------------------------------------------------------------


@@ -15,7 +15,7 @@ test: test.txt  # test images (optional) 2936 images

# Classes
nc: 1  # number of classes
names: ['object']  # class names

# Download script/URL (optional) ---------------------------------------------------------------------------------------


@@ -21,8 +21,8 @@ test:  # test images (optional)

# Classes
nc: 20  # number of classes
names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
        'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']  # class names

# Download script/URL (optional) ---------------------------------------------------------------------------------------


@@ -15,7 +15,7 @@ test: VisDrone2019-DET-test-dev/images  # test images (optional) 1610 images

# Classes
nc: 10  # number of classes
names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']

# Download script/URL (optional) ---------------------------------------------------------------------------------------


@@ -15,7 +15,7 @@ test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.
# Classes
nc: 80 # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
@@ -23,7 +23,7 @@ names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', '
        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
        'hair drier', 'toothbrush']  # class names
# Download script/URL (optional)
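The dataset YAMLs above pair an `nc` count with a `names` list, and the two must agree. A minimal sketch of the consistency check worth running on a parsed file; plain dicts stand in here for what `yaml.safe_load` would return, and the 3-class toy dataset is illustrative only:

```python
def check_dataset(data: dict) -> None:
    """Validate the nc/names pairing used by the dataset YAMLs above."""
    assert isinstance(data['nc'], int), 'nc must be an integer'
    assert len(data['names']) == data['nc'], (
        f"nc={data['nc']} but {len(data['names'])} class names given")

# What yaml.safe_load would return for a hypothetical 3-class dataset file:
check_dataset({'nc': 3, 'names': ['person', 'bicycle', 'car']})
```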


@@ -15,7 +15,7 @@ test: # test images (optional)
# Classes
nc: 80 # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
@@ -23,7 +23,7 @@ names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', '
        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
        'hair drier', 'toothbrush']  # class names
# Download script/URL (optional)


@@ -12,7 +12,7 @@ d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
# Download/unzip images
d='../datasets/coco/images' # unzip directory
@@ -22,6 +22,6 @@ f2='val2017.zip' # 1G, 5k images
f3='test2017.zip' # 7G, 41k images (optional)
for f in $f1 $f2; do
  echo 'Downloading' $url$f '...'
  curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
done
wait # finish background tasks
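The shell scripts above follow one pattern per archive: download, unzip into the datasets directory, delete the archive, all backgrounded with `&` and joined with `wait`. A standard-library Python sketch of the same download-unzip-remove step; to stay offline and runnable, the demo "downloads" a locally created zip through a `file://` URI, and all paths and label contents are made up for illustration:

```python
import tempfile
import urllib.request
import zipfile
from pathlib import Path

def fetch_and_unzip(url: str, dest: Path) -> None:
    f = dest / Path(url).name
    urllib.request.urlretrieve(url, f)   # curl -L $url$f -o $f
    zipfile.ZipFile(f).extractall(dest)  # unzip -q $f -d $d
    f.unlink()                           # rm $f

# Demo: build a tiny zip locally, then run it through the same pipeline.
tmp = Path(tempfile.mkdtemp())
src = tmp / 'labels.zip'
with zipfile.ZipFile(src, 'w') as z:
    z.writestr('coco/labels/train2017/000000000009.txt', '45 0.48 0.63 0.69 0.71\n')

dest = tmp / 'datasets'
dest.mkdir()
fetch_and_unzip(src.as_uri(), dest)
print((dest / 'coco/labels/train2017/000000000009.txt').read_text())
```

The shell version runs several of these concurrently; in Python a `concurrent.futures.ThreadPoolExecutor` mapping `fetch_and_unzip` over the URLs would play the role of the trailing `&` plus `wait`.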


@@ -12,6 +12,6 @@ d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco128.zip' # or 'coco2017labels-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
wait # finish background tasks


@@ -15,7 +15,7 @@ val: images/autosplit_val.txt # train images (relative to 'path') 10% of 847 tr
# Classes
nc: 60 # number of classes
names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
        'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
        'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
        'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
@@ -23,7 +23,7 @@ names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', '
        'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
        'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
        'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
        'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower']  # class names
# Download script/URL (optional) ---------------------------------------------------------------------------------------


@@ -4,55 +4,55 @@

# P5 -------------------------------------------------------------------------------------------------------------------

# P5-640:
anchors_p5_640:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# P6 -------------------------------------------------------------------------------------------------------------------

# P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
anchors_p6_640:
  - [9,11, 21,19, 17,41]  # P3/8
  - [43,32, 39,70, 86,64]  # P4/16
  - [65,131, 134,130, 120,265]  # P5/32
  - [282,180, 247,354, 512,387]  # P6/64

# P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
anchors_p6_1280:
  - [19,27, 44,40, 38,94]  # P3/8
  - [96,68, 86,152, 180,137]  # P4/16
  - [140,301, 303,264, 238,542]  # P5/32
  - [436,615, 739,380, 925,792]  # P6/64

# P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
anchors_p6_1920:
  - [28,41, 67,59, 57,141]  # P3/8
  - [144,103, 129,227, 270,205]  # P4/16
  - [209,452, 455,396, 358,812]  # P5/32
  - [653,922, 1109,570, 1387,1187]  # P6/64

# P7 -------------------------------------------------------------------------------------------------------------------

# P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
anchors_p7_640:
  - [11,11, 13,30, 29,20]  # P3/8
  - [30,46, 61,38, 39,92]  # P4/16
  - [78,80, 146,66, 79,163]  # P5/32
  - [149,150, 321,143, 157,303]  # P6/64
  - [257,402, 359,290, 524,372]  # P7/128

# P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
anchors_p7_1280:
  - [19,22, 54,36, 32,77]  # P3/8
  - [70,83, 138,71, 75,173]  # P4/16
  - [165,159, 148,334, 375,151]  # P5/32
  - [334,317, 251,626, 499,474]  # P6/64
  - [750,326, 534,814, 1079,818]  # P7/128

# P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
anchors_p7_1920:
  - [29,34, 81,55, 47,115]  # P3/8
  - [105,124, 207,107, 113,259]  # P4/16
  - [247,238, 222,500, 563,227]  # P5/32
  - [501,476, 376,939, 749,711]  # P6/64
  - [1126,489, 801,1222, 1618,1227]  # P7/128
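Each anchor list above is a flat row of (width, height) pixel pairs, one row per detection layer, with coarser strides getting larger anchors. A short sketch of reshaping a row into pairs and sanity-checking that ordering (using the P6-640 values from above):

```python
# P6-640 anchors copied from the YAML above: one row per detection layer.
anchors_p6_640 = [
    [9, 11, 21, 19, 17, 41],         # P3/8
    [43, 32, 39, 70, 86, 64],        # P4/16
    [65, 131, 134, 130, 120, 265],   # P5/32
    [282, 180, 247, 354, 512, 387],  # P6/64
]

# Split each flat row into (width, height) pairs.
pairs = [list(zip(row[::2], row[1::2])) for row in anchors_p6_640]
assert all(len(p) == 3 for p in pairs)  # 3 anchors per detection layer

# Deeper layers (larger strides) should carry larger anchors on average.
areas = [sum(w * h for w, h in p) / len(p) for p in pairs]
assert areas == sorted(areas)
```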


@@ -3,47 +3,47 @@ nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# darknet53 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [32, 3, 1]],  # 0
   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2
   [-1, 1, Bottleneck, [64]],
   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4
   [-1, 2, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 5-P3/8
   [-1, 8, Bottleneck, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 7-P4/16
   [-1, 8, Bottleneck, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 9-P5/32
   [-1, 4, Bottleneck, [1024]],  # 10
  ]

# YOLOv3-SPP head
head:
  [[-1, 1, Bottleneck, [1024, False]],
   [-1, 1, SPP, [512, [5, 9, 13]]],
   [-1, 1, Conv, [1024, 3, 1]],
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, Conv, [1024, 3, 1]],  # 15 (P5/32-large)
   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Bottleneck, [512, False]],
   [-1, 1, Bottleneck, [512, False]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [512, 3, 1]],  # 22 (P4/16-medium)
   [-2, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Bottleneck, [256, False]],
   [-1, 2, Bottleneck, [256, False]],  # 27 (P3/8-small)
   [[27, 22, 15], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
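Every row in the backbone/head lists above is `[from, number, module, args]`: the input layer index (`-1` means the previous layer), a repeat count, a module name, and its constructor arguments. A sketch of how such rows can be walked, tracking the cumulative stride that the `# 1-P1/2`-style comments record; module names are plain strings here, since in the repo `parse_model()` resolves them to real `nn.Module` classes:

```python
# First few backbone rows from the spec above, with module names as strings.
backbone = [
    [-1, 1, 'Conv', [32, 3, 1]],   # 0
    [-1, 1, 'Conv', [64, 3, 2]],   # 1-P1/2
    [-1, 1, 'Bottleneck', [64]],   # 2
    [-1, 1, 'Conv', [128, 3, 2]],  # 3-P2/4
]

stride = 1
for i, (f, n, m, args) in enumerate(backbone):
    if m == 'Conv':
        stride *= args[2]  # third Conv arg is the stride
    print(f'layer {i}: {n} x {m}{args}, cumulative stride {stride}')
```

After the loop the cumulative stride is 4, matching the `P2/4` annotation on the last row.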


@@ -3,37 +3,37 @@ nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
  - [10,14, 23,27, 37,58]  # P4/16
  - [81,82, 135,169, 344,319]  # P5/32

# YOLOv3-tiny backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [16, 3, 1]],  # 0
   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 1-P1/2
   [-1, 1, Conv, [32, 3, 1]],
   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 3-P2/4
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 5-P3/8
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 7-P4/16
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 9-P5/32
   [-1, 1, Conv, [512, 3, 1]],
   [-1, 1, nn.ZeroPad2d, [[0, 1, 0, 1]]],  # 11
   [-1, 1, nn.MaxPool2d, [2, 1, 0]],  # 12
  ]

# YOLOv3-tiny head
head:
  [[-1, 1, Conv, [1024, 3, 1]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [512, 3, 1]],  # 15 (P5/32-large)
   [-2, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Conv, [256, 3, 1]],  # 19 (P4/16-medium)
   [[19, 15], 1, Detect, [nc, anchors]],  # Detect(P4, P5)
  ]


@@ -3,47 +3,47 @@ nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# darknet53 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [32, 3, 1]],  # 0
   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2
   [-1, 1, Bottleneck, [64]],
   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4
   [-1, 2, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 5-P3/8
   [-1, 8, Bottleneck, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 7-P4/16
   [-1, 8, Bottleneck, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 9-P5/32
   [-1, 4, Bottleneck, [1024]],  # 10
  ]

# YOLOv3 head
head:
  [[-1, 1, Bottleneck, [1024, False]],
   [-1, 1, Conv, [512, [1, 1]]],
   [-1, 1, Conv, [1024, 3, 1]],
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, Conv, [1024, 3, 1]],  # 15 (P5/32-large)
   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Bottleneck, [512, False]],
   [-1, 1, Bottleneck, [512, False]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [512, 3, 1]],  # 22 (P4/16-medium)
   [-2, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Bottleneck, [256, False]],
   [-1, 2, Bottleneck, [256, False]],  # 27 (P3/8-small)
   [[27, 22, 15], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


@@ -3,38 +3,38 @@ nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 6, BottleneckCSP, [1024]],  # 9
  ]

# YOLOv5 FPN head
head:
  [[-1, 3, BottleneckCSP, [1024, False]],  # 10 (P5/32-large)
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, BottleneckCSP, [512, False]],  # 14 (P4/16-medium)
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, BottleneckCSP, [256, False]],  # 18 (P3/8-small)
   [[18, 14, 10], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


@@ -7,46 +7,46 @@ anchors: 3

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, C3, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)
   [-1, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 2], 1, Concat, [1]],  # cat backbone P2
   [-1, 1, C3, [128, False]],  # 21 (P2/4-xsmall)
   [-1, 1, Conv, [128, 3, 2]],
   [[-1, 18], 1, Concat, [1]],  # cat head P3
   [-1, 3, C3, [256, False]],  # 24 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 27 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 30 (P5/32-large)
   [[24, 27, 30], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


@@ -7,48 +7,48 @@ anchors: 3

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [768]],
   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64
   [-1, 1, SPP, [1024, [3, 5, 7]]],
   [-1, 3, C3, [1024, False]],  # 11
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [768, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P5
   [-1, 3, C3, [768, False]],  # 15
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 19
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 20], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 16], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)
   [-1, 1, Conv, [768, 3, 2]],
   [[-1, 12], 1, Concat, [1]],  # cat head P6
   [-1, 3, C3, [1024, False]],  # 32 (P5/64-xlarge)
   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)
  ]


@@ -7,59 +7,59 @@ anchors: 3

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [768]],
   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64
   [-1, 3, C3, [1024]],
   [-1, 1, Conv, [1280, 3, 2]],  # 11-P7/128
   [-1, 1, SPP, [1280, [3, 5]]],
   [-1, 3, C3, [1280, False]],  # 13
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [1024, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 10], 1, Concat, [1]],  # cat backbone P6
   [-1, 3, C3, [1024, False]],  # 17
   [-1, 1, Conv, [768, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P5
   [-1, 3, C3, [768, False]],  # 21
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 25
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 29 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 26], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 32 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 22], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [768, False]],  # 35 (P5/32-large)
   [-1, 1, Conv, [768, 3, 2]],
   [[-1, 18], 1, Concat, [1]],  # cat head P6
   [-1, 3, C3, [1024, False]],  # 38 (P6/64-xlarge)
   [-1, 1, Conv, [1024, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P7
   [-1, 3, C3, [1280, False]],  # 41 (P7/128-xxlarge)
   [[29, 32, 35, 38, 41], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6, P7)
  ]
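The head rows above use list-valued `from` fields: `Concat` pulls the previous layer plus a saved earlier output, and the final `Detect` row gathers the P3..P7 feature maps by index. A sketch of scanning such rows to collect which layer outputs must therefore be kept around during the forward pass (a few representative rows, with module names as plain strings):

```python
# Representative list-valued 'from' rows from the P7 head above.
rows = [
    [[-1, 10], 1, 'Concat', [1]],             # cat backbone P6
    [[-1, 14], 1, 'Concat', [1]],             # cat head P7
    [[29, 32, 35, 38, 41], 1, 'Detect', []],  # Detect(P3, P4, P5, P6, P7)
]

# -1 is the running previous-layer output; every other index must be saved.
save = sorted({i for f, n, m, args in rows for i in f if i != -1})
print(save)  # -> [10, 14, 29, 32, 35, 38, 41]
```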


@@ -3,44 +3,44 @@ nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23] # P3/8
  - [30,61, 62,45, 59,119] # P4/16
  - [116,90, 156,198, 373,326] # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]], # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]], # 9
  ]

# YOLOv5 PANet head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]], # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]], # 13
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]], # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]], # cat head P4
   [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]], # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
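Each row in the backbone and head lists above is [from, number, module, args]: `from` indexes the input layer (-1 = previous layer), `number` is the repeat count scaled by `depth_multiple`, and the first arg is the output channel count scaled by `width_multiple`. A simplified sketch of that scaling (illustrative only, not YOLOv5's actual `parse_model`; the round-up-to-a-multiple-of-8 channel rule is an assumption based on common practice):

```python
import math

# Hypothetical multiples for a "small" variant; this particular file uses 1.0/1.0.
depth_multiple, width_multiple = 0.33, 0.50

def scale_row(number, out_channels):
    # Repeat count scales with depth_multiple (minimum 1 when number > 1);
    # channels scale with width_multiple, rounded up to a multiple of 8.
    n = max(round(number * depth_multiple), 1) if number > 1 else number
    c = int(math.ceil(out_channels * width_multiple / 8) * 8)
    return n, c

print(scale_row(9, 512))  # a [-1, 9, BottleneckCSP, [512]] row -> (3, 256)
```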


@@ -3,56 +3,56 @@ nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
  - [19,27, 44,40, 38,94] # P3/8
  - [96,68, 86,152, 180,137] # P4/16
  - [140,301, 303,264, 238,542] # P5/32
  - [436,615, 739,380, 925,792] # P6/64

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]], # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
   [-1, 3, C3, [768]],
   [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
   [-1, 1, SPP, [1024, [3, 5, 7]]],
   [-1, 3, C3, [1024, False]], # 11
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [768, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]], # cat backbone P5
   [-1, 3, C3, [768, False]], # 15
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]], # cat backbone P4
   [-1, 3, C3, [512, False]], # 19
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]], # cat backbone P3
   [-1, 3, C3, [256, False]], # 23 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 20], 1, Concat, [1]], # cat head P4
   [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 16], 1, Concat, [1]], # cat head P5
   [-1, 3, C3, [768, False]], # 29 (P5/32-large)
   [-1, 1, Conv, [768, 3, 2]],
   [[-1, 12], 1, Concat, [1]], # cat head P6
   [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
   [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]


@@ -3,56 +3,56 @@ nc: 80 # number of classes
depth_multiple: 0.67 # model depth multiple
width_multiple: 0.75 # layer channel multiple
anchors:
  - [19,27, 44,40, 38,94] # P3/8
  - [96,68, 86,152, 180,137] # P4/16
  - [140,301, 303,264, 238,542] # P5/32
  - [436,615, 739,380, 925,792] # P6/64

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]], # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
   [-1, 3, C3, [768]],
   [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
   [-1, 1, SPP, [1024, [3, 5, 7]]],
   [-1, 3, C3, [1024, False]], # 11
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [768, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]], # cat backbone P5
   [-1, 3, C3, [768, False]], # 15
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]], # cat backbone P4
   [-1, 3, C3, [512, False]], # 19
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]], # cat backbone P3
   [-1, 3, C3, [256, False]], # 23 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 20], 1, Concat, [1]], # cat head P4
   [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 16], 1, Concat, [1]], # cat head P5
   [-1, 3, C3, [768, False]], # 29 (P5/32-large)
   [-1, 1, Conv, [768, 3, 2]],
   [[-1, 12], 1, Concat, [1]], # cat head P6
   [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
   [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]


@@ -3,44 +3,44 @@ nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23] # P3/8
  - [30,61, 62,45, 59,119] # P4/16
  - [116,90, 156,198, 373,326] # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]], # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, C3TR, [1024, False]], # 9 <-------- C3TR() Transformer module
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]], # cat backbone P4
   [-1, 3, C3, [512, False]], # 13
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]], # cat backbone P3
   [-1, 3, C3, [256, False]], # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]], # cat head P4
   [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]], # cat head P5
   [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
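The anchors: rows in the configs above flatten three (width, height) priors per detection level, in input-image pixels. A small sketch unpacking them:

```python
# Unpack flattened anchor rows into (width, height) pairs per level.
anchors = [
    [10, 13, 16, 30, 33, 23],       # P3/8
    [30, 61, 62, 45, 59, 119],      # P4/16
    [116, 90, 156, 198, 373, 326],  # P5/32
]
pairs = [list(zip(row[0::2], row[1::2])) for row in anchors]
print(pairs[0])  # [(10, 13), (16, 30), (33, 23)]
```

Larger priors pair with coarser strides, so each Detect output level specializes in one object-size band.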


@@ -3,56 +3,56 @@ nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
  - [19,27, 44,40, 38,94] # P3/8
  - [96,68, 86,152, 180,137] # P4/16
  - [140,301, 303,264, 238,542] # P5/32
  - [436,615, 739,380, 925,792] # P6/64

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]], # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
   [-1, 3, C3, [768]],
   [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
   [-1, 1, SPP, [1024, [3, 5, 7]]],
   [-1, 3, C3, [1024, False]], # 11
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [768, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]], # cat backbone P5
   [-1, 3, C3, [768, False]], # 15
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]], # cat backbone P4
   [-1, 3, C3, [512, False]], # 19
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]], # cat backbone P3
   [-1, 3, C3, [256, False]], # 23 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 20], 1, Concat, [1]], # cat head P4
   [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 16], 1, Concat, [1]], # cat head P5
   [-1, 3, C3, [768, False]], # 29 (P5/32-large)
   [-1, 1, Conv, [768, 3, 2]],
   [[-1, 12], 1, Concat, [1]], # cat head P6
   [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
   [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]


@@ -3,56 +3,56 @@ nc: 80 # number of classes
depth_multiple: 1.33 # model depth multiple
width_multiple: 1.25 # layer channel multiple
anchors:
  - [19,27, 44,40, 38,94] # P3/8
  - [96,68, 86,152, 180,137] # P4/16
  - [140,301, 303,264, 238,542] # P5/32
  - [436,615, 739,380, 925,792] # P6/64

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]], # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
   [-1, 3, C3, [768]],
   [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
   [-1, 1, SPP, [1024, [3, 5, 7]]],
   [-1, 3, C3, [1024, False]], # 11
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [768, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]], # cat backbone P5
   [-1, 3, C3, [768, False]], # 15
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]], # cat backbone P4
   [-1, 3, C3, [512, False]], # 19
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]], # cat backbone P3
   [-1, 3, C3, [256, False]], # 23 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 20], 1, Concat, [1]], # cat head P4
   [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 16], 1, Concat, [1]], # cat head P5
   [-1, 3, C3, [768, False]], # 29 (P5/32-large)
   [-1, 1, Conv, [768, 3, 2]],
   [[-1, 12], 1, Concat, [1]], # cat head P6
   [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
   [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]


@@ -83,7 +83,6 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
    if resume:
        weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp

    # Config
    plots = not evolve # create plots
    cuda = device.type != 'cpu'
@@ -96,7 +95,6 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
    assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}' # check
    is_coco = data.endswith('coco.yaml') and nc == 80 # COCO dataset

    # Model
    pretrained = weights.endswith('.pt')
    if pretrained:


@@ -115,7 +115,6 @@ def get_token(cookie="./cookie"):
        return line.split()[-1]
    return ""

# Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------
#
#


@@ -1,7 +1,8 @@
# YOLOv5 experiment logging utils
import warnings
from threading import Thread

import torch
from torch.utils.tensorboard import SummaryWriter

from utils.general import colorstr, emojis


@@ -1,5 +1,4 @@
import argparse

from wandb_utils import WandbLogger


@@ -1,7 +1,8 @@
import sys
from pathlib import Path

import wandb

FILE = Path(__file__).absolute()
sys.path.append(FILE.parents[2].as_posix()) # add utils/ to path


@@ -25,9 +25,9 @@ parameters:
  data:
    value: "data/coco128.yaml"
  batch_size:
    values: [64]
  epochs:
    values: [10]
  lr0:
    distribution: uniform


@@ -3,9 +3,10 @@
import logging
import os
import sys
from contextlib import contextmanager
from pathlib import Path

import yaml
from tqdm import tqdm

FILE = Path(__file__).absolute()

val.py

@@ -13,7 +13,6 @@ from threading import Thread
import numpy as np
import torch
from tqdm import tqdm

FILE = Path(__file__).absolute()
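The import hunks above all apply one rule: standard-library imports come first, third-party packages (torch, yaml, wandb, tqdm) form a second group after a blank line, as PEP 8 recommends. A rough sketch of that grouping (the fallback module set is an assumption for Python versions before 3.10, where `sys.stdlib_module_names` does not exist):

```python
import sys

# sys.stdlib_module_names is available on Python 3.10+; older versions fall
# back to a small hand-picked set covering the modules seen in this diff.
STDLIB = set(getattr(sys, "stdlib_module_names",
                     {"argparse", "contextlib", "logging", "os", "pathlib",
                      "sys", "threading", "warnings"}))

def group_imports(modules):
    # Split module names into stdlib and third-party groups, each sorted.
    stdlib = sorted(m for m in modules if m.split(".")[0] in STDLIB)
    third_party = sorted(m for m in modules if m.split(".")[0] not in STDLIB)
    return stdlib, third_party

# Modules from the utils/loggers/__init__.py hunk above:
print(group_imports(["torch", "warnings", "threading", "torch.utils.tensorboard"]))
# -> (['threading', 'warnings'], ['torch', 'torch.utils.tensorboard'])
```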