@@ -661,7 +661,7 @@
   "id": "eyTZYGgRjnMc"
  },
  "source": [
-  "### 2.1 COCO val2017\n",
+  "## COCO val2017\n",
   "Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy."
  ]
 },
@@ -786,7 +786,7 @@
   "id": "rc_KbFk0juX2"
  },
  "source": [
-  "### 2.2 COCO test-dev2017\n",
+  "## COCO test-dev2017\n",
   "Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (20,000 images). Results are saved to a `*.json` file which can be submitted to the evaluation server at https://competitions.codalab.org/competitions/20794."
  ]
 },
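The hunk above mentions that test-dev results land in a `*.json` file for upload to the evaluation server. As a hedged sketch only: CodaLab submissions are typically uploaded as a zip archive, and the helper below (`package_predictions`, a hypothetical name, not part of YOLOv5) sanity-checks the predictions file and zips it. Use the actual `*.json` path your run produced, and follow the competition page for the required archive naming.

```python
# Hedged sketch: package a COCO-style predictions JSON for upload.
# `package_predictions` is a hypothetical helper, not YOLOv5 code;
# the exact submission file naming is defined by the competition page.
import json
import zipfile
from pathlib import Path


def package_predictions(json_path: str, zip_path: str = "detections.zip") -> str:
    """Sanity-check the predictions file, then zip it for upload."""
    data = json.loads(Path(json_path).read_text())
    # COCO detection results are a JSON array of per-detection records
    assert isinstance(data, list), "expected a list of detection records"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(json_path, Path(json_path).name)
    return zip_path
```

Calling it on the run's output file yields a zip containing just that JSON, which is the usual shape of a CodaLab detection submission.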
@@ -996,15 +996,22 @@
    }
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "15glLzbQx5u0"
   },
   "source": [
    "# 4. Visualize"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DLI1JmHU7B0l"
   },
   "source": [
    "# 4. Visualize\n",
    "\n",
-   "### 4.1 Weights & Biases Logging (🚀 NEW)\n",
+   "## Weights & Biases Logging (🚀 NEW)\n",
    "\n",
    "[Weights & Biases](https://www.wandb.com/) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration among team members. To enable W&B logging install `wandb`, and then train normally (you will be guided through setup on first use).\n",
    "```bash\n",
@@ -1022,7 +1029,7 @@
   "id": "-WPvRbS5Swl6"
  },
  "source": [
-  "## 4.2 Local Logging\n",
+  "## Local Logging\n",
   "\n",
   "All results are logged by default to the `runs/exp0` directory, with a new directory created for each new training as `runs/exp1`, `runs/exp2`, etc. View train and test jpgs to see mosaics, labels/predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934)."
  ]
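The hunk above describes the incrementing run-directory scheme (`runs/exp0`, `runs/exp1`, `runs/exp2`, ...). As a minimal sketch of that scheme only: `next_exp_dir` below is a hypothetical helper illustrating the behavior, not YOLOv5's actual implementation.

```python
# Hedged sketch of the incrementing run-directory scheme described above
# (runs/exp0, runs/exp1, ...). `next_exp_dir` is a hypothetical helper,
# not YOLOv5's actual implementation.
from pathlib import Path


def next_exp_dir(root: str = "runs") -> str:
    """Return the first runs/expN path that does not exist yet."""
    n = 0
    while Path(f"{root}/exp{n}").exists():
        n += 1
    return f"{root}/exp{n}"
```

So on a fresh workspace this yields `runs/exp0`, and once `exp0` and `exp1` exist, the next training run would log to `runs/exp2`.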
@@ -1093,7 +1100,7 @@
   "id": "Zelyeqbyt3GD"
  },
  "source": [
-  "## Environments\n",
+  "# Environments\n",
   "\n",
   "YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n",
   "\n",