* Reject webcam inference on Colab/Kaggle
Improve error messaging so users understand the failure described in https://github.com/ultralytics/yolov5/issues/8180
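A minimal sketch of the guard this adds; the helper names `is_colab()` and `is_kaggle()` and the exact environment checks are assumptions for illustration, not the literal dataloaders.py code:
```python
import os


def is_kaggle():
    # Kaggle notebooks expose these environment variables
    return os.environ.get('PWD') == '/kaggle/working' and os.environ.get('KAGGLE_URL_BASE') == 'https://www.kaggle.com'


def is_colab():
    # Colab runtimes set Colab-specific environment variables
    return 'COLAB_GPU' in os.environ


def check_webcam_source(source: str):
    # Webcam sources are numeric strings like '0'; hosted notebooks have no local camera
    if source.isnumeric() and (is_colab() or is_kaggle()):
        raise NotImplementedError('--source 0 webcam inference is unsupported on Colab and Kaggle notebooks. '
                                  'Run inference on a local machine with a connected camera instead.')
```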
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* cleanup
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Update dataloaders.py
* Update dataloaders.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* AutoBatch checks against failed solutions
@kalenmike this is a simple improvement to AutoBatch to verify that returned solutions have not already failed, i.e. to avoid returning batch size 8 when batch size 8 has already produced a CUDA out-of-memory error.
This is a halfway fix until I can implement a 'final solution' that will actively verify the solved-for batch size rather than passively assume it works.
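A rough sketch of the check, assuming hypothetical names for the profiled batch sizes and their measured memory (the real autobatch.py fit is more involved):
```python
import numpy as np


def solve_batch_size(batch_sizes, memory_used, fraction=0.8, total_memory=16.0):
    # batch_sizes: profiled sizes, e.g. [1, 2, 4, 8, 16]; memory_used: GB per size, None if it raised CUDA OOM
    ok = [(b, m) for b, m in zip(batch_sizes, memory_used) if m is not None]
    x, y = zip(*ok)
    p = np.polyfit(x, y, deg=1)  # first-degree fit: memory ~= p[0] * batch + p[1]
    b = int((fraction * total_memory - p[1]) / p[0])  # solve for the target memory fraction

    # New check: never return a size at or above one that already failed
    if None in memory_used:
        i = memory_used.index(None)  # index of the first failed batch size
        if b >= batch_sizes[i]:
            b = batch_sizes[max(i - 1, 0)]  # fall back to the largest size known to succeed
    return b
```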
* Update autobatch.py
* Update autobatch.py
* Update and rename ci-testing.yml to ci.yml
* Update ci.yml
* Update ci.yml
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update and rename ci.yml to ci-tests.yml
* Update ci-tests.yml
* Update ci-tests.yml
* Rename ci-tests.yml to ci-testing.yml
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add `psutil` and `ipython` to requirements.txt
Lightweight packages used by YOLOv5 for system utilization (psutil) and interactive notebooks (IPython)
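For context, a minimal example of the kind of system-utilization readout psutil enables (illustrative only, not the exact YOLOv5 call sites):
```python
import psutil

ram = psutil.virtual_memory()  # total/used system RAM
print(f'CPU: {psutil.cpu_count()} cores, '
      f'RAM: {ram.used / 1E9:.1f}/{ram.total / 1E9:.1f} GB ({ram.percent}% used)')
```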
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* sort alphabetically
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Apple Metal Performance Shaders (MPS) device support
Following https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
Should work on Apple M1 devices with a PyTorch nightly build installed, using the `--device mps` argument. Usage examples:
```bash
python train.py --device mps
python detect.py --device mps
python val.py --device mps
```
* Update device strategy to fix MPS issue
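A minimal sketch of MPS-aware device selection, simplified from the select_device idea (single-device only; the real torch_utils logic differs):
```python
import torch


def select_device(device=''):
    # device: '', 'cpu', 'mps' or a single CUDA index such as '0'
    device = str(device).strip().lower()
    if device == 'mps' and getattr(torch.backends, 'mps', None) and torch.backends.mps.is_available():
        return torch.device('mps')  # Apple Metal Performance Shaders
    if device not in ('cpu', 'mps') and torch.cuda.is_available():
        return torch.device(f'cuda:{device or 0}')
    return torch.device('cpu')
```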
* Write .yaml file when exporting model to OpenVINO
Write a .yaml metadata file automatically when exporting a model to OpenVINO so it can be used during inference
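A rough sketch of the export-side write and inference-side read; the file name `meta.yaml` and the saved keys are assumptions for illustration:
```python
from pathlib import Path

import yaml


def write_openvino_metadata(export_dir, stride, names):
    # Save stride and class names next to the exported OpenVINO model
    with open(Path(export_dir) / 'meta.yaml', 'w') as f:
        yaml.safe_dump({'stride': int(stride), 'names': names}, f)


def load_openvino_metadata(export_dir):
    # Restore stride and class names when loading the model for inference
    with open(Path(export_dir) / 'meta.yaml') as f:
        meta = yaml.safe_load(f)
    return meta['stride'], meta['names']
```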
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update export.py
* Update export.py
* Load metadata on inference
* Update common.py
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Code refactor for general.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update restapi.py
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add PyTorch AMP check
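A minimal sketch of such a check: run the same image in FP32 and under autocast and confirm the outputs agree. The helper name, tolerances, and the assumption that the model returns a single tensor are illustrative:
```python
import torch


def check_amp(model, im):
    # Return True if CUDA mixed-precision inference on `im` closely matches FP32 inference
    device = next(model.parameters()).device
    if device.type in ('cpu', 'mps'):
        return False  # AMP only applies to CUDA devices
    a = model(im).float()  # FP32 inference
    with torch.autocast(device_type='cuda'):
        b = model(im).float()  # AMP inference
    return torch.allclose(a, b, atol=1.0, rtol=0.1)  # close enough -> AMP is safe to enable
```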
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Cleanup
* Cleanup
* Cleanup
* Robust for DDP
* Fixes
* Add amp enabled boolean to check_train_batch_size
* Simplify
* space to prefix
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add DWConvTranspose2d() module
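A minimal sketch of a depthwise transposed-convolution module, following the same groups pattern as the existing DWConv (the exact common.py signature may differ):
```python
import math

import torch.nn as nn


class DWConvTranspose2d(nn.ConvTranspose2d):
    # Depth-wise transposed convolution: groups tied to gcd(in_channels, out_channels)
    def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0):
        # c1: in channels, c2: out channels, k: kernel, s: stride, p1: padding, p2: output padding
        super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
```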
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add DWConvTranspose2d() module
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* Fix
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Improve mAP@0.5:0.95
Two changes are provided:
1. Added a limit on the maximum number of detections per image, matching pycocotools (sketched below)
2. Reworked the process_batch function
Change #2 resolves issue #4251. I also independently encountered the problem described in #4251: values for the same thresholds do not match when the limits in the torch.linspace call are changed. These changes solve that problem.
Validating the yolov5x.pt model currently produces the following results:
From yolov5 validation:
```
Class     Images  Labels      P      R  mAP@.5  mAP@.5:.95: 100%|██████████| 157/157 [01:07<00:00, 2.33it/s]
  all       5000   36335  0.743  0.626   0.682       0.506
```
From pycocotools:
```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.505
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.685
```
These results are very close, although they do not completely close the gap tracked in issue #2258. I think the remaining difference comes from false-positive boxes that match the ignore criteria, but this is not relevant for custom datasets and does not require an additional solution.
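A short sketch of the per-image detection cap from change #1, mirroring the maxDets idea from pycocotools (the variable names and cap value are illustrative):
```python
import torch


def limit_detections(pred, max_det=300):
    # pred: (n, 6) tensor of [x1, y1, x2, y2, conf, cls] for one image
    # Keep only the highest-confidence detections, as pycocotools evaluates with a maxDets cap
    if pred.shape[0] > max_det:
        pred = pred[pred[:, 4].argsort(descending=True)[:max_det]]
    return pred
```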
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Remove line to retain pycocotools results
* Update val.py
* Update val.py
* Remove to-device op
* Higher precision int conversion
* Update val.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>