* precommit: yapf
* align isort
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* Update setup.cfg
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* Update setup.cfg
* Update setup.cfg
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* Update wandb_utils.py
* Update augmentations.py
* Update setup.cfg
* Update yolo.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* Update val.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* simplify colorstr
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* val run fix
* export.py last comma
* Update export.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* Update hubconf.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* PyTorch Hub tuple fix
* PyTorch Hub tuple fix2
* PyTorch Hub tuple fix3
* Update setup
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update loss.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* Update loss.py
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Update loss.py
Implement the quality focal loss, which is a more general case of focal loss.
More detail in https://arxiv.org/abs/2006.04388.
In the obj loss (or the cls loss with label smoothing), the targets are no longer strictly 0 or 1 (they can be e.g. 0.7); in this case the normal focal loss does not work accurately.
The quality focal loss behaves the same as focal loss when the target is equal to 0 or 1, and works accurately when targets are in (0, 1).
Example:
targets:
tensor([[0.6225, 0.0000, 0.0000],
        [0.9000, 0.0000, 0.0000],
        [1.0000, 0.0000, 0.0000]])
pred_prob:
tensor([[0.6225, 0.2689, 0.1192],
        [0.7773, 0.5000, 0.2227],
        [0.8176, 0.8808, 0.1978]])
focal_loss:
tensor([[0.0937, 0.0328, 0.0039],
        [0.0166, 0.1838, 0.0199],
        [0.0039, 1.3186, 0.0145]])
qfocal_loss:
tensor([[7.5373e-08, 3.2768e-02, 3.9179e-03],
        [4.8601e-03, 1.8380e-01, 1.9857e-02],
        [3.9233e-03, 1.3186e+00, 1.4545e-02]])
We can see that targets[0][0] = 0.6225 is almost the same as pred_prob[0][0] = 0.6225, while targets[1][0] = 0.9000 is greater than pred_prob[1][0] = 0.7773 by 0.1227.
However, focal_loss[0][0] = 0.0937 is larger than focal_loss[1][0] = 0.0166, which goes against the purpose of focal loss.
The quality focal loss handles the case of targets not equal to 0 or 1 correctly: it assigns a near-zero loss (7.5373e-08) to the well-matched prediction and a larger loss (4.8601e-03) to the mismatched one (see the sketch after this log).
* Update loss.py
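Below is a minimal sketch of the quality focal loss described above, written as a wrapper around an existing `nn.BCEWithLogitsLoss` criterion; the class name `QFocalLoss` and the wrapper structure are illustrative assumptions rather than necessarily the exact code merged into loss.py. With the defaults `gamma=1.5` and `alpha=0.25` it reproduces the qfocal_loss values in the example above.

```python
# Illustrative sketch; names and defaults are assumptions, not the merged code.
import torch
import torch.nn as nn


class QFocalLoss(nn.Module):
    # Quality focal loss (https://arxiv.org/abs/2006.04388) wrapped around an
    # existing criterion, e.g. QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5).
    # It replaces the focal-loss modulating factor (1 - p_t)^gamma with
    # |target - pred_prob|^gamma, so the loss vanishes when the prediction
    # matches a soft target exactly and reduces to focal loss for 0/1 targets.

    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super().__init__()
        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()
        self.gamma = gamma
        self.alpha = alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = 'none'  # apply the factor element-wise

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)  # element-wise BCE on logits
        pred_prob = torch.sigmoid(pred)  # probabilities from logits
        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
        modulating_factor = torch.abs(true - pred_prob) ** self.gamma
        loss *= alpha_factor * modulating_factor
        if self.reduction == 'mean':
            return loss.mean()
        if self.reduction == 'sum':
            return loss.sum()
        return loss  # reduction == 'none'
```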