* Copy wandb param dict before training to avoid overwrites.
Copy the hyperparameter dict retrieved from the wandb configuration before passing it to `train()`. Training overwrites parameters in the dictionary in place (e.g., scaling the obj/box/cls gains), so the values reported in wandb no longer match the values that were actually sampled as inputs. This is confusing, makes runs hard to reproduce, and also throws off wandb's Bayesian sweep algorithm.
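A minimal sketch of the idea, assuming wandb's `wandb.config.as_dict()` accessor; the `train()` stub and the `box`/`cls`/`obj` keys here are hypothetical stand-ins for yolov5's real training entry point and hyperparameters:

```python
import copy

import wandb


def train(hyp):
    # Hypothetical stand-in for yolov5's train(): it rescales loss gains
    # inside the hyp dict in place, which is why a copy is needed.
    for k in ("box", "cls", "obj"):
        hyp[k] = hyp.get(k, 1.0) * 0.5  # illustrative in-place rescaling


def sweep():
    wandb.init()
    # Deep-copy the sweep-sampled hyperparameters so train() mutates only
    # its own copy; the values logged in wandb.config stay untouched.
    hyp_dict = copy.deepcopy(wandb.config.as_dict())
    train(hyp=hyp_dict)


if __name__ == "__main__":
    sweep()
```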
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* pre-commit: isort
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update isort config
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update name
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Add callbacks to the train function in the wandb sweep
Fix following https://github.com/ultralytics/yolov5/pull/4688, which modified the signature of `train()` to take a `callbacks` argument.
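A sketch of the updated sweep entry point under the assumption that the post-#4688 signature is `train(hyp, opt, device, callbacks)`, with `parse_opt`, `select_device`, and `Callbacks` taken from yolov5's `train.py`, `utils.torch_utils`, and `utils.callbacks`:

```python
import copy

import wandb

from train import parse_opt, train
from utils.callbacks import Callbacks
from utils.torch_utils import select_device


def sweep():
    wandb.init()
    # Copy of the sweep-sampled hyperparameters (see the first commit above).
    hyp_dict = copy.deepcopy(wandb.config.as_dict())
    # Rebuild the remaining arguments train() expects after PR #4688.
    opt = parse_opt(known=True)
    device = select_device(opt.device)
    # train() now also takes a Callbacks instance, so pass a fresh one.
    train(hyp_dict, opt, device, callbacks=Callbacks())


if __name__ == "__main__":
    sweep()
```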
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>