* Remove NCOLS from tqdm
* [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
* update
* reformat
* Single-line argparser argument
* Update README.md
* Update README.md
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Logger consolidation
* write date in checkpoint file
* isoformat
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
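As a sketch, stamping a checkpoint with an ISO-8601 date might look like the snippet below; the `ckpt` keys shown are illustrative, not the exact fields train.py saves:
```
from datetime import datetime

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the real training model

# Hypothetical checkpoint layout; the actual train.py stores more fields
ckpt = {
    "epoch": 0,
    "model": model.state_dict(),
    "date": datetime.now().isoformat(),  # ISO 8601, e.g. '2021-11-05T14:30:00.123456'
}
torch.save(ckpt, "last.pt")
```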
* precommit: isort
* Update isort config
* Update name
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* use os.path.relpath instead of relative_to
* Remove os.path from val.py
* Remove os.path from train.py
* Update detect.py import to os
* Update export.py import to os
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
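For context, a small illustration of why `os.path.relpath` is preferable here: `Path.relative_to` raises `ValueError` when one path is not a prefix of the other, while `os.path.relpath` can walk up the tree with `..` segments:
```
import os
from pathlib import Path

a = Path("/data/runs/exp1/weights")
b = Path("/data/datasets/coco")

print(os.path.relpath(a, b))  # '../../runs/exp1/weights'

try:
    a.relative_to(b)  # raises ValueError: a is not in the subpath of b
except ValueError as e:
    print(e)
```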
This PR adds a new training argument `--save-period` to save training checkpoints every `x` epochs. For example, to save a checkpoint every 50 epochs:
```
python train.py --save-period 50 # saves epoch50.pt, epoch100.pt, epoch150.pt, ... etc.
```
This saves checkpoints in addition to the existing last.pt and best.pt checkpoints and does not affect their behavior. The default value is -1, i.e. disabled.
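For reference, a minimal sketch of how such a flag might work, with a stand-in model and a hypothetical `weights` directory (the real train.py wires this into its own checkpoint logic):
```
import argparse
from pathlib import Path

import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument("--save-period", type=int, default=-1,
                    help="save checkpoint every x epochs (disabled if < 1)")
opt = parser.parse_args()

model = nn.Linear(10, 2)  # stand-in for the real model
w = Path("weights")       # hypothetical save directory
w.mkdir(parents=True, exist_ok=True)

for epoch in range(1, 301):
    # ... one epoch of training would run here ...
    if opt.save_period > 0 and epoch % opt.save_period == 0:
        torch.save({"epoch": epoch, "model": model.state_dict()}, w / f"epoch{epoch}.pt")
```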
* Validate best.pt on train end
* 0.7 iou for COCO only
* pass callbacks
* activate model.float() if not half
* print Validating best.pt...
* add newline
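Pieced together from the commit titles above, the end-of-training step might look roughly like this; `validate`, `is_coco`, and the 0.6 non-COCO threshold are all placeholders, not the repo's actual API:
```
import torch.nn as nn

def validate(weights, iou_thres):
    """Stand-in for the repo's validation entry point (e.g. val.run)."""
    print(f"validating {weights} at IoU threshold {iou_thres}")

is_coco = True   # hypothetical flag: is this a COCO-format dataset?
half = False     # whether training ran in FP16

model = nn.Linear(10, 2)   # stand-in for the trained model
if not half:
    model = model.float()  # cast back to FP32: model.float() if not half

print("\nValidating best.pt...")
validate("best.pt", iou_thres=0.7 if is_coco else 0.6)  # 0.7 IoU for COCO only; 0.6 is illustrative
```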
* Add cache-on-disk and cache-directory to cache images on disk
* Fix load_image with cache_on_disk
* Add no_cache flag for load_image
* Revert the parts ('logging' and a newline) that do not need to be modified
* Add the assertion for shapes of cached images
* Add a suffix string for cached images
* Fix boundary error of letterbox for load_mosaic
* Add prefix as cache-key of cache-on-disk
* Update cache-function on disk
* Add psutil in requirements.txt
* Update train.py
* Cleanup1
* Cleanup2
* Skip existing npy
* Include re-space
* Export return character fix
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
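Read together, these commits describe caching decoded images as .npy files on disk and reusing them on later epochs. A minimal sketch of that pattern, with a made-up `load_image` helper (the repository's own function has a different signature):
```
from pathlib import Path

import cv2
import numpy as np

def load_image(path, cache_dir=Path("cache")):
    """Made-up helper: load an image, caching the decoded array on disk as .npy."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    npy = cache_dir / (Path(path).stem + ".npy")
    if npy.exists():                # skip existing npy
        im = np.load(npy)
    else:
        im = cv2.imread(str(path))  # BGR, HWC uint8
        assert im is not None, f"Image not found: {path}"
        np.save(npy, im)
    assert im.ndim == 3, f"Unexpected cached shape for {path}: {im.shape}"
    return im
```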
* Add freeze as an argument
I train on different platforms, and sometimes I want to freeze some layers. I have to go into the code and change it, and also keep track of how many layers I froze on each platform. Please add the number of layers to freeze as an argument in future versions. Thanks.
* Update train.py
* Update train.py
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
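A minimal sketch of what freezing the first N layers by parameter-name prefix can look like; the `--freeze` flag matches the commit titles, while the stand-in model and prefix format are assumptions (YOLOv5's real parameter names look like `model.0.conv.weight`):
```
import argparse

import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument("--freeze", type=int, default=0, help="number of initial layers to freeze")
opt = parser.parse_args()

model = nn.Sequential(*(nn.Linear(8, 8) for _ in range(5)))  # stand-in for the real model

freeze = [f"{i}." for i in range(opt.freeze)]  # name prefixes of layers to freeze
for name, param in model.named_parameters():
    param.requires_grad = True  # train everything by default
    if any(name.startswith(prefix) for prefix in freeze):
        print(f"freezing {name}")
        param.requires_grad = False
```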