* Add cache-on-disk and cache-directory to cache images on disk (see the sketch after this entry)
* Fix load_image with cache_on_disk
* Add no_cache flag for load_image
* Revert the parts ('logging' and a newline) that do not need to be modified
* Add the assertion for shapes of cached images
* Add a suffix string for cached images
* Fix boundary-error of letterbox for load_mosaic
* Add prefix as cache-key of cache-on-disk
* Update cache-function on disk
* Add psutil in requirements.txt
* Update train.py
* Cleanup1
* Cleanup2
* Skip existing npy
* Include re-space
* Export return character fix
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
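The cache entries above boil down to saving each decoded image as a .npy file on disk and reusing it on later epochs. A minimal sketch of that idea (the function name, path layout, and prefix handling here are assumptions for illustration, not the repository's exact code):

```python
from pathlib import Path

import cv2
import numpy as np


def load_image_cached(img_path, cache_dir, prefix=''):
    """Load an image, caching the decoded array on disk as .npy so later epochs skip JPEG decoding."""
    npy_path = Path(cache_dir) / f'{prefix}{Path(img_path).stem}.npy'  # prefix acts as the cache key
    if npy_path.exists():                # skip existing npy
        img = np.load(npy_path)
    else:
        img = cv2.imread(str(img_path))  # BGR HWC array
        assert img is not None, f'Image not found: {img_path}'
        npy_path.parent.mkdir(parents=True, exist_ok=True)
        np.save(npy_path, img)
    return img
```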
* Add freeze as an argument (see the sketch after this entry)
I train on different platforms, and sometimes I want to freeze some layers. Currently I have to go into the code to change this and also keep track of how many layers I froze on each platform. Please add the number of layers to freeze as a command-line argument in future versions. Thanks.
* Update train.py
* Update train.py
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
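A minimal sketch of how a layer-count freeze argument can be applied, assuming parameter names of the form model.0.*, model.1.*, ... as in this repo; the helper name freeze_layers is made up for illustration:

```python
import argparse

import torch.nn as nn


def freeze_layers(model: nn.Module, n: int) -> None:
    """Disable gradients for parameters in the first n submodules, assuming names like 'model.0.*'."""
    prefixes = [f'model.{i}.' for i in range(n)]
    for name, param in model.named_parameters():
        param.requires_grad = True                      # train all layers by default
        if any(name.startswith(p) for p in prefixes):
            print(f'freezing {name}')
            param.requires_grad = False


parser = argparse.ArgumentParser()
parser.add_argument('--freeze', type=int, default=0, help='number of initial layers to freeze')
```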
* evolve command accepts argument for number of generations (see the sketch after this entry)
* evolve generations argument used in evolve for loop
* evolve argument boolean fixes
* default to 300 evolve generations
* Update train.py
Co-authored-by: John San Soucie <jsansoucie@whoi.edu>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
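One way to let --evolve act both as a flag and as a generation count defaulting to 300 is argparse's nargs='?' with a const fallback; this sketch is an assumption about the approach, not the merged code:

```python
import argparse

parser = argparse.ArgumentParser()
# bare '--evolve' falls back to the 300-generation default; '--evolve 500' overrides it
parser.add_argument('--evolve', type=int, nargs='?', const=300, default=None,
                    help='evolve hyperparameters for N generations')
opt = parser.parse_args()

if opt.evolve:                          # None (flag absent) is falsy, a generation count is truthy
    for generation in range(opt.evolve):
        ...                             # mutate hyperparameters, retrain, keep the best result
```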
* gradient accumulation during warmup in train.py
Context:
`accumulate` is the number of batches/gradients accumulated before calling the next optimizer.step().
During warmup, it is ramped up from 1 to the final value nbs / batch_size.
Although I have not seen this in other libraries, I like the idea. During warmup, while gradients are large, overly large steps are more of an issue than the gradient noise caused by small steps.
The bug:
The condition for performing the optimizer step is wrong:
> if ni % accumulate == 0:
This produces irregular step sizes if `accumulate` is not constant. It becomes relevant when batch_size is small and `accumulate` changes many times during warmup.
This demo also shows the proposed solution, to use a ">=" condition instead (a minimal sketch also follows this entry):
https://colab.research.google.com/drive/1MA2z2eCXYB_BC5UZqgXueqL_y1Tz_XVq?usp=sharing
Further, I propose not to restrict the number of warmup iterations to >= 1000. If the user changes hyp['warmup_epochs'], the restriction causes unexpected behavior, and it makes evolution unstable if this parameter were to be optimized.
* replace last_opt_step tracking by do_step(ni)
* add docstrings
* move down nw
* Update train.py
* revert math import move
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
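A minimal sketch of the ">=" condition with explicit tracking of the last optimizer step, using a simplified linear warmup ramp; variable names follow the description above, the schedule itself is an assumption:

```python
nbs = 64                # nominal batch size
batch_size = 16
warmup_iters = 300      # whatever the warmup schedule yields; the linear ramp below is simplified
last_opt_step = -1

for ni in range(1000):  # ni = number of integrated batches since training started
    # ramp 'accumulate' from 1 up to nbs / batch_size during warmup
    if ni <= warmup_iters:
        accumulate = max(1, round(ni / warmup_iters * nbs / batch_size))
    else:
        accumulate = round(nbs / batch_size)

    # ... forward pass and loss.backward() would run here ...

    # step once at least 'accumulate' batches were integrated since the last step;
    # unlike 'ni % accumulate == 0', this stays regular while 'accumulate' changes
    if ni - last_opt_step >= accumulate:
        # optimizer.step(); optimizer.zero_grad()
        last_opt_step = ni
```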
* Slightly modify CLI execution
This simple change makes it easier to run the primary functions of this
repo (train/detect/test) from within Python. An object representing
`opt` can be constructed and fed to the `main` function of each of these
modules, rather than having to call the lower-level functions directly
or run the module as a script (see the sketch after this entry).
* Update export.py
Add CLI parsing update for more convenient module usage within Python.
Co-authored-by: Lewis Belcher <lb@desupervised.io>
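With this change, driving training from Python could look roughly like the following; only `main(opt)` is named in the description above, so the parse_opt helper and the overridden fields are assumptions for illustration:

```python
import train  # this repo's train.py

# construct an object representing 'opt' and feed it to main() instead of
# running the module as a script; parse_opt() is assumed to return the CLI
# defaults as an argparse.Namespace that can then be overridden
opt = train.parse_opt()
opt.data = 'coco128.yaml'   # illustrative overrides, not a complete option set
opt.epochs = 3
opt.batch_size = 16
train.main(opt)
```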