Ayush Chaurasia
96fcde40b8
W&B feature improvements ( #1258 )
* W&B feature improvements
This PR adds:
* Class-to-ID labels: the caption of each bounding box now displays the class name and its confidence score.
* The project name is set to "Yolov5" and the run name is set to opt.logdir
* cleanup
* remove parenthesis on caption
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
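The caption change above can be sketched by building the per-box payload that `wandb.Image` accepts for bounding-box debugging. This is a minimal illustration of the "class name plus confidence" caption format, not the repository's actual helper; the function and variable names are assumptions, and only the payload is built here so the sketch carries no wandb dependency.

```python
# Hedged sketch: assemble the box_data payload wandb.Image accepts, with a
# "class_name confidence" caption (no parentheses, per the commit above).
# Helper and variable names are illustrative, not the repo's own.

def wandb_box_data(boxes, class_id_to_label):
    """boxes: iterable of (x0, y0, x1, y1, conf, cls) in normalized coords."""
    return [
        {
            "position": {"minX": x0, "minY": y0, "maxX": x1, "maxY": y1},
            "class_id": int(cls),
            # caption without parentheses, e.g. "person 0.870"
            "box_caption": f"{class_id_to_label[int(cls)]} {conf:.3f}",
            "scores": {"class_score": float(conf)},
        }
        for x0, y0, x1, y1, conf, cls in boxes
    ]

labels = {0: "person", 1: "car"}
payload = wandb_box_data([(0.1, 0.2, 0.5, 0.6, 0.87, 0)], labels)
# The payload would then be passed along the lines of:
# wandb.Image(img, boxes={"predictions": {"box_data": payload, "class_labels": labels}})
```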
Ayush Chaurasia
ca290dca24
Weights & Biases (W&B) Feature Addition ( #1235 )
* Add wandb metric logging and bounding box debugging
* Improve formatting, readability
* Remove multiple paths for init, improve formatting
* Add wandb params
* Remove typecasting in bbox coordinates and reformat
* Cleanup
* add wandb to requirements.txt
* minor updates to test.py
* general reorg
* reduce --log-imgs to 10
* clean wandb import
* reverse wandb import assert
* add except AssertionError to try import
* move wandb init to all global ranks
* replace print() with logger.info()
* replace print() with logger.info()
* move wandb.init() bug fix
* project PosixPath to basename bug fix
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Glenn Jocher
453acdec67
Update tensorboard logging
4 years ago
Jirka Borovec
c67e72200e
fix compatibility for hyper config ( #1146 )
* fix/hyper
* Hyp giou check to train.py
* restore general.py
* train.py overwrite fix
* restore general.py and pep8 update
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Glenn Jocher
4d3680c81d
Minor import and spelling updates ( #1133 )
4 years ago
Jirka Borovec
00917a6225
update expt name comment and folder parsing for training ( #978 )
* comment
* fix parsing
* fix evolve
* folder
* tqdm
* Update train.py
* Update train.py
* reinstate anchors into meta dict
anchor evolution is working correctly now
* reinstate logger
prefer the single-line readout for concise logging, which helps simplify notebooks, tutorials, etc.
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Glenn Jocher
0ada058f63
Generalized regression criterion renaming ( #1120 )
4 years ago
Glenn Jocher
5fac5ad165
Precision-Recall Curve Feature Addition ( #1107 )
* initial commit
* Update general.py
Indent update
* Update general.py
refactor duplicate code
* 200 dpi
4 years ago
Glenn Jocher
66676eb039
init_torch_seeds >> init_seeds bug fix
4 years ago
Glenn Jocher
f1c63e2784
add mosaic and warmup to hyperparameters ( #931 )
4 years ago
Glenn Jocher
a62a45b2dd
prevent testloader caching on --notest
4 years ago
Glenn Jocher
c8e51812a5
hyp evolution force-autoanchor fix
4 years ago
Glenn Jocher
c687d5c129
reorganize train initialization steps
4 years ago
Glenn Jocher
44cdcc7e0b
hyp['anchors'] evolution update
4 years ago
NanoCode012
d8274d0434
Fix results_file not renaming ( #903 )
4 years ago
Glenn Jocher
281d78c105
Update train.py ( #902 )
* Update train.py with simplified ckpt names
* Return default hyps to hyp.scratch.yaml
Leave the line commented for future use, once the mystery of the best finetuning hyps to apply becomes clearer.
* Force test_batch*_pred.jpg replot on final epoch
This allows you to see predictions from the final testing run after training completes in runs/exp0
4 years ago
Naman Gupta
6f3db5e662
Remove autoanchor and class checks on resumed training ( #889 )
* Class frequency not calculated on resuming training
Calculation of class frequency is not needed when resuming training.
Anchors can still be recalculated whether resuming or not.
* Check rank for autoanchor
* Update train.py
no autoanchor checks on resume
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Glenn Jocher
f06e2d518c
opt.image_weights bug fix ( #885 )
4 years ago
Glenn Jocher
69ff781ca5
opt.img_weights bug fix ( #885 )
4 years ago
Glenn Jocher
08e97a2f88
Update hyperparameters to add lrf, anchors
4 years ago
Glenn Jocher
a21bd0687c
Update train.py forward simplification
4 years ago
Glenn Jocher
09402a2174
torch.from_tensor() bug fix
4 years ago
Glenn Jocher
83dc540b1d
remove ema.ema hasattr(ema, 'module') check
4 years ago
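The EMA-related commits above revolve around a model exponential moving average. The core update ModelEMA applies can be sketched on a plain dict of floats, with no torch dependency; the function name and decay value here are illustrative, not the repository's own.

```python
# Hedged sketch of the exponential-moving-average (EMA) weight update:
# ema <- decay * ema + (1 - decay) * model, applied per parameter.
# Shown on plain floats; in the real code these are tensors.

def ema_update(ema_state, model_state, decay=0.9999):
    for k, v in model_state.items():
        ema_state[k] = decay * ema_state[k] + (1.0 - decay) * v
    return ema_state

ema = {"w": 1.0}
ema_update(ema, {"w": 0.0}, decay=0.9)
# ema["w"] is now 0.9
```

A high decay (e.g. 0.9999) means the EMA weights drift slowly toward the live weights, smoothing out per-batch noise.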
Glenn Jocher
4447f4b937
--resume to same runs/exp directory ( #765 )
* initial commit
* add weight backup dir on resume
4 years ago
NanoCode012
fb4fc8cd02
Fix ema attribute error in DDP mode ( #775 )
* Fix ema error in DDP mode
* Update train.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Glenn Jocher
ebafd1ead5
single command --resume ( #756 )
* single command --resume
* else check files, remove TODO
* argparse.Namespace()
* tensorboard lr
* bug fix in get_latest_run()
4 years ago
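Single-command `--resume` hinges on the `get_latest_run()` helper mentioned in the last bullet: locating the most recent `last*.pt` checkpoint when no path is given. A minimal sketch, assuming a `runs/`-style directory layout (the exact search pattern in the merged code may differ):

```python
# Hedged sketch of a get_latest_run()-style helper: recursively search the
# run directory for last*.pt checkpoints and return the most recently
# modified one, or "" if none exist.
import glob
import os

def get_latest_run(search_dir="runs"):
    ckpts = glob.glob(os.path.join(search_dir, "**", "last*.pt"), recursive=True)
    return max(ckpts, key=os.path.getmtime) if ckpts else ""
```

With this, `python train.py --resume` (no argument) can pick up the newest interrupted run automatically.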
Glenn Jocher
916d4aad9a
v3.0 Release ( #725 )
* initial commit
* remove yolov3-spp from test.py study
* update study --img range
* update mAP
* cleanup and speed updates
* update README plot
4 years ago
NanoCode012
0892c44bc4
Fix Logging ( #719 )
* Add logging setup
* Fix fusing layers message
* Fix logging does not have end
* Add logging
* Change logging to use logger
* Update yolo.py
I tried this in a cloned branch, and everything seems to work fine
* Update yolo.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Marc
a925f283a7
max workers for dataloader ( #722 )
4 years ago
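The worker cap above can be sketched as a simple min over the available CPU cores, the batch size, and a fixed ceiling. The ceiling of 8 here illustrates the idea; the exact bound in the merged change may differ.

```python
# Hedged sketch of a dataloader worker cap: never spawn more workers than
# CPU cores, the batch size, or a fixed ceiling (value illustrative).
import os

def num_workers(batch_size, max_workers=8):
    return min(os.cpu_count() or 1, batch_size, max_workers)
```

Capping workers avoids oversubscribing CPUs on small batches and keeps memory use bounded on many-core machines.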
NanoCode012
4949401a94
Fix redundant outputs via Logging in DDP training ( #500 )
* Change print to logging
* Clean function set_logging
* Add line spacing
* Change leftover prints to log
* Fix scanning labels output
* Fix rank naming
* Change leftover print to logging
* Reorganized DDP variables
* Fix type error
* Make quotes consistent
* Fix spelling
* Clean function call
* Add line spacing
* Update datasets.py
* Update train.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
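The fix above boils down to rank-gated logging: under DDP, only the main process logs at INFO, so each message appears once rather than once per GPU. A minimal sketch, with an assumed `set_logging(rank)` signature (the merged helper may differ in detail):

```python
# Hedged sketch of rank-aware logging setup for DDP: rank -1 (no DDP) or
# rank 0 logs at INFO; every other rank is raised to WARNING to silence
# duplicate output. Returning the chosen level is for illustration only.
import logging

def set_logging(rank=-1):
    level = logging.INFO if rank in (-1, 0) else logging.WARNING
    logging.basicConfig(format="%(message)s", level=level)
    return level

set_logging(rank=0)
logger = logging.getLogger(__name__)
logger.info("visible once, on the main process only")
```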
Glenn Jocher
e71fd0ec0b
Model freeze capability ( #679 )
4 years ago
Glenn Jocher
8e5c66579b
update train.py remove save_json final_epoch
4 years ago
Glenn Jocher
41523e2c91
Dataset autodownload feature addition ( #685 )
* initial commit
* move download scripts into data/scripts
* new check_dataset() function in general.py
* move check_dataset() out of with context
* Update general.py
* DDP update
* Update general.py
4 years ago
NanoCode012
3d8ed0a76b
Fix missing model.stride in DP and DDP mode ( #683 )
4 years ago
Glenn Jocher
a0ac5adb7b
Single-source training update ( #680 )
4 years ago
Glenn Jocher
3c6e2f7668
Single-source training ( #680 )
* Single-source training
* Extract hyperparameters into separate files
* weight decay scientific notation yaml reader bug fix
* remove import glob
* intersect_dicts() implementation
* 'or' bug fix
* .to(device) bug fix
4 years ago
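The `intersect_dicts()` bullet above refers to a helper used when loading pretrained weights: keep only the state-dict entries present in both checkpoint and model whose shapes match, optionally excluding keys by substring. A sketch of the idea, using a tiny stand-in class in place of real tensors so it has no torch dependency:

```python
# Hedged sketch of an intersect_dicts()-style helper for partial weight
# transfer: intersect on key name AND tensor shape, with optional excludes.
# FakeTensor stands in for torch.Tensor purely for illustration.

class FakeTensor:
    def __init__(self, shape):
        self.shape = shape

def intersect_dicts(da, db, exclude=()):
    return {
        k: v
        for k, v in da.items()
        if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape
    }

ckpt = {"conv.weight": FakeTensor((16, 3)), "head.weight": FakeTensor((80, 16))}
model = {"conv.weight": FakeTensor((16, 3)), "head.weight": FakeTensor((10, 16))}
kept = intersect_dicts(ckpt, model)  # head.weight dropped: shape mismatch
```

Shape-checking is what lets a COCO-pretrained backbone load cleanly into a model with a different class count.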
NanoCode012
d7cfbc47ab
Fix unrecognized local rank argument ( #676 )
4 years ago
Glenn Jocher
93684531c6
train.py --logdir argparser addition ( #660 )
* train.py --logdir argparser addition
* train.py --logdir argparser addition
4 years ago
NanoCode012
886b9841c8
Add Multi-Node support for DDP Training ( #504 )
* Add support for multi-node DDP
* Remove local_rank confusion
* Fix spacing
4 years ago
lorenzomammana
728efa6576
Fix missing imports ( #627 )
* Fix missing imports
* Update detect.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Glenn Jocher
eb99dff9ef
import random bug fix ( #614 )
4 years ago
Jirka Borovec
d5b6416c87
Explicit Imports ( #498 )
* expand imports
* optimize
* miss
* fix
4 years ago
Glenn Jocher
f1096b2cf7
hyperparameter evolution update ( #566 )
4 years ago
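The string of hyperparameter-evolution commits here centers on a mutate-and-clip step: jitter each hyperparameter by a random gain, then clamp it to its allowed range. A minimal sketch under assumed names; the gain distribution and the example `meta` table are illustrative, not the repository's actual values.

```python
# Hedged sketch of the mutate step in hyperparameter evolution:
# multiplicative Gaussian jitter per hyperparameter, clipped to [low, high].
import random

def mutate(hyp, meta, sigma=0.2, seed=None):
    rng = random.Random(seed)
    out = {}
    for k, (low, high) in meta.items():
        gain = 1.0 + rng.gauss(0.0, sigma)           # multiplicative jitter
        out[k] = min(max(hyp[k] * gain, low), high)  # clip to allowed range
    return out

meta = {"lr0": (1e-5, 1e-1), "mixup": (0.0, 1.0)}
child = mutate({"lr0": 0.01, "mixup": 0.2}, meta, seed=0)
```

Reinstate-anchors commits above fit the same pattern: once `anchors` is back in the meta dict, it mutates and clips like any other hyperparameter.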
Glenn Jocher
c1a2a7a411
hyperparameter evolution bug fix ( #566 )
4 years ago
Glenn Jocher
8074745908
hyperparameter evolution bug fix ( #566 )
4 years ago
Glenn Jocher
e32abb5fb9
hyperparameter evolution bug fix ( #566 )
4 years ago
Glenn Jocher
8056fe2db8
hyperparameter evolution bug fix ( #566 )
4 years ago
Glenn Jocher
127cbeb3f5
hyperparameter expansion to flips, perspective, mixup
4 years ago
Glenn Jocher
bcd452c482
replace random_affine() with random_perspective()
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
4 years ago
Liu Changyu
c020875b17
PyTorch 1.6.0 update with native AMP ( #573 )
* PyTorch 1.6 adds native Automatic Mixed Precision (AMP) training.
* Fixed inconsistent code indentation
* Mixed precision training is turned on by default
4 years ago
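The native-AMP training step this commit switches to can be sketched with `torch.cuda.amp.autocast` and `GradScaler`. This is a generic illustration of the PyTorch 1.6 pattern, not the repository's actual training loop; the model, optimizer, and data here are placeholders, and AMP is simply disabled (a no-op) on CPU-only machines.

```python
# Hedged sketch of a native-AMP training step (PyTorch >= 1.6):
# autocast runs the forward pass in mixed precision, GradScaler scales the
# loss to avoid fp16 gradient underflow. Model/data are illustrative.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
amp_enabled = device == "cuda"  # autocast/GradScaler no-op on CPU

model = nn.Linear(4, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=amp_enabled)

x = torch.randn(8, 4, device=device)
y = torch.randn(8, 2, device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=amp_enabled):
    loss = nn.functional.mse_loss(model(x), y)  # mixed-precision forward
scaler.scale(loss).backward()  # scaled backward pass
scaler.step(optimizer)         # unscales grads, then optimizer.step()
scaler.update()                # adapt the scale factor for the next step
```

Compared with the earlier Apex dependency, this keeps mixed precision entirely inside stock PyTorch.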