
datasets.py (44KB) commit history

[WIP] Feature/ddp fixed (#401) (4 years ago)

Squashed commits (the PR squashed the feature/DDP_fixed branch twice, so entries repeated verbatim across the two squashes are listed once):

* 9414731 Merge branch 'master' of https://github.com/ultralytics/yolov4 into feature/DDP_fixed (yizhi.chen, Jul 16 2020)
* 37acbdc update test.py --save-txt (Glenn Jocher, Jul 15 2020)
* b8c2da4 update test.py --save-txt (Glenn Jocher, Jul 15 2020)
* 65157e2 Revert the README.md removal (yizhi.chen, Jul 15 2020)
* 1c802bf Merge branch 'feature/DDP_fixed' of https://github.com/MagicFrogSJTU/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 15 2020)
* cd55b44 Fix the DDP performance deterioration bug (yizhi.chen, Jul 15 2020)
* 0f3b8bb Delete README.md (Glenn Jocher, Jul 15 2020)
* f5921ba Merge branch 'feature/DDP_fixed' of https://github.com/MagicFrogSJTU/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 15 2020)
* bd3fdbb Update README.md (Glenn Jocher, Jul 14 2020)
* c1a97a7 Merge branch 'master' into feature/DDP_fixed (Glenn Jocher, Jul 14 2020)
* 2bf86b8 Fixed world_size not found when called from test (NanoCode012, Jul 14 2020)
* 85ab2f3 Merge branch 'feature/DDP_fixed' of https://github.com/MagicFrogSJTU/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 14 2020)
* 5a19011 Add assertion for <=2 GPUs DDP (yizhi.chen, Jul 14 2020)
* c8357ad Merge pull request #8 from MagicFrogSJTU/NanoCode012-patch-1: modify number of dataloaders' workers (yzchen, Jul 14 2020)
* 787582f Fixed issue with single GPU not having world_size (NanoCode012, Jul 14 2020)
* 6364892 Add assert message for clarification: clarify to users why the assertion was thrown (NanoCode012, Jul 14 2020)
* 69364d6 Changed number-of-workers check (NanoCode012, Jul 14 2020)
* d738487 Adding world_size: reduce calls to torch.distributed, for use in create_dataloader (NanoCode012, Jul 14 2020)
* e742dd9 Make SyncBN a choice (yizhi.chen, Jul 14 2020)
* e90d400 Merge pull request #6 from NanoCode012/patch-5: update train.py (yzchen, Jul 14 2020)
* cd90360 Update train.py: remove redundant `opt.` prefix (NanoCode012, Jul 14 2020)
* 5bf8beb Merge branch 'master' of https://github.com/ultralytics/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 14 2020)
* c9558a9 Add device allocation for loss compute (yizhi.chen, Jul 14 2020)
* 4f08c69 Revert drop_last (yizhi.chen, Jul 9 2020)
* 1dabe33 Merge branch 'feature/DDP_fixed' of https://github.com/MagicFrogSJTU/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 9 2020)
* a1ce9b1 Fix lr warning (yizhi.chen, Jul 9 2020)
* 4b8450b Merge pull request #4 from NanoCode012/patch-4: add drop_last for multi-GPU (yzchen, Jul 8 2020)
* 02c63ef Add drop_last for multi-GPU (NanoCode012, Jul 8 2020)
* b9a50ae Merge branch 'master' of https://github.com/ultralytics/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 7 2020)
* ec2dc6c Merge branch 'feature/DDP_fixed' of https://github.com/MagicFrogSJTU/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 7 2020)
* d0326e3 Add SyncBN (yizhi.chen, Jul 7 2020)
* 82a6182 Merge pull request #1 from NanoCode012/patch-2: convert BatchNorm to SyncBatchNorm (yzchen, Jul 7 2020)
* 050b2a5 Add cleanup for process_group (NanoCode012, Jul 7 2020)
* 2aa3301 Remove apex.parallel; use torch.nn.parallel for future compatibility (NanoCode012, Jul 7 2020)
* 77c8e27 Convert BatchNorm to SyncBatchNorm (NanoCode012, Jul 7 2020)
* 96fa40a Fix the dataset inconsistency problem (yizhi.chen, Jul 6 2020)
* 16e7c26 Add loss multiplication to preserve the single-process performance (yizhi.chen, Jul 6 2020)
* e838055 Merge branch 'master' of https://github.com/ultralytics/yolov5 into feature/DDP_fixed (yizhi.chen, Jul 3 2020)
* 625bb49 DDP established (yizhi.chen, Jul 2 2020)

Follow-up fixes in the PR:
* Fixed destroy_process_group in DP mode
* Update torch_utils.py
* Update utils.py: revert build_targets() to current master
* Update datasets.py
* Fixed world_size attribute not found

Co-authored-by: NanoCode012 <kevinvong@rocketmail.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

(The SyncBN/DDP setup pattern these commits circle around is sketched below.)
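Taken together, the SyncBN, world_size, and process-group commits above describe a standard PyTorch DDP setup. The following is a minimal sketch of that general pattern, assuming a torchrun-style launcher that sets LOCAL_RANK and WORLD_SIZE; it is illustrative only, and setup_ddp() is an invented helper name, not code from this PR.

import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_ddp(model, sync_bn=True):
    """Illustrative DDP setup; assumes a launcher sets LOCAL_RANK/WORLD_SIZE."""
    local_rank = int(os.environ.get("LOCAL_RANK", -1))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    if local_rank != -1:  # distributed run
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(local_rank)
        model = model.cuda(local_rank)
        if sync_bn:  # "Make SyncBN a choice"
            model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
        # "Remove apex.parallel. Use torch.nn.parallel for future compatibility"
        model = DDP(model, device_ids=[local_rank])
    return model, world_size

# Notes mirroring other commits in the list:
# - "Add drop_last for multi gpu": pair a DistributedSampler with
#   drop_last=True so every rank sees the same number of batches.
# - "Add loss multiplication ...": DDP averages gradients across ranks,
#   so scaling the loss by world_size preserves single-process behavior.
# - "Add cleanup for process_group": call dist.destroy_process_group()
#   when training ends.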
Add TensorFlow and TFLite export (#1127) (3 years ago)

Commit list (the message repeats its first half verbatim; duplicates listed once):

* Add models/tf.py for TensorFlow and TFLite export
* Set auto=False for int8 calibration
* Update requirements.txt for TensorFlow and TFLite export
* Read anchors directly from PyTorch weights
* Add --tf-nms to append NMS in TensorFlow SavedModel and GraphDef export
* Remove check_anchor_order, check_file, set_logging from imports
* Reformat code and optimize imports
* Autodownload model and check cfg
* Update --source path, img-size to 320, single output
* Adjust representative_dataset and put it in the tfl_int8 block
* detect.py TF inference
* weights to string
* Clean up tf.py
* Add --dynamic-batch-size
* Add xywh normalization to reduce calibration error
* Update requirements.txt: TensorFlow 2.3.1 -> 2.4.0 to avoid an int8 quantization error
* Fix imports: move C3 from models.experimental to models.common
* Implement C3() and SiLU()
* Fix reshape dim to support dynamic batching
* Add epsilon argument in tf_BN, which differs between TF and PT
* Set stride to None if not using PyTorch, and do not warm up without PyTorch
* Add list support in check_img_size()
* Add list input support in detect.py
* sys.path.append('./') to run from yolov5/
* Add int8 quantization support for TensorFlow 2.5
* Add get_coco128.sh
* Remove --no-tfl-detect in models/tf.py (use the tf-android-tfl-detect branch for EdgeTPU)
* Update requirements.txt
* Replace torch.load() with attempt_load()
* Add --tf-raw-resize to set half_pixel_centers=False
* Add --agnostic-nms for TF class-agnostic NMS
* Cleanup after merge (three passes)
* Add tf.py docstring with credit and usage
* pb saved_model and tflite use only one model in detect.py
* Add use cases in the docstring of tf.py
* Remove redundant `stride` definition
* Remove direct keras import
* Fix `check_requirements(('tensorflow>=2.4.1',))`

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

(The representative-dataset int8 calibration flow these bullets keep touching is sketched below.)
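Several bullets in this entry concern int8 calibration with a representative dataset. The block below is a hedged sketch of that standard tf.lite flow, assuming the model was first exported as a SavedModel; the "yolov5s_saved_model" path and the random calibration batches are placeholders, not the repository's actual pipeline.

import numpy as np
import tensorflow as tf


def representative_dataset():
    # Yield calibration batches; normalized 0-1 inputs (cf. the xywh
    # normalization bullet) help keep int8 calibration error down.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]  # placeholder data


converter = tf.lite.TFLiteConverter.from_saved_model("yolov5s_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("yolov5s-int8.tflite", "wb") as f:
    f.write(converter.convert())

Per the requirements bullet above, this full-integer path is exactly where TensorFlow 2.3.1 raised an int8 quantization error that the bump to 2.4.0 avoids.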
YOLOv5 Segmentation Dataloader Updates (#2188) (3 years ago)

Substantive commits (the original message also contains well over a hundred bare "update"/"updates"/"merge" bullets, collapsed here):

* Update C3 module (four passes)
* Update datasets
* Update attempt_download()
* Parameterize eps
* Comments
* gs-multiple
* max_nms implemented
* Create one_cycle() function
* GitHub API rate limit fix
* ComputeLoss / ComputeLoss() refactor (many passes)
* astuple
* epochs
* commit = tag == tags[-1]
* Update cudnn.benchmark
* mosaic9
* Institute cache versioning
* Only display on existing cache
* Reverse cache-exists booleans

(A sketch of a one_cycle()-style learning-rate ramp follows below.)
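The "Create one_cycle() function" bullet names a cosine learning-rate ramp. A common formulation consistent with that name is sketched below, wired into torch.optim.lr_scheduler.LambdaLR; treat it as an illustration under assumptions, not necessarily the repository's exact code.

import math

import torch


def one_cycle(y1=0.0, y2=1.0, steps=100):
    # Sinusoidal (half-cosine) ramp from y1 to y2 over `steps` epochs.
    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1


model = torch.nn.Linear(10, 2)                            # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
lf = one_cycle(1.0, 0.2, steps=300)                       # decay lr to 20% of initial
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)

for epoch in range(300):
    # ... train one epoch ...
    scheduler.step()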
3 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
4 yıl önce
4 yıl önce
4 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
4 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
4 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
4 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
4 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
YOLOv5 Segmentation Dataloader Updates (#2188) * Update C3 module * Update C3 module * Update C3 module * Update C3 module * update * update * update * update * update * update * update * update * update * updates * updates * updates * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * updates * updates * updates * updates * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update datasets * update * update * update * update attempt_downlaod() * merge * merge * update * update * update * update * update * update * update * update * update * update * parameterize eps * comments * gs-multiple * update * max_nms implemented * Create one_cycle() function * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * GitHub API rate limit fix * update * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * ComputeLoss * astuple * epochs * update * update * ComputeLoss() * update * update * update * update * update * update * update * update * update * update * update * merge * merge * merge * merge * update * update * update * update * commit=tag == tags[-1] * Update cudnn.benchmark * update * update * update * updates * updates * updates * updates * updates * updates * updates * update * update * update * update * update * mosaic9 * update * update * update * update * update * update * institute cache versioning * only display on existing cache * reverse cache exists booleans
3 yıl önce
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Dataloaders and dataset utils
"""

import glob
import hashlib
import json
import logging
import os
import random
import shutil
import time
from itertools import repeat
from multiprocessing.pool import ThreadPool, Pool
from pathlib import Path
from threading import Thread
from zipfile import ZipFile

import cv2
import numpy as np
import torch
import torch.nn.functional as F
import yaml
from PIL import Image, ExifTags
from torch.utils.data import Dataset
from tqdm import tqdm

from utils.augmentations import Albumentations, augment_hsv, copy_paste, letterbox, mixup, random_perspective
from utils.general import check_dataset, check_requirements, check_yaml, clean_str, segments2boxes, \
    xywh2xyxy, xywhn2xyxy, xyxy2xywhn, xyn2xy
from utils.torch_utils import torch_distributed_zero_first

# Parameters
HELP_URL = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
IMG_FORMATS = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo']  # acceptable image suffixes
VID_FORMATS = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv']  # acceptable video suffixes
NUM_THREADS = min(8, os.cpu_count())  # number of multiprocessing threads

# Get orientation exif tag
for orientation in ExifTags.TAGS.keys():
    if ExifTags.TAGS[orientation] == 'Orientation':
        break


def get_hash(paths):
    # Returns a single hash value of a list of paths (files or dirs)
    size = sum(os.path.getsize(p) for p in paths if os.path.exists(p))  # sizes
    h = hashlib.md5(str(size).encode())  # hash sizes
    h.update(''.join(paths).encode())  # hash paths
    return h.hexdigest()  # return hash


def exif_size(img):
    # Returns exif-corrected PIL size
    s = img.size  # (width, height)
    try:
        rotation = dict(img._getexif().items())[orientation]
        if rotation == 6:  # rotation 270
            s = (s[1], s[0])
        elif rotation == 8:  # rotation 90
            s = (s[1], s[0])
    except Exception:
        pass
    return s


def exif_transpose(image):
    """
    Transpose a PIL image accordingly if it has an EXIF Orientation tag.
    From https://github.com/python-pillow/Pillow/blob/master/src/PIL/ImageOps.py

    :param image: The image to transpose.
    :return: An image.
    """
    exif = image.getexif()
    orientation = exif.get(0x0112, 1)  # default 1
    if orientation > 1:
        method = {2: Image.FLIP_LEFT_RIGHT,
                  3: Image.ROTATE_180,
                  4: Image.FLIP_TOP_BOTTOM,
                  5: Image.TRANSPOSE,
                  6: Image.ROTATE_270,
                  7: Image.TRANSVERSE,
                  8: Image.ROTATE_90,
                  }.get(orientation)
        if method is not None:
            image = image.transpose(method)
            del exif[0x0112]
            image.info["exif"] = exif.tobytes()
    return image
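# Illustrative usage (added sketch, not part of the upstream file; 'photo.jpg' is a hypothetical path):
# bake the EXIF Orientation into the pixels before further processing, so phone photos render upright.
#   im = Image.open('photo.jpg')
#   im = exif_transpose(im)  # rotated/flipped as needed, Orientation tag removed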
def create_dataloader(path, imgsz, batch_size, stride, single_cls=False, hyp=None, augment=False, cache=False, pad=0.0,
                      rect=False, rank=-1, workers=8, image_weights=False, quad=False, prefix=''):
    # Make sure only the first process in DDP scans the dataset first, so the following processes can use the cache
    with torch_distributed_zero_first(rank):
        dataset = LoadImagesAndLabels(path, imgsz, batch_size,
                                      augment=augment,  # augment images
                                      hyp=hyp,  # augmentation hyperparameters
                                      rect=rect,  # rectangular training
                                      cache_images=cache,
                                      single_cls=single_cls,
                                      stride=int(stride),
                                      pad=pad,
                                      image_weights=image_weights,
                                      prefix=prefix)

    batch_size = min(batch_size, len(dataset))
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, workers])  # number of workers
    sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None
    loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader
    # Use torch.utils.data.DataLoader() if dataset properties will update during training, else InfiniteDataLoader()
    dataloader = loader(dataset,
                        batch_size=batch_size,
                        num_workers=nw,
                        sampler=sampler,
                        pin_memory=True,
                        collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn)
    return dataloader, dataset
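# Illustrative usage (added sketch; the dataset path is a placeholder and `hyp` is assumed to be a
# hyperparameter dict, e.g. loaded from data/hyps/hyp.scratch.yaml):
#   train_loader, dataset = create_dataloader('../datasets/coco128/images/train2017', imgsz=640,
#                                             batch_size=16, stride=32, hyp=hyp, augment=True, rank=-1)
#   for imgs, targets, paths, shapes in train_loader:
#       pass  # imgs: uint8 tensor (B, 3, H, W); targets: (N, 6) rows of (batch_idx, class, x, y, w, h)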
class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
    """ Dataloader that reuses workers

    Uses same syntax as vanilla DataLoader
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
        self.iterator = super().__iter__()

    def __len__(self):
        return len(self.batch_sampler.sampler)

    def __iter__(self):
        for i in range(len(self)):
            yield next(self.iterator)


class _RepeatSampler(object):
    """ Sampler that repeats forever

    Args:
        sampler (Sampler)
    """

    def __init__(self, sampler):
        self.sampler = sampler

    def __iter__(self):
        while True:
            yield from iter(self.sampler)
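# Design note (added): because _RepeatSampler never raises StopIteration, the single
# super().__iter__() call above keeps its worker processes alive for the lifetime of the loader;
# InfiniteDataLoader.__iter__() then draws len(self) batches per epoch from that one iterator,
# avoiding the cost of re-forking num_workers processes at every epoch boundary.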
class LoadImages:
    # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4`
    def __init__(self, path, img_size=640, stride=32, auto=True):
        p = str(Path(path).resolve())  # os-agnostic absolute path
        if '*' in p:
            files = sorted(glob.glob(p, recursive=True))  # glob
        elif os.path.isdir(p):
            files = sorted(glob.glob(os.path.join(p, '*.*')))  # dir
        elif os.path.isfile(p):
            files = [p]  # files
        else:
            raise Exception(f'ERROR: {p} does not exist')

        images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS]
        videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS]
        ni, nv = len(images), len(videos)

        self.img_size = img_size
        self.stride = stride
        self.files = images + videos
        self.nf = ni + nv  # number of files
        self.video_flag = [False] * ni + [True] * nv
        self.mode = 'image'
        self.auto = auto
        if any(videos):
            self.new_video(videos[0])  # new video
        else:
            self.cap = None
        assert self.nf > 0, f'No images or videos found in {p}. ' \
                            f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}'

    def __iter__(self):
        self.count = 0
        return self

    def __next__(self):
        if self.count == self.nf:
            raise StopIteration
        path = self.files[self.count]

        if self.video_flag[self.count]:
            # Read video
            self.mode = 'video'
            ret_val, img0 = self.cap.read()
            if not ret_val:
                self.count += 1
                self.cap.release()
                if self.count == self.nf:  # last video
                    raise StopIteration
                else:
                    path = self.files[self.count]
                    self.new_video(path)
                    ret_val, img0 = self.cap.read()

            self.frame += 1
            print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: ', end='')

        else:
            # Read image
            self.count += 1
            img0 = cv2.imread(path)  # BGR
            assert img0 is not None, 'Image Not Found ' + path
            print(f'image {self.count}/{self.nf} {path}: ', end='')

        # Padded resize
        img = letterbox(img0, self.img_size, stride=self.stride, auto=self.auto)[0]

        # Convert
        img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB
        img = np.ascontiguousarray(img)

        return path, img, img0, self.cap

    def new_video(self, path):
        self.frame = 0
        self.cap = cv2.VideoCapture(path)
        self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))

    def __len__(self):
        return self.nf  # number of files
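# Illustrative usage (added sketch; 'data/images' is a hypothetical source directory):
#   for path, img, img0, cap in LoadImages('data/images', img_size=640):
#       pass  # img: letterboxed CHW RGB array for the model; img0: original BGR image for plotting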
class LoadWebcam:  # for inference
    # YOLOv5 local webcam dataloader, i.e. `python detect.py --source 0`
    def __init__(self, pipe='0', img_size=640, stride=32):
        self.img_size = img_size
        self.stride = stride
        self.pipe = int(pipe) if pipe.isnumeric() else pipe  # int for local camera index, str for stream URL
        self.cap = cv2.VideoCapture(self.pipe)  # video capture object
        self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3)  # set buffer size

    def __iter__(self):
        self.count = -1
        return self

    def __next__(self):
        self.count += 1
        if cv2.waitKey(1) == ord('q'):  # q to quit
            self.cap.release()
            cv2.destroyAllWindows()
            raise StopIteration

        # Read frame
        ret_val, img0 = self.cap.read()
        img0 = cv2.flip(img0, 1)  # flip left-right

        # Print
        assert ret_val, f'Camera Error {self.pipe}'
        img_path = 'webcam.jpg'
        print(f'webcam {self.count}: ', end='')

        # Padded resize
        img = letterbox(img0, self.img_size, stride=self.stride)[0]

        # Convert
        img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB
        img = np.ascontiguousarray(img)

        return img_path, img, img0, None

    def __len__(self):
        return 0
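# Illustrative usage (added sketch): stream mirrored frames from local webcam 0 until 'q' is pressed.
#   for img_path, img, img0, _ in LoadWebcam(pipe='0', img_size=640):
#       pass  # one webcam frame per iteration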
class LoadStreams:
    # YOLOv5 streamloader, i.e. `python detect.py --source 'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP streams`
    def __init__(self, sources='streams.txt', img_size=640, stride=32, auto=True):
        self.mode = 'stream'
        self.img_size = img_size
        self.stride = stride

        if os.path.isfile(sources):
            with open(sources, 'r') as f:
                sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())]
        else:
            sources = [sources]

        n = len(sources)
        self.imgs, self.fps, self.frames, self.threads = [None] * n, [0] * n, [0] * n, [None] * n
        self.sources = [clean_str(x) for x in sources]  # clean source names for later
        self.auto = auto
        for i, s in enumerate(sources):  # index, source
            # Start thread to read frames from video stream
            print(f'{i + 1}/{n}: {s}... ', end='')
            if 'youtube.com/' in s or 'youtu.be/' in s:  # if source is YouTube video
                check_requirements(('pafy', 'youtube_dl'))
                import pafy
                s = pafy.new(s).getbest(preftype="mp4").url  # YouTube URL
            s = int(s) if s.isnumeric() else s  # i.e. s = '0' local webcam
            cap = cv2.VideoCapture(s)
            assert cap.isOpened(), f'Failed to open {s}'
            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            self.fps[i] = max(cap.get(cv2.CAP_PROP_FPS) % 100, 0) or 30.0  # 30 FPS fallback
            self.frames[i] = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 0) or float('inf')  # infinite stream fallback

            _, self.imgs[i] = cap.read()  # guarantee first frame
            self.threads[i] = Thread(target=self.update, args=([i, cap, s]), daemon=True)
            print(f" success ({self.frames[i]} frames {w}x{h} at {self.fps[i]:.2f} FPS)")
            self.threads[i].start()
        print('')  # newline

        # check for common shapes
        s = np.stack([letterbox(x, self.img_size, stride=self.stride, auto=self.auto)[0].shape for x in self.imgs])
        self.rect = np.unique(s, axis=0).shape[0] == 1  # rect inference if all shapes equal
        if not self.rect:
            print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')

    def update(self, i, cap, stream):
        # Read stream `i` frames in daemon thread
        n, f, read = 0, self.frames[i], 1  # frame number, frame array, inference every 'read' frame
        while cap.isOpened() and n < f:
            n += 1
            # _, self.imgs[index] = cap.read()
            cap.grab()
            if n % read == 0:
                success, im = cap.retrieve()
                if success:
                    self.imgs[i] = im
                else:
                    print('WARNING: Video stream unresponsive, please check your IP camera connection.')
                    self.imgs[i] *= 0
                    cap.open(stream)  # re-open stream if signal was lost
            time.sleep(1 / self.fps[i])  # wait time

    def __iter__(self):
        self.count = -1
        return self

    def __next__(self):
        self.count += 1
        if not all(x.is_alive() for x in self.threads) or cv2.waitKey(1) == ord('q'):  # q to quit
            cv2.destroyAllWindows()
            raise StopIteration

        # Letterbox
        img0 = self.imgs.copy()
        img = [letterbox(x, self.img_size, stride=self.stride, auto=self.rect and self.auto)[0] for x in img0]

        # Stack
        img = np.stack(img, 0)

        # Convert
        img = img[..., ::-1].transpose((0, 3, 1, 2))  # BGR to RGB, BHWC to BCHW
        img = np.ascontiguousarray(img)

        return self.sources, img, img0, None

    def __len__(self):
        return len(self.sources)  # 1E12 frames = 32 streams at 30 FPS for 30 years
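# Illustrative usage (added sketch; the RTSP URL is a placeholder): batched multi-stream inference.
#   for sources, img, img0, _ in LoadStreams('rtsp://example.com/media.mp4', img_size=640):
#       pass  # img: (n_streams, 3, H, W) RGB batch; img0: list of original BGR frames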
def img2label_paths(img_paths):
    # Define label paths as a function of image paths
    sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep  # /images/, /labels/ substrings
    return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths]
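# e.g. (added illustration, POSIX paths assumed):
#   img2label_paths(['../datasets/coco128/images/train2017/000000000009.jpg'])
#   # -> ['../datasets/coco128/labels/train2017/000000000009.txt']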
class LoadImagesAndLabels(Dataset):
    # YOLOv5 train_loader/val_loader, loads images and labels for training and validation
    cache_version = 0.6  # dataset labels *.cache version

    def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,
                 cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''):
        self.img_size = img_size
        self.augment = augment
        self.hyp = hyp
        self.image_weights = image_weights
        self.rect = False if image_weights else rect
        self.mosaic = self.augment and not self.rect  # load 4 images at a time into a mosaic (only during training)
        self.mosaic_border = [-img_size // 2, -img_size // 2]
        self.stride = stride
        self.path = path
        self.albumentations = Albumentations() if augment else None

        try:
            f = []  # image files
            for p in path if isinstance(path, list) else [path]:
                p = Path(p)  # os-agnostic
                if p.is_dir():  # dir
                    f += glob.glob(str(p / '**' / '*.*'), recursive=True)
                    # f = list(p.rglob('**/*.*'))  # pathlib
                elif p.is_file():  # file
                    with open(p, 'r') as t:
                        t = t.read().strip().splitlines()
                        parent = str(p.parent) + os.sep
                        f += [x.replace('./', parent) if x.startswith('./') else x for x in t]  # local to global path
                        # f += [p.parent / x.lstrip(os.sep) for x in t]  # local to global path (pathlib)
                else:
                    raise Exception(f'{prefix}{p} does not exist')
            self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in IMG_FORMATS])
            # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats])  # pathlib
            assert self.img_files, f'{prefix}No images found'
        except Exception as e:
            raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {HELP_URL}')

        # Check cache
        self.label_files = img2label_paths(self.img_files)  # labels
        cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache')
        try:
            cache, exists = np.load(cache_path, allow_pickle=True).item(), True  # load dict
            assert cache['version'] == self.cache_version  # same version
            assert cache['hash'] == get_hash(self.label_files + self.img_files)  # same hash
        except Exception:
            cache, exists = self.cache_labels(cache_path, prefix), False  # cache

        # Display cache
        nf, nm, ne, nc, n = cache.pop('results')  # found, missing, empty, corrupted, total
        if exists:
            d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
            tqdm(None, desc=prefix + d, total=n, initial=n)  # display cache results
            if cache['msgs']:
                logging.info('\n'.join(cache['msgs']))  # display warnings
        assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Cannot train without labels. See {HELP_URL}'

        # Read cache
        [cache.pop(k) for k in ('hash', 'version', 'msgs')]  # remove items
        labels, shapes, self.segments = zip(*cache.values())
        self.labels = list(labels)
        self.shapes = np.array(shapes, dtype=np.float64)
        self.img_files = list(cache.keys())  # update
        self.label_files = img2label_paths(cache.keys())  # update
        n = len(shapes)  # number of images
        bi = np.floor(np.arange(n) / batch_size).astype(int)  # batch index
        nb = bi[-1] + 1  # number of batches
        self.batch = bi  # batch index of image
        self.n = n
        self.indices = range(n)

        # Update labels
        include_class = []  # filter labels to include only these classes (optional)
        include_class_array = np.array(include_class).reshape(1, -1)
        for i, (label, segment) in enumerate(zip(self.labels, self.segments)):
            if include_class:
                j = (label[:, 0:1] == include_class_array).any(1)
                self.labels[i] = label[j]
                if segment:
                    self.segments[i] = segment[j]
            if single_cls:  # single-class training, merge all classes into 0
                self.labels[i][:, 0] = 0
                if segment:
                    self.segments[i][:, 0] = 0

        # Rectangular Training
        if self.rect:
            # Sort by aspect ratio
            s = self.shapes  # wh
            ar = s[:, 1] / s[:, 0]  # aspect ratio
            irect = ar.argsort()
            self.img_files = [self.img_files[i] for i in irect]
            self.label_files = [self.label_files[i] for i in irect]
            self.labels = [self.labels[i] for i in irect]
            self.shapes = s[irect]  # wh
            ar = ar[irect]

            # Set training image shapes
            shapes = [[1, 1]] * nb
            for i in range(nb):
                ari = ar[bi == i]
                mini, maxi = ari.min(), ari.max()
                if maxi < 1:
                    shapes[i] = [maxi, 1]
                elif mini > 1:
                    shapes[i] = [1, 1 / mini]

            self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(int) * stride

        # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
        self.imgs, self.img_npy = [None] * n, [None] * n
        if cache_images:
            if cache_images == 'disk':
                self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy')
                self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files]
                self.im_cache_dir.mkdir(parents=True, exist_ok=True)
            gb = 0  # Gigabytes of cached images
            self.img_hw0, self.img_hw = [None] * n, [None] * n
            results = ThreadPool(NUM_THREADS).imap(lambda x: load_image(*x), zip(repeat(self), range(n)))
            pbar = tqdm(enumerate(results), total=n)
            for i, x in pbar:
                if cache_images == 'disk':
                    if not self.img_npy[i].exists():
                        np.save(self.img_npy[i].as_posix(), x[0])
                    gb += self.img_npy[i].stat().st_size
                else:
                    self.imgs[i], self.img_hw0[i], self.img_hw[i] = x  # im, hw_orig, hw_resized = load_image(self, i)
                    gb += self.imgs[i].nbytes
                pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB {cache_images})'
            pbar.close()
    def cache_labels(self, path=Path('./labels.cache'), prefix=''):
        # Cache dataset labels, check images and read shapes
        x = {}  # dict
        nm, nf, ne, nc, msgs = 0, 0, 0, 0, []  # number missing, found, empty, corrupt, messages
        desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels..."
        with Pool(NUM_THREADS) as pool:
            pbar = tqdm(pool.imap(verify_image_label, zip(self.img_files, self.label_files, repeat(prefix))),
                        desc=desc, total=len(self.img_files))
            for im_file, l, shape, segments, nm_f, nf_f, ne_f, nc_f, msg in pbar:
                nm += nm_f
                nf += nf_f
                ne += ne_f
                nc += nc_f
                if im_file:
                    x[im_file] = [l, shape, segments]
                if msg:
                    msgs.append(msg)
                pbar.desc = f"{desc}{nf} found, {nm} missing, {ne} empty, {nc} corrupted"

        pbar.close()
        if msgs:
            logging.info('\n'.join(msgs))
        if nf == 0:
            logging.info(f'{prefix}WARNING: No labels found in {path}. See {HELP_URL}')
        x['hash'] = get_hash(self.label_files + self.img_files)
        x['results'] = nf, nm, ne, nc, len(self.img_files)
        x['msgs'] = msgs  # warnings
        x['version'] = self.cache_version  # cache version
        try:
            np.save(path, x)  # save cache for next time
            path.with_suffix('.cache.npy').rename(path)  # remove .npy suffix
            logging.info(f'{prefix}New cache created: {path}')
        except Exception as e:
            logging.info(f'{prefix}WARNING: Cache directory {path.parent} is not writeable: {e}')  # path not writeable
        return x

    def __len__(self):
        return len(self.img_files)

    # def __iter__(self):
    #     self.count = -1
    #     print('ran dataset iter')
    #     # self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
    #     return self
    def __getitem__(self, index):
        index = self.indices[index]  # linear, shuffled, or image_weights

        hyp = self.hyp
        mosaic = self.mosaic and random.random() < hyp['mosaic']
        if mosaic:
            # Load mosaic
            img, labels = load_mosaic(self, index)
            shapes = None

            # MixUp augmentation
            if random.random() < hyp['mixup']:
                img, labels = mixup(img, labels, *load_mosaic(self, random.randint(0, self.n - 1)))

        else:
            # Load image
            img, (h0, w0), (h, w) = load_image(self, index)

            # Letterbox
            shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size  # final letterboxed shape
            img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
            shapes = (h0, w0), ((h / h0, w / w0), pad)  # for COCO mAP rescaling

            labels = self.labels[index].copy()
            if labels.size:  # normalized xywh to pixel xyxy format
                labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1])

            if self.augment:
                img, labels = random_perspective(img, labels,
                                                 degrees=hyp['degrees'],
                                                 translate=hyp['translate'],
                                                 scale=hyp['scale'],
                                                 shear=hyp['shear'],
                                                 perspective=hyp['perspective'])

        nl = len(labels)  # number of labels
        if nl:
            labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, eps=1E-3)

        if self.augment:
            # Albumentations
            img, labels = self.albumentations(img, labels)
            nl = len(labels)  # update after albumentations

            # HSV color-space
            augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])

            # Flip up-down
            if random.random() < hyp['flipud']:
                img = np.flipud(img)
                if nl:
                    labels[:, 2] = 1 - labels[:, 2]

            # Flip left-right
            if random.random() < hyp['fliplr']:
                img = np.fliplr(img)
                if nl:
                    labels[:, 1] = 1 - labels[:, 1]

            # Cutouts
            # labels = cutout(img, labels, p=0.5)

        labels_out = torch.zeros((nl, 6))
        if nl:
            labels_out[:, 1:] = torch.from_numpy(labels)

        # Convert
        img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB
        img = np.ascontiguousarray(img)

        return torch.from_numpy(img), labels_out, self.img_files[index], shapes

    @staticmethod
    def collate_fn(batch):
        img, label, path, shapes = zip(*batch)  # transposed
        for i, l in enumerate(label):
            l[:, 0] = i  # add target image index for build_targets()
        return torch.stack(img, 0), torch.cat(label, 0), path, shapes

    @staticmethod
    def collate_fn4(batch):
        img, label, path, shapes = zip(*batch)  # transposed
        n = len(shapes) // 4
        img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n]

        ho = torch.tensor([[0., 0, 0, 1, 0, 0]])
        wo = torch.tensor([[0., 0, 1, 0, 0, 0]])
        s = torch.tensor([[1, 1, .5, .5, .5, .5]])  # scale
        for i in range(n):  # zidane torch.zeros(16,3,720,1280)  # BCHW
            i *= 4
            if random.random() < 0.5:
                im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear', align_corners=False)[
                    0].type(img[i].type())
                l = label[i]
            else:
                im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2)
                l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s
            img4.append(im)
            label4.append(l)

        for i, l in enumerate(label4):
            l[:, 0] = i  # add target image index for build_targets()

        return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4
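# Illustrative direct usage (added sketch; most callers should go through create_dataloader(),
# and the dataset path is a placeholder):
#   dataset = LoadImagesAndLabels('../datasets/coco128/images/train2017', img_size=640, augment=False)
#   loader = torch.utils.data.DataLoader(dataset, batch_size=16, collate_fn=LoadImagesAndLabels.collate_fn)
#   imgs, targets, paths, shapes = next(iter(loader))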
# Ancillary functions --------------------------------------------------------------------------------------------------
def load_image(self, i):
    # loads 1 image from dataset index 'i', returns im, original hw, resized hw
    im = self.imgs[i]
    if im is None:  # not cached in RAM
        npy = self.img_npy[i]
        if npy and npy.exists():  # load npy
            im = np.load(npy)
        else:  # read image
            path = self.img_files[i]
            im = cv2.imread(path)  # BGR
            assert im is not None, 'Image Not Found ' + path
        h0, w0 = im.shape[:2]  # orig hw
        r = self.img_size / max(h0, w0)  # ratio
        if r != 1:  # if sizes are not equal
            im = cv2.resize(im, (int(w0 * r), int(h0 * r)),
                            interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR)
        return im, (h0, w0), im.shape[:2]  # im, hw_original, hw_resized
    else:
        return self.imgs[i], self.img_hw0[i], self.img_hw[i]  # im, hw_original, hw_resized
def load_mosaic(self, index):
    # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic
    labels4, segments4 = [], []
    s = self.img_size
    yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border]  # mosaic center x, y
    indices = [index] + random.choices(self.indices, k=3)  # 3 additional image indices
    random.shuffle(indices)
    for i, index in enumerate(indices):
        # Load image
        img, _, (h, w) = load_image(self, index)

        # place img in img4
        if i == 0:  # top left
            img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8)  # base image with 4 tiles
            x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc  # xmin, ymin, xmax, ymax (large image)
            x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h  # xmin, ymin, xmax, ymax (small image)
        elif i == 1:  # top right
            x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
            x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
        elif i == 2:  # bottom left
            x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
            x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
        elif i == 3:  # bottom right
            x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
            x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)

        img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b]  # img4[ymin:ymax, xmin:xmax]
        padw = x1a - x1b
        padh = y1a - y1b

        # Labels
        labels, segments = self.labels[index].copy(), self.segments[index].copy()
        if labels.size:
            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh)  # normalized xywh to pixel xyxy format
            segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
        labels4.append(labels)
        segments4.extend(segments)

    # Concat/clip labels
    labels4 = np.concatenate(labels4, 0)
    for x in (labels4[:, 1:], *segments4):
        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()
    # img4, labels4 = replicate(img4, labels4)  # replicate

    # Augment
    img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp['copy_paste'])
    img4, labels4 = random_perspective(img4, labels4, segments4,
                                       degrees=self.hyp['degrees'],
                                       translate=self.hyp['translate'],
                                       scale=self.hyp['scale'],
                                       shear=self.hyp['shear'],
                                       perspective=self.hyp['perspective'],
                                       border=self.mosaic_border)  # border to remove

    return img4, labels4
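# Note (added): img4 starts as a (2s, 2s, 3) canvas holding four tiles; random_perspective() with
# border=self.mosaic_border then warps and crops it back down to the final (img_size, img_size)
# training image, so the returned mosaic matches the regular training resolution.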
def load_mosaic9(self, index):
    # YOLOv5 9-mosaic loader. Loads 1 image + 8 random images into a 9-image mosaic
    labels9, segments9 = [], []
    s = self.img_size
    indices = [index] + random.choices(self.indices, k=8)  # 8 additional image indices
    random.shuffle(indices)
    for i, index in enumerate(indices):
        # Load image
        img, _, (h, w) = load_image(self, index)

        # place img in img9
        if i == 0:  # center
            img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8)  # base image with 9 tiles
            h0, w0 = h, w
            c = s, s, s + w, s + h  # xmin, ymin, xmax, ymax (base) coordinates
        elif i == 1:  # top
            c = s, s - h, s + w, s
        elif i == 2:  # top right
            c = s + wp, s - h, s + wp + w, s
        elif i == 3:  # right
            c = s + w0, s, s + w0 + w, s + h
        elif i == 4:  # bottom right
            c = s + w0, s + hp, s + w0 + w, s + hp + h
        elif i == 5:  # bottom
            c = s + w0 - w, s + h0, s + w0, s + h0 + h
        elif i == 6:  # bottom left
            c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h
        elif i == 7:  # left
            c = s - w, s + h0 - h, s, s + h0
        elif i == 8:  # top left
            c = s - w, s + h0 - hp - h, s, s + h0 - hp

        padx, pady = c[:2]
        x1, y1, x2, y2 = [max(x, 0) for x in c]  # allocate coords

        # Labels
        labels, segments = self.labels[index].copy(), self.segments[index].copy()
        if labels.size:
            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady)  # normalized xywh to pixel xyxy format
            segments = [xyn2xy(x, w, h, padx, pady) for x in segments]
        labels9.append(labels)
        segments9.extend(segments)

        # Image
        img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:]  # img9[ymin:ymax, xmin:xmax]
        hp, wp = h, w  # height, width previous

    # Offset
    yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border]  # mosaic center x, y
    img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s]

    # Concat/clip labels
    labels9 = np.concatenate(labels9, 0)
    labels9[:, [1, 3]] -= xc
    labels9[:, [2, 4]] -= yc
    c = np.array([xc, yc])  # centers
    segments9 = [x - c for x in segments9]

    for x in (labels9[:, 1:], *segments9):
        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()
    # img9, labels9 = replicate(img9, labels9)  # replicate

    # Augment
    img9, labels9 = random_perspective(img9, labels9, segments9,
                                       degrees=self.hyp['degrees'],
                                       translate=self.hyp['translate'],
                                       scale=self.hyp['scale'],
                                       shear=self.hyp['shear'],
                                       perspective=self.hyp['perspective'],
                                       border=self.mosaic_border)  # border to remove

    return img9, labels9
def create_folder(path='./new'):
    # Create folder
    if os.path.exists(path):
        shutil.rmtree(path)  # delete output folder
    os.makedirs(path)  # make new output folder


def flatten_recursive(path='../datasets/coco128'):
    # Flatten a recursive directory by bringing all files to top level
    new_path = Path(path + '_flat')
    create_folder(new_path)
    for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)):
        shutil.copyfile(file, new_path / Path(file).name)
def extract_boxes(path='../datasets/coco128'):  # from utils.datasets import *; extract_boxes()
    # Convert detection dataset into classification dataset, with one directory per class
    path = Path(path)  # images dir
    shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None  # remove existing
    files = list(path.rglob('*.*'))
    n = len(files)  # number of files
    for im_file in tqdm(files, total=n):
        if im_file.suffix[1:] in IMG_FORMATS:
            # image
            im = cv2.imread(str(im_file))[..., ::-1]  # BGR to RGB
            h, w = im.shape[:2]

            # labels
            lb_file = Path(img2label_paths([str(im_file)])[0])
            if Path(lb_file).exists():
                with open(lb_file, 'r') as f:
                    lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32)  # labels

                for j, x in enumerate(lb):
                    c = int(x[0])  # class
                    f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg'  # new filename
                    if not f.parent.is_dir():
                        f.parent.mkdir(parents=True)

                    b = x[1:] * [w, h, w, h]  # box
                    # b[2:] = b[2:].max()  # rectangle to square
                    b[2:] = b[2:] * 1.2 + 3  # pad
                    b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(int)

                    b[[0, 2]] = np.clip(b[[0, 2]], 0, w)  # clip boxes outside of image
                    b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
                    assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}'
def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False):
    """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files
    Usage: from utils.datasets import *; autosplit()
    Arguments
        path:            Path to images directory
        weights:         Train, val, test weights (list, tuple)
        annotated_only:  Only use images with an annotated txt file
    """
    path = Path(path)  # images dir
    files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in IMG_FORMATS], [])  # image files only
    n = len(files)  # number of files
    random.seed(0)  # for reproducibility
    indices = random.choices([0, 1, 2], weights=weights, k=n)  # assign each image to a split

    txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt']  # 3 txt files
    [(path.parent / x).unlink(missing_ok=True) for x in txt]  # remove existing

    print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only)
    for i, img in tqdm(zip(indices, files), total=n):
        if not annotated_only or Path(img2label_paths([str(img)])[0]).exists():  # check label
            with open(path.parent / txt[i], 'a') as f:
                f.write('./' + img.relative_to(path.parent).as_posix() + '\n')  # add image to txt file
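# Example result (added, illustrative): with the default weights (0.9, 0.1, 0.0), a 128-image folder
# yields roughly 115 lines in autosplit_train.txt and 13 in autosplit_val.txt, each a './'-relative
# image path; no autosplit_test.txt is created since nothing is assigned to the test split.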
def verify_image_label(args):
    # Verify one image-label pair
    im_file, lb_file, prefix = args
    nm, nf, ne, nc, msg, segments = 0, 0, 0, 0, '', []  # number (missing, found, empty, corrupt), message, segments
    try:
        # verify images
        im = Image.open(im_file)
        im.verify()  # PIL verify
        shape = exif_size(im)  # image size
        assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels'
        assert im.format.lower() in IMG_FORMATS, f'invalid image format {im.format}'
        if im.format.lower() in ('jpg', 'jpeg'):
            with open(im_file, 'rb') as f:
                f.seek(-2, 2)
                if f.read() != b'\xff\xd9':  # corrupt JPEG
                    Image.open(im_file).save(im_file, format='JPEG', subsampling=0, quality=100)  # re-save image
                    msg = f'{prefix}WARNING: {im_file}: corrupt JPEG restored and saved'

        # verify labels
        if os.path.isfile(lb_file):
            nf = 1  # label found
            with open(lb_file, 'r') as f:
                l = [x.split() for x in f.read().strip().splitlines() if len(x)]
                if any([len(x) > 8 for x in l]):  # is segment
                    classes = np.array([x[0] for x in l], dtype=np.float32)
                    segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l]  # (cls, xy1...)
                    l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1)  # (cls, xywh)
                l = np.array(l, dtype=np.float32)
            nl = len(l)
            if nl:
                assert l.shape[1] == 5, f'labels require 5 columns, {l.shape[1]} columns detected'
                assert (l >= 0).all(), f'negative label values {l[l < 0]}'
                assert (l[:, 1:] <= 1).all(), f'non-normalized or out of bounds coordinates {l[:, 1:][l[:, 1:] > 1]}'
                l = np.unique(l, axis=0)  # remove duplicate rows
                if len(l) < nl:
                    segments = np.unique(segments, axis=0)
                    msg = f'{prefix}WARNING: {im_file}: {nl - len(l)} duplicate labels removed'
            else:
                ne = 1  # label empty
                l = np.zeros((0, 5), dtype=np.float32)
        else:
            nm = 1  # label missing
            l = np.zeros((0, 5), dtype=np.float32)
        return im_file, l, shape, segments, nm, nf, ne, nc, msg
    except Exception as e:
        nc = 1
        msg = f'{prefix}WARNING: {im_file}: ignoring corrupt image/label: {e}'
        return [None, None, None, None, nm, nf, ne, nc, msg]
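# Illustrative usage (added sketch; 'img.jpg' and 'img.txt' are placeholder paths):
#   im_file, l, shape, segments, nm, nf, ne, nc, msg = verify_image_label(('img.jpg', 'img.txt', ''))
#   # l is the parsed (cls, xywh) float32 array, or np.zeros((0, 5)) when the label file is empty/missing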
def dataset_stats(path='coco128.yaml', autodownload=False, verbose=False, profile=False, hub=False):
    """ Return dataset statistics dictionary with images and instances counts per split per class
    To run in parent directory: export PYTHONPATH="$PWD/yolov5"
    Usage1: from utils.datasets import *; dataset_stats('coco128.yaml', autodownload=True)
    Usage2: from utils.datasets import *; dataset_stats('../datasets/coco128_with_yaml.zip')
    Arguments
        path:           Path to data.yaml or data.zip (with data.yaml inside data.zip)
        autodownload:   Attempt to download dataset if not found locally
        verbose:        Print stats dictionary
        profile:        Profile stats.npy and stats.json save/load times
        hub:            Save stats and reduced-size images for Ultralytics HUB
    """

    def round_labels(labels):
        # Update labels to integer class and 4 decimal place floats
        return [[int(c), *[round(x, 4) for x in points]] for c, *points in labels]

    def unzip(path):
        # Unzip data.zip TODO: CONSTRAINT: path/to/abc.zip MUST unzip to 'path/to/abc/'
        if str(path).endswith('.zip'):  # path is data.zip
            assert Path(path).is_file(), f'Error unzipping {path}, file not found'
            ZipFile(path).extractall(path=path.parent)  # unzip
            dir = path.with_suffix('')  # dataset directory == zip name
            return True, str(dir), next(dir.rglob('*.yaml'))  # zipped, data_dir, yaml_path
        else:  # path is data.yaml
            return False, None, path

    def hub_ops(f, max_dim=1920):
        # HUB ops for 1 image 'f': resize and save at reduced quality in /dataset-hub for web/app viewing
        f_new = im_dir / Path(f).name  # dataset-hub image filename
        try:  # use PIL
            im = Image.open(f)
            r = max_dim / max(im.height, im.width)  # ratio
            if r < 1.0:  # image too large
                im = im.resize((int(im.width * r), int(im.height * r)))
            im.save(f_new, quality=75)  # save
        except Exception as e:  # use OpenCV
            print(f'WARNING: HUB ops PIL failure {f}: {e}')
            im = cv2.imread(f)
            im_height, im_width = im.shape[:2]
            r = max_dim / max(im_height, im_width)  # ratio
            if r < 1.0:  # image too large
                im = cv2.resize(im, (int(im_width * r), int(im_height * r)), interpolation=cv2.INTER_LINEAR)
            cv2.imwrite(str(f_new), im)

    zipped, data_dir, yaml_path = unzip(Path(path))
    with open(check_yaml(yaml_path), errors='ignore') as f:
        data = yaml.safe_load(f)  # data dict
        if zipped:
            data['path'] = data_dir  # TODO: should this be dir.resolve()?
    check_dataset(data, autodownload)  # download dataset if missing
    hub_dir = Path(data['path'] + ('-hub' if hub else ''))
    stats = {'nc': data['nc'], 'names': data['names']}  # statistics dictionary
    for split in 'train', 'val', 'test':
        if data.get(split) is None:
            stats[split] = None  # i.e. no test set
            continue
        x = []
        dataset = LoadImagesAndLabels(data[split])  # load dataset
        for label in tqdm(dataset.labels, total=dataset.n, desc='Statistics'):
            x.append(np.bincount(label[:, 0].astype(int), minlength=data['nc']))
        x = np.array(x)  # shape(128x80)
        stats[split] = {'instance_stats': {'total': int(x.sum()), 'per_class': x.sum(0).tolist()},
                        'image_stats': {'total': dataset.n, 'unlabelled': int(np.all(x == 0, 1).sum()),
                                        'per_class': (x > 0).sum(0).tolist()},
                        'labels': [{str(Path(k).name): round_labels(v.tolist())} for k, v in
                                   zip(dataset.img_files, dataset.labels)]}

        if hub:
            im_dir = hub_dir / 'images'
            im_dir.mkdir(parents=True, exist_ok=True)
            for _ in tqdm(ThreadPool(NUM_THREADS).imap(hub_ops, dataset.img_files), total=dataset.n, desc='HUB Ops'):
                pass

    # Profile
    stats_path = hub_dir / 'stats.json'
    if profile:
        for _ in range(1):
            file = stats_path.with_suffix('.npy')
            t1 = time.time()
            np.save(file, stats)
            t2 = time.time()
            x = np.load(file, allow_pickle=True)
            print(f'stats.npy times: {time.time() - t2:.3f}s read, {t2 - t1:.3f}s write')

            file = stats_path.with_suffix('.json')
            t1 = time.time()
            with open(file, 'w') as f:
                json.dump(stats, f)  # save stats *.json
            t2 = time.time()
            with open(file, 'r') as f:
                x = json.load(f)  # load hyps dict
            print(f'stats.json times: {time.time() - t2:.3f}s read, {t2 - t1:.3f}s write')

    # Save, print and return
    if hub:
        print(f'Saving {stats_path.resolve()}...')
        with open(stats_path, 'w') as f:
            json.dump(stats, f)  # save stats.json

    if verbose:
        print(json.dumps(stats, indent=2, sort_keys=False))
    return stats
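# Illustrative usage (added sketch; requires coco128.yaml to resolve locally or autodownload=True):
#   stats = dataset_stats('coco128.yaml', autodownload=True, verbose=True)
#   stats['train']['image_stats']['total']  # -> 128 for coco128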