
tutorial.ipynb

Merge `develop` branch into `master` (#3518)

* update ci-testing.yml (#3322)
  * update ci-testing.yml
  * update greetings.yml
  * bring back os matrix
* Enable direct `--weights URL` definition (#3373)
  @KalenMike this PR enables direct `--weights URL` definition. Example use case:
  ```
  python train.py --weights https://storage.googleapis.com/bucket/dir/model.pt
  ```
  * cleanup
  * bug fixes
  * weights = attempt_download(weights)
  * Update experimental.py
  * Update hubconf.py
  * return bug fix
  * comment mirror
  * min_bytes
* Update tutorial.ipynb (#3368)
  * add Open in Kaggle badge
* `cv2.imread(img, -1)` for IMREAD_UNCHANGED (#3379)
  * Update datasets.py
  * comment
* COCO evolution fix (#3388)
  * COCO evolution fix
  * cleanup
  * update print
  * print fix
* Create `is_pip()` function (#3391)
  Returns `True` if the file is part of a pip package. Useful for contextual behavior modification.
  ```python
  def is_pip():  # Is file in a pip package?
      return 'site-packages' in Path(__file__).absolute().parts
  ```
* Revert "`cv2.imread(img, -1)` for IMREAD_UNCHANGED (#3379)" (#3395)
  This reverts commit 21a9607e00f1365b21d8c4bd81bdbf5fc0efea24.
* Update FLOPs description (#3422)
  * Update README.md
  * Changing FLOPS to FLOPs.
* Parse URL authentication (#3424)
  * Parse URL authentication
  * urllib.parse.unquote()
  * improved error handling
  * remove %3F
  * update check_file()
* Add FLOPs title to table (#3453)
* Suppress jit trace warning + graph once (#3454)
  Suppress harmless jit trace warning on the TensorBoard add_graph call. Also fix a multiple add_graph() calls bug; the graph is now logged only on batch 0.
  * Update train.py
* Update MixUp augmentation `alpha=beta=32.0` (#3455)
  Per VOC empirical results https://github.com/ultralytics/yolov5/issues/3380#issuecomment-853001307 by @developer0hye
* Add `timeout()` class (#3460)
  * Add `timeout()` class
  * rearrange order
* Faster HSV augmentation (#3462)
  Remove a datatype conversion step that can be skipped.
* Add `check_git_status()` 5 second timeout (#3464)
  This should prevent the SSH Git bug that we were discussing @KalenMike
  * cleanup
  * replace timeout with check_output built-in timeout
* Improved `check_requirements()` offline-handling (#3466)
  Improve robustness of the `check_requirements()` function in offline environments (do not attempt pip installs when offline).
* Add `output_names` argument for ONNX export with dynamic axes (#3456)
  Add output_names and dynamic_axes names for all outputs in torch.onnx.export. The first four outputs of the model will have names output0, output1, output2, output3.
  * use first output only + cleanup
* Revert FP16 `test.py` and `detect.py` inference to FP32 default (#3423)
  * fixed inference bug while using half precision
  * replace --use-half with --half
  * replace space and PEP8 in detect.py
  * PEP8 detect.py
  * update --half help comment
  * Update test.py
  * revert space
* Add additional links/resources to stale.yml message (#3467)
  * Update stale.yml
  * cleanup
  * reformat
* Update stale.yml HUB URL (#3468)
* Stale `github.actor` bug fix (#3483)
* Explicit `model.eval()` call `if opt.train=False` (#3475)
  * call model.eval() when opt.train is False
  * single-line if statement
  * cleanup
* check_requirements() exclude `opencv-python` (#3495)
  Fix for 3rd-party or contrib versions of installed OpenCV, as in https://github.com/ultralytics/yolov5/issues/3494.
* Earlier `assert` for cpu and half option (#3508)
  * early assert for cpu and half option
  * Modified comment
* Update tutorial.ipynb (#3510)
* Reduce test.py results spacing (#3511)
* Update README.md (#3512)
  * Minor modifications
  * 850 width
* Update greetings.yml
  Revert greeting change as PRs will now merge to master.

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Piotr Skalski <SkalskiP@users.noreply.github.com>
Co-authored-by: SkalskiP <piotr.skalski92@gmail.com>
Co-authored-by: Peretz Cohen <pizzaz93@users.noreply.github.com>
Co-authored-by: tudoulei <34886368+tudoulei@users.noreply.github.com>
Co-authored-by: chocosaj <chocosaj@users.noreply.github.com>
Co-authored-by: BuildTools <unconfigured@null.spigotmc.org>
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Sam_S <SamSamhuns@users.noreply.github.com>
Co-authored-by: Samridha Shrestha <samridha.shrestha@g42.ai>
Co-authored-by: edificewang <609552430@qq.com>
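As context for the #3456 ONNX change, a minimal sketch of exporting with `output_names` and `dynamic_axes` (the toy model, file name, and axis labels below are illustrative assumptions, not the repo's exact export code):

```python
import torch
import torch.nn as nn

# Stand-in model; only the export call pattern matters for this sketch
model = nn.Sequential(nn.Conv2d(3, 16, 3, 2), nn.ReLU()).eval()
dummy = torch.zeros(1, 3, 640, 640)  # NCHW dummy input

torch.onnx.export(
    model, dummy, 'model.onnx',
    input_names=['images'],
    output_names=['output0'],  # per #3456, model outputs get names output0, output1, ...
    dynamic_axes={'images': {0: 'batch'},    # mark the batch dimension as dynamic
                  'output0': {0: 'batch'}},
    opset_version=12,
)
```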
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "YOLOv5 Tutorial",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU",
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"484511f272e64eab8b42e68dac5f7a66": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_78cceec059784f2bb36988d3336e4d56",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_ab93d8b65c134605934ff9ec5efb1bb6",
"IPY_MODEL_30df865ded4c434191bce772c9a82f3a",
"IPY_MODEL_20cdc61eb3404f42a12b37901b0d85fb"
]
}
},
"78cceec059784f2bb36988d3336e4d56": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"ab93d8b65c134605934ff9ec5efb1bb6": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_2d7239993a9645b09b221405ac682743",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_17b5a87f92104ec7ab96bf507637d0d2"
}
},
"30df865ded4c434191bce772c9a82f3a": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_2358bfb2270247359e94b066b3cc3d1f",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 818322941,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 818322941,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_3e984405db654b0b83b88b2db08baffd"
}
},
"20cdc61eb3404f42a12b37901b0d85fb": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_654d8a19b9f949c6bbdaf8b0875c931e",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 780M/780M [00:33<00:00, 24.4MB/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_896030c5d13b415aaa05032818d81a6e"
}
},
"2d7239993a9645b09b221405ac682743": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"17b5a87f92104ec7ab96bf507637d0d2": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"2358bfb2270247359e94b066b3cc3d1f": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"3e984405db654b0b83b88b2db08baffd": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"654d8a19b9f949c6bbdaf8b0875c931e": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"896030c5d13b415aaa05032818d81a6e": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
}
}
}
},
  363. "cells": [
  364. {
  365. "cell_type": "markdown",
  366. "metadata": {
  367. "id": "view-in-github",
  368. "colab_type": "text"
  369. },
  370. "source": [
  371. "<a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
  372. ]
  373. },
  374. {
  375. "cell_type": "markdown",
  376. "metadata": {
  377. "id": "t6MPjfT5NrKQ"
  378. },
  379. "source": [
  380. "<a align=\"left\" href=\"https://ultralytics.com/yolov5\" target=\"_blank\">\n",
  381. "<img src=\"https://user-images.githubusercontent.com/26833433/125273437-35b3fc00-e30d-11eb-9079-46f313325424.png\"></a>\n",
  382. "\n",
  383. "This is the **official YOLOv5 🚀 notebook** by **Ultralytics**, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). \n",
  384. "For more information please visit https://github.com/ultralytics/yolov5 and https://ultralytics.com. Thank you!"
  385. ]
  386. },
  387. {
  388. "cell_type": "markdown",
  389. "metadata": {
  390. "id": "7mGmQbAO5pQb"
  391. },
  392. "source": [
  393. "# Setup\n",
  394. "\n",
  395. "Clone repo, install dependencies and check PyTorch and GPU."
  396. ]
  397. },
  398. {
  399. "cell_type": "code",
  400. "metadata": {
  401. "id": "wbvMlHd_QwMG",
  402. "colab": {
  403. "base_uri": "https://localhost:8080/"
  404. },
  405. "outputId": "4d67116a-43e9-4d84-d19e-1edd83f23a04"
  406. },
  407. "source": [
  408. "!git clone https://github.com/ultralytics/yolov5 # clone repo\n",
  409. "%cd yolov5\n",
  410. "%pip install -qr requirements.txt # install dependencies\n",
  411. "\n",
  412. "import torch\n",
  413. "from IPython.display import Image, clear_output # to display images\n",
  414. "\n",
  415. "clear_output()\n",
  416. "print(f\"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})\")"
  417. ],
  418. "execution_count": null,
  419. "outputs": [
  420. {
  421. "output_type": "stream",
  422. "text": [
  423. "Setup complete. Using torch 1.9.0+cu102 (Tesla V100-SXM2-16GB)\n"
  424. ],
  425. "name": "stdout"
  426. }
  427. ]
  428. },
  429. {
  430. "cell_type": "markdown",
  431. "metadata": {
  432. "id": "4JnkELT0cIJg"
  433. },
  434. "source": [
  435. "# 1. Inference\n",
  436. "\n",
  437. "`detect.py` runs YOLOv5 inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. Example inference sources are:\n",
  438. "\n",
  439. "```shell\n",
  440. "python detect.py --source 0 # webcam\n",
  441. " file.jpg # image \n",
  442. " file.mp4 # video\n",
  443. " path/ # directory\n",
  444. " path/*.jpg # glob\n",
  445. " 'https://youtu.be/NUsoVlDFqZg' # YouTube\n",
  446. " 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream\n",
  447. "```"
  448. ]
  449. },
  450. {
  451. "cell_type": "code",
  452. "metadata": {
  453. "id": "zR9ZbuQCH7FX",
  454. "colab": {
  455. "base_uri": "https://localhost:8080/"
  456. },
  457. "outputId": "8b728908-81ab-4861-edb0-4d0c46c439fb"
  458. },
  459. "source": [
  460. "!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/\n",
  461. "Image(filename='runs/detect/exp/zidane.jpg', width=600)"
  462. ],
  463. "execution_count": null,
  464. "outputs": [
  465. {
  466. "output_type": "stream",
  467. "text": [
  468. "\u001b[34m\u001b[1mdetect: \u001b[0mweights=['yolov5s.pt'], source=data/images/, imgsz=640, conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False\n",
  469. "YOLOv5 🚀 v5.0-367-g01cdb76 torch 1.9.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)\n",
  470. "\n",
  471. "Fusing layers... \n",
  472. "Model Summary: 224 layers, 7266973 parameters, 0 gradients\n",
  473. "image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.007s)\n",
  474. "image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.007s)\n",
  475. "Results saved to \u001b[1mruns/detect/exp\u001b[0m\n",
  476. "Done. (0.091s)\n"
  477. ],
  478. "name": "stdout"
  479. }
  480. ]
  481. },
  482. {
  483. "cell_type": "markdown",
  484. "metadata": {
  485. "id": "hkAzDWJ7cWTr"
  486. },
  487. "source": [
  488. "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
  489. "<img align=\"left\" src=\"https://user-images.githubusercontent.com/26833433/127574988-6a558aa1-d268-44b9-bf6b-62d4c605cc72.jpg\" width=\"600\">"
  490. ]
  491. },
  492. {
  493. "cell_type": "markdown",
  494. "metadata": {
  495. "id": "0eq1SMWl6Sfn"
  496. },
  497. "source": [
  498. "# 2. Validate\n",
  499. "Validate a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation."
  500. ]
  501. },
  502. {
  503. "cell_type": "markdown",
  504. "metadata": {
  505. "id": "eyTZYGgRjnMc"
  506. },
  507. "source": [
  508. "## COCO val2017\n",
  509. "Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy."
  510. ]
  511. },
  512. {
  513. "cell_type": "code",
  514. "metadata": {
  515. "id": "WQPtK1QYVaD_",
  516. "colab": {
  517. "base_uri": "https://localhost:8080/",
  518. "height": 48,
  519. "referenced_widgets": [
  520. "484511f272e64eab8b42e68dac5f7a66",
  521. "78cceec059784f2bb36988d3336e4d56",
  522. "ab93d8b65c134605934ff9ec5efb1bb6",
  523. "30df865ded4c434191bce772c9a82f3a",
  524. "20cdc61eb3404f42a12b37901b0d85fb",
  525. "2d7239993a9645b09b221405ac682743",
  526. "17b5a87f92104ec7ab96bf507637d0d2",
  527. "2358bfb2270247359e94b066b3cc3d1f",
  528. "3e984405db654b0b83b88b2db08baffd",
  529. "654d8a19b9f949c6bbdaf8b0875c931e",
  530. "896030c5d13b415aaa05032818d81a6e"
  531. ]
  532. },
  533. "outputId": "7e6f5c96-c819-43e1-cd03-d3b9878cf8de"
  534. },
  535. "source": [
  536. "# Download COCO val2017\n",
  537. "torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')\n",
  538. "!unzip -q tmp.zip -d ../datasets && rm tmp.zip"
  539. ],
  540. "execution_count": null,
  541. "outputs": [
  542. {
  543. "output_type": "display_data",
  544. "data": {
  545. "application/vnd.jupyter.widget-view+json": {
  546. "model_id": "484511f272e64eab8b42e68dac5f7a66",
  547. "version_minor": 0,
  548. "version_major": 2
  549. },
  550. "text/plain": [
  551. " 0%| | 0.00/780M [00:00<?, ?B/s]"
  552. ]
  553. },
  554. "metadata": {
  555. "tags": []
  556. }
  557. }
  558. ]
  559. },
  560. {
  561. "cell_type": "code",
  562. "metadata": {
  563. "id": "X58w8JLpMnjH",
  564. "colab": {
  565. "base_uri": "https://localhost:8080/"
  566. },
  567. "outputId": "3dd0e2fc-aecf-4108-91b1-6392da1863cb"
  568. },
  569. "source": [
  570. "# Run YOLOv5x on COCO val2017\n",
  571. "!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half"
  572. ],
  573. "execution_count": null,
  574. "outputs": [
  575. {
  576. "output_type": "stream",
  577. "text": [
  578. "\u001b[34m\u001b[1mval: \u001b[0mdata=./data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True\n",
  579. "YOLOv5 🚀 v5.0-367-g01cdb76 torch 1.9.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)\n",
  580. "\n",
  581. "Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5x.pt to yolov5x.pt...\n",
  582. "100% 168M/168M [00:08<00:00, 20.6MB/s]\n",
  583. "\n",
  584. "Fusing layers... \n",
  585. "Model Summary: 476 layers, 87730285 parameters, 0 gradients\n",
  586. "\u001b[34m\u001b[1mval: \u001b[0mScanning '../datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:01<00:00, 2749.96it/s]\n",
  587. "\u001b[34m\u001b[1mval: \u001b[0mNew cache created: ../datasets/coco/val2017.cache\n",
  588. " Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 157/157 [01:08<00:00, 2.28it/s]\n",
  589. " all 5000 36335 0.746 0.626 0.68 0.49\n",
  590. "Speed: 0.1ms pre-process, 5.1ms inference, 1.6ms NMS per image at shape (32, 3, 640, 640)\n",
  591. "\n",
  592. "Evaluating pycocotools mAP... saving runs/val/exp/yolov5x_predictions.json...\n",
  593. "loading annotations into memory...\n",
  594. "Done (t=0.46s)\n",
  595. "creating index...\n",
  596. "index created!\n",
  597. "Loading and preparing results...\n",
  598. "DONE (t=4.94s)\n",
  599. "creating index...\n",
  600. "index created!\n",
  601. "Running per image evaluation...\n",
  602. "Evaluate annotation type *bbox*\n",
  603. "DONE (t=83.60s).\n",
  604. "Accumulating evaluation results...\n",
  605. "DONE (t=13.22s).\n",
  606. " Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.504\n",
  607. " Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.688\n",
  608. " Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.546\n",
  609. " Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.351\n",
  610. " Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.551\n",
  611. " Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.644\n",
  612. " Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.382\n",
  613. " Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.629\n",
  614. " Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.681\n",
  615. " Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.524\n",
  616. " Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.735\n",
  617. " Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.827\n",
  618. "Results saved to \u001b[1mruns/val/exp\u001b[0m\n"
  619. ],
  620. "name": "stdout"
  621. }
  622. ]
  623. },
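{
"cell_type": "code",
"metadata": {},
"source": [
"# Optional (illustrative): re-run validation with --verbose to print the per-class\n",
"# results mentioned above; flags otherwise match the cell above\n",
"!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half --verbose"
],
"execution_count": null,
"outputs": []
},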
{
"cell_type": "markdown",
"metadata": {
"id": "rc_KbFk0juX2"
},
"source": [
"## COCO test-dev2017\n",
"Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (**20,000 images, no labels**). Results are saved to a `*.json` file which should be **zipped** and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794. A sketch of the zip step follows the run cell below.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "V0AJnSeCIHyJ"
},
"source": [
"# Download COCO test-dev2017\n",
"torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')\n",
"!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels\n",
"!f=\"test2017.zip\" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images\n",
"%mv ./test2017 ../coco/images # move to /coco"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "29GJXAP_lPrt"
},
"source": [
"# Run YOLOv5s on COCO test-dev2017 using --task test\n",
"!python val.py --weights yolov5s.pt --data coco.yaml --task test"
],
"execution_count": null,
"outputs": []
},
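{
"cell_type": "code",
"metadata": {},
"source": [
"# Sketch (run directory and archive name assumed): zip the predictions *.json for\n",
"# upload to the evaluation server at https://competitions.codalab.org/competitions/20794\n",
"!zip -j submission.zip runs/val/exp/yolov5s_predictions.json"
],
"execution_count": null,
"outputs": []
},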
{
"cell_type": "markdown",
"metadata": {
"id": "VUOiNLtMP5aG"
},
"source": [
"# 3. Train\n",
"\n",
"Download [COCO128](https://www.kaggle.com/ultralytics/coco128), a small 128-image tutorial dataset, start tensorboard and train YOLOv5s from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around **300-1000 epochs**, depending on your dataset)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Knxi2ncxWffW"
},
"source": [
"# Download COCO128\n",
"torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')\n",
"!unzip -q tmp.zip -d ../datasets && rm tmp.zip"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "_pOkGLv1dMqh"
},
"source": [
"Train a YOLOv5s model on [COCO128](https://www.kaggle.com/ultralytics/coco128) with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and **COCO, COCO128, and VOC datasets are downloaded automatically** on first use.\n",
"\n",
"All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "bOy5KI2ncnWd"
},
"source": [
"# Tensorboard (optional)\n",
"%load_ext tensorboard\n",
"%tensorboard --logdir runs/train"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "2fLAV42oNb7M"
},
"source": [
"# Weights & Biases (optional)\n",
"%pip install -q wandb\n",
"import wandb\n",
"wandb.login()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "1NcFxRcFdJ_O",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "00ea4b14-a75c-44a2-a913-03b431b69de5"
},
"source": [
"# Train YOLOv5s on COCO128 for 3 epochs\n",
"!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"\u001b[34m\u001b[1mtrain: \u001b[0mweights=yolov5s.pt, cfg=, data=coco128.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, entity=None, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0\n",
"\u001b[34m\u001b[1mgithub: \u001b[0mup to date with https://github.com/ultralytics/yolov5 ✅\n",
"YOLOv5 🚀 v5.0-367-g01cdb76 torch 1.9.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)\n",
"\n",
"\u001b[34m\u001b[1mhyperparameters: \u001b[0mlr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0\n",
"\u001b[34m\u001b[1mWeights & Biases: \u001b[0mrun 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs (RECOMMENDED)\n",
"\u001b[34m\u001b[1mTensorBoard: \u001b[0mStart with 'tensorboard --logdir runs/train', view at http://localhost:6006/\n",
"2021-08-15 14:40:43.449642: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\n",
"\n",
" from n params module arguments \n",
" 0 -1 1 3520 models.common.Focus [3, 32, 3] \n",
" 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] \n",
" 2 -1 1 18816 models.common.C3 [64, 64, 1] \n",
" 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] \n",
" 4 -1 3 156928 models.common.C3 [128, 128, 3] \n",
" 5 -1 1 295424 models.common.Conv [128, 256, 3, 2] \n",
" 6 -1 3 625152 models.common.C3 [256, 256, 3] \n",
" 7 -1 1 1180672 models.common.Conv [256, 512, 3, 2] \n",
" 8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]] \n",
" 9 -1 1 1182720 models.common.C3 [512, 512, 1, False] \n",
" 10 -1 1 131584 models.common.Conv [512, 256, 1, 1] \n",
" 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n",
" 12 [-1, 6] 1 0 models.common.Concat [1] \n",
" 13 -1 1 361984 models.common.C3 [512, 256, 1, False] \n",
" 14 -1 1 33024 models.common.Conv [256, 128, 1, 1] \n",
" 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n",
" 16 [-1, 4] 1 0 models.common.Concat [1] \n",
" 17 -1 1 90880 models.common.C3 [256, 128, 1, False] \n",
" 18 -1 1 147712 models.common.Conv [128, 128, 3, 2] \n",
" 19 [-1, 14] 1 0 models.common.Concat [1] \n",
" 20 -1 1 296448 models.common.C3 [256, 256, 1, False] \n",
" 21 -1 1 590336 models.common.Conv [256, 256, 3, 2] \n",
" 22 [-1, 10] 1 0 models.common.Concat [1] \n",
" 23 -1 1 1182720 models.common.C3 [512, 512, 1, False] \n",
" 24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]\n",
"Model Summary: 283 layers, 7276605 parameters, 7276605 gradients, 17.1 GFLOPs\n",
"\n",
"Transferred 362/362 items from yolov5s.pt\n",
"Scaled weight_decay = 0.0005\n",
"\u001b[34m\u001b[1moptimizer:\u001b[0m SGD with parameter groups 59 weight, 62 weight (no decay), 62 bias\n",
"\u001b[34m\u001b[1malbumentations: \u001b[0mversion 1.0.3 required by YOLOv5, but version 0.1.12 is currently installed\n",
"\u001b[34m\u001b[1mtrain: \u001b[0mScanning '../datasets/coco128/labels/train2017' images and labels...128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<00:00, 2440.28it/s]\n",
"\u001b[34m\u001b[1mtrain: \u001b[0mNew cache created: ../datasets/coco128/labels/train2017.cache\n",
"\u001b[34m\u001b[1mtrain: \u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:00<00:00, 302.61it/s]\n",
"\u001b[34m\u001b[1mval: \u001b[0mScanning '../datasets/coco128/labels/train2017.cache' images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<?, ?it/s]\n",
"\u001b[34m\u001b[1mval: \u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:00<00:00, 142.55it/s]\n",
"[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)\n",
"[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)\n",
"Plotting labels... \n",
"\n",
"\u001b[34m\u001b[1mautoanchor: \u001b[0mAnalyzing anchors... anchors/target = 4.27, Best Possible Recall (BPR) = 0.9935\n",
"Image sizes 640 train, 640 val\n",
"Using 2 dataloader workers\n",
"Logging results to runs/train/exp\n",
"Starting training for 3 epochs...\n",
"\n",
" Epoch gpu_mem box obj cls labels img_size\n",
" 0/2 3.64G 0.04492 0.0674 0.02213 298 640: 100% 8/8 [00:03<00:00, 2.05it/s]\n",
" Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:00<00:00, 4.70it/s]\n",
" all 128 929 0.686 0.565 0.642 0.421\n",
"\n",
" Epoch gpu_mem box obj cls labels img_size\n",
" 1/2 5.04G 0.04403 0.0611 0.01986 232 640: 100% 8/8 [00:01<00:00, 5.59it/s]\n",
" Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:00<00:00, 4.46it/s]\n",
" all 128 929 0.694 0.563 0.654 0.425\n",
"\n",
" Epoch gpu_mem box obj cls labels img_size\n",
" 2/2 5.04G 0.04616 0.07056 0.02071 214 640: 100% 8/8 [00:01<00:00, 5.94it/s]\n",
" Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.52it/s]\n",
" all 128 929 0.711 0.562 0.66 0.431\n",
"\n",
"3 epochs completed in 0.005 hours.\n",
"Optimizer stripped from runs/train/exp/weights/last.pt, 14.8MB\n",
"Optimizer stripped from runs/train/exp/weights/best.pt, 14.8MB\n",
"Results saved to \u001b[1mruns/train/exp\u001b[0m\n"
],
"name": "stdout"
}
]
},
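{
"cell_type": "code",
"metadata": {},
"source": [
"# Illustrative alternative (not run in the original tutorial): train from randomly\n",
"# initialized weights, as described in the markdown above\n",
"!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights '' --cfg yolov5s.yaml"
],
"execution_count": null,
"outputs": []
},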
{
"cell_type": "markdown",
"metadata": {
"id": "15glLzbQx5u0"
},
"source": [
"# 4. Visualize"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DLI1JmHU7B0l"
},
"source": [
"## Weights & Biases Logging 🌟 NEW\n",
"\n",
  838. "[Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_notebook) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well improved visibility and collaboration for teams. To enable W&B `pip install wandb`, and then train normally (you will be guided through setup on first use). \n",
  839. "\n",
  840. "During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home?utm_campaign=repo_yolo_notebook), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289). \n",
  841. "\n",
  842. "<img align=\"left\" src=\"https://user-images.githubusercontent.com/26833433/125274843-a27bc600-e30e-11eb-9a44-62af0b7a50a2.png\" width=\"800\">"
  843. ]
  844. },
  845. {
  846. "cell_type": "markdown",
  847. "metadata": {
  848. "id": "-WPvRbS5Swl6"
  849. },
  850. "source": [
  851. "## Local Logging\n",
  852. "\n",
  853. "All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. Note an Ultralytics **Mosaic Dataloader** is used for training (shown below), which combines 4 images into 1 mosaic during training.\n",
  854. "\n",
  855. "> <img src=\"https://user-images.githubusercontent.com/26833433/131255960-b536647f-7c61-4f60-bbc5-cb2544d71b2a.jpg\" width=\"700\"> \n",
  856. "`train_batch0.jpg` shows train batch 0 mosaics and labels\n",
  857. "\n",
  858. "> <img src=\"https://user-images.githubusercontent.com/26833433/131256748-603cafc7-55d1-4e58-ab26-83657761aed9.jpg\" width=\"700\"> \n",
  859. "`test_batch0_labels.jpg` shows val batch 0 labels\n",
  860. "\n",
  861. "> <img src=\"https://user-images.githubusercontent.com/26833433/131256752-3f25d7a5-7b0f-4bb3-ab78-46343c3800fe.jpg\" width=\"700\"> \n",
  862. "`test_batch0_pred.jpg` shows val batch 0 _predictions_\n",
  863. "\n",
  864. "Training results are automatically logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) as `results.csv`, which is plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:\n",
  865. "\n",
  866. "```python\n",
  867. "from utils.plots import plot_results \n",
  868. "plot_results('path/to/results.csv') # plot 'results.csv' as 'results.png'\n",
  869. "```\n",
  870. "\n",
  871. "<img align=\"left\" width=\"800\" alt=\"COCO128 Training Results\" src=\"https://user-images.githubusercontent.com/26833433/126906780-8c5e2990-6116-4de6-b78a-367244a33ccf.png\">"
  872. ]
  873. },
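{
"cell_type": "code",
"metadata": {},
"source": [
"# Runnable form of the snippet above (the run directory runs/train/exp is an\n",
"# assumption; adjust to your experiment folder)\n",
"from utils.plots import plot_results\n",
"plot_results('runs/train/exp/results.csv') # plot 'results.csv' as 'results.png'"
],
"execution_count": null,
"outputs": []
},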
{
"cell_type": "markdown",
"metadata": {
"id": "Zelyeqbyt3GD"
},
"source": [
"# Environments\n",
"\n",
"YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n",
"\n",
"- **Google Colab and Kaggle** notebooks with free GPU: <a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a> <a href=\"https://www.kaggle.com/ultralytics/yolov5\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n",
"- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)\n",
"- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)\n",
"- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href=\"https://hub.docker.com/r/ultralytics/yolov5\"><img src=\"https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker\" alt=\"Docker Pulls\"></a>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6Qu7Iesl0p54"
},
"source": [
"# Status\n",
"\n",
"![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)\n",
"\n",
  900. "If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IEijrePND_2I"
},
"source": [
"# Appendix\n",
"\n",
"Optional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "mcKoSIK2WSzj"
},
"source": [
"# Reproduce\n",
"for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':\n",
" !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed\n",
" !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "GMusP4OAxFu6"
},
"source": [
"# PyTorch Hub\n",
"import torch\n",
"\n",
"# Model\n",
"model = torch.hub.load('ultralytics/yolov5', 'yolov5s')\n",
"\n",
"# Images\n",
"dir = 'https://ultralytics.com/images/'\n",
"imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images\n",
"\n",
"# Inference\n",
"results = model(imgs)\n",
"results.print() # or .show(), .save()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "FGH0ZjkGjejy"
},
"source": [
"# Unit tests\n",
"%%shell\n",
  959. "export PYTHONPATH=\"$PWD\" # to run *.py. files in subdirectories\n",
  960. "\n",
  961. "rm -rf runs # remove runs/\n",
  962. "for m in yolov5s; do # models\n",
  963. " python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained\n",
  964. " python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch\n",
  965. " for d in 0 cpu; do # devices\n",
  966. " python detect.py --weights $m.pt --device $d # detect official\n",
  967. " python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom\n",
  968. " python val.py --weights $m.pt --device $d # val official\n",
  969. " python val.py --weights runs/train/exp/weights/best.pt --device $d # val custom\n",
  970. " done\n",
  971. " python hubconf.py # hub\n",
  972. " python models/yolo.py --cfg $m.yaml # inspect\n",
  973. " python export.py --weights $m.pt --img 640 --batch 1 # export\n",
  974. "done"
  975. ],
  976. "execution_count": null,
  977. "outputs": []
  978. },
  979. {
  980. "cell_type": "code",
  981. "metadata": {
  982. "id": "gogI-kwi3Tye"
  983. },
  984. "source": [
  985. "# Profile\n",
  986. "from utils.torch_utils import profile\n",
  987. "\n",
  988. "m1 = lambda x: x * torch.sigmoid(x)\n",
  989. "m2 = torch.nn.SiLU()\n",
  990. "results = profile(input=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)"
  991. ],
  992. "execution_count": null,
  993. "outputs": []
  994. },
  995. {
  996. "cell_type": "code",
  997. "metadata": {
  998. "id": "RVRSOhEvUdb5"
  999. },
  1000. "source": [
  1001. "# Evolve\n",
  1002. "!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov5s.pt --cache --noautoanchor --evolve\n",
  1003. "!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)"
  1004. ],
  1005. "execution_count": null,
  1006. "outputs": []
  1007. },
  1008. {
  1009. "cell_type": "code",
  1010. "metadata": {
  1011. "id": "BSgFCAcMbk1R"
  1012. },
  1013. "source": [
  1014. "# VOC\n",
  1015. "for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']): # zip(batch_size, model)\n",
  1016. " !python train.py --batch {b} --weights {m}.pt --data VOC.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}"
  1017. ],
  1018. "execution_count": null,
  1019. "outputs": []
  1020. }
  1021. ]
  1022. }