* Fix resource warning on file allocation; use `file.open()` within a context manager (see sketch below)
* Rename `fh` to `f` in keeping with naming convention
Co-authored-by: Ayman Saleh <aymansaleh@Aymans-MacBook-Pro-2.local>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
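A minimal sketch of the context-manager pattern from the entry above; the file name is a hypothetical stand-in, not the exact repository code:
```python
from pathlib import Path

file = Path('data.yaml')  # hypothetical file, for illustration only

# Before: the handle returned by open() is never closed explicitly,
# which can surface as an "unclosed file" ResourceWarning
# text = file.open().read()

# After: the context manager closes `f` on exit, even if an exception is raised
with file.open(encoding='utf-8') as f:
    text = f.read()
```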
* Update __init__.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* notebook_init
* notebook_init
* notebook_init
* notebook_init
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* notebook_init
* Created using Colaboratory
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Logger consolidation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Fix `save_one_box()`
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Resolves https://github.com/ultralytics/yolov5/pull/5341#issuecomment-961774729
Tests:
```python
import glob
import re
from pathlib import Path
def increment_path(path, exist_ok=False, sep='', mkdir=False):
    # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
    path = Path(path)  # os-agnostic
    if path.exists() and not exist_ok:
        path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')
        dirs = glob.glob(f"{path}{sep}*")  # similar paths
        matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
        i = [int(m.groups()[0]) for m in matches if m]  # indices
        n = max(i) + 1 if i else 2  # increment number
        path = Path(f"{path}{sep}{n}{suffix}")  # increment path
    if mkdir:
        path.mkdir(parents=True, exist_ok=True)  # make directory
    return path
print(increment_path('runs'))
print(increment_path('runs'))
print(increment_path('export.py'))
print(increment_path('abc.def.dir'))
print(increment_path('abc.def.file'))
```
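Assuming `runs` and `abc.def.dir` exist as directories, `export.py` and `abc.def.file` exist as files, and no numbered siblings are present, the calls above print `runs2`, `export2.py`, `abc.def.dir2` and `abc.def2.file`; only the final suffix of a multi-dot file name is stripped and re-appended, and paths that do not exist are returned unchanged.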
* precommit: isort
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update isort config
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update name
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Meshgrid `indexing='ij'` for PyTorch 1.10
Will not merge currently as it breaks backwards compatibility.
* Meshgrid `indexing='ij'` for PyTorch 1.10
Will not merge currently as it breaks backwards compatibility.
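A minimal sketch of the version-guarded `indexing='ij'` call these entries refer to; the version check shown is a naive illustration, not the repository's `check_version()` helper:
```python
import torch

# PyTorch 1.10 added the `indexing` argument to torch.meshgrid; earlier versions always
# behave like 'ij' and reject the keyword, hence the guarded call (naive version check below)
torch_1_10 = tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 10)
y, x = torch.arange(3), torch.arange(4)
yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x)
print(yv.shape, xv.shape)  # torch.Size([3, 4]) torch.Size([3, 4])
```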
* Add check_version hard argument
* Update comment
* take EXIF orientation tags into account when fixing corrupt images
* fit 120 char
* sort imports
* Update local exif_transpose comment
We have a local in-place version that is faster than the official one because the image is not copied. AutoShape() uses this for Hub models, but here it is not important since the datasets.py usage is infrequent (in AutoShape() it is applied to every image).
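A minimal sketch of such an in-place variant, assuming the standard EXIF orientation tag 0x0112 and Pillow >= 9.1 enum constants; it illustrates the idea rather than reproducing the exact library code:
```python
from PIL import Image

def exif_transpose_inplace(image):
    # Apply the EXIF orientation to an already-loaded PIL image; unlike the official
    # ImageOps.exif_transpose(), the image is returned untouched when no rotation is needed
    exif = image.getexif()
    orientation = exif.get(0x0112, 1)  # 1 = normal orientation
    method = {2: Image.Transpose.FLIP_LEFT_RIGHT,
              3: Image.Transpose.ROTATE_180,
              4: Image.Transpose.FLIP_TOP_BOTTOM,
              5: Image.Transpose.TRANSPOSE,
              6: Image.Transpose.ROTATE_270,
              7: Image.Transpose.TRANSVERSE,
              8: Image.Transpose.ROTATE_90}.get(orientation)
    if method is not None:
        image = image.transpose(method)
        del exif[0x0112]  # clear the tag so the rotation is not applied twice downstream
        image.info['exif'] = exif.tobytes()
    return image
```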
* Update datasets.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* take image files with uppercase extensions into account in autosplit
* case fix
* Refactor implementation
Removes the additional variable (capitalized variable names are reserved for global variables) and uses the same methodology as implemented earlier in datasets.py L409.
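A minimal sketch of the case-insensitive extension check described above; the format list and dataset path are illustrative assumptions:
```python
from pathlib import Path

IMG_FORMATS = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'webp']  # assumed format list
path = Path('datasets/coco128/images')  # hypothetical dataset directory

# Lower-case the suffix before comparing, so files such as IMG_0001.JPG are also picked up
files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS)
print(f'{len(files)} image files found')
```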
* Remove redundant rglob characters
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Autofix duplicate labels
This PR changes duplicate-label handling from reporting an error and ignoring the image-label pair to reporting a warning and autofixing the pair.
This should fix this common issue for users and let everyone get started and train a model faster and more easily than before.
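A minimal sketch of the autofix behaviour described above, assuming labels are an (n, 5) array of [class, x, y, w, h] rows for one image; this is an illustration, not the exact verification code:
```python
import numpy as np

lb = np.array([[0, 0.5, 0.5, 0.2, 0.2],
               [0, 0.5, 0.5, 0.2, 0.2],  # exact duplicate row
               [1, 0.3, 0.3, 0.1, 0.1]])

# Keep only unique rows and warn, instead of raising an error and dropping the image-label pair
_, idx = np.unique(lb, axis=0, return_index=True)
if len(idx) < len(lb):  # duplicate rows found
    lb = lb[idx]
    print('WARNING: duplicate labels removed')
print(lb)
```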
* sign fix
* Cleanup
* Increment cache version
* all to any fix
* Add train class filter feature to datasets.py
Allows training on a subset of the total classes if the `include_class` list is defined on datasets.py L448:
```python
include_class = [] # filter labels to include only these classes (optional)
```
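A minimal sketch of how such a class filter could be applied; the [class, x, y, w, h] label layout is assumed for illustration and segment handling is omitted:
```python
import numpy as np

include_class = [0, 2]  # hypothetical subset of classes to train on
labels = np.array([[0, 0.50, 0.50, 0.20, 0.20],
                   [1, 0.30, 0.30, 0.10, 0.10],
                   [2, 0.70, 0.70, 0.40, 0.40]])

if include_class:
    keep = np.isin(labels[:, 0], include_class)  # boolean mask over label rows
    labels = labels[keep]
print(labels)  # only the class 0 and class 2 rows remain
```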
* segments fix
* added callbacks
* added back callback to main
* added save_dir to callback output
* merged in upstream
* removed ghost code
* fixed parsing error for google temp links
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
`check_file()` is now limited to searching opt-in directories: /data, /models, /utils. This prevents large non-project directories like /.git and /venv from being searched, which could slow `check_file()` significantly.
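A minimal sketch of the opt-in directory search described above; `ROOT`, the helper name and the assertion are illustrative assumptions rather than the exact implementation:
```python
from pathlib import Path

ROOT = Path('.')  # assumed repository root

def find_file(file, dirs=('data', 'models', 'utils')):
    # Search only the opt-in project directories instead of the whole tree,
    # so large non-project folders such as .git and venv are never walked
    matches = []
    for d in dirs:
        matches.extend(ROOT.glob(f'{d}/**/{Path(file).name}'))  # recursive search per directory
    assert matches, f'{file} not found in {dirs}'
    return str(matches[0])

# Example: print(find_file('coco128.yaml'))
```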