Fix for Colab hub error:
```python
import yaml
with open('yolov5s.yaml', errors='ignore') as f:
    d = yaml.safe_load(f)  # model dict
print(d)
---------------------------------------------------------------------------
ReaderError Traceback (most recent call last)
<ipython-input-8-1150b5143073> in <module>()
2
3 with open('yolov5s.yaml', errors='ignore') as f:
----> 4 d = yaml.safe_load(f) # model dict
5
6 print(d)
6 frames
/usr/local/lib/python3.7/dist-packages/yaml/reader.py in check_printable(self, data)
142 position = self.index+(len(self.buffer)-self.pointer)+match.start()
143 raise ReaderError(self.name, position, ord(character),
--> 144 'unicode', "special characters are not allowed")
145
146 def update(self, length):
ReaderError: unacceptable character #x1f680: special characters are not allowed
in "yolov5s.yaml", position 9
```
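The traceback comes from PyYAML rejecting the 🚀 emoji (#x1f680) in the YAML header. A minimal workaround sketch, assuming it is enough to strip the characters PyYAML's reader refuses before parsing (an illustration, not necessarily the change merged upstream):
```python
import re

import yaml

# Read the file, drop non-ASCII characters such as emoji, then parse the cleaned text
with open('yolov5s.yaml', errors='ignore') as f:
    text = f.read()
d = yaml.safe_load(re.sub(r'[^\x00-\x7F]+', '', text))  # model dict, emoji stripped
print(d)
```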
Checking for `onnx_dynamic` first should suppress the warning:
```log
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic
```
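A sketch of the reordering; the surrounding `Detect.forward` names (`x`, `i`, `nx`, `ny`, `_make_grid`) are taken from the warned line above rather than defined here, so this is a fragment, not a standalone script:
```python
# Evaluate the plain Python bool first: when self.onnx_dynamic is True the tensor
# shape comparison is short-circuited away, so it is never converted to a Python
# bool during tracing and the TracerWarning is not emitted.
if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
    self.grid[i] = self._make_grid(nx, ny).to(x[i].device)  # assumed grid rebuild
```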
Resolves https://github.com/ultralytics/yolov5/pull/5341#issuecomment-961774729
Tests:
```python
import glob
import re
from pathlib import Path

def increment_path(path, exist_ok=False, sep='', mkdir=False):
    # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
    path = Path(path)  # os-agnostic
    if path.exists() and not exist_ok:
        path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')
        dirs = glob.glob(f"{path}{sep}*")  # similar paths
        matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
        i = [int(m.groups()[0]) for m in matches if m]  # indices
        n = max(i) + 1 if i else 2  # increment number
        path = Path(f"{path}{sep}{n}{suffix}")  # increment path
    if mkdir:
        path.mkdir(parents=True, exist_ok=True)  # make directory
    return path

print(increment_path('runs'))
print(increment_path('export.py'))
print(increment_path('abc.def.dir'))
print(increment_path('abc.def.file'))
```
* write date in checkpoint file
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* isoformat
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
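A minimal sketch of the date-stamping idea from the commits above; the checkpoint keys and filename are assumptions, not the repository's exact schema:
```python
from datetime import datetime

import torch

# Store the save time as an ISO-8601 string (human-readable and sorts lexicographically)
ckpt = {'epoch': 0, 'model': None, 'date': datetime.now().isoformat()}
torch.save(ckpt, 'last.pt')
print(torch.load('last.pt')['date'])  # e.g. '2021-11-06T14:21:37.123456'
```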
* precommit: isort
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update isort config
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update name
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Handle edgetpu model inference
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Cleanup
Rename `tflite_runtime.interpreter as tflite` to `tflite_runtime.interpreter as tflri` to avoid conflict with existing `tflite` boolean (see the sketch after this entry)
Co-authored-by: Nam Vu <nam@glodonusa.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
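A hedged sketch of the renamed import and an Edge TPU interpreter; the model filename and delegate library are illustrative placeholders:
```python
import tflite_runtime.interpreter as tflri  # renamed so it cannot shadow a `tflite` boolean

# Load a compiled Edge TPU model through the libedgetpu delegate (paths are placeholders)
interpreter = tflri.Interpreter(model_path='yolov5s-int8_edgetpu.tflite',
                                experimental_delegates=[tflri.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()
```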
`check_requirements()` is unreliable for large packages like torch and tensorflow, which may have multiple installation routes (e.g. conda, pip, tensorflow-cpu).
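A minimal sketch of the more robust alternative, checking importability directly instead of pip metadata (the helper name is illustrative, not the repository API):
```python
import importlib.util


def is_importable(name):
    # True if the module can be resolved by the current interpreter,
    # however it was installed (pip, conda, tensorflow-cpu wheels, ...)
    return importlib.util.find_spec(name) is not None


print(is_importable('torch'), is_importable('tensorflow'))
```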
* Meshgrid `indexing='ij'` for PyTorch 1.10
Will not merge currently as it breaks backwards compatibility (a version-gated sketch follows below).
* Add check_version hard argument
* Update comment
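A version-gated sketch of the call; the hand-rolled version parse below stands in for the `check_version` helper mentioned above:
```python
import torch

ny, nx = 3, 4  # example grid size
# Rough stand-in for check_version(): compare the (major, minor) of torch.__version__
TORCH_1_10 = tuple(int(v) for v in torch.__version__.split('+')[0].split('.')[:2]) >= (1, 10)

if TORCH_1_10:
    yv, xv = torch.meshgrid(torch.arange(ny), torch.arange(nx), indexing='ij')  # explicit 'ij' on PyTorch >= 1.10
else:
    yv, xv = torch.meshgrid(torch.arange(ny), torch.arange(nx))  # older PyTorch already behaves like 'ij'
print(yv.shape, xv.shape)  # torch.Size([3, 4]) torch.Size([3, 4])
```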
* Minor fixes to the git checkout of a new branch
* Use em dash to quote
* Revert the change of git checkout
* Maybe we should stay up-to-date with upstream/master?
* Add nano model to the download list
* Add possibility to also download the "*6" models
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
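A sketch of how the asset list above could be expanded to include the nano and `*6` (P6) variants; the exact list used by the download helper may differ:
```python
# Build candidate checkpoint names: n/s/m/l/x plus their P6 ('*6') counterparts
sizes = 'nsmlx'  # 'n' is the newly added nano model
assets = [f'yolov5{size}{suffix}.pt' for size in sizes for suffix in ('', '6')]
print(assets)  # ['yolov5n.pt', 'yolov5n6.pt', 'yolov5s.pt', 'yolov5s6.pt', ...]
```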
* take EXIF orientation tags into account when fixing corrupt images
* fit 120 char
* sort imports
* Update local exif_transpose comment
We have a local in-place version that is faster than the official one because the image is not copied. AutoShape() uses this for Hub models; here speed matters less because the datasets.py usage is infrequent, whereas AutoShape() applies it to every image. A sketch of the in-place approach follows this entry.
* Update datasets.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
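A hedged sketch of that in-place idea: apply the EXIF Orientation tag (0x0112) directly and clear it, skipping the extra copy the stock `ImageOps.exif_transpose` makes. Transpose constants follow pre-9.1 Pillow naming:
```python
from PIL import Image


def exif_transpose_inplace(image):
    # Apply the EXIF Orientation tag (0x0112 == 274), then remove it so downstream
    # code does not rotate the image a second time
    exif = image.getexif()
    orientation = exif.get(0x0112, 1)  # 1 == already upright
    if orientation > 1:
        method = {2: Image.FLIP_LEFT_RIGHT,
                  3: Image.ROTATE_180,
                  4: Image.FLIP_TOP_BOTTOM,
                  5: Image.TRANSPOSE,
                  6: Image.ROTATE_270,
                  7: Image.TRANSVERSE,
                  8: Image.ROTATE_90}.get(orientation)
        if method is not None:
            image = image.transpose(method)
            del exif[0x0112]
            image.info['exif'] = exif.tobytes()
    return image
```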
* take image files with uppercase extensions into account in autosplit
* case fix
* Refactor implementation
Removes the additional variable (capitalized variable names should be reserved for globals) and uses the same methodology implemented earlier in datasets.py L409; a sketch of that methodology follows below.
* Remove redundant rglob characters
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
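A small sketch of that methodology, comparing lowercased suffixes against a single lowercase whitelist so uppercase extensions match too; the `IMG_FORMATS` tuple and helper name are illustrative:
```python
from pathlib import Path

IMG_FORMATS = ('bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'webp')  # assumed suffix whitelist


def autosplit_files(path):
    # rglob('*.*') matches every file with an extension; lowering the suffix makes
    # IMG_0001.JPG pass the same check as img_0001.jpg
    return [p for p in Path(path).rglob('*.*') if p.suffix[1:].lower() in IMG_FORMATS]


print(autosplit_files('.')[:5])
```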