* Update README.md
Add missing system dependencies to resolve the following import errors:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory
ImportError: libSM.so.6: cannot open shared object file: No such file or directory
ImportError: libXrender.so.1: cannot open shared object file: No such file or directory
* replace older apt-get with apt
Code is commented out for now until the issue is better understood, and it is also not cross-platform compatible.
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* ACON Activation batch-size 1 bug path
This is not a great solution to https://github.com/nmaac/acon/issues/4 but it's all I could think of at the moment.
WARNING: YOLOv5 models with MetaAconC() activations are incapable of running inference properly at batch-size 1 due to a known bug (https://github.com/nmaac/acon/issues/4) with no known solution.
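For context, a minimal sketch of why a batch size of 1 is problematic for the BatchNorm-based beta branch described in the linked issue (an assumed reconstruction for illustration only, not the ultralytics implementation):
```python
import torch
import torch.nn as nn

# MetaAconC-style activations derive beta from channel-wise spatial means,
# i.e. a tensor of shape (N, C, 1, 1). In training mode BatchNorm cannot
# compute statistics from a single value per channel, so N == 1 fails.
bn = nn.BatchNorm2d(64)
y = torch.randn(1, 64, 1, 1)  # batch-size 1, spatial dims already pooled to 1x1
try:
    bn(y)
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."
```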
* Update activations.py
Per https://pytorch.org/tutorials/recipes/script_optimized.html, this should improve performance of TorchScript models (and possibly CoreML models as well, since coremltools operates on a TorchScript model as input, though this still requires testing).
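For reference, a minimal sketch of the tracing-plus-optimization flow from that recipe (a tiny stand-in module and an illustrative file name are used here; this is not the exact export.py code):
```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in module; in practice this would be the YOLOv5 model in eval mode.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU()).eval()
img = torch.zeros(1, 3, 640, 640)  # example input

ts = torch.jit.trace(model, img, strict=False)  # TorchScript via tracing
ts = optimize_for_mobile(ts)                    # fusion and other optimization passes
ts.save('model.torchscript.pt')
```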
* command line option for line thickness and hiding labels
* command line option for hiding confidence values (see the example below)
* Update detect.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
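A sketch of what the argparse additions might look like (flag names and defaults here are illustrative and not guaranteed to match detect.py exactly):
```python
import argparse

parser = argparse.ArgumentParser()
# Illustrative flags for the options described above:
parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
opt = parser.parse_args()
print(opt.line_thickness, opt.hide_labels, opt.hide_conf)
```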
* ONNX Simplifier
Add ONNX Simplifier to the ONNX export pipeline in export.py. onnx-simplifier will be auto-installed if onnx is installed but onnx-simplifier is not.
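A minimal sketch of the simplification step, assuming the onnx-simplifier package's standard `simplify` API and an illustrative file name (not the exact export.py code):
```python
import onnx
import onnxsim  # pip install onnx-simplifier

model_onnx = onnx.load('yolov5s.onnx')                   # illustrative path
model_simplified, check = onnxsim.simplify(model_onnx)   # constant folding, shape inference, etc.
assert check, 'Simplified ONNX model could not be validated'
onnx.save(model_simplified, 'yolov5s.onnx')
```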
* Update general.py
* add files
* Update README.md
* Update restapi.py
pretrained=True and model.eval() are now applied by default when loading a model, so there is no need to set them manually.
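A sketch of the resulting usage, assuming the standard torch.hub entry point for this repo:
```python
import torch

# With the updated defaults, a hub load returns a pretrained model already in eval mode:
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Previously the equivalent required being explicit:
# model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
# model.eval()
```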
* PEP8 reformat
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
This fix should allow YOLOv5 model graphs to be visualized correctly in TensorBoard by uncommenting line 335 in train.py:
```python
if tb_writer:
    tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), [])  # add model graph
```
The problem was that the Detect() layer checks the input size to adapt the grid if required, and tracing does not handle this shape check well (even when the shape is fine and no grid recomputation is required). The following line will warn:
0cae7576a9/train.py (L335)
The solution is below. The screenshot shows a YOLOv5s model displayed in TensorBoard: you can see the Detect() layer merging the 3 layers into a single output, for example, and everything appears to work and visualize correctly.
```python
tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), [])
```
<img width="893" alt="Screenshot 2021-04-11 at 01 10 09" src="https://user-images.githubusercontent.com/26833433/114286928-349bd600-9a63-11eb-941f-7139ee6cd602.png">
PR https://github.com/ultralytics/yolov5/pull/2725 introduced a very specific bug that only affects multi-GPU training. The cause was apparently the use of the torch.cuda.amp decorator on the autoShape forward method. This PR implements amp more traditionally, and the bug is resolved.
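A minimal sketch of the more traditional usage, assuming a context-manager style inside forward (the class name `AutoShapeLike` is illustrative; this is not the exact autoShape code):
```python
import torch
from torch.cuda import amp

class AutoShapeLike(torch.nn.Module):
    # Wrap inference in an autocast context manager instead of decorating
    # forward() with @torch.cuda.amp.autocast(), so mixed precision is
    # enabled explicitly per call and only on CUDA devices.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        p = next(self.model.parameters())  # reference parameter for device/dtype
        with amp.autocast(enabled=p.device.type != 'cpu'):
            return self.model(x.to(p.device).type_as(p))
```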