* Update detect.py to support TorchScript
This change assumes the TorchScript file was previously saved with `export.py`
* Update `detect.py` for TorchScript support
Simple update for TorchScript support; assumes the TorchScript file has been generated with `export.py`
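A minimal sketch of what TorchScript inference in detect.py amounts to (file name and input shape are illustrative, and the model is assumed to have been exported beforehand with `export.py`):
```python
import torch

# Load a TorchScript model previously produced by export.py (placeholder file name)
model = torch.jit.load("yolov5s.torchscript.pt", map_location="cpu")
model.eval()

# Dummy BCHW input; real code uses the letterboxed image from the dataloader
img = torch.zeros(1, 3, 640, 640)
with torch.no_grad():
    out = model(img)  # raw predictions; NMS is applied afterwards as usual
```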
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* use os.path.relpath instead of relative_to
* Remove os.path from val.py
* Remove os.path from train.py
* Update detect.py import to os
* Update export.py import to os
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
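For context, `Path.relative_to()` raises a ValueError when the target is not inside the base directory, while `os.path.relpath()` returns a `..`-style path instead; a minimal illustration (paths are placeholders):
```python
import os
from pathlib import Path

save_dir = Path("runs/train/exp")
weights = Path("/tmp/yolov5s.pt")

# Path.relative_to() fails here because /tmp is not under runs/train/exp:
# weights.relative_to(save_dir)  ->  ValueError

# os.path.relpath() always produces a usable relative path
rel = os.path.relpath(weights, save_dir)  # e.g. '../../../tmp/yolov5s.pt'
print(rel)
```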
* Add models/tf.py for TensorFlow and TFLite export
* Set auto=False for int8 calibration
* Update requirements.txt for TensorFlow and TFLite export
* Read anchors directly from PyTorch weights
* Add --tf-nms to append NMS in TensorFlow SavedModel and GraphDef export
* Remove check_anchor_order, check_file, set_logging from import
* Reformat code and optimize imports
* Autodownload model and check cfg
* Update --source path, set img-size to 320, single output
* Adjust representative_dataset
* Put representative dataset in tfl_int8 block
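A rough sketch of full-integer TFLite calibration with a representative dataset, using the generic TensorFlow Lite converter API rather than the exact tf.py code (the Keras model below is a stand-in for the converted YOLOv5 model, and the calibration images are placeholders):
```python
import numpy as np
import tensorflow as tf

# Stand-in for the Keras-built YOLOv5 model produced by tf.py
keras_model = tf.keras.Sequential([tf.keras.layers.Conv2D(16, 3, input_shape=(320, 320, 3))])

def representative_dataset():
    # Yield calibration batches preprocessed exactly like inference inputs
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]  # placeholder images

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
open("model-int8.tflite", "wb").write(converter.convert())
```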
* detect.py TF inference
* weights to string
* cleanup tf.py
* Add --dynamic-batch-size
* Add xywh normalization to reduce calibration error
* Update requirements.txt
TensorFlow 2.3.1 -> 2.4.0 to avoid int8 quantization error
* Fix imports
Move C3 from models.experimental to models.common
* implement C3() and SiLU()
* Fix reshape dim to support dynamic batching
* Add epsilon argument to tf_BN, since the default epsilon differs between TF and PyTorch
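The mismatch being handled: Keras `BatchNormalization` defaults to epsilon=1e-3 while PyTorch `BatchNorm2d` defaults to eps=1e-5, so the PyTorch value must be carried over explicitly. A hedged sketch of the idea (the helper name is illustrative, not the exact tf.py class):
```python
import tensorflow as tf
import torch.nn as nn

def tf_bn_from_torch(bn: nn.BatchNorm2d) -> tf.keras.layers.BatchNormalization:
    """Build a Keras BatchNormalization layer initialized from a PyTorch BatchNorm2d."""
    return tf.keras.layers.BatchNormalization(
        epsilon=bn.eps,  # PyTorch default 1e-5, Keras default 1e-3 -- must be copied over
        beta_initializer=tf.keras.initializers.Constant(bn.bias.detach().numpy()),
        gamma_initializer=tf.keras.initializers.Constant(bn.weight.detach().numpy()),
        moving_mean_initializer=tf.keras.initializers.Constant(bn.running_mean.numpy()),
        moving_variance_initializer=tf.keras.initializers.Constant(bn.running_var.numpy()),
    )
```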
* Set stride to None if not using PyTorch, and do not warmup without PyTorch
* Add list support in check_img_size()
* Add list input support in detect.py
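The list handling can be pictured roughly like this (a sketch of the idea, not the exact utils implementation):
```python
import math

def make_divisible(x, divisor):
    # Round x up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor

def check_img_size(imgsz, s=32):
    # Accept either a single int or an [h, w] list and snap each value to a multiple of stride s
    if isinstance(imgsz, int):
        return make_divisible(imgsz, s)
    return [make_divisible(x, s) for x in imgsz]

print(check_img_size(639))         # 640
print(check_img_size([639, 321]))  # [640, 352]
```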
* sys.path.append('./') to run from yolov5/
* Add int8 quantization support for TensorFlow 2.5
* Add get_coco128.sh
* Remove --no-tfl-detect in models/tf.py (Use tf-android-tfl-detect branch for EdgeTPU)
* Update requirements.txt
* Replace torch.load() with attempt_load()
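`attempt_load()` is the repo helper that loads one or more checkpoints (building an ensemble when a list is passed) and applies compatibility fixes; a hedged sketch of the swap, assuming it is run from the repo root with a placeholder weights path:
```python
# Before: raw checkpoint loading
# model = torch.load("yolov5s.pt", map_location="cpu")["model"].float()

# After: use the repo helper, which also accepts a list of weights for an ensemble
from models.experimental import attempt_load

model = attempt_load("yolov5s.pt", map_location="cpu")  # placeholder path
```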
* Update requirements.txt
* Add --tf-raw-resize to set half_pixel_centers=False
* Add --agnostic-nms for TF class-agnostic NMS
* Cleanup after merge
* Cleanup2 after merge
* Cleanup3 after merge
* Add tf.py docstring with credit and usage
* pb saved_model and tflite use only one model in detect.py
* Add use cases in docstring of tf.py
* Remove redundant `stride` definition
* Remove keras direct import
* Fix `check_requirements(('tensorflow>=2.4.1',))`
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Added the recording feature for multiple streams
Thanks for the very cool repo!
I was trying to record multiple feeds at the same time, but the current version of the detector had only one video writer and one vid_path, so the streams were not actually saved: each writer was initialized with a single frame and never recorded the whole feed.
Fix:
I made `vid_writer` and `vid_path` lists, and the index `i` from the loop over `pred` selects the writer that needs to write, as sketched below.
I hope this helps, thanks!
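A simplified, self-contained sketch of that per-stream writer pattern (frames and paths are placeholders, not the literal detect.py code):
```python
import cv2
import numpy as np

bs = 2  # number of streams (placeholder)
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(bs)]  # stand-in annotated frames

vid_path = [None] * bs    # one save path per stream
vid_writer = [None] * bs  # one cv2.VideoWriter per stream

for i, im0 in enumerate(frames):     # i indexes the stream, as in the loop over pred
    save_path = f"stream{i}.mp4"
    if vid_path[i] != save_path:     # first frame of a new video -> (re)open this stream's writer
        vid_path[i] = save_path
        if isinstance(vid_writer[i], cv2.VideoWriter):
            vid_writer[i].release()  # release the previous writer for this stream
        fps, w, h = 30, im0.shape[1], im0.shape[0]
        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    vid_writer[i].write(im0)         # each stream now writes to its own file
```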
* Cleanup list lengths
* batch size variable
* Update datasets.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Slightly modify CLI execution
This simple change makes it easier to run the primary functions of this
repo (train/detect/test) from within Python. An object representing
`opt` can be constructed and fed to the `main` function of each of these
modules, rather than having to call the lower-level functions directly
or run the module as a script.
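For instance, a hypothetical snippet showing the intent (only a few `opt` fields are shown; the real object must carry every field that `main()` expects):
```python
import argparse
import detect

# Build an opt-like object instead of parsing sys.argv
opt = argparse.Namespace(
    weights="yolov5s.pt",
    source="data/images",
    imgsz=640,
    conf_thres=0.25,
)
detect.main(opt)  # run detection from Python rather than as a script
```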
* Update export.py
Add CLI parsing update for more convenient module usage within Python.
Co-authored-by: Lewis Belcher <lb@desupervised.io>
* Added max_det parameters in various places
* 120 character line
* PEP8
* 120 character line
* Update inference default to 1000 instances
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
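`max_det` simply caps how many detections are kept per image after NMS; a small illustrative sketch of that cap (not the repo's NMS code):
```python
import torch

def cap_detections(det: torch.Tensor, max_det: int = 1000) -> torch.Tensor:
    """Keep at most max_det boxes, preferring the highest-confidence ones.

    det: (n, 6) tensor of [x1, y1, x2, y2, conf, cls] rows.
    """
    if det.shape[0] > max_det:
        det = det[det[:, 4].argsort(descending=True)[:max_det]]
    return det

det = torch.rand(1500, 6)           # fake detections
print(cap_detections(det).shape)    # torch.Size([1000, 6])
```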
* command line option for line thickness and hiding labels
* command line option for hiding confidence values
* Update detect.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
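The resulting detect.py options can be pictured roughly like this (flag names per the commits above; defaults shown are illustrative):
```python
import argparse

parser = argparse.ArgumentParser()
# Bounding-box drawing options added to detect.py
parser.add_argument("--line-thickness", type=int, default=3, help="bounding box thickness (pixels)")
parser.add_argument("--hide-labels", action="store_true", help="hide class labels on boxes")
parser.add_argument("--hide-conf", action="store_true", help="hide confidence values on boxes")
opt = parser.parse_args([])

# e.g. python detect.py --line-thickness 2 --hide-conf
print(opt.line_thickness, opt.hide_labels, opt.hide_conf)
```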