Unverified commit ec4b6dd2, authored by Glenn Jocher, committed by GitHub

Update export format docstrings (#6151)

* Update export documentation
* Cleanup
* Update export.py
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update README.md
* Update README.md
* Update README.md
* Update train.py
* Update train.py

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Parent e1dc8943
@@ -62,15 +62,14 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on tr

<details open>
<summary>Install</summary>

Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a
[**Python>=3.6.0**](https://www.python.org/) environment, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).

<!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

</details>
@@ -78,8 +77,9 @@ $ pip install -r requirements.txt

<details open>
<summary>Inference</summary>

Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36).
[Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest
YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).

```python
import torch
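# A hedged sketch completing the snippet truncated by the diff view above; the hub entry
# point and sample image URL follow the standard YOLOv5 PyTorch Hub example.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # load pretrained YOLOv5s via PyTorch Hub

img = 'https://ultralytics.com/images/zidane.jpg'  # image source: file path, URL, PIL, OpenCV or numpy array

results = model(img)  # run inference
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
```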
@@ -104,17 +104,17 @@ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.

<details>
<summary>Inference with detect.py</summary>

`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from
the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
python detect.py --source 0  # webcam
                 img.jpg  # image
                 vid.mp4  # video
                 path/  # directory
                 path/*.jpg  # glob
                 'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                 'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```

</details>
@@ -122,16 +122,20 @@ $ python detect.py --source 0  # webcam

<details>
<summary>Training</summary>

The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh)
results. [Models](https://github.com/ultralytics/yolov5/tree/master/models)
and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest
YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are
1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) times faster). Use the
largest `--batch-size` possible, or pass `--batch-size -1` for
YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.

```bash
python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
                                       yolov5s                                64
                                       yolov5m                                40
                                       yolov5l                                24
                                       yolov5x                                16
```
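The [Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) speed-up mentioned above refers to DistributedDataParallel training, typically launched with `torch.distributed.run`; a minimal sketch, assuming 2 GPUs and an illustrative total batch size:

```bash
# Hedged Multi-GPU DDP sketch (illustrative values, not from this commit):
# --batch-size is the total across all GPUs and must be divisible by the GPU count
python -m torch.distributed.run --nproc_per_node 2 train.py --data coco.yaml --weights yolov5s.pt --batch-size 128 --device 0,1
```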
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">

@@ -225,6 +229,7 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi

### Pretrained Checkpoints

[assets]: https://github.com/ultralytics/yolov5/releases
[TTA]: https://github.com/ultralytics/yolov5/issues/303

|Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)

@@ -257,7 +262,6 @@ We love your input! We want to make contributing to YOLOv5 as easy and transpare

<a href="https://github.com/ultralytics/yolov5/graphs/contributors"><img src="https://opencollective.com/ultralytics/contributors.svg?width=990" /></a>

## <div align="center">Contact</div>

For YOLOv5 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business inquiries or
......
@@ -2,14 +2,26 @@

"""
Run inference on images, videos, directories, streams, etc.

Usage - sources:
    $ python path/to/detect.py --weights yolov5s.pt --source 0  # webcam
                                                               img.jpg  # image
                                                               vid.mp4  # video
                                                               path/  # directory
                                                               path/*.jpg  # glob
                                                               'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                                               'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream

Usage - formats:
    $ python path/to/detect.py --weights yolov5s.pt  # PyTorch
                                         yolov5s.torchscript  # TorchScript
                                         yolov5s.onnx  # ONNX Runtime or OpenCV DNN with --dnn
                                         yolov5s.mlmodel  # CoreML (under development)
                                         yolov5s_openvino_model  # OpenVINO (under development)
                                         yolov5s_saved_model  # TensorFlow SavedModel
                                         yolov5s.pb  # TensorFlow protobuf
                                         yolov5s.tflite  # TensorFlow Lite
                                         yolov5s_edgetpu.tflite  # TensorFlow Edge TPU
                                         yolov5s.engine  # TensorRT
"""
import argparse
......
@@ -2,18 +2,19 @@

"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit

Format                | Example                  | `--include ...` argument
---                   | ---                      | ---
PyTorch               | yolov5s.pt               | -
TorchScript           | yolov5s.torchscript      | `torchscript`
ONNX                  | yolov5s.onnx             | `onnx`
CoreML                | yolov5s.mlmodel          | `coreml`
OpenVINO              | yolov5s_openvino_model/  | `openvino`
TensorFlow SavedModel | yolov5s_saved_model/     | `saved_model`
TensorFlow GraphDef   | yolov5s.pb               | `pb`
TensorFlow Lite       | yolov5s.tflite           | `tflite`
TensorFlow Edge TPU   | yolov5s_edgetpu.tflite   | `edgetpu`
TensorFlow.js         | yolov5s_web_model/       | `tfjs`
TensorRT              | yolov5s.engine           | `engine`

Usage:
    $ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml openvino saved_model tflite tfjs
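For a single export target, the `--include` value in the table above selects the output file written next to the input weights; a minimal sketch (the chosen formats are illustrative):

```bash
# Hedged sketch: export only TorchScript and TFLite weights;
# per the table above this writes yolov5s.torchscript and yolov5s.tflite
python path/to/export.py --weights yolov5s.pt --include torchscript tflite
```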
@@ -27,6 +28,7 @@ Inference:

    yolov5s_saved_model
    yolov5s.pb
    yolov5s.tflite
    yolov5s_edgetpu.tflite
    yolov5s.engine

TensorFlow.js:
......
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Train a YOLOv5 model on a custom dataset.

Models and datasets download automatically from the latest YOLOv5 release.
Models: https://github.com/ultralytics/yolov5/tree/master/models
Datasets: https://github.com/ultralytics/yolov5/tree/master/data
Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

Usage:
    $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640  # from pretrained (RECOMMENDED)
    $ python path/to/train.py --data coco128.yaml --weights '' --cfg yolov5s.yaml --img 640  # from scratch
"""

import argparse
import math
import os
......
@@ -3,7 +3,19 @@

Validate a trained YOLOv5 model accuracy on a custom dataset

Usage:
    $ python path/to/val.py --weights yolov5s.pt --data coco128.yaml --img 640

Usage - formats:
    $ python path/to/val.py --weights yolov5s.pt  # PyTorch
                                      yolov5s.torchscript  # TorchScript
                                      yolov5s.onnx  # ONNX Runtime or OpenCV DNN with --dnn
                                      yolov5s.mlmodel  # CoreML (under development)
                                      yolov5s_openvino_model  # OpenVINO (under development)
                                      yolov5s_saved_model  # TensorFlow SavedModel
                                      yolov5s.pb  # TensorFlow protobuf
                                      yolov5s.tflite  # TensorFlow Lite
                                      yolov5s_edgetpu.tflite  # TensorFlow Edge TPU
                                      yolov5s.engine  # TensorRT
"""
import argparse
......