- 25 April 2021, 1 commit

Committed by albinxavi

- 24 April 2021, 7 commits

Committed by Glenn Jocher

Committed by Glenn Jocher

* Update download() for tar.gz files
* Update general.py
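The tar.gz handling named above can be sketched with the standard library; `extract_targz` is a hypothetical helper, not the actual `download()` in general.py:

```python
import tarfile
from pathlib import Path

def extract_targz(archive: Path, dest: Path) -> None:
    """Extract a .tar.gz archive into dest (hypothetical helper)."""
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=dest)  # unpack all members into dest
```

The real download() also fetches the archive and removes it after unpacking; this sketch covers only the extraction step.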

Committed by Glenn Jocher

* Add file_size() function
* Update export.py
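A file_size() utility of this kind is typically a one-liner over os.stat; this is a sketch, and the real helper's units and signature may differ:

```python
from pathlib import Path

def file_size(path) -> float:
    """Return file size in megabytes (sketch; assumes the path exists)."""
    return Path(path).stat().st_size / 1e6
```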

Committed by Glenn Jocher

* Update export.py for 2 dry runs
* Update export.py

Committed by Glenn Jocher

Committed by Glenn Jocher

Per https://pytorch.org/tutorials/recipes/script_optimized.html this should improve performance on TorchScript models (and possibly also CoreML models, since coremltools operates on a TorchScript model input, though this still requires testing).

Committed by Maximilian Peters

* command line options for line thickness and hiding labels
* command line option for hiding confidence values
* Update detect.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
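The new flags can be sketched with argparse; the option names follow the commit messages, and the defaults are assumptions:

```python
import argparse

def make_parser() -> argparse.ArgumentParser:
    """detect.py-style CLI options for box drawing (sketch, defaults assumed)."""
    p = argparse.ArgumentParser()
    p.add_argument("--line-thickness", type=int, default=3, help="bounding box thickness (pixels)")
    p.add_argument("--hide-labels", action="store_true", help="hide labels")
    p.add_argument("--hide-conf", action="store_true", help="hide confidence values")
    return p
```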

- 23 April 2021, 3 commits

Committed by Glenn Jocher

Committed by Glenn Jocher

Committed by fcakyon

* more explicit function arguments
* fix typo in detect.py
* revert import order
* revert import order
* remove default value

- 22 April 2021, 4 commits

Committed by Glenn Jocher

* ACON Activation Function

  🚀 Feature: a new activation function, [ACON (CVPR 2021)](https://arxiv.org/pdf/2009.04759.pdf), unifies ReLU and Swish. ACON is simple but very effective; code is here: https://github.com/nmaac/acon/blob/main/acon.py#L19. The improvements are very significant.

  Alternatives: there is also an enhanced version, meta-ACON, that uses a small network to learn beta explicitly, which may affect speed slightly.

  Additional context: [code](https://github.com/nmaac/acon) and [paper](https://arxiv.org/pdf/2009.04759.pdf).
* Update activations.py
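For reference, ACON-C from the linked paper is (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x; a scalar sketch (parameter defaults assumed) that reduces to Swish at p1=1, p2=0, beta=1 and approaches ReLU as beta grows:

```python
import math

def acon_c(x: float, p1: float = 1.0, p2: float = 0.0, beta: float = 1.0) -> float:
    """ACON-C activation: (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x."""
    d = (p1 - p2) * x
    return d / (1.0 + math.exp(-beta * d)) + p2 * x
```

In the paper p1, p2, and beta are learnable per-channel tensors; meta-ACON predicts beta from the input with a small network.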

Committed by r-blmnr

Committed by Glenn Jocher

* VisDrone Dataset Auto-Download
* add visdrone.yaml
* cleanup
* add VisDrone2019-DET-test-dev
* cleanup VOC
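A YOLOv5 dataset YAML of this kind generally looks like the sketch below; the paths and class names here are illustrative, not the actual visdrone.yaml:

```yaml
# illustrative dataset config, not the actual visdrone.yaml
train: ../VisDrone/VisDrone2019-DET-train/images  # training images
val: ../VisDrone/VisDrone2019-DET-val/images      # validation images

nc: 10  # number of classes
names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']
```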

Committed by Michael Heilig

- 21 April 2021, 4 commits

Committed by JoshSong

* don't resize up in load_image if augmenting
* cleanup

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
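The resizing rule can be sketched as a pure function (hypothetical helper, not the actual load_image): scale the longer side to img_size, but skip upscaling when augmenting:

```python
def target_hw(h: int, w: int, img_size: int, augment: bool):
    """Return the (h, w) to resize to: longer side -> img_size, but never upscale if augmenting (sketch)."""
    r = img_size / max(h, w)  # scale ratio
    if augment and r > 1:
        r = 1.0  # don't resize up in load_image if augmenting
    return int(h * r), int(w * r)
```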

Committed by Glenn Jocher

* Implement yaml.safe_load()
* yaml.safe_dump()
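The point of the change is that yaml.safe_load() only constructs plain Python types, whereas yaml.load() with the default Loader could instantiate arbitrary objects from a tag like `!!python/object`. A minimal round trip (assuming PyYAML is installed):

```python
import yaml  # PyYAML

doc = "nc: 80\nnames: [person, bicycle]"
data = yaml.safe_load(doc)               # plain dict/list/str/int values only
text = yaml.safe_dump(data, sort_keys=False)
```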

Committed by Burhan

* Update detect.py
* Update greetings.yml
* Update cropping
* Update increment_path()
* Update common.py
* cleanup

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
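increment_path() picks the next free run directory (runs/exp, runs/exp2, ...); a minimal sketch of the idea, not the exact implementation:

```python
from pathlib import Path

def increment_path(path, exist_ok: bool = False) -> Path:
    """Return path if free (or exist_ok), else the next unused path with a numeric suffix (sketch)."""
    p = Path(path)
    if exist_ok or not p.exists():
        return p
    n = 2
    while Path(f"{p}{n}").exists():
        n += 1
    return Path(f"{p}{n}")
```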

Committed by Glenn Jocher

- 20 April 2021, 1 commit

Committed by Tim Stokman

* Ensure dynamic export works successfully, onnx simplifier optional
* Update export.py
* add dashes

Co-authored-by: Tim <tim.stokman@hal24k.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

- 18 April 2021, 3 commits

Committed by Glenn Jocher

* Update FUNDING.yml
* move FUNDING.yml to ./github

Committed by Glenn Jocher

Committed by Glenn Jocher

* PyTorch Hub cv2 .save() .show() bug fix

  cv2.rectangle() was failing on non-contiguous np array inputs. This checks for contiguous arrays and applies np.ascontiguousarray() as necessary:

  ```python
  imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im)  # update
  ```

* Update plots.py

  ```python
  assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to plot_on_box() input image.'
  ```

* Update hubconf.py

  Expand CI tests to OpenCV image.
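The failure is easy to reproduce in plain NumPy: many views (transposes, flips, channel reorders) are valid arrays but not C-contiguous, which is what cv2.rectangle() choked on. A small demonstration of the same check:

```python
import numpy as np

im = np.zeros((4, 6, 3), dtype=np.uint8)
view = im.transpose(1, 0, 2)  # a view with swapped axes; valid, but not C-contiguous
fixed = view if view.data.contiguous else np.ascontiguousarray(view)  # same check as the fix
```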

- 16 April 2021, 2 commits

Committed by Glenn Jocher

Fix for #2810, introduced by the YouTube Livestream Detection PR #2752:

```shell
python detect.py --source 0
```
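detect.py decides what --source refers to before opening it; roughly, a numeric string selects a webcam index and a URL a stream. A simplified sketch (hypothetical helper, not the actual checks):

```python
def source_type(source: str) -> str:
    """Classify a --source argument (sketch): webcam index, network stream, or local file/glob."""
    if source.isnumeric():
        return "webcam"
    if source.lower().startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return "stream"
    return "file"
```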

Committed by Glenn Jocher

* ONNX Simplifier

  Add ONNX Simplifier to the ONNX export pipeline in export.py. Will auto-install onnx-simplifier if onnx is installed but onnx-simplifier is not.
* Update general.py
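The auto-install guard can be sketched with the standard library: test importability with importlib, and only shell out to pip for what is missing. The package names here are illustrative, and the real check_requirements() differs:

```python
import importlib.util
import subprocess
import sys

def missing(packages):
    """Return the module names from packages that are not importable."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

def auto_install(packages):
    """pip-install any missing packages (sketch of the auto-install idea)."""
    need = missing(packages)
    if need:
        subprocess.check_call([sys.executable, "-m", "pip", "install", *need])
```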

- 15 April 2021, 2 commits

Committed by Glenn Jocher

Committed by Robin

* add files
* Update README.md
* Update restapi.py

  pretrained=True and model.eval() are now used by default when loading a model, so there is no need to call them manually.
* PEP8 reformat

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

- 12 April 2021, 9 commits

Committed by Glenn Jocher

Committed by Glenn Jocher

Committed by Glenn Jocher

Committed by Glenn Jocher

Committed by Glenn Jocher

Committed by Glenn Jocher

Committed by Glenn Jocher

* torch.jit.trace(model, img, strict=False)
* Update check_file()
* Update hubconf.py
* Update README.md

Committed by Glenn Jocher

Committed by Ben Milanko

* YouTube livestream detection
* dependency update to auto install pafy
* Remove print
* include youtube_dl in deps
* PEP8 reformat
* youtube url check fix
* reduce lines
* add comment
* update check_requirements
* stream framerate fix
* Update README.md
* cleanup
* remove cap.retrieve() failure code

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
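The YouTube URL check can be sketched with urllib; the hostnames here are assumptions covering the common cases:

```python
from urllib.parse import urlparse

def is_youtube(url: str) -> bool:
    """True if url points at YouTube (sketch of the url check)."""
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    return host in ("youtube.com", "youtu.be")
```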

- 11 April 2021, 3 commits

Committed by Glenn Jocher

Committed by Glenn Jocher
This fix should allow YOLOv5 model graphs to be visualized correctly in TensorBoard by uncommenting line 335 in train.py:

```python
if tb_writer:
    tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), [])  # add model graph
```

The problem was that the Detect() layer checks the input size to adapt the grid if required, and tracing does not seem to like this shape check (even if the shape is fine and no grid recomputation is required). The following will warn: https://github.com/ultralytics/yolov5/blob/0cae7576a9241110157cd154fc2237e703c2719e/train.py#L335 The solution is below. This is a YOLOv5s model displayed in TensorBoard. You can see the Detect() layer merging the 3 layers into a single output, for example, and everything appears to work and visualize correctly.

```python
tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), [])
```

<img width="893" alt="Screenshot 2021-04-11 at 01 10 09" src="https://user-images.githubusercontent.com/26833433/114286928-349bd600-9a63-11eb-941f-7139ee6cd602.png">

Committed by Glenn Jocher

* wandb_logging PEP8 reformat
* Update wandb_utils.py

- 10 April 2021, 1 commit

Committed by Glenn Jocher

PR https://github.com/ultralytics/yolov5/pull/2725 introduced a very specific bug that only affects multi-GPU training. Apparently the cause was using the torch.cuda.amp decorator in the autoShape forward method. amp is implemented more traditionally in this PR, and the bug is resolved.