- 27 Mar 2021, 1 commit
-
-
Committed by Ayush Chaurasia
* Fix Indentation in test.py * CI fix * Comply with PEP8: 80 characters per line
-
- 26 Mar 2021, 6 commits
-
-
Committed by Glenn Jocher
* Update detections() self.t = tuple() Fix multiple results.print() bug. * Update experimental.py * Update yolo.py
-
Committed by Glenn Jocher
Updated device selection string with fallback for non-git directories.
```python
def select_device(device='', batch_size=None):
    # device = 'cpu' or '0' or '0,1,2,3'
    s = f'YOLOv5 🚀 {git_describe() or date_modified()} torch {torch.__version__} '  # string
    ...
```
-
Committed by maxupp
Co-authored-by:
Max Uppenkamp <max.uppenkamp@inform-software.com>
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
Cython should be a dependency of the remaining packages in requirements.txt, so it should be installed anyway even if it is not a direct requirement.
-
Committed by Glenn Jocher
-
- 25 Mar 2021, 4 commits
-
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
This updates the default detect.py behavior to automatically save all inference images/videos/webcams unless the new argument --nosave is used (python detect.py --nosave) or unless a list of streaming sources is passed (python detect.py --source streams.txt)
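The save/skip rule described above can be sketched as a small predicate (the helper name `should_save` is hypothetical, not the function used in detect.py): save unless `--nosave` is passed or the source is a `.txt` list of streaming sources.

```python
# Minimal sketch of the default-save behaviour described in the commit.
# The helper name is illustrative only.
def should_save(nosave: bool, source: str) -> bool:
    """Return True when detect.py would save annotated inference media."""
    return not nosave and not source.endswith('.txt')
```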
-
Committed by Max Kolomeychenko
Guide describing the YOLOv5 apps collection in the Supervisely Ecosystem.
-
Committed by Glenn Jocher
Improved user-feedback following requirements auto-update.
-
- 24 Mar 2021, 6 commits
-
-
Committed by Glenn Jocher
Prints 'Please restart runtime or rerun command for update to take effect.' following package auto-install to inform users to restart/rerun.
-
Committed by Glenn Jocher
* YOLOv5 PyTorch Hub models >> check_requirements() Update YOLOv5 PyTorch Hub requirements.txt path to cache path. * Update hubconf.py
-
Committed by Glenn Jocher
Revert unintentional change to test batch sizes caused by PR https://github.com/ultralytics/yolov5/pull/2125
-
Committed by Glenn Jocher
* Update hubconf.py with check_requirements() Dependency checks have been missing from YOLOv5 PyTorch Hub model loading, causing errors in some cases when users are attempting to import hub models in unsupported environments. This should examine the YOLOv5 requirements.txt file and pip install any missing or version-conflict packages encountered. This is highly experimental (!), please let us know if this creates problems in your custom workflows. * Update hubconf.py
-
Committed by Glenn Jocher
* Update tensorboard>=2.4.1 Update tensorboard version to attempt to address https://github.com/ultralytics/yolov5/issues/2573 (tensorboard logging fail in Docker image). * cleanup
-
Committed by Glenn Jocher
* Update check_requirements() with auto-install This PR builds on an idea I had to automatically install missing dependencies rather than simply report an error message. YOLOv5 should now 1) display all dependency issues and not simply display the first missing dependency, and 2) attempt to install/update each missing/VersionConflict package. * cleanup * cleanup 2 * Check requirements.txt file exists * cleanup 3
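The auto-install idea above can be sketched roughly as follows. This is a hedged, stdlib-only approximation (it checks only package presence, not version specifiers, which the real `check_requirements()` also handles), showing the two behaviours the commit names: reporting every missing requirement rather than failing on the first, and checking that the requirements file exists.

```python
import subprocess
import sys
from importlib import metadata
from pathlib import Path


def check_requirements(file='requirements.txt', install=False):
    """Report all missing requirements; optionally pip-install each one.

    Simplified sketch: version pins are stripped before the presence check.
    """
    path = Path(file)
    if not path.exists():  # the PR also adds this existence check
        return []
    missing = []
    for line in path.read_text().splitlines():
        name = line.split('#')[0].strip()  # drop comments/blank lines
        if not name:
            continue
        dist = name.split('>=')[0].split('==')[0].strip()
        try:
            metadata.version(dist)  # raises PackageNotFoundError if absent
        except metadata.PackageNotFoundError:
            missing.append(name)
            if install:
                subprocess.run([sys.executable, '-m', 'pip', 'install', name],
                               check=False)
    return missing
```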
-
- 23 Mar 2021, 4 commits
-
-
Committed by Ayush Chaurasia
-
Committed by Glenn Jocher
Exclude non-critical packages from dependency checks in detect.py. pycocotools and thop in particular are not required for inference. Issue first raised in https://github.com/ultralytics/yolov5/issues/1944 and also raised in https://github.com/ultralytics/yolov5/discussions/2556
-
Committed by Glenn Jocher
Fix for results.tolist() method breaking after YOLOv5 Hub profiling PRs https://github.com/ultralytics/yolov5/pull/2460 and https://github.com/ultralytics/yolov5/pull/2459
-
Committed by Ayush Chaurasia
* Init Commit * new wandb integration * Update * Use data_dict in test * Updates * Update: scope of log_img * Update: scope of log_img * Update * Update: Fix logging conditions * Add tqdm bar, support for .txt dataset format * Improve Result table Logger * Add dataset creation in training script * Change scope: self.wandb_run * Add wandb-artifact:// natively, you can now use --resume with wandb run links * Add support for logging dataset while training * Cleanup * Fix: Merge conflict * Fix: CI tests * Automatically use wandb config * Fix: Resume * Fix: CI * Enhance: Using val_table * More resume enhancement * FIX: CI * Add alias * Get useful opt config data * train.py cleanup * Cleanup train.py * more cleanup * Cleanup | CI fix * Reformat using PEP8 * FIX: CI * rebase * remove unnecessary changes * remove unnecessary changes * remove unnecessary changes * remove unnecessary change from test.py * FIX: resume from local checkpoint * FIX: resume * FIX: resume * Reformat * Performance improvement * Fix local resume * Fix local resume * FIX: CI * Fix: CI * Improve image logging * (:(:Redo CI tests:):) * Remember epochs when resuming * Remember epochs when resuming * Update DDP location Potential fix for #2405 * PEP8 reformat * 0.25 confidence threshold * reset train.py plots syntax to previous * reset epochs completed syntax to previous * reset space to previous * remove brackets * reset comment to previous * Update: is_coco check, remove unused code * Remove redundant print statement * Remove wandb imports * remove dsviz logger from test.py * Remove redundant change from test.py * remove redundant changes from train.py * reformat and improvements * Fix typo * Add tqdm progress when scanning files, naming improvements
Co-authored-by:
Glenn Jocher <glenn.jocher@ultralytics.com>
-
- 15 Mar 2021, 4 commits
-
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
* PyTorch Hub models default to CUDA:0 if available * device as string bug fix
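The default described above reduces to a simple rule, sketched here with the CUDA check factored out as a parameter so the logic is testable without torch (the helper name `default_hub_device` is hypothetical): fall back to `'cuda:0'` when CUDA is available, else `'cpu'`, handling the device as a plain string (the "device as string" fix).

```python
# Sketch of the hub default-device rule; cuda_available stands in for
# torch.cuda.is_available() so this runs without a torch install.
def default_hub_device(cuda_available: bool, device: str = '') -> str:
    """Return the device string a hub model would use."""
    return device or ('cuda:0' if cuda_available else 'cpu')
```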
-
-
Committed by Yann Defretin
* Be able to create dataset from annotated images only Add the ability to create a dataset/splits only with images that have an annotation file (i.e. a .txt file) associated with them. As we talked about, the absence of a txt file could mean two things: either the image wasn't yet labelled by someone, or there is no object to detect. While it's easy to create small datasets, when you have to create datasets with thousands of images (and more coming), it's hard to track where you're at, and you don't want to wait until all of them are annotated before starting to train. That means some images would lack txt files and annotations, resulting in label inconsistency as you say in #2313. By adding the annotated_only argument to the function, people can create, if they want, datasets/splits only with images that were definitely labelled. * Cleanup and update print() Co-authored-by:
Glenn Jocher <glenn.jocher@ultralytics.com>
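The `annotated_only` idea above can be sketched as a small split function: before assigning images to train/val/test, optionally keep only images whose sibling `.txt` label file exists, so unlabelled images never enter the splits. This is a hedged approximation, not the actual utility in the repository.

```python
import random
from pathlib import Path


def autosplit(files, weights=(0.9, 0.1, 0.0), annotated_only=False, seed=0):
    """Randomly assign image paths to (train, val, test) splits.

    With annotated_only=True, images without a sibling .txt label file
    are dropped before splitting.
    """
    files = [Path(f) for f in files]
    if annotated_only:
        files = [f for f in files if f.with_suffix('.txt').exists()]
    random.seed(seed)
    splits = ([], [], [])  # train, val, test
    for f in files:
        splits[random.choices((0, 1, 2), weights=weights)[0]].append(str(f))
    return splits
```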
-
- 14 Mar 2021, 4 commits
-
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
* Add autoShape() speed profiling * Update common.py * Create README.md * Update hubconf.py * cleanup
-
- 13 Mar 2021, 4 commits
-
-
Committed by Glenn Jocher
curl preferred over wget for slightly better cross-platform compatibility (i.e. macOS-compatible out of the box).
-
Committed by Glenn Jocher
* labels.png class names * fontsize=10
-
Committed by Glenn Jocher
* Update test.py --task train val study * update argparser --task
-
Committed by Glenn Jocher
* Integer printout * test.py 'Labels' * Update train.py
-
- 10 Mar 2021, 3 commits
-
-
Committed by Glenn Jocher
-
Committed by Glenn Jocher
-
Committed by Kartikeya Sharma
* added argoverse-download ability * bugfix * add support for Argoverse dataset * Refactored code * renamed to argoverse-HD * unzip -q and YOLOv5 small cleanup items * add image counts Co-authored-by:
Kartikeya Sharma <kartikes@trinity.vision.cs.cmu.edu> Co-authored-by:
Kartikeya Sharma <kartikes@trinity-0-32.eth> Co-authored-by:
Glenn Jocher <glenn.jocher@ultralytics.com>
-
- 08 Mar 2021, 2 commits
-
-
Committed by Glenn Jocher
* GCP sudo docker * cleanup
-
Committed by Glenn Jocher
-
- 07 Mar 2021, 2 commits
-
-
Committed by Glenn Jocher
-
Committed by Jan Hajek
* option to skip the last layer, and CUDA export support * added parameter device * fix import * cleanup 1 * cleanup 2 * opt-in grid: --grid will export with grid computation; default export will skip grid (same as current) * default --device cpu, as GPU export causes ONNX and CoreML errors. Co-authored-by:
Jan Hajek <jan.hajek@gmail.com> Co-authored-by:
Glenn Jocher <glenn.jocher@ultralytics.com>
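The two export flags described above can be sketched with argparse (the parser wrapper `parse_export_args` is a hypothetical name): `--grid` opts in to exporting grid computation, and `--device` defaults to `'cpu'` because GPU export reportedly broke ONNX and CoreML.

```python
import argparse


def parse_export_args(argv=None):
    """Sketch of the export CLI flags this PR describes."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--grid', action='store_true',
                        help='export with Detect() grid computation (opt-in)')
    parser.add_argument('--device', default='cpu',
                        help="cuda device, i.e. '0', or 'cpu' (default)")
    return parser.parse_args(argv)
```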
-