1. 09 Jul, 2021 2 commits
  2. 08 Jul, 2021 3 commits
  3. 07 Jul, 2021 5 commits
  4. 06 Jul, 2021 1 commit
    • YOLOv5 + Albumentations integration (#3882) · 33202b7f
      Glenn Jocher committed
      * Albumentations integration
      
      * ToGray p=0.01
      
      * print confirmation
      
      * create instance in dataloader init method
      
      * improved version handling
      
      * transform not defined fix
      
      * assert string update
      
      * create check_version()
      
      * add spaces
      
      * update class comment
      33202b7f
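      The commit above mentions "improved version handling" and a new `check_version()` helper, but the helper itself is not shown in this log. As a rough illustration of what dotted-version checking involves (the name and logic here are a hypothetical sketch; the actual YOLOv5 helper may differ, e.g. by delegating to a packaging library):

      ```python
      def check_version(current, minimum):
          """Return True if version string `current` >= `minimum`.
          Compares dotted components numerically, so '1.10' ranks above '1.9'.
          Hypothetical sketch, not the actual YOLOv5 implementation."""
          def parse(v):
              return tuple(int(p) for p in v.split("."))
          return parse(current) >= parse(minimum)
      ```

      The numeric parse matters because a plain string comparison would rank "1.9" above "1.10".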
  5. 05 Jul, 2021 4 commits
  6. 04 Jul, 2021 4 commits
  7. 02 Jul, 2021 3 commits
  8. 01 Jul, 2021 1 commit
  9. 30 Jun, 2021 2 commits
  10. 29 Jun, 2021 3 commits
  11. 28 Jun, 2021 3 commits
    • Update `feature_visualization()` (#3807) · 02719dde
      Glenn Jocher committed
      * Update `feature_visualization()`
      
      Only plot for data with height, width > 1
      
      * cleanup
      
      * Cleanup
      02719dde
    • Add feature map visualization (#3804) · 20d45aa4
      Zigarss committed
      * Add feature map visualization
      
      Add a feature_visualization function to visualize the intermediate feature maps of the model.
      
      * Update yolo.py
      
      * remove boolean from forward and reorder if statement
      
      * remove print from forward
      
      * General cleanup
      
      * Indent
      
      * Update plots.py
      Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
      20d45aa4
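      The two commits above add a `feature_visualization()` that plots intermediate feature maps, restricted to outputs whose spatial size is larger than 1x1. A minimal, framework-free sketch of that selection logic (the function name and the list-of-lists representation are illustrative; the real function operates on PyTorch tensors and saves image grids):

      ```python
      def select_feature_maps(features, n=32):
          """Pick up to `n` channel maps from one image's features for plotting.
          `features`: list of 2-D channel maps (each a list of rows).
          Returns [] when height or width is 1, since an output whose spatial
          dims have collapsed (e.g. a detection-head vector) has nothing to plot.
          Illustrative sketch, not the actual YOLOv5 code."""
          if not features or not features[0]:
              return []
          h, w = len(features[0]), len(features[0][0])
          if h > 1 and w > 1:  # only plot data with height, width > 1
              return features[:n]
          return []
      ```

      The guard mirrors the follow-up fix in #3807: skipping degenerate maps avoids plotting blank or one-pixel panels.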
    • Fix warmup `accumulate` (#3722) · 3974d725
      yellowdolphin committed
      * gradient accumulation during warmup in train.py
      
      Context:
      `accumulate` is the number of batches/gradients accumulated before calling the next optimizer.step().
      During warmup, it is ramped up from 1 to the final value nbs / batch_size. 
      Although I have not seen this in other libraries, I like the idea: during warmup, when gradients are large, overly large steps are more of an issue than the gradient noise caused by small steps.
      
      The bug:
      The condition for performing the optimizer step is wrong:
      > if ni % accumulate == 0:
      This produces irregular step sizes if `accumulate` is not constant. It becomes relevant when batch_size is small and `accumulate` changes many times during warmup.
      
      This demo also shows the proposed solution, to use a ">=" condition instead:
      https://colab.research.google.com/drive/1MA2z2eCXYB_BC5UZqgXueqL_y1Tz_XVq?usp=sharing
      
      Further, I propose not to restrict the number of warmup iterations to >= 1000. If the user changes hyp['warmup_epochs'], this causes unexpected behavior. It also makes hyperparameter evolution unstable if this parameter were to be optimized.
      
      * replace last_opt_step tracking by do_step(ni)
      
      * add docstrings
      
      * move down nw
      
      * Update train.py
      
      * revert math import move
      Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
      3974d725
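      The difference between the two step conditions can be demonstrated without a training loop. In this hedged sketch (the function name and the linear warmup ramp are illustrative, not the exact train.py code), tracking `last_opt_step` with a ">=" test keeps the gap between optimizer steps bounded by the current `accumulate`, while the "%" test does not:

      ```python
      def simulate_warmup_steps(num_iters, nw, final_accumulate, use_fix):
          """Return the iteration indices at which the optimizer would step.
          `accumulate` ramps linearly from 1 to `final_accumulate` over the
          first `nw` iterations (illustrative ramp, not the exact train.py one)."""
          last_opt_step = -1
          steps = []
          for ni in range(num_iters):
              frac = min(ni / nw, 1.0)
              accumulate = max(1, round(1 + frac * (final_accumulate - 1)))
              if use_fix:
                  # proposed: step once `accumulate` gradients have piled up
                  if ni - last_opt_step >= accumulate:
                      steps.append(ni)
                      last_opt_step = ni
              else:
                  # original: divisibility test, irregular while accumulate changes
                  if ni % accumulate == 0:
                      steps.append(ni)
          return steps
      ```

      With, say, nw=10 and final_accumulate=4, the "%" condition leaves a six-iteration gap mid-warmup, while under the ">=" condition no gap ever exceeds the current `accumulate`.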
  12. 27 Jun, 2021 1 commit
  13. 26 Jun, 2021 8 commits