src.yolov5.utils package
Subpackages
- src.yolov5.utils.aws package
- src.yolov5.utils.wandb_logging package
- Submodules
- src.yolov5.utils.wandb_logging.log_dataset module
- src.yolov5.utils.wandb_logging.wandb_utils module
WandbLogger
WandbLogger.check_and_upload_dataset()
WandbLogger.create_dataset_table()
WandbLogger.download_dataset_artifact()
WandbLogger.download_model_artifact()
WandbLogger.end_epoch()
WandbLogger.finish_run()
WandbLogger.log()
WandbLogger.log_dataset_artifact()
WandbLogger.log_model()
WandbLogger.log_training_progress()
WandbLogger.map_val_table_path()
WandbLogger.setup_training()
check_wandb_config_file()
check_wandb_resume()
get_run_info()
process_wandb_config_ddp_mode()
remove_prefix()
- Module contents
Submodules
src.yolov5.utils.activations module
- class src.yolov5.utils.activations.AconC(c1)[source]
Bases: Module
ACON activation (activate or not). AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter according to “Activate or Not: Learning Customized Activation” <https://arxiv.org/pdf/2009.04759.pdf>.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
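The AconC expression above maps directly onto code. A minimal sketch of the formula as stated (the class and attribute names here are illustrative, not necessarily those of the actual implementation):

import torch
import torch.nn as nn

class AconCSketch(nn.Module):
    # (p1*x - p2*x) * sigmoid(beta * (p1*x - p2*x)) + p2*x,
    # with per-channel learnable p1, p2 and beta.
    def __init__(self, c1):
        super().__init__()
        self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
        self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))

    def forward(self, x):
        dpx = (self.p1 - self.p2) * x          # p1*x - p2*x
        return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x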
- class src.yolov5.utils.activations.FReLU(c1, k=3)[source]
Bases: Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class src.yolov5.utils.activations.Hardswish[source]
Bases: Module
- static forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class src.yolov5.utils.activations.MemoryEfficientMish[source]
Bases: Module
- class F[source]
Bases: Function
- static backward(ctx, grad_output)[source]
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, x)[source]
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
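The inner Function is what makes this Mish memory-efficient: forward saves only the input tensor, and backward recomputes the intermediates instead of storing them. A sketch under that reading (names illustrative; Mish(x) = x * tanh(softplus(x))):

import torch
import torch.nn.functional as F

class MishFunctionSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)               # keep only x, not intermediates
        return x * torch.tanh(F.softplus(x))   # mish(x) = x * tanh(ln(1 + e^x))

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        sx = torch.sigmoid(x)                  # recomputed in backward
        fx = torch.tanh(F.softplus(x))
        return grad_output * (fx + x * sx * (1 - fx * fx))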
- class src.yolov5.utils.activations.MetaAconC(c1, k=1, s=1, r=16)[source]
Bases: Module
ACON activation (activate or not). MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network according to “Activate or Not: Learning Customized Activation” <https://arxiv.org/pdf/2009.04759.pdf>.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
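MetaAconC differs from AconC only in how beta is obtained: instead of a free parameter, a small bottleneck network over globally pooled features generates it. A hedged sketch of that idea (the bottleneck width follows the r=16 reduction in the signature; the exact architecture is an assumption):

import torch
import torch.nn as nn

class MetaAconCSketch(nn.Module):
    def __init__(self, c1, k=1, s=1, r=16):
        super().__init__()
        c2 = max(r, c1 // r)                         # bottleneck width
        self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
        self.fc1 = nn.Conv2d(c1, c2, k, s)
        self.fc2 = nn.Conv2d(c2, c1, k, s)

    def forward(self, x):
        y = x.mean(dim=(2, 3), keepdim=True)         # global average pool
        beta = torch.sigmoid(self.fc2(self.fc1(y)))  # beta from a small network
        dpx = (self.p1 - self.p2) * x
        return dpx * torch.sigmoid(beta * dpx) + self.p2 * x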
- class src.yolov5.utils.activations.Mish[source]
Bases: Module
- static forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class src.yolov5.utils.activations.SiLU[source]
Bases: Module
- static forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
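Hardswish, Mish, and SiLU above are stateless elementwise activations (hence the static forward methods). For reference, their standard definitions as plain functions (a sketch; the module versions may differ in export-related details):

import torch
import torch.nn.functional as F

def hardswish(x):
    return x * F.hardtanh(x + 3.0, 0.0, 6.0) / 6.0    # x * relu6(x + 3) / 6

def mish(x):
    return x * F.softplus(x).tanh()                   # x * tanh(softplus(x))

def silu(x):
    return x * torch.sigmoid(x)                       # x * sigmoid(x), a.k.a. Swish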
src.yolov5.utils.autoanchor module
- src.yolov5.utils.autoanchor.kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True)[source]
Creates kmeans-evolved anchors from a training dataset.
- Parameters
path – path to dataset *.yaml, or a loaded dataset
n – number of anchors
img_size – image size used for training
thr – anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
gen – generations to evolve anchors using genetic algorithm
verbose – print all results
- Returns
kmeans evolved anchors
- Return type
k
- Usage:
from utils.autoanchor import *; _ = kmean_anchors()
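Expanding the one-liner above into an explicit call (the dataset yaml path is illustrative):

from src.yolov5.utils.autoanchor import kmean_anchors

# Evolve 9 anchors for 640-px training; thr mirrors hyp['anchor_t']
anchors = kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640,
                        thr=4.0, gen=1000, verbose=False)
print(anchors)  # (n, 2) anchor widths and heights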
src.yolov5.utils.datasets module
- class src.yolov5.utils.datasets.InfiniteDataLoader(*args, **kwargs)[source]
Bases: DataLoader
Dataloader that reuses workers. Uses the same syntax as the vanilla DataLoader.
- batch_size: Optional[int]
- dataset: Dataset[T_co]
- drop_last: bool
- num_workers: int
- pin_memory: bool
- prefetch_factor: int
- sampler: Sampler
- timeout: float
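Because the constructor signature matches DataLoader, it is a drop-in replacement; the difference only shows up across epochs, where worker processes are reused rather than respawned. A minimal usage sketch with a dummy dataset:

import torch
from torch.utils.data import TensorDataset
from src.yolov5.utils.datasets import InfiniteDataLoader

data = TensorDataset(torch.randn(64, 3, 32, 32), torch.zeros(64))
loader = InfiniteDataLoader(data, batch_size=16, num_workers=2, shuffle=True)
for epoch in range(3):            # re-iterating reuses the same workers
    for imgs, labels in loader:
        pass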
- class src.yolov5.utils.datasets.LoadImagesAndLabels(path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, cache_images=False, single_cls=False, stride=32, pad=0.0, prefix='')[source]
Bases: Dataset
- class src.yolov5.utils.datasets.LoadStreams(sources='streams.txt', img_size=640, stride=32)[source]
Bases: object
- class src.yolov5.utils.datasets.LoadWebcam(pipe='0', img_size=640, stride=32)[source]
Bases: object
- src.yolov5.utils.datasets.autosplit(path='../coco128', weights=(0.9, 0.1, 0.0), annotated_only=False)[source]
Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files.
- Usage:
from utils.datasets import *; autosplit('../coco128')
- Parameters
path – path to images directory
weights – train, val, test weights (list)
annotated_only – only use images with an annotated txt file
- src.yolov5.utils.datasets.box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16)[source]
- src.yolov5.utils.datasets.create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix='')[source]
- src.yolov5.utils.datasets.letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32)[source]
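letterbox resizes an image to new_shape while preserving aspect ratio and pads the remainder with the given color. A usage sketch (the input file name is illustrative; the three-part return value of image, ratio, and padding offsets is assumed from the YOLOv5 convention):

import cv2
from src.yolov5.utils.datasets import letterbox

img = cv2.imread('example.jpg')                        # BGR image, any size
padded, ratio, (dw, dh) = letterbox(img, new_shape=(640, 640), auto=False)
# ratio and (dw, dh) allow mapping predicted boxes back to the original image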
src.yolov5.utils.general module
- src.yolov5.utils.general.bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-07)[source]
- src.yolov5.utils.general.box_iou(box1, box2)[source]
Return intersection-over-union (Jaccard index) of boxes. Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Parameters
box1 – Tensor[N, 4]
box2 – Tensor[M, 4]
- Returns
the NxM matrix containing the pairwise IoU values for every element in boxes1 and boxes2
- Return type
iou (Tensor[N, M])
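The pairwise IoU described above reduces to a broadcasted intersection over union; a self-contained sketch equivalent in spirit:

import torch

def box_iou_sketch(box1, box2):
    # box1: Tensor[N, 4], box2: Tensor[M, 4], both (x1, y1, x2, y2)
    area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
    area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
    lt = torch.max(box1[:, None, :2], box2[None, :, :2])  # [N, M, 2] top-left
    rb = torch.min(box1[:, None, 2:], box2[None, :, 2:])  # [N, M, 2] bottom-right
    inter = (rb - lt).clamp(min=0).prod(dim=2)            # [N, M] intersection
    return inter / (area1[:, None] + area2[None, :] - inter)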
- src.yolov5.utils.general.download(url, dir='.', unzip=True, delete=True, curl=False, threads=1)[source]
- src.yolov5.utils.general.labels_to_image_weights(labels, nc=80, class_weights=np.ones(80))[source]
- src.yolov5.utils.general.non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, labels=(), max_det=300)[source]
Runs Non-Maximum Suppression (NMS) on inference results
- Returns
list of detections, one (n, 6) tensor per image [xyxy, conf, cls]
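A usage sketch: the input layout (xywh boxes followed by objectness and per-class scores) follows the YOLOv5 raw-output convention, and the random tensor below only demonstrates shapes:

import torch
from src.yolov5.utils.general import non_max_suppression

pred = torch.rand(1, 100, 85)                 # (batch, boxes, 5 + 80 classes)
detections = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)
for det in detections:                        # one (n, 6) tensor per image
    for *xyxy, conf, cls in det:              # [x1, y1, x2, y2, conf, class]
        pass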
- src.yolov5.utils.general.print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket='')[source]
src.yolov5.utils.google_utils module
src.yolov5.utils.loss module
- class src.yolov5.utils.loss.BCEBlurWithLogitsLoss(alpha=0.05)[source]
Bases: Module
- forward(pred, true)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class src.yolov5.utils.loss.FocalLoss(loss_fcn, gamma=1.5, alpha=0.25)[source]
Bases: Module
- forward(pred, true)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
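FocalLoss wraps an existing criterion (loss_fcn) and down-weights easy examples. A sketch of the standard focal modulation, assuming loss_fcn is an elementwise nn.BCEWithLogitsLoss(reduction='none'):

import torch
import torch.nn as nn

class FocalLossSketch(nn.Module):
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super().__init__()
        self.loss_fcn = loss_fcn              # elementwise BCE-with-logits
        self.gamma, self.alpha = gamma, alpha

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)      # elementwise BCE
        p = torch.sigmoid(pred)
        p_t = true * p + (1 - true) * (1 - p)                       # prob of true class
        alpha_t = true * self.alpha + (1 - true) * (1 - self.alpha)
        return (loss * alpha_t * (1 - p_t) ** self.gamma).mean()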
- class src.yolov5.utils.loss.QFocalLoss(loss_fcn, gamma=1.5, alpha=0.25)[source]
Bases: Module
- forward(pred, true)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
src.yolov5.utils.metrics module
- class src.yolov5.utils.metrics.ConfusionMatrix(nc, conf=0.25, iou_thres=0.45)[source]
Bases: object
- process_batch(detections, labels)[source]
Update the confusion matrix for a batch of detections and labels. Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Parameters
detections – Array[N, 6]
labels – Array[M, 5]
- Returns
None, updates confusion matrix accordingly
- src.yolov5.utils.metrics.ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=())[source]
Compute the average precision, given the recall and precision curves. Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- Parameters
tp – true positives (nparray, nx1 or nx10)
conf – objectness value from 0-1 (nparray)
pred_cls – predicted object classes (nparray)
target_cls – true object classes (nparray)
plot – plot the precision-recall curve at mAP@0.5
save_dir – plot save directory
- Returns
The average precision as computed in py-faster-rcnn.
- src.yolov5.utils.metrics.compute_ap(recall, precision)[source]
Compute the average precision, given the recall and precision curves.
- Parameters
recall – the recall curve (list)
precision – the precision curve (list)
- Returns
Average precision, precision curve, recall curve
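The usual recipe behind this computation: take the monotone (running-maximum) precision envelope, then integrate it over recall. A sketch using 101-point interpolation (the sentinel values and integration method are assumptions):

import numpy as np

def compute_ap_sketch(recall, precision):
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))  # precision envelope
    x = np.linspace(0, 1, 101)                            # 101-point interpolation
    ap = np.trapz(np.interp(x, mrec, mpre), x)
    return ap, mpre, mrec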
src.yolov5.utils.plots module
- src.yolov5.utils.plots.plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16)[source]
- src.yolov5.utils.plots.plot_labels(labels, names=(), save_dir=PosixPath('.'), loggers=None)[source]
- src.yolov5.utils.plots.plot_one_box(x, im, color=(128, 128, 128), label=None, line_thickness=3)[source]
- src.yolov5.utils.plots.plot_one_box_PIL(box, im, color=(128, 128, 128), label=None, line_thickness=None)[source]
src.yolov5.utils.torch_utils module
- class src.yolov5.utils.torch_utils.ModelEMA(model, decay=0.9999, updates=0)[source]
Bases: object
Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models. Keeps a moving average of everything in the model state_dict (parameters and buffers), in the spirit of https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. A smoothed version of the weights is necessary for some training schemes to perform well. This class is sensitive to where it is initialized in the sequence of model init, GPU assignment, and distributed-training wrappers.
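The update rule being described is the standard EMA, ema = d * ema + (1 - d) * model, applied to every floating-point entry of the state_dict, with d ramping up toward decay early in training. A sketch of that rule (the warmup schedule is an assumption):

import copy
import math
import torch

class EMASketch:
    def __init__(self, model, decay=0.9999, updates=0):
        self.ema = copy.deepcopy(model).eval()                     # shadow copy
        self.updates = updates
        self.decay = lambda n: decay * (1 - math.exp(-n / 2000))   # warmup ramp
        for p in self.ema.parameters():
            p.requires_grad_(False)

    def update(self, model):
        with torch.no_grad():
            self.updates += 1
            d = self.decay(self.updates)
            msd = model.state_dict()
            for k, v in self.ema.state_dict().items():
                if v.dtype.is_floating_point:                      # skip int buffers
                    v *= d
                    v += (1 - d) * msd[k].detach()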
- src.yolov5.utils.torch_utils.date_modified(path=__file__)[source]
- src.yolov5.utils.torch_utils.find_modules(model, mclass=<class 'torch.nn.modules.conv.Conv2d'>)[source]