src.yolov5.utils package

Subpackages

Submodules

src.yolov5.utils.activations module

class src.yolov5.utils.activations.AconC(c1)[source]

Bases: Module

ACON activation (activate or not). AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter according to “Activate or Not: Learning Customized Activation” <https://arxiv.org/pdf/2009.04759.pdf>.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
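
The following is a minimal, self-contained sketch of the AconC formula above, assuming per-channel learnable parameters p1, p2 and beta of shape (1, c1, 1, 1); it illustrates the activation itself, not the exact module layout.

    import torch
    import torch.nn as nn

    class AconCSketch(nn.Module):
        """Sketch of AconC: (p1*x - p2*x) * sigmoid(beta*(p1*x - p2*x)) + p2*x."""
        def __init__(self, c1):
            super().__init__()
            # p1, p2 and beta are learnable per-channel parameters (assumed shapes)
            self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
            self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
            self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))

        def forward(self, x):
            dpx = (self.p1 - self.p2) * x
            return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x

As beta grows large this approaches max(p1*x, p2*x), and as beta approaches zero it approaches their mean, which is the "activate or not" switching behaviour described in the paper.
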
class src.yolov5.utils.activations.FReLU(c1, k=3)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class src.yolov5.utils.activations.Hardswish[source]

Bases: Module

static forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class src.yolov5.utils.activations.MemoryEfficientMish[source]

Bases: Module

class F[source]

Bases: Function

static backward(ctx, grad_output)[source]

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs the gradient computed w.r.t. the output.

static forward(ctx, x)[source]

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store tensors that can be then retrieved during the backward pass.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class src.yolov5.utils.activations.MetaAconC(c1, k=1, s=1, r=16)[source]

Bases: Module

ACON activation (activate or not). MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network according to “Activate or Not: Learning Customized Activation” <https://arxiv.org/pdf/2009.04759.pdf>.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
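
A similar sketch for MetaAconC, where beta is produced per channel by a small squeeze-style network as described above; the reduction ratio r, bottleneck width and the two 1x1 convolutions are assumptions for illustration.

    import torch
    import torch.nn as nn

    class MetaAconCSketch(nn.Module):
        """Sketch of MetaAconC: as AconC, but beta comes from a small network over x."""
        def __init__(self, c1, k=1, s=1, r=16):
            super().__init__()
            c2 = max(r, c1 // r)                                    # bottleneck width (assumed)
            self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
            self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
            self.fc1 = nn.Conv2d(c1, c2, k, s)
            self.fc2 = nn.Conv2d(c2, c1, k, s)

        def forward(self, x):
            y = x.mean(dim=(2, 3), keepdim=True)                    # global average pool
            beta = torch.sigmoid(self.fc2(self.fc1(y)))             # per-channel beta in (0, 1)
            dpx = (self.p1 - self.p2) * x
            return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
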
class src.yolov5.utils.activations.Mish[source]

Bases: Module

static forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class src.yolov5.utils.activations.SiLU[source]

Bases: Module

static forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

src.yolov5.utils.autoanchor module

src.yolov5.utils.autoanchor.check_anchor_order(m)[source]
src.yolov5.utils.autoanchor.check_anchors(dataset, model, thr=4.0, imgsz=640)[source]
src.yolov5.utils.autoanchor.kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True)[source]

Creates kmeans-evolved anchors from a training dataset.

Parameters
  • path – path to dataset *.yaml, or a loaded dataset

  • n – number of anchors

  • img_size – image size used for training

  • thr – anchor-label wh ratio threshold hyperparameter hyp[‘anchor_t’] used for training, default=4.0

  • gen – generations to evolve anchors using genetic algorithm

  • verbose – print all results

Returns

kmeans evolved anchors

Return type

k

Usage:

from utils.autoanchor import *; _ = kmean_anchors()

src.yolov5.utils.datasets module

class src.yolov5.utils.datasets.InfiniteDataLoader(*args, **kwargs)[source]

Bases: DataLoader

Dataloader that reuses workers.

Uses the same syntax as a vanilla DataLoader.

batch_size: Optional[int]
dataset: Dataset[T_co]
drop_last: bool
num_workers: int
pin_memory: bool
prefetch_factor: int
sampler: Sampler
timeout: float
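
A minimal sketch of the worker-reuse idea: wrap the batch sampler in a sampler that repeats forever, so the underlying iterator (and its worker processes) is created once and reused across epochs. Class and attribute names here are illustrative.

    import torch.utils.data as data

    class _RepeatSampler:
        """Yields batches from the wrapped sampler forever."""
        def __init__(self, sampler):
            self.sampler = sampler

        def __iter__(self):
            while True:
                yield from iter(self.sampler)

    class InfiniteLoaderSketch(data.DataLoader):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # Swap in a repeating batch sampler, then keep a single long-lived iterator
            object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
            self.iterator = super().__iter__()

        def __len__(self):
            return len(self.batch_sampler.sampler)

        def __iter__(self):
            for _ in range(len(self)):
                yield next(self.iterator)
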
class src.yolov5.utils.datasets.LoadImages(path, img_size=640, stride=32)[source]

Bases: object

new_video(path)[source]
class src.yolov5.utils.datasets.LoadImagesAndLabels(path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, cache_images=False, single_cls=False, stride=32, pad=0.0, prefix='')[source]

Bases: Dataset

cache_labels(path=PosixPath('labels.cache'), prefix='')[source]
static collate_fn(batch)[source]
static collate_fn4(batch)[source]
class src.yolov5.utils.datasets.LoadStreams(sources='streams.txt', img_size=640, stride=32)[source]

Bases: object

update(i, cap)[source]
class src.yolov5.utils.datasets.LoadWebcam(pipe='0', img_size=640, stride=32)[source]

Bases: object

src.yolov5.utils.datasets.augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5)[source]
src.yolov5.utils.datasets.autosplit(path='../coco128', weights=(0.9, 0.1, 0.0), annotated_only=False)[source]

Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files.

Usage: from utils.datasets import *; autosplit('../coco128')

Parameters
  • path – Path to images directory

  • weights – Train, val, test weights (list)

  • annotated_only – Only use images with an annotated txt file
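
A minimal sketch of the weighted-split idea, assuming a simple extension filter and the usual YOLO images/labels directory layout; file discovery in the real function is more thorough.

    import random
    from pathlib import Path

    def autosplit_sketch(path='../coco128', weights=(0.9, 0.1, 0.0), annotated_only=False):
        img_exts = ('.jpg', '.jpeg', '.png', '.bmp')                      # assumed extension list
        files = sorted(f for f in Path(path).rglob('*.*') if f.suffix.lower() in img_exts)
        splits = random.choices(range(3), weights=weights, k=len(files))  # 0=train, 1=val, 2=test
        names = ('autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt')
        for split, img in zip(splits, files):
            label = img.parent.parent / 'labels' / (img.stem + '.txt')    # assumed label layout
            if annotated_only and not label.exists():
                continue                                                  # skip unannotated images
            with open(Path(path) / names[split], 'a') as f:
                f.write(str(img) + '\n')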

src.yolov5.utils.datasets.box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16)[source]
src.yolov5.utils.datasets.create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix='')[source]
src.yolov5.utils.datasets.create_folder(path='./new')[source]
src.yolov5.utils.datasets.cutout(image, labels)[source]
src.yolov5.utils.datasets.exif_size(img)[source]
src.yolov5.utils.datasets.extract_boxes(path='../coco128/')[source]
src.yolov5.utils.datasets.flatten_recursive(path='../coco128')[source]
src.yolov5.utils.datasets.get_hash(paths)[source]
src.yolov5.utils.datasets.hist_equalize(img, clahe=True, bgr=False)[source]
src.yolov5.utils.datasets.img2label_paths(img_paths)[source]
src.yolov5.utils.datasets.letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32)[source]
src.yolov5.utils.datasets.load_image(self, index)[source]
src.yolov5.utils.datasets.load_mosaic(self, index)[source]
src.yolov5.utils.datasets.load_mosaic9(self, index)[source]
src.yolov5.utils.datasets.random_perspective(img, targets=(), segments=(), degrees=10, translate=0.1, scale=0.1, shear=10, perspective=0.0, border=(0, 0))[source]
src.yolov5.utils.datasets.replicate(img, labels)[source]

src.yolov5.utils.general module

src.yolov5.utils.general.apply_classifier(x, model, img, im0)[source]
src.yolov5.utils.general.bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-07)[source]
src.yolov5.utils.general.box_iou(box1, box2)[source]

Return intersection-over-union (Jaccard index) of boxes. Both sets of boxes are expected to be in (x1, y1, x2, y2) format.

Parameters
  • box1 – Tensor[N, 4]

  • box2 – Tensor[M, 4]

Returns

the NxM matrix containing the pairwise IoU values for every element in box1 and box2

Return type

iou (Tensor[N, M])
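
A minimal sketch of the pairwise IoU computation described above, followed by a tiny worked example; the function name is illustrative.

    import torch

    def box_iou_sketch(box1, box2):
        # box1: (N, 4), box2: (M, 4), both in (x1, y1, x2, y2) format
        area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
        area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
        lt = torch.max(box1[:, None, :2], box2[:, :2])    # intersection top-left,     (N, M, 2)
        rb = torch.min(box1[:, None, 2:], box2[:, 2:])    # intersection bottom-right, (N, M, 2)
        inter = (rb - lt).clamp(min=0).prod(2)            # intersection area,         (N, M)
        return inter / (area1[:, None] + area2 - inter)   # IoU,                       (N, M)

    b1 = torch.tensor([[0., 0., 10., 10.]])
    b2 = torch.tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
    print(box_iou_sketch(b1, b2))                          # tensor([[1.0000, 0.1429]])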

src.yolov5.utils.general.check_dataset(dict)[source]
src.yolov5.utils.general.check_file(file)[source]
src.yolov5.utils.general.check_git_status()[source]
src.yolov5.utils.general.check_img_size(img_size, s=32)[source]
src.yolov5.utils.general.check_imshow()[source]
src.yolov5.utils.general.check_online()[source]
src.yolov5.utils.general.check_python(minimum='3.7.0', required=True)[source]
src.yolov5.utils.general.check_requirements(requirements='requirements.txt', exclude=())[source]
src.yolov5.utils.general.clean_str(s)[source]
src.yolov5.utils.general.clip_coords(boxes, img_shape)[source]
src.yolov5.utils.general.coco80_to_coco91_class()[source]
src.yolov5.utils.general.colorstr(*input)[source]
src.yolov5.utils.general.download(url, dir='.', unzip=True, delete=True, curl=False, threads=1)[source]
src.yolov5.utils.general.emojis(str='')[source]
src.yolov5.utils.general.file_size(file)[source]
src.yolov5.utils.general.get_latest_run(search_dir='.')[source]
src.yolov5.utils.general.increment_path(path, exist_ok=False, sep='', mkdir=False)[source]
src.yolov5.utils.general.init_seeds(seed=0)[source]
src.yolov5.utils.general.is_colab()[source]
src.yolov5.utils.general.is_docker()[source]
src.yolov5.utils.general.labels_to_class_weights(labels, nc=80)[source]
src.yolov5.utils.general.labels_to_image_weights(labels, nc=80, class_weights=np.ones(80))[source]
src.yolov5.utils.general.make_divisible(x, divisor)[source]
src.yolov5.utils.general.non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, labels=(), max_det=300)[source]

Runs Non-Maximum Suppression (NMS) on inference results

Returns

list of detections, one (n, 6) tensor per image [xyxy, conf, cls]
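
A hedged usage sketch: the raw prediction layout (batch, num_boxes, 5 + num_classes) and the variable names are assumptions; the return value is the list of per-image (n, 6) tensors described above.

    # pred: raw model output of assumed shape (batch, num_boxes, 5 + num_classes)
    det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45, max_det=300)
    for d in det:                          # one (n, 6) tensor per image
        for *xyxy, conf, cls in d:         # box corners, confidence, class index
            print([float(v) for v in xyxy], float(conf), int(cls))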

src.yolov5.utils.general.one_cycle(y1=0.0, y2=1.0, steps=100)[source]
src.yolov5.utils.general.print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket='')[source]
src.yolov5.utils.general.resample_segments(segments, n=1000)[source]
src.yolov5.utils.general.save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=False, BGR=False, save=True)[source]
src.yolov5.utils.general.scale_coords(img1_shape, coords, img0_shape, ratio_pad=None)[source]
src.yolov5.utils.general.segment2box(segment, width=640, height=640)[source]
src.yolov5.utils.general.segments2boxes(segments)[source]
src.yolov5.utils.general.set_logging(rank=-1, verbose=True)[source]
src.yolov5.utils.general.strip_optimizer(f='best.pt', s='')[source]
src.yolov5.utils.general.wh_iou(wh1, wh2)[source]
src.yolov5.utils.general.xyn2xy(x, w=640, h=640, padw=0, padh=0)[source]
src.yolov5.utils.general.xywh2xyxy(x)[source]
src.yolov5.utils.general.xywhn2xyxy(x, w=640, h=640, padw=0, padh=0)[source]
src.yolov5.utils.general.xyxy2xywh(x)[source]

src.yolov5.utils.google_utils module

src.yolov5.utils.google_utils.attempt_download(file, repo='ultralytics/yolov5')[source]
src.yolov5.utils.google_utils.gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip')[source]
src.yolov5.utils.google_utils.get_token(cookie='./cookie')[source]
src.yolov5.utils.google_utils.gsutil_getsize(url='')[source]
src.yolov5.utils.google_utils.safe_download(file, url, url2=None, min_bytes=1.0, error_msg='')[source]

src.yolov5.utils.loss module

class src.yolov5.utils.loss.BCEBlurWithLogitsLoss(alpha=0.05)[source]

Bases: Module

forward(pred, true)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class src.yolov5.utils.loss.ComputeLoss(model, autobalance=False)[source]

Bases: object

build_targets(p, targets)[source]
class src.yolov5.utils.loss.FocalLoss(loss_fcn, gamma=1.5, alpha=0.25)[source]

Bases: Module

forward(pred, true)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class src.yolov5.utils.loss.QFocalLoss(loss_fcn, gamma=1.5, alpha=0.25)[source]

Bases: Module

forward(pred, true)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
src.yolov5.utils.loss.smooth_BCE(eps=0.1)[source]

src.yolov5.utils.metrics module

class src.yolov5.utils.metrics.ConfusionMatrix(nc, conf=0.25, iou_thres=0.45)[source]

Bases: object

matrix()[source]
plot(save_dir='', names=())[source]
print()[source]
process_batch(detections, labels)[source]

Compute intersection-over-union (Jaccard index) between detections and labels and update the confusion matrix accordingly. Both sets of boxes are expected to be in (x1, y1, x2, y2) format.

Parameters
  • detections – Array[N, 6]

  • labels – Array[M, 5]

Returns

None, updates confusion matrix accordingly
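
A hedged usage sketch with dummy inputs; the column order of detections (x1, y1, x2, y2, conf, class) and labels (class, x1, y1, x2, y2) follows the YOLOv5 source and is treated here as an assumption.

    import torch
    from src.yolov5.utils.metrics import ConfusionMatrix

    cm = ConfusionMatrix(nc=2, conf=0.25, iou_thres=0.45)
    detections = torch.tensor([[10., 10., 50., 50., 0.90, 0.]])   # one confident class-0 box
    labels = torch.tensor([[0., 12., 12., 48., 48.]])             # one class-0 ground-truth box
    cm.process_batch(detections, labels)
    print(cm.matrix())                                            # (nc+1) x (nc+1) counts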

src.yolov5.utils.metrics.ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=())[source]

Compute the average precision, given the recall and precision curves. Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.

Parameters
  • tp – True positives (nparray, nx1 or nx10)

  • conf – Objectness value from 0-1 (nparray)

  • pred_cls – Predicted object classes (nparray)

  • target_cls – True object classes (nparray)

  • plot – Plot precision-recall curve at mAP@0.5

  • save_dir – Plot save directory

Returns

The average precision as computed in py-faster-rcnn.

src.yolov5.utils.metrics.compute_ap(recall, precision)[source]

Compute the average precision, given the recall and precision curves.

Parameters
  • recall – The recall curve (list)

  • precision – The precision curve (list)

Returns

Average precision, precision curve, recall curve
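
A sketch of one common way to compute AP from these curves (monotone precision envelope plus 101-point interpolation); this mirrors the description above but is not necessarily the exact implementation.

    import numpy as np

    def compute_ap_sketch(recall, precision):
        # Add sentinel values, then make precision monotonically non-increasing (envelope)
        mrec = np.concatenate(([0.0], recall, [1.0]))
        mpre = np.concatenate(([1.0], precision, [0.0]))
        mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
        # Integrate the interpolated precision over 101 evenly spaced recall points (COCO-style)
        x = np.linspace(0, 1, 101)
        ap = np.trapz(np.interp(x, mrec, mpre), x)
        return ap, mpre, mrec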

src.yolov5.utils.metrics.fitness(x)[source]
src.yolov5.utils.metrics.plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric')[source]
src.yolov5.utils.metrics.plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=())[source]

src.yolov5.utils.plots module

class src.yolov5.utils.plots.Colors[source]

Bases: object

static hex2rgb(h)[source]
src.yolov5.utils.plots.butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5)[source]
src.yolov5.utils.plots.hist2d(x, y, n=100)[source]
src.yolov5.utils.plots.output_to_target(output)[source]
src.yolov5.utils.plots.plot_evolution(yaml_file='data/hyp.finetune.yaml')[source]
src.yolov5.utils.plots.plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16)[source]
src.yolov5.utils.plots.plot_labels(labels, names=(), save_dir=PosixPath('.'), loggers=None)[source]
src.yolov5.utils.plots.plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir='')[source]
src.yolov5.utils.plots.plot_one_box(x, im, color=(128, 128, 128), label=None, line_thickness=3)[source]
src.yolov5.utils.plots.plot_one_box_PIL(box, im, color=(128, 128, 128), label=None, line_thickness=None)[source]
src.yolov5.utils.plots.plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir='')[source]
src.yolov5.utils.plots.plot_results_overlay(start=0, stop=0)[source]
src.yolov5.utils.plots.plot_study_txt(path='', x=None)[source]
src.yolov5.utils.plots.plot_targets_txt()[source]
src.yolov5.utils.plots.plot_test_txt()[source]
src.yolov5.utils.plots.plot_wh_methods()[source]
src.yolov5.utils.plots.profile_idetection(start=0, stop=0, labels=(), save_dir='')[source]

src.yolov5.utils.torch_utils module

class src.yolov5.utils.torch_utils.ModelEMA(model, decay=0.9999, updates=0)[source]

Bases: object

Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models. Keeps a moving average of everything in the model state_dict (parameters and buffers). This is intended to allow functionality like https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. A smoothed version of the weights is necessary for some training schemes to perform well. This class is sensitive to where it is initialized in the sequence of model init, GPU assignment and distributed training wrappers.

update(model)[source]
update_attr(model, include=(), exclude=('process_group', 'reducer'))[source]
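
A minimal sketch of the EMA update itself (v_ema = d * v_ema + (1 - d) * v_model); the decay ramp-up schedule is an assumption based on the pattern referenced above, and the function name is illustrative.

    import math
    import torch

    def ema_update_sketch(ema_model, model, decay=0.9999, updates=0):
        updates += 1
        d = decay * (1 - math.exp(-updates / 2000))   # assumed ramp-up so early updates count more
        msd = model.state_dict()
        with torch.no_grad():
            for k, v in ema_model.state_dict().items():
                if v.dtype.is_floating_point:         # only average float tensors
                    v *= d
                    v += (1 - d) * msd[k].detach()
        return updates
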
src.yolov5.utils.torch_utils.copy_attr(a, b, include=(), exclude=())[source]
src.yolov5.utils.torch_utils.date_modified(path='/Users/shindo/Workspace/github/alphailpdoc/src/yolov5/utils/torch_utils.py')[source]
src.yolov5.utils.torch_utils.de_parallel(model)[source]
src.yolov5.utils.torch_utils.find_modules(model, mclass=<class 'torch.nn.modules.conv.Conv2d'>)[source]
src.yolov5.utils.torch_utils.fuse_conv_and_bn(conv, bn)[source]
src.yolov5.utils.torch_utils.git_describe(path=PosixPath('/Users/shindo/Workspace/github/alphailpdoc/src/yolov5/utils'))[source]
src.yolov5.utils.torch_utils.init_torch_seeds(seed=0)[source]
src.yolov5.utils.torch_utils.initialize_weights(model)[source]
src.yolov5.utils.torch_utils.intersect_dicts(da, db, exclude=())[source]
src.yolov5.utils.torch_utils.is_parallel(model)[source]
src.yolov5.utils.torch_utils.load_classifier(name='resnet101', n=2)[source]
src.yolov5.utils.torch_utils.model_info(model, verbose=False, img_size=640)[source]
src.yolov5.utils.torch_utils.profile(x, ops, n=100, device=None)[source]
src.yolov5.utils.torch_utils.prune(model, amount=0.3)[source]
src.yolov5.utils.torch_utils.scale_img(img, ratio=1.0, same_shape=False, gs=32)[source]
src.yolov5.utils.torch_utils.select_device(device='', batch_size=None)[source]
src.yolov5.utils.torch_utils.sparsity(model)[source]
src.yolov5.utils.torch_utils.time_synchronized()[source]
src.yolov5.utils.torch_utils.torch_distributed_zero_first(local_rank: int)[source]

Decorator to make all processes in distributed training wait for each local_master to do something.
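
A minimal sketch of this synchronization pattern written as a context manager (an assumed form): non-master processes block on a barrier until the local master finishes the guarded work, then the master joins a second barrier to release them.

    from contextlib import contextmanager
    import torch.distributed as dist

    @contextmanager
    def zero_first_sketch(local_rank: int):
        if local_rank not in (-1, 0):
            dist.barrier()        # wait for the local master to finish first
        yield
        if local_rank == 0:
            dist.barrier()        # release the waiting processes

    # Typical use: let rank 0 build/cache the dataset before other ranks read it
    # with zero_first_sketch(local_rank):
    #     dataset = LoadImagesAndLabels(path, img_size, batch_size)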

Module contents