ride.metrics

Module Contents

Classes

OptimisationDirection

Generic enumeration.

MetricMixin

Abstract base class for Ride modules

MeanAveragePrecisionMetric

Mean Average Precision (mAP) metric

FlopsMetric

Computes Floating Point Operations (FLOPs) for the model and adds it as metric

FlopsWeightedAccuracyMetric

Computes acc * (flops / target_gflops) ** (-0.07)

Functions

sort_out_figures(→ Tuple[MetricDict, FigureDict])

MetricSelector(→ MetricMixin)

TopKAccuracyMetric(→ MetricMixin)

topks_correct(→ List[torch.Tensor])

Given the predictions, labels, and a list of top-k values, compute the number of correct predictions for each top-k value.

topk_errors(preds, labels, ks)

Computes the top-k error for each k.

topk_accuracies(preds, labels, ks)

Computes the top-k accuracy for each k.

flops(model)

Compute the Floating Point Operations (FLOPs) for the model.

params_count(model)

Compute the number of parameters.

make_confusion_matrix(→ matplotlib.figure.Figure)

Attributes

ExtendedMetricDict

MetricDict

FigureDict

StepOutputs

logger

ride.metrics.ExtendedMetricDict[source]
ride.metrics.MetricDict[source]
ride.metrics.FigureDict[source]
ride.metrics.StepOutputs[source]
ride.metrics.logger[source]
ride.metrics.sort_out_figures(d: ExtendedMetricDict) Tuple[MetricDict, FigureDict][source]
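
Judging from its signature, sort_out_figures splits a dict that mixes scalar metrics and matplotlib figures into separate MetricDict and FigureDict objects. A minimal usage sketch (the dict contents are illustrative assumptions):

    import torch
    import matplotlib.pyplot as plt
    from ride.metrics import sort_out_figures

    # Hypothetical extended dict mixing scalar metrics and a figure
    extended = {
        "loss": torch.tensor(0.42),
        "confusion_matrix": plt.figure(),
    }

    metrics, figures = sort_out_figures(extended)
    # metrics -> {"loss": tensor(0.4200)}
    # figures -> {"confusion_matrix": <Figure>}
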
class ride.metrics.OptimisationDirection[source]

Bases: enum.Enum

Generic enumeration.

Derive from this class to define new enumerations.

MIN = 'min'[source]
MAX = 'max'[source]
class ride.metrics.MetricMixin(hparams: pytorch_lightning.utilities.parsing.AttributeDict, *args, **kwargs)[source]

Bases: ride.core.RideMixin

Abstract base class for Ride modules

classmethod __init_subclass__()[source]
classmethod metrics() Dict[str, str][source]
classmethod metric_names() List[str][source]
metrics_step(*args, **kwargs) MetricDict[source]
metrics_epoch(preds: torch.Tensor, targets: torch.Tensor, prefix: str = '', *args, **kwargs) MetricDict[source]
collect_metrics(preds: torch.Tensor, targets: torch.Tensor) MetricDict[source]
collect_epoch_metrics(preds: torch.Tensor, targets: torch.Tensor, prefix: str = None) ExtendedMetricDict[source]
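
A hedged sketch of a custom metric built on MetricMixin. The exact subclass contract is not spelled out on this page; by analogy with the built-in metrics, it is assumed here that a subclass defines a classmethod _metrics() mapping metric names to an OptimisationDirection, and a metrics_step() returning a MetricDict:

    import torch
    from ride.metrics import MetricDict, MetricMixin, OptimisationDirection

    class MeanConfidenceMetric(MetricMixin):
        """Hypothetical metric: mean of the highest softmax probability."""

        @classmethod
        def _metrics(cls):
            # Assumed shape: metric name -> optimisation direction
            return {"mean_confidence": OptimisationDirection.MAX}

        def metrics_step(self, preds: torch.Tensor, targets: torch.Tensor, **kwargs) -> MetricDict:
            confidence = preds.softmax(dim=-1).max(dim=-1).values.mean()
            return {"mean_confidence": confidence}
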
ride.metrics.MetricSelector(mapping: Dict[str, Union[MetricMixin, Iterable[MetricMixin]]] = None, default_config: str = '', **kwargs: Union[MetricMixin, Iterable[MetricMixin]]) MetricMixin[source]
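
Based on the signature above, MetricSelector composes a MetricMixin that dispatches to one of several metric sets chosen by a configuration key. A sketch with illustrative keys and default:

    from ride.metrics import (
        MeanAveragePrecisionMetric,
        MetricSelector,
        TopKAccuracyMetric,
    )

    Metrics = MetricSelector(
        mapping={
            "classification": TopKAccuracyMetric(1, 5),
            "multilabel": MeanAveragePrecisionMetric,
        },
        default_config="classification",
    )
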
class ride.metrics.MeanAveragePrecisionMetric(hparams: pytorch_lightning.utilities.parsing.AttributeDict, *args, **kwargs)[source]

Bases: MetricMixin

Mean Average Precision (mAP) metric

validate_attributes()[source]
_compute_mean_average_precision(preds, targets)[source]
classmethod _metrics()[source]
metrics_step(preds: torch.Tensor, targets: torch.Tensor, *args, **kwargs) MetricDict[source]
metrics_epoch(preds: torch.Tensor, targets: torch.Tensor, *args, **kwargs) MetricDict[source]
ride.metrics.TopKAccuracyMetric(*Ks) MetricMixin[source]
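
TopKAccuracyMetric(*Ks) returns a MetricMixin subclass configured for the given k values, intended to be mixed into a module definition. A sketch, where RideModule and its import path are assumptions not documented on this page:

    from ride import RideModule  # assumed import path
    from ride.metrics import TopKAccuracyMetric

    class MyClassifier(RideModule, TopKAccuracyMetric(1, 3, 5)):
        # Top-1, top-3, and top-5 accuracy are collected via metrics_step
        ...
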
class ride.metrics.FlopsMetric(hparams: pytorch_lightning.utilities.parsing.AttributeDict, *args, **kwargs)[source]

Bases: MetricMixin

Computes Floating Point Operations (FLOPs) for the model and adds it as metric

classmethod _metrics()[source]
on_init_end(*args, **kwargs)[source]
metrics_step(preds: torch.Tensor, targets: torch.Tensor, **kwargs) MetricDict[source]
class ride.metrics.FlopsWeightedAccuracyMetric(hparams: pytorch_lightning.utilities.parsing.AttributeDict, *args, **kwargs)[source]

Bases: FlopsMetric

Computes acc * (flops / target_gflops) ** (-0.07)

classmethod _metrics()[source]
validate_attributes()[source]
static configs() ride.core.Configs[source]
metrics_step(preds: torch.Tensor, targets: torch.Tensor, **kwargs) MetricDict[source]
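
A worked instance of the weighting formula above, with illustrative numbers; whether flops is expressed in GFLOPs before the division is an assumption here:

    acc = 0.80
    gflops = 4.0          # measured model complexity, assumed to be in GFLOPs
    target_gflops = 2.0   # configurable target (see configs())

    weighted_acc = acc * (gflops / target_gflops) ** (-0.07)
    # 0.80 * 2.0 ** (-0.07) ≈ 0.762: accuracy is discounted for exceeding the target
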
ride.metrics.topks_correct(preds: torch.Tensor, labels: torch.Tensor, ks: List[int]) List[torch.Tensor][source]

Given the predictions, labels, and a list of top-k values, compute the number of correct predictions for each top-k value.

Parameters:
  • preds (array) – array of predictions. Dimension is batchsize N x ClassNum.

  • labels (array) – array of labels. Dimension is batchsize N.

  • ks (list) – list of top-k values. For example, ks = [1, 5] corresponds to top-1 and top-5.

Returns:

topks_correct (list) – list of numbers, where the i-th entry corresponds to the number of top-ks[i] correct predictions.
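
A usage sketch based on the documented signature (each returned entry is a tensor count):

    import torch
    from ride.metrics import topks_correct

    preds = torch.tensor([
        [0.1, 0.7, 0.2],  # highest score: class 1
        [0.5, 0.3, 0.2],  # highest score: class 0, second highest: class 1
    ])
    labels = torch.tensor([1, 1])

    top1, top2 = topks_correct(preds, labels, ks=[1, 2])
    # top1 == 1: only the first sample is top-1 correct
    # top2 == 2: both labels appear among the top-2 predictions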

ride.metrics.topk_errors(preds: torch.Tensor, labels: torch.Tensor, ks: List[int])[source]

Computes the top-k error for each k.

Parameters:
  • preds (array) – array of predictions. Dimension is N.

  • labels (array) – array of labels. Dimension is N.

  • ks (list) – list of ks to calculate the top accuracies.

ride.metrics.topk_accuracies(preds: torch.Tensor, labels: torch.Tensor, ks: List[int])[source]

Computes the top-k accuracy for each k.

Parameters:
  • preds (array) – array of predictions. Dimension is N.

  • labels (array) – array of labels. Dimension is N.

  • ks (list) – list of ks to calculate the top accuracies.
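
Both functions appear to be built on topks_correct; for a given k, accuracy and error are expected to be complementary (the exact scaling, fraction versus percentage, is not documented here). A quick check:

    import torch
    from ride.metrics import topk_accuracies, topk_errors

    preds = torch.rand(8, 10)            # 8 samples, 10 classes
    labels = torch.randint(0, 10, (8,))

    accs = topk_accuracies(preds, labels, ks=[1, 5])
    errs = topk_errors(preds, labels, ks=[1, 5])
    # For each k, accs[i] + errs[i] should equal 1 (or 100, if percentages are used)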

ride.metrics.flops(model: torch.nn.Module)[source]

Compute the Floating Point Operations (FLOPs) for the model.

ride.metrics.params_count(model: torch.nn.Module)[source]

Compute the number of parameters.

Parameters:
  • model (model) – model to count the number of parameters.
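
A usage sketch; the equivalence with a hand count over model.parameters() is an assumption:

    import torch.nn as nn
    from ride.metrics import params_count

    model = nn.Linear(128, 10)

    n = params_count(model)
    # Hand count: weight (128 * 10) + bias (10) = 1290
    assert n == sum(p.numel() for p in model.parameters())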

ride.metrics.make_confusion_matrix(preds: torch.Tensor, targets: torch.Tensor, classes: List[str]) matplotlib.figure.Figure[source]
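
A usage sketch for make_confusion_matrix; that preds are per-class scores rather than class indices is an assumption here, by analogy with the rest of the module:

    import torch
    from ride.metrics import make_confusion_matrix

    preds = torch.tensor([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
    targets = torch.tensor([0, 1, 1])

    fig = make_confusion_matrix(preds, targets, classes=["cat", "dog"])
    fig.savefig("confusion_matrix.png")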