
Custom Metric

Odin allows all the analyses to be executed with custom, user-defined evaluation metrics. To take advantage of this functionality, extend the CustomMetric interface and implement the evaluate_metric() method with the computation of the metric.

evaluate_metric()

Parameters

gt : list or pandas.DataFrame
    For classification tasks, a list representing the ground truth categories.
    For localization tasks, a pandas.DataFrame representing the ground truth.

detections : list or pandas.DataFrame
    For classification tasks, a list representing the prediction scores.
    For localization tasks, a pandas.DataFrame representing the predictions.

matching : pandas.DataFrame
    For classification tasks, it is None.
    For localization tasks, it represents the matching between the ground truth and the proposals (see the sketch after this list). Each matching row exposes the following fields:
    - confidence (prediction confidence value)
    - difficult (whether the annotation is difficult to predict)
    - iou (intersection over union value)
    - det_id (id of the detection)
    - ann_id (id of the annotation)
    - category_det (id of the predicted category)
    - category_ann (id of the ground truth category)

is_micro_required : bool, optional
    Indicates whether gt and detections represent a single category, as when called for the micro average computation (default is False).
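
As an illustration of how the matching DataFrame can be consumed, the following is a minimal sketch of a localization metric that computes the fraction of matches whose IoU exceeds a threshold. The column names come from the list above; the 0.5 threshold and the binomial standard error are illustrative assumptions, not part of the Odin API.

import numpy as np
import pandas as pd

IOU_THRESHOLD = 0.5  # illustrative choice, not mandated by Odin

def iou_hit_rate(matching: pd.DataFrame):
    # keep only annotations not flagged as difficult (a common convention)
    rows = matching[matching["difficult"] == 0]
    if rows.empty:
        return 0.0, 0.0
    hits = (rows["iou"] >= IOU_THRESHOLD).to_numpy()
    score = float(hits.mean())
    # binomial standard error of the proportion (an assumption; use the
    # uncertainty estimate appropriate for your metric)
    std_err = float(np.sqrt(score * (1 - score) / len(hits)))
    return score, std_err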

Example

from odin.classes import CustomMetric

class MyEvaluationMetric(CustomMetric):
    def evaluate_metric(self, gt, detections, matching, is_micro_required=False):
        if not is_micro_required:
            # standard (per-category) evaluation: implement your metric here
            return my_score, my_standard_error
        else:
            # called from the base report for the micro average:
            # implement your micro evaluation here
            return my_score, my_standard_error

my_evaluation_metric = MyEvaluationMetric("my metric name")
my_analyzer.add_custom_metric(my_evaluation_metric)

# use Metrics.MY_METRIC_NAME as the 'metric' parameter in the analyses
# N.B. the member added to the Metrics enum equals the name of the custom
# metric in uppercase, with whitespace replaced by underscores
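
For a complete, concrete example, the following minimal subclass sketches an accuracy metric for a binary classification setting; the 0.5 score threshold, the 0/1 label encoding, and the binomial standard error are illustrative assumptions rather than requirements of the Odin API.

import numpy as np

from odin.classes import CustomMetric

class MyAccuracyMetric(CustomMetric):
    def evaluate_metric(self, gt, detections, matching, is_micro_required=False):
        # classification task: gt is a list of 0/1 labels (assumption) and
        # detections a list of prediction scores; matching is None here
        y_true = np.asarray(gt)
        y_pred = (np.asarray(detections) >= 0.5).astype(int)  # illustrative threshold
        score = float((y_true == y_pred).mean())
        # binomial standard error of the proportion (an assumption)
        std_err = float(np.sqrt(score * (1 - score) / max(len(y_true), 1)))
        # for this sketch the standard and micro computations coincide
        return score, std_err

my_analyzer.add_custom_metric(MyAccuracyMetric("my accuracy"))
# now selectable in the analyses as Metrics.MY_ACCURACY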

Tasks supported

| Binary Classification | Single-label Classification | Multi-label Classification | Object Detection | Instance Segmentation |
|---|---|---|---|---|
| yes | yes | yes | yes | yes |