ESupervisedSegmenterMetrics Class

Collection of metrics used to evaluate the state of an ESupervisedSegmenter tool.
A metric is a value summarizing the quality of a collection of supervised segmentation results (see ESupervisedSegmenterResult) with respect to their ground truth.
New results can be added to the object individually with ESupervisedSegmenterMetrics.

The ESupervisedSegmenterMetrics contains three types of metrics:
- pixel-based metrics that are related to the quality of the segmentation masks
- blob-based metrics that are related to the ability of the supervised segmentation tool to detect foreground blobs
- defect detection metrics that are related to the ability of the supervised segmentation tool to differentiate between images that contain foreground pixels (defective images) and images that are entirely background (good images).
The pixel metrics are the error (see ESupervisedSegmenterMetrics::Error), the pixel accuracy (see ESupervisedSegmenterMetrics::PixelAccuracy), the pixel confusion (see ESupervisedSegmenterMetrics::GetPixelConfusion), the pixel per-label accuracy (see ESupervisedSegmenterMetrics::GetPixelLabelAccuracy) and the intersection over union (see ESupervisedSegmenterMetrics).
The blob-based metrics are two confusion matrices (ESupervisedSegmenterMetrics::GetGroundtruthBlobConfusion and ESupervisedSegmenterMetrics::GetPredictedBlobConfusion), and various defect detection metrics (ESupervisedSegmenterMetrics, ESupervisedSegmenterMetrics, and ESupervisedSegmenterMetrics). Note that the metrics for defective blob detection are different from the metrics for defective image detection because, in the case of blob detection, we have no "good" ground truth results.
See EDeepLearningDefectDetectionMetrics for a description of the defect detection metrics. These metrics are available when EDeepLearningDefectDetectionMetrics::IsDefectDetectionMetricsValid is true, i.e. when both images that are entirely background and images with non-background pixels have been added to the metrics.
Most metrics depend upon the ESupervisedSegmenterMetrics.

Base Class: EDeepLearningDefectDetectionMetrics

Namespace: Euresys::Open_eVision::EasyDeepLearning

Methods

Balanced error.
The balanced error is the weighted error (ESupervisedSegmenterMetrics) where each label is given an equal weight.
Balanced Intersection over Union (IoU).
The balanced intersection over union is the weighted intersection over union (ESupervisedSegmenterMetrics::WeightedIntersectionOverUnion) where each label is given an equal weight.
Balanced accuracy.
The balanced accuracy is the weighted accuracy (ESupervisedSegmenterMetrics::WeightedPixelAccuracy) where each label is given an equal weight.
Average precision for blob detection.
Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label. The average precision for blob detection is the average of the precision (see ESupervisedSegmenterMetrics) over all the possible classification threshold values. As such, the average precision does not depend on a specific classification threshold.
Best F1-Score of the segmenter for blob detection.
See ESupervisedSegmenterMetrics::BlobDetectionBestFScoreThreshold for the corresponding threshold.
See EDeepLearningDefectDetectionMetrics for a definition of the F1-Score. Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Classification threshold that achieves the best F1-Score for blob detection.
See ESupervisedSegmenterMetrics::BlobDetectionBestFScore for the corresponding F1-Score.
See EDeepLearningDefectDetectionMetrics for a definition of the F1-Score. Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
F1-Score for defective blob detection given the current classification threshold (see EDeepLearningDefectDetectionMetrics::ClassificationThreshold).
See ESupervisedSegmenterMetrics for the corresponding F1-Score.
See EDeepLearningDefectDetectionMetrics for a definition of the F1-Score. Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Precision for blob detection given the current classification threshold (see EDeepLearningDefectDetectionMetrics::ClassificationThreshold).
The precision is the proportion of detected defective blobs that match ground truth defective blobs.
Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Recall for blob detection given the current classification threshold (see EDeepLearningDefectDetectionMetrics::ClassificationThreshold).
The recall is the proportion of ground truth defective blobs that are matched to predicted defective blobs.
Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
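As a generic illustration (not Open eVision API code), the precision and recall definitions above reduce to simple ratios of blob counts; the count parameters below are hypothetical placeholders:

```cpp
#include <cassert>

// Precision: fraction of predicted defective blobs that match
// a ground truth defective blob.
double blobPrecision(int matchedPredicted, int totalPredicted)
{
    return totalPredicted > 0
        ? static_cast<double>(matchedPredicted) / totalPredicted : 0.0;
}

// Recall: fraction of ground truth defective blobs that are matched
// by a predicted defective blob.
double blobRecall(int matchedGroundTruth, int totalGroundTruth)
{
    return totalGroundTruth > 0
        ? static_cast<double>(matchedGroundTruth) / totalGroundTruth : 0.0;
}
```

Both ratios depend on the classification threshold, since the threshold determines which predicted blobs exist in the first place.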
Error.
Number of ground truth blobs of the given ground truth label that best match with blobs of the given predicted label.
See ESupervisedSegmenterMetrics::GetPredictedBlobConfusion for the number of predicted blobs that match to these ground truth blobs.
Intersection over union.
The intersection over union is the ratio of the number of correctly classified pixels of the given label to the total number of pixels that belong to that label or are predicted as being from that label.
Assuming that the given label is the positive class, the intersection over union is expressed as IoU = TP / (TP + FP + FN), where TP is the number of true positives, FP the number of false positives (pixels predicted as the label but that do not belong to it) and FN the number of false negatives (pixels that belong to the label but are not predicted as it).
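The formula above can be sketched directly from the three pixel counts (a generic illustration, not part of the library's API):

```cpp
#include <cassert>

// Intersection over Union for one label, from pixel counts:
// tp = pixels correctly predicted as the label,
// fp = pixels predicted as the label but belonging to another label,
// fn = pixels of the label predicted as another label.
double intersectionOverUnion(int tp, int fp, int fn)
{
    int denom = tp + fp + fn;
    return denom > 0 ? static_cast<double>(tp) / denom : 0.0;
}
```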
Label recognized by the segmenter that produced the results aggregated in the metrics.
Label error.
Pixel-wise normalized confusion between the given true and predicted labels.
The normalized confusion is the ratio of the number of pixels belonging to the 'trueLabel' that are classified as 'predictedLabel' to the total number of pixels belonging to the 'trueLabel'.
Number of labels recognized by the segmenter that produced the results aggregated in the metrics.
Accuracy.
The accuracy is the ratio of the number of correctly classified pixels to the total number of pixels.
Pixel-wise confusion between the given true and predicted labels.
The confusion is the number of pixels belonging to the 'trueLabel' that are classified as 'predictedLabel'.
Pixel label accuracy.
The label accuracy is the ratio of the number of correctly classified pixels of the given class to the total number of ground truth pixels from that class. If there are no ground truth pixels from the requested label, the method returns '-1'.
Number of predicted blobs of the given predicted label that match blobs of the given ground truth label.
See ESupervisedSegmenterMetrics::GetGroundtruthBlobConfusion for the number of ground truth blobs that match to these predicted blobs.
Weighted error.
The weighted error is the weighted average of each label error (ESupervisedSegmenterMetrics::GetLabelError) with respect to the dataset label weights.
Weighted Intersection over Union (IoU).
The weighted intersection over union is the weighted average of each label intersection over union (see ESupervisedSegmenterMetrics::GetIntersectionOverUnion) with respect to the dataset label weights.
Weighted accuracy.
The weighted accuracy is the weighted average of each label accuracy (ESupervisedSegmenterMetrics::GetPixelLabelAccuracy) with respect to the dataset label weights.
Indicates whether the object contains at least one result.
Loads a supervised segmentation metric. The given ESerializer must have been created for reading.
Assignment operator.
Equality operator.
Saves a supervised segmentation metric. The given ESerializer must have been created for writing.

ESupervisedSegmenterMetrics Class

Collection of metrics used to evaluate the state of an ESupervisedSegmenter tool.
A metric is a value summarizing the quality of a collection of supervised segmentation results (see ESupervisedSegmenterResult) with respect to their ground truth.
New results can be added to the object individually with ESupervisedSegmenterMetrics.

The ESupervisedSegmenterMetrics contains three types of metrics:
- pixel-based metrics that are related to the quality of the segmentation masks
- blob-based metrics that are related to the ability of the supervised segmentation tool to detect foreground blobs
- defect detection metrics that are related to the ability of the supervised segmentation tool to differentiate between images that contain foreground pixels (defective images) and images that are entirely background (good images).
The pixel metrics are the error (see ESupervisedSegmenterMetrics::Error), the pixel accuracy (see ESupervisedSegmenterMetrics::PixelAccuracy), the pixel confusion (see ESupervisedSegmenterMetrics::GetPixelConfusion), the pixel per-label accuracy (see ESupervisedSegmenterMetrics::GetPixelLabelAccuracy) and the intersection over union (see ESupervisedSegmenterMetrics).
The blob-based metrics are two confusion matrices (ESupervisedSegmenterMetrics::GetGroundtruthBlobConfusion and ESupervisedSegmenterMetrics::GetPredictedBlobConfusion), and various defect detection metrics (ESupervisedSegmenterMetrics, ESupervisedSegmenterMetrics, and ESupervisedSegmenterMetrics). Note that the metrics for defective blob detection are different from the metrics for defective image detection because, in the case of blob detection, we have no "good" ground truth results.
See EDeepLearningDefectDetectionMetrics for a description of the defect detection metrics. These metrics are available when EDeepLearningDefectDetectionMetrics::IsDefectDetectionMetricsValid is true, i.e. when both images that are entirely background and images with non-background pixels have been added to the metrics.
Most metrics depend upon the ESupervisedSegmenterMetrics.

Base Class: EDeepLearningDefectDetectionMetrics

Namespace: Euresys.Open_eVision.EasyDeepLearning

Properties

Balanced error.
The balanced error is the weighted error (ESupervisedSegmenterMetrics) where each label is given an equal weight.
Balanced Intersection over Union (IoU).
The balanced intersection over union is the weighted intersection over union (ESupervisedSegmenterMetrics::WeightedIntersectionOverUnion) where each label is given an equal weight.
Balanced accuracy.
The balanced accuracy is the weighted accuracy (ESupervisedSegmenterMetrics::WeightedPixelAccuracy) where each label is given an equal weight.
Average precision for blob detection.
Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label. The average precision for blob detection is the average of the precision (see ESupervisedSegmenterMetrics) over all the possible classification threshold values. As such, the average precision does not depend on a specific classification threshold.
Best F1-Score of the segmenter for blob detection.
See ESupervisedSegmenterMetrics::BlobDetectionBestFScoreThreshold for the corresponding threshold.
See EDeepLearningDefectDetectionMetrics for a definition of the F1-Score. Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Classification threshold that achieves the best F1-Score for blob detection.
See ESupervisedSegmenterMetrics::BlobDetectionBestFScore for the corresponding F1-Score.
See EDeepLearningDefectDetectionMetrics for a definition of the F1-Score. Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
F1-Score for defective blob detection given the current classification threshold (see EDeepLearningDefectDetectionMetrics::ClassificationThreshold).
See ESupervisedSegmenterMetrics for the corresponding F1-Score.
See EDeepLearningDefectDetectionMetrics for a definition of the F1-Score. Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Precision for blob detection given the current classification threshold (see EDeepLearningDefectDetectionMetrics::ClassificationThreshold).
The precision is the proportion of detected defective blobs that match ground truth defective blobs.
Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Recall for blob detection given the current classification threshold (see EDeepLearningDefectDetectionMetrics::ClassificationThreshold).
The recall is the proportion of ground truth defective blobs that are matched to predicted defective blobs.
Blob detection is the ability of the segmenter to correctly detect foreground ground truth blobs, regardless of their specific label.
Error.
Number of labels recognized by the segmenter that produced the results aggregated in the metrics.
Accuracy.
The accuracy is the ratio of the number of correctly classified pixels to the total number of pixels.
Weighted error.
The weighted error is the weighted average of each label error (ESupervisedSegmenterMetrics::GetLabelError) with respect to the dataset label weights.
Weighted Intersection over Union (IoU).
The weighted intersection over union is the weighted average of each label intersection over union (see ESupervisedSegmenterMetrics::GetIntersectionOverUnion) with respect to the dataset label weights.
Weighted accuracy.
The weighted accuracy is the weighted average of each label accuracy (ESupervisedSegmenterMetrics::GetPixelLabelAccuracy) with respect to the dataset label weights.

Methods

Number of ground truth blobs of the given ground truth label that best match with blobs of the given predicted label.
See ESupervisedSegmenterMetrics::GetPredictedBlobConfusion for the number of predicted blobs that match to these ground truth blobs.
Intersection over union.
The intersection over union is the ratio of the number of correctly classified pixels of the given label to the total number of pixels that belong to that label or are predicted as being from that label.
Assuming that the given label is the positive class, the intersection over union is expressed as IoU = TP / (TP + FP + FN), where TP is the number of true positives, FP the number of false positives (pixels predicted as the label but that do not belong to it) and FN the number of false negatives (pixels that belong to the label but are not predicted as it).
Label recognized by the segmenter that produced the results aggregated in the metrics.
Label error.
Pixel-wise normalized confusion between the given true and predicted labels.
The normalized confusion is the ratio of the number of pixels belonging to the 'trueLabel' that are classified as 'predictedLabel' to the total number of pixels belonging to the 'trueLabel'.
Pixel-wise confusion between the given true and predicted labels.
The confusion is the number of pixels belonging to the 'trueLabel' that are classified as 'predictedLabel'.
Pixel label accuracy.
The label accuracy is the ratio of the number of correctly classified pixels of the given class to the total number of ground truth pixels from that class. If there are no ground truth pixels from the requested label, the method returns '-1'.
Number of predicted blobs of the given predicted label that match blobs of the given ground truth label.
See ESupervisedSegmenterMetrics::GetGroundtruthBlobConfusion for the number of ground truth blobs that match to these predicted blobs.
Indicates whether the object contains at least one result.
Loads a supervised segmentation metric. The given ESerializer must have been created for reading.
Assignment operator.
Equality operator.
Saves a supervised segmentation metric. The given ESerializer must have been created for writing.