EDeepLearningDefectDetectionMetrics Class

Collection of metrics used to evaluate a defect detection problem where the detection is based on the thresholding of a score produced by the underlying deep learning tool, e.g. an EUnsupervisedSegmenter or an ESupervisedSegmenter tool.
These metrics are only valid when results for good and defective images are included in the metrics (see EDeepLearningDefectDetectionMetrics::IsDefectDetectionMetricsValid). The definition of what is considered a good or a defective image depends on the deep learning tool used.
The defect detection metrics are separated into two main categories:
- Metrics dependent on the ROC (Receiver Operating Characteristic) curve. They require at least one good and one defective sample to be defined.
- Metrics dependent on the Precision/Recall curve. They require at least one defective sample to be defined.

The metrics related to the ROC curve are:
- The accuracy (see EDeepLearningDefectDetectionMetrics::GetAccuracy).
- The confusion matrix (see EDeepLearningDefectDetectionMetrics::GetConfusion and EConfusionMatrixElement)
- The ROC curve (see EDeepLearningDefectDetectionMetrics::GetROCPoint)
- The area under the ROC curve (see EDeepLearningDefectDetectionMetrics::AreaUnderROCCurve)

The metrics related to the precision/recall curve are:
- The average precision (see EDeepLearningDefectDetectionMetrics::AveragePrecision)
- Precision/Recall curve (see EDeepLearningDefectDetectionMetrics::GetPrecisionRecallCurvePoint)
- Precision (see EDeepLearningDefectDetectionMetrics::GetPrecision)
- Recall (see EDeepLearningDefectDetectionMetrics::GetRecall)
- F-Score (see EDeepLearningDefectDetectionMetrics::GetFScore).

The ROC and Precision/Recall curves are both obtained by computing metrics for different values of the EDeepLearningDefectDetectionMetrics::ClassificationThreshold. Thus, each point on the curves yields different metric values, except for the area under the ROC curve and the average precision.
By default, the value of the metrics corresponds to the classification threshold of the corresponding deep learning tool. However, you can specify an index to retrieve the value of the metrics for other values of the EDeepLearningDefectDetectionMetrics::ClassificationThreshold. Metrics based on the ROC curve are indexed between 0 and EDeepLearningDefectDetectionMetrics::NumberOfClassifiers-1, and metrics based on the Precision/Recall curve are indexed between 0 and EDeepLearningDefectDetectionMetrics::NumPrecisionRecallCurvePoint-1.
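As an illustration of how such curves are built (a standalone numerical sketch, not Open eVision code), a ROC curve can be traced by sweeping a classification threshold over the scores of good and defective samples, assuming a higher score means "more defective" (as with a reconstruction error):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// A (false positive rate, true positive rate) point of the ROC curve.
struct RocPoint { double fpr; double tpr; };

// Sweep the classification threshold over every distinct score, predicting
// "defective" when score >= threshold. Decreasing thresholds give
// non-decreasing false and true positive rates.
std::vector<RocPoint> rocCurve(std::vector<double> goodScores,
                               std::vector<double> defectiveScores)
{
    std::vector<double> thresholds = goodScores;
    thresholds.insert(thresholds.end(),
                      defectiveScores.begin(), defectiveScores.end());
    std::sort(thresholds.begin(), thresholds.end(), std::greater<double>());
    thresholds.erase(std::unique(thresholds.begin(), thresholds.end()),
                     thresholds.end());

    std::vector<RocPoint> curve;
    for (double t : thresholds)
    {
        double fp = std::count_if(goodScores.begin(), goodScores.end(),
                                  [t](double s) { return s >= t; });
        double tp = std::count_if(defectiveScores.begin(), defectiveScores.end(),
                                  [t](double s) { return s >= t; });
        curve.push_back({ fp / goodScores.size(), tp / defectiveScores.size() });
    }
    return curve;
}
```

Sweeping the thresholds in decreasing order matches the ordering described for the ROC points: both rates grow as the threshold decreases.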

Derived Class(es): ESupervisedSegmenterMetrics, EUnsupervisedSegmenterMetrics

Namespace: Euresys::Open_eVision::EasyDeepLearning

Methods

The accuracy of the segmenter.
The accuracy is the number of images that were correctly classified divided by the total number of images used to evaluate the classifier.
The area under the ROC curve (AUC) of the classifier (see EDeepLearningDefectDetectionMetrics::GetROCPoint). Its value is between 0 and 1.
In the context of unsupervised segmentation, the AUC is equal to the probability that good images will have a lower reconstruction error than defective images.
This metric measures the discrimination capacity of a model: how well it is able to distinguish between classes. The higher the AUC, the better the model.
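This probability interpretation can be checked directly: the empirical AUC equals the fraction of (good, defective) pairs in which the good image scores lower. A minimal standalone sketch (not Open eVision code; the scores play the role of the reconstruction error):

```cpp
#include <vector>

// Empirical AUC: the probability that a randomly chosen good image has a
// lower score than a randomly chosen defective image; ties count one half.
double areaUnderRoc(const std::vector<double>& goodScores,
                    const std::vector<double>& defectiveScores)
{
    double wins = 0.0;
    for (double g : goodScores)
        for (double d : defectiveScores)
            wins += (g < d) ? 1.0 : ((g == d) ? 0.5 : 0.0);
    return wins / (goodScores.size() * defectiveScores.size());
}
```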
Average precision.
The average precision is the area under the precision/recall curve. The precision is the ratio between the number of true positives and the number of predicted positives, and the recall, also called the true positive rate, is the ratio between the number of true positives and the number of positive samples.
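Under the same assumptions as before (a standalone sketch, not the Open eVision API; the defective class is the positive class and a higher score means "more defective"), the average precision can be accumulated while sweeping the threshold from the highest score down:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Average precision: the area under the precision/recall curve, accumulated
// as precision * recall-increment at each true positive encountered while
// walking the samples in decreasing score order.
double averagePrecision(const std::vector<double>& goodScores,
                        const std::vector<double>& defectiveScores)
{
    std::vector<std::pair<double, bool>> samples;  // (score, isDefective)
    for (double s : goodScores)      samples.push_back({ s, false });
    for (double s : defectiveScores) samples.push_back({ s, true });
    std::sort(samples.begin(), samples.end(),
              [](const std::pair<double, bool>& a,
                 const std::pair<double, bool>& b) { return a.first > b.first; });

    double tp = 0.0, fp = 0.0, ap = 0.0;
    const double positives = static_cast<double>(defectiveScores.size());
    for (const auto& sample : samples)
    {
        if (sample.second)
        {
            tp += 1.0;
            ap += (tp / (tp + fp)) / positives;  // precision * recall step
        }
        else
            fp += 1.0;
    }
    return ap;
}
```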
Best achievable accuracy.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::BestAccuracyClassificationThreshold.
Classification threshold giving the best achievable accuracy (see EDeepLearningDefectDetectionMetrics::BestAccuracy).
Best achievable balanced accuracy.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::BestBalancedAccuracyClassificationThreshold.
Classification threshold giving the best achievable balanced accuracy (see EDeepLearningDefectDetectionMetrics::BestBalancedAccuracy).
Best F1-Score.
The F1-Score is the harmonic mean of the precision and recall (true positive rate).
Classification threshold that yields the best F1-Score.
The F1-Score is the harmonic mean of the precision and recall (true positive rate).
Best achievable weighted accuracy.
The weighted accuracy is the weighted average of the true positive rate and the true negative rate (which is equal to 1 minus the false positive rate). See EROCPoint.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::GetBestWeightedAccuracyClassificationThreshold.
Classification threshold giving the best achievable weighted accuracy (see EDeepLearningDefectDetectionMetrics::GetBestWeightedAccuracy).
Classification threshold.
By default, the classification threshold is equal to the classification threshold of the last unsupervised segmenter used to produce the results that compose these metrics.
Some metrics such as EDeepLearningDefectDetectionMetrics::GetROCPoint, EDeepLearningDefectDetectionMetrics::GetAccuracy or EDeepLearningDefectDetectionMetrics depend on the classification threshold. By default, these methods return the metric corresponding to the classification threshold.
Confusion value of one label with another.
The confusion value of a label with another is the number of images belonging to this label that are classified as belonging to the other label.
For an EDeepLearningDefectDetectionMetrics, there are only 2 labels (good and defective), so the confusion matrix is composed of only 4 values, called matrix elements (see EDeepLearningDefectDetectionMetrics).
The confusion matrix is computed for a given threshold (see EUnsupervisedSegmenter::ClassificationThreshold), which means an index can be passed to the method (see EDeepLearningDefectDetectionMetrics::NumberOfClassifiers).
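As an illustration of these 4 matrix elements (a standalone sketch, not the GetConfusion API; it assumes a sample is classified defective when its score reaches the threshold):

```cpp
#include <vector>

// The 4 elements of the 2x2 confusion matrix at one classification threshold.
struct Confusion { int tp = 0, fp = 0, tn = 0, fn = 0; };

Confusion confusionAt(const std::vector<double>& goodScores,
                      const std::vector<double>& defectiveScores,
                      double threshold)
{
    Confusion c;
    for (double s : goodScores)
    {
        // A good image predicted defective is a false positive.
        if (s >= threshold) ++c.fp; else ++c.tn;
    }
    for (double s : defectiveScores)
    {
        // A defective image correctly detected is a true positive.
        if (s >= threshold) ++c.tp; else ++c.fn;
    }
    return c;
}
```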
The F1-Score for the current threshold.
The F1-Score is the harmonic mean of the precision and the recall (see EDeepLearningDefectDetectionMetrics::GetPrecision and EDeepLearningDefectDetectionMetrics::GetRecall).
Number of different possible classifiers.
Each classifier is obtained by choosing a different classification threshold (see EUnsupervisedSegmenter::ClassificationThreshold) and corresponds to a point on the ROC curve (see EDeepLearningDefectDetectionMetrics::GetROCPoint).
Number of defective samples added to the metrics.
Number of good samples added to the metrics.
Number of points in the precision/recall curve.
Precision for the current threshold.
The precision is the proportion of instances detected as defective that are actually defective.
Index in the precision/recall curve for the current EDeepLearningDefectDetectionMetrics::ClassificationThreshold.
The value of the index is between 0 and EDeepLearningDefectDetectionMetrics::NumPrecisionRecallCurvePoint - 1. The index is -1 if the precision/recall curve is not defined.
Get a point on the precision/recall curve.
Recall for the current threshold.
It is the proportion of defective instances that were correctly identified as such. It is also called the true positive rate.
ROC (Receiver Operating Characteristic) point.
A ROC point is a point of the ROC curve, which is the plot of the true positive rate against the false positive rate (see EConfusionMatrixElement) obtained at various classification thresholds (see EUnsupervisedSegmenter::ClassificationThreshold).
The ROC points are strictly ordered by decreasing threshold, meaning the true positive rate and the false positive rate (see EConfusionMatrixElement) are sorted in increasing order.
Whether the defect detection metrics are valid or not. Defect detection metrics are completely valid when they include results for at least one good and one defective image. Some metrics might be valid with only defective or only good results.
Loads the defect detection metrics. The given ESerializer must have been created for reading.
Copy operator.
Saves the defect detection metrics. The given ESerializer must have been created for writing.
Classification threshold.
By default, the classification threshold is equal to the classification threshold of the last unsupervised segmenter used to produce the results that compose these metrics.
Some metrics such as EDeepLearningDefectDetectionMetrics::GetROCPoint, EDeepLearningDefectDetectionMetrics::GetAccuracy or EDeepLearningDefectDetectionMetrics depend on the classification threshold. By default, these methods return the metric corresponding to the classification threshold.

EDeepLearningDefectDetectionMetrics Class

Collection of metrics used to evaluate a defect detection problem where the detection is based on the thresholding of a score produced by the underlying deep learning tool, e.g. an EUnsupervisedSegmenter or an ESupervisedSegmenter tool.
These metrics are only valid when results for good and defective images are included in the metrics (see EDeepLearningDefectDetectionMetrics::IsDefectDetectionMetricsValid). The definition of what is considered a good or a defective image depends on the deep learning tool used.
The defect detection metrics are separated into two main categories:
- Metrics dependent on the ROC (Receiver Operating Characteristic) curve. They require at least one good and one defective sample to be defined.
- Metrics dependent on the Precision/Recall curve. They require at least one defective sample to be defined.

The metrics related to the ROC curve are:
- The accuracy (see EDeepLearningDefectDetectionMetrics::GetAccuracy).
- The confusion matrix (see EDeepLearningDefectDetectionMetrics::GetConfusion and EConfusionMatrixElement)
- The ROC curve (see EDeepLearningDefectDetectionMetrics::GetROCPoint)
- The area under the ROC curve (see EDeepLearningDefectDetectionMetrics::AreaUnderROCCurve)

The metrics related to the precision/recall curve are:
- The average precision (see EDeepLearningDefectDetectionMetrics::AveragePrecision)
- Precision/Recall curve (see EDeepLearningDefectDetectionMetrics::GetPrecisionRecallCurvePoint)
- Precision (see EDeepLearningDefectDetectionMetrics::GetPrecision)
- Recall (see EDeepLearningDefectDetectionMetrics::GetRecall)
- F-Score (see EDeepLearningDefectDetectionMetrics::GetFScore).

The ROC and Precision/Recall curves are both obtained by computing metrics for different values of the EDeepLearningDefectDetectionMetrics::ClassificationThreshold. Thus, each point on the curves yields different metric values, except for the area under the ROC curve and the average precision.
By default, the value of the metrics corresponds to the classification threshold of the corresponding deep learning tool. However, you can specify an index to retrieve the value of the metrics for other values of the EDeepLearningDefectDetectionMetrics::ClassificationThreshold. Metrics based on the ROC curve are indexed between 0 and EDeepLearningDefectDetectionMetrics::NumberOfClassifiers-1, and metrics based on the Precision/Recall curve are indexed between 0 and EDeepLearningDefectDetectionMetrics::NumPrecisionRecallCurvePoint-1.

Derived Class(es): ESupervisedSegmenterMetrics, EUnsupervisedSegmenterMetrics

Namespace: Euresys.Open_eVision.EasyDeepLearning

Properties

The area under the ROC curve (AUC) of the classifier (see EDeepLearningDefectDetectionMetrics::GetROCPoint). Its value is between 0 and 1.
In the context of unsupervised segmentation, the AUC is equal to the probability that good images will have a lower reconstruction error than defective images.
This metric measures the discrimination capacity of a model: how well it is able to distinguish between classes. The higher the AUC, the better the model.
Average precision.
The average precision is the area under the precision/recall curve. The precision is the ratio between the number of true positives and the number of predicted positives, and the recall, also called the true positive rate, is the ratio between the number of true positives and the number of positive samples.
Best achievable accuracy.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::BestAccuracyClassificationThreshold.
Classification threshold giving the best achievable accuracy (see EDeepLearningDefectDetectionMetrics::BestAccuracy).
Best achievable balanced accuracy.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::BestBalancedAccuracyClassificationThreshold.
Classification threshold giving the best achievable balanced accuracy (see EDeepLearningDefectDetectionMetrics::BestBalancedAccuracy).
Best F1-Score.
The F1-Score is the harmonic mean of the precision and recall (true positive rate).
Classification threshold that yields the best F1-Score.
The F1-Score is the harmonic mean of the precision and recall (true positive rate).
Classification threshold.
By default, the classification threshold is equal to the classification threshold of the last unsupervised segmenter used to produce the results that compose these metrics.
Some metrics such as EDeepLearningDefectDetectionMetrics::GetROCPoint, EDeepLearningDefectDetectionMetrics::GetAccuracy or EDeepLearningDefectDetectionMetrics depend on the classification threshold. By default, these methods return the metric corresponding to the classification threshold.
Number of different possible classifiers.
Each classifier is obtained by choosing a different classification threshold (see EUnsupervisedSegmenter::ClassificationThreshold) and corresponds to a point on the ROC curve (see EDeepLearningDefectDetectionMetrics::GetROCPoint).
Number of defective samples added to the metrics.
Number of good samples added to the metrics.
Number of points in the precision/recall curve.
Index in the precision/recall curve for the current EDeepLearningDefectDetectionMetrics::ClassificationThreshold.
The value of the index is between 0 and EDeepLearningDefectDetectionMetrics::NumPrecisionRecallCurvePoint - 1. The index is -1 if the precision/recall curve is not defined.

Methods

The accuracy of the segmenter.
The accuracy is the number of images that were correctly classified divided by the total number of images used to evaluate the classifier.
Best achievable weighted accuracy.
The weighted accuracy is the weighted average of the true positive rate and the true negative rate (which is equal to 1 minus the false positive rate). See EROCPoint.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::GetBestWeightedAccuracyClassificationThreshold.
Classification threshold giving the best achievable weighted accuracy (see EDeepLearningDefectDetectionMetrics::GetBestWeightedAccuracy).
Confusion value of one label with another.
The confusion value of a label with another is the number of images belonging to this label that are classified as belonging to the other label.
For an EDeepLearningDefectDetectionMetrics, there are only 2 labels (good and defective), so the confusion matrix is composed of only 4 values, called matrix elements (see EDeepLearningDefectDetectionMetrics).
The confusion matrix is computed for a given threshold (see EUnsupervisedSegmenter::ClassificationThreshold), which means an index can be passed to the method (see EDeepLearningDefectDetectionMetrics::NumberOfClassifiers).
The F1-Score for the current threshold.
The F1-Score is the harmonic mean of the precision and the recall (see EDeepLearningDefectDetectionMetrics::GetPrecision and EDeepLearningDefectDetectionMetrics::GetRecall).
Precision for the current threshold.
The precision is the proportion of instances detected as defective that are actually defective.
Get a point on the precision/recall curve.
Recall for the current threshold.
It is the proportion of defective instances that were correctly identified as such. It is also called the true positive rate.
ROC (Receiver Operating Characteristic) point.
A ROC point is a point of the ROC curve, which is the plot of the true positive rate against the false positive rate (see EConfusionMatrixElement) obtained at various classification thresholds (see EUnsupervisedSegmenter::ClassificationThreshold).
The ROC points are strictly ordered by decreasing threshold, meaning the true positive rate and the false positive rate (see EConfusionMatrixElement) are sorted in increasing order.
Whether the defect detection metrics are valid or not. Defect detection metrics are completely valid when they include results for at least one good and one defective image. Some metrics might be valid with only defective or only good results.
Loads the defect detection metrics. The given ESerializer must have been created for reading.
Copy operator.
Saves the defect detection metrics. The given ESerializer must have been created for writing.