EDeepLearningDefectDetectionMetrics Class
Collection of metrics used to evaluate a defect detection problem where the detection is based on the thresholding of a score produced by the underlying deep learning tool, e.g. an EUnsupervisedSegmenter tool or an ESupervisedSegmenter tool.
These metrics are only valid when results for good and defective images are included in the metrics (see EDeepLearningDefectDetectionMetrics::IsDefectDetectionMetricsValid). The definition of what is considered a good or a defective image depends on the deep learning tool used.
The defect detection metrics are separated in two main categories:
- Metrics dependent on the ROC (Receiver Operating Characteristic) curve. They require at least one good and one defective sample to be defined.
- Metrics dependent on the Precision/Recall curve. They require at least one defective sample to be defined.
The metrics related to the ROC curve are:
- The accuracy (see EDeepLearningDefectDetectionMetrics::GetAccuracy).
- The confusion matrix (see EDeepLearningDefectDetectionMetrics::GetConfusion and EConfusionMatrixElement)
- The ROC curve (see EDeepLearningDefectDetectionMetrics::GetROCPoint)
- The area under the ROC curve (see EDeepLearningDefectDetectionMetrics::AreaUnderROCCurve)
The metrics related to the precision/recall curve are:
- The average precision (see EDeepLearningDefectDetectionMetrics::AveragePrecision)
- Precision/Recall curve (see EDeepLearningDefectDetectionMetrics::GetPrecisionRecallCurvePoint)
- Precision (see EDeepLearningDefectDetectionMetrics::GetPrecision)
- Recall (see EDeepLearningDefectDetectionMetrics::GetRecall)
- F-Score (see EDeepLearningDefectDetectionMetrics::GetFScore).
The ROC and Precision/Recall curves are both obtained by computing the metrics for different values of the EDeepLearningDefectDetectionMetrics::ClassificationThreshold. Thus, each point on the curves yields different metric values, except for the area under the ROC curve and the average precision, which are global to the curve.
By default, the value of the metrics corresponds to the classification threshold of the corresponding deep learning tool. However, you can specify an index to retrieve the value of the metrics for other values of the EDeepLearningDefectDetectionMetrics::ClassificationThreshold. Metrics based on the ROC curve are indexed between 0 and EDeepLearningDefectDetectionMetrics::NumberOfClassifiers-1, and metrics based on the Precision/Recall curve are indexed between 0 and EDeepLearningDefectDetectionMetrics::NumPrecisionRecallCurvePoint-1.
Derived Class(es): ESupervisedSegmenterMetrics, EUnsupervisedSegmenterMetrics
Namespace: Euresys::Open_eVision::EasyDeepLearning
Methods
The accuracy is the number of images that were correctly classified over the total number of images used to evaluate the classifier.
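As a minimal sketch of this definition (plain Python for illustration, not the Open eVision API; the function name `accuracy` and the boolean label encoding are assumptions for the example):

```python
# Illustrative only: accuracy from predicted and actual labels,
# where True marks a defective image and False a good one.
def accuracy(predicted, actual):
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# 4 of the 5 images are classified correctly -> accuracy 0.8
print(accuracy([True, False, True, False, False],
               [True, False, True, True, False]))
```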
In the context of unsupervised segmentation, the AUC is equal to the probability that good images will have a lower reconstruction error than defective images.
This metric measures the discrimination capacity of a model: it indicates how well the model distinguishes between the classes. The higher the AUC, the better the model.
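The probabilistic interpretation above can be sketched directly (plain Python for illustration, not the Open eVision API; the function name `auc` and the use of raw reconstruction errors as scores are assumptions for the example): the AUC is the fraction of (good, defective) pairs in which the good image scores lower.

```python
# Illustrative only: AUC as the probability that a randomly chosen good
# image has a lower score (e.g. reconstruction error) than a randomly
# chosen defective image. Ties count for one half.
def auc(good_scores, defective_scores):
    wins = sum((g < d) + 0.5 * (g == d)
               for g in good_scores for d in defective_scores)
    return wins / (len(good_scores) * len(defective_scores))

# 3 of the 4 (good, defective) pairs are ranked correctly -> AUC 0.75
print(auc([0.1, 0.2], [0.3, 0.15]))
```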
The average precision is the area under the precision/recall curve. The precision is the ratio between the number of true positives and the number of predicted positives, and the recall, also called the true positive rate, is the ratio between the number of true positives and the number of positive samples.
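One common way to accumulate this area is to rank the images by score and add, for each defective image, the precision at its rank weighted by the recall step it contributes. A sketch under those assumptions (plain Python for illustration, not the Open eVision API; the function name and the "higher score = more likely defective" convention are assumptions):

```python
# Illustrative only: average precision from per-image scores, where a
# higher score means "more likely defective".
def average_precision(scores, is_defective):
    ranked = sorted(zip(scores, is_defective), reverse=True)
    n_defective = sum(is_defective)
    ap, true_pos = 0.0, 0
    for rank, (score, defective) in enumerate(ranked, start=1):
        if defective:
            true_pos += 1
            precision = true_pos / rank       # precision at this rank
            ap += precision / n_defective     # recall step is 1/n_defective
    return ap
```

For example, with scores `[0.9, 0.8, 0.7, 0.6]` and labels `[True, False, True, False]`, the two defective images contribute precisions 1/1 and 2/3, giving an average precision of 5/6.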
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::BestAccuracyClassificationThreshold.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::BestBalancedAccuracyClassificationThreshold.
The F1-Score is the harmonic mean of the precision and recall (true positive rate).
The F1-Score is the harmonic mean of the precision and recall (true positive rate).
The weighted accuracy is the weighted average of the true positive rate and the true negative rate (which is equal to 1 minus the false positive rate). See EROCPoint.
The classification threshold corresponding to this accuracy is given by EDeepLearningDefectDetectionMetrics::GetBestWeightedAccuracyClassificationThreshold.
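The weighted average described above can be sketched as follows (plain Python for illustration, not the Open eVision API; the function name, argument names and the default weight of 0.5 — which yields the balanced accuracy — are assumptions for the example):

```python
# Illustrative only: weighted accuracy as a weighted average of the
# true positive rate and the true negative rate (tp/fp/tn/fn are the
# confusion matrix counts). weight = 0.5 gives the balanced accuracy.
def weighted_accuracy(tp, fp, tn, fn, weight=0.5):
    tpr = tp / (tp + fn)   # true positive rate
    tnr = tn / (tn + fp)   # true negative rate = 1 - false positive rate
    return weight * tpr + (1.0 - weight) * tnr

# tpr = 8/10 = 0.8, tnr = 9/10 = 0.9 -> balanced accuracy 0.85
print(weighted_accuracy(8, 1, 9, 2))
```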
By default, the classification threshold will be equal to the classification threshold of the last unsupervised segmenter used to produce the results that compose this metric.
Some metrics, such as EDeepLearningDefectDetectionMetrics::GetROCPoint or EDeepLearningDefectDetectionMetrics::GetAccuracy, depend on the classification threshold. By default, these methods return the metric corresponding to this classification threshold.
The confusion value of one label with another is the number of images belonging to the first label that are classified as belonging to the second.
For an EDeepLearningDefectDetectionMetrics, there are only 2 labels (good and defective), so the confusion matrix is composed of only 4 values, called matrix elements (see EConfusionMatrixElement).
The confusion matrix is computed for a given classification threshold (see EUnsupervisedSegmenter::ClassificationThreshold), which means an index can be passed to the method to select that threshold (see EDeepLearningDefectDetectionMetrics::NumberOfClassifiers).
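How the four matrix elements follow from thresholding a per-image score can be sketched like this (plain Python for illustration, not the Open eVision API; the function name and the "higher score = more likely defective" convention are assumptions):

```python
# Illustrative only: the 2x2 confusion matrix of a defect detection
# problem, built by thresholding a per-image score.
def confusion_matrix(scores, is_defective, threshold):
    tp = fp = tn = fn = 0
    for score, defective in zip(scores, is_defective):
        predicted_defective = score >= threshold
        if predicted_defective and defective:
            tp += 1          # defective image detected as defective
        elif predicted_defective and not defective:
            fp += 1          # good image flagged as defective
        elif defective:
            fn += 1          # defective image missed
        else:
            tn += 1          # good image accepted as good
    return tp, fp, tn, fn

# One image falls in each cell -> (1, 1, 1, 1)
print(confusion_matrix([0.9, 0.4, 0.6, 0.2],
                       [True, True, False, False], 0.5))
```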
The F1-Score is the harmonic mean of the precision (see EDeepLearningDefectDetectionMetrics::GetPrecision) and the recall (see EDeepLearningDefectDetectionMetrics::GetRecall).
Each classifier is obtained by choosing a different classification threshold (see EUnsupervisedSegmenter::ClassificationThreshold) and corresponds to a point on the ROC curve (see EDeepLearningDefectDetectionMetrics::GetROCPoint).
The precision is the proportion of detected defective instances that were correctly identified as such.
The value of the index is between 0 and EDeepLearningDefectDetectionMetrics::NumPrecisionRecallCurvePoint - 1. The index is -1 if the precision/recall curve is not defined.
It is the proportion of defective instances that were correctly identified as such. It is also called the true positive rate.
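The precision, recall and F-Score definitions above reduce to a few lines when written in terms of confusion matrix counts (plain Python for illustration, not the Open eVision API; the function and argument names are assumptions for the example):

```python
# Illustrative only: precision, recall and F1-score from confusion
# matrix counts (tp = true positives, fp = false positives,
# fn = false negatives).
def precision(tp, fp):
    return tp / (tp + fp)      # fraction of detections that are real defects

def recall(tp, fn):
    return tp / (tp + fn)      # fraction of real defects that are detected

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2.0 * p * r / (p + r)   # harmonic mean of precision and recall
```

For example, with tp = 8, fp = 2 and fn = 4, the precision is 0.8, the recall is 2/3, and the F1-score is their harmonic mean, 8/11.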
A ROC point is a point on the ROC curve, which is the plot of the true positive rate against the false positive rate (see EConfusionMatrixElement) obtained at various classification thresholds (see EUnsupervisedSegmenter::ClassificationThreshold).
The ROC points are strictly ordered by decreasing threshold, which means the true positive rate and the false positive rate (see EConfusionMatrixElement) are sorted in increasing order.
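The threshold sweep that produces this ordered list of points can be sketched as follows (plain Python for illustration, not the Open eVision API; the function name and the "higher score = more likely defective" convention are assumptions):

```python
# Illustrative only: builds ROC points by sweeping the classification
# threshold over every distinct score. Thresholds are visited in
# decreasing order, so the true and false positive rates come out
# in increasing order.
def roc_points(scores, is_defective):
    n_pos = sum(is_defective)
    n_neg = len(is_defective) - n_pos
    points = []
    for threshold in sorted(set(scores), reverse=True):
        tp = sum(s >= threshold and d for s, d in zip(scores, is_defective))
        fp = sum(s >= threshold and not d for s, d in zip(scores, is_defective))
        points.append((fp / n_neg, tp / n_pos))  # (FPR, TPR)
    return points

# Lowering the threshold moves the point up and to the right
print(roc_points([0.9, 0.7, 0.4, 0.2], [True, False, True, False]))
```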
By default, the classification threshold will be equal to the classification threshold of the last unsupervised segmenter used to produce the results that compose this metric.
Some metrics, such as EDeepLearningDefectDetectionMetrics::GetROCPoint or EDeepLearningDefectDetectionMetrics::GetAccuracy, depend on the classification threshold. By default, these methods return the metric corresponding to this classification threshold.