EUnsupervisedSegmenter Class

Unsupervised segmentation tool.

The unsupervised segmentation tool learns a model of what a good product looks like. It can be used both to classify whether an image shows a good or a defective product and to segment the defects within the image.
To learn a model of what is a good product, the tool is trained by considering only the images from the good label (EUnsupervisedSegmenter::GoodLabel). The tool labels (EDeepLearningTool::GetLabel) will always be empty.
The tool can work with images of any resolution higher than EUnsupervisedSegmenter::PatchSize by merging the results obtained by applying the deep neural network with a sliding window algorithm. The overlap between the sliding windows is controlled by EUnsupervisedSegmenter::SamplingDensity.
The unsupervised segmenter offers a tradeoff between a high good detection rate and a high bad detection rate through a classification threshold that can be configured after training (EUnsupervisedSegmenter::ClassificationThreshold).

Base Class: EDeepLearningTool

Namespace: Euresys::Open_eVision::EasyDeepLearning

Methods

Applies the unsupervised segmenter to the given image and its mask region. We recommend using the pointer-based versions of this method to avoid unnecessary image and result copies.
Capacity of the EUnsupervisedSegmenter.
A higher capacity makes the unsupervised segmenter capable of learning more information at the cost of a slower processing speed.
Classification threshold for the score of an image (See EUnsupervisedSegmenterResult::ClassificationScore).
An image with a score smaller than or equal to the classification threshold is classified as good.
Forces the EUnsupervisedSegmenter to convert all images to grayscale (default: true).
Setting this property to false makes the underlying neural network operate with the same number of channels as the images in the dataset used for training. Otherwise, color images are converted to grayscale before processing.
Name of the good label in the dataset that is used for training (default value: empty).
The effective scale applied when performing inference.
If EUnsupervisedSegmenter::ScaleDisabledAtInference is true, it is equal to 1.0. Otherwise, it is equal to EUnsupervisedSegmenter::Scale.
Number of patches that will be extracted from an input image to perform inference.
For EasyClassify and EasyLocate, this will always be equal to 1.
For EasySegment, the number of patches will depend on the scale, patch size and sampling density parameters.
Patch size (width and height of the patches processed by the neural network).
Sampling density (default value: 2.0; its value must be greater than or equal to 1).
The sampling density is the parameter of the sliding window algorithm used for processing a whole image using subwindows of size EUnsupervisedSegmenter::PatchSize. It indicates how much overlap there will be between the image patches: the stride between two consecutive patches is EUnsupervisedSegmenter::PatchSize / EUnsupervisedSegmenter::SamplingDensity.
Down-scaling applied to images before processing them. Its value is between 0 and 1 (default value: 1).
Whether to apply the scale parameter at inference or not.
If you disable the scale at inference, make sure that your images are already scaled. For example, if you trained with 1024 x 1024 images and a scale of 0.5, the inference images should be 512 x 512 images with the same field of view as the training images.
Type of the deep learning tool.
Training metrics at the given iteration.
Validation metrics at the given iteration.
Assignment operator.
Serializes the settings of the unsupervised segmenter.
Capacity of the EUnsupervisedSegmenter.
A higher capacity makes the unsupervised segmenter capable of learning more information at the cost of a slower processing speed.
Classification threshold for the score of an image (See EUnsupervisedSegmenterResult::ClassificationScore).
An image with a score smaller than or equal to the classification threshold is classified as good.
Forces the EUnsupervisedSegmenter to convert all images to grayscale (default: true).
Setting this property to false makes the underlying neural network operate with the same number of channels as the images in the dataset used for training. Otherwise, color images are converted to grayscale before processing.
Name of the good label in the dataset that is used for training (default value: empty).
Patch size (width and height of the patches processed by the neural network).
Sampling density (default value: 2.0; its value must be greater than or equal to 1).
The sampling density is the parameter of the sliding window algorithm used for processing a whole image using subwindows of size EUnsupervisedSegmenter::PatchSize. It indicates how much overlap there will be between the image patches: the stride between two consecutive patches is EUnsupervisedSegmenter::PatchSize / EUnsupervisedSegmenter::SamplingDensity.
Down-scaling applied to images before processing them. Its value is between 0 and 1 (default value: 1).
Whether to apply the scale parameter at inference or not.
If you disable the scale at inference, make sure that your images are already scaled. For example, if you trained with 1024 x 1024 images and a scale of 0.5, the inference images should be 512 x 512 images with the same field of view as the training images.

EUnsupervisedSegmenter Class

Unsupervised segmentation tool.

The unsupervised segmentation tool learns a model of what a good product looks like. It can be used both to classify whether an image shows a good or a defective product and to segment the defects within the image.
To learn a model of what is a good product, the tool is trained by considering only the images from the good label (EUnsupervisedSegmenter::GoodLabel). The tool labels (EDeepLearningTool::GetLabel) will always be empty.
The tool can work with images of any resolution higher than EUnsupervisedSegmenter::PatchSize by merging the results obtained by applying the deep neural network with a sliding window algorithm. The overlap between the sliding windows is controlled by EUnsupervisedSegmenter::SamplingDensity.
The unsupervised segmenter offers a tradeoff between a high good detection rate and a high bad detection rate through a classification threshold that can be configured after training (EUnsupervisedSegmenter::ClassificationThreshold).

Base Class: EDeepLearningTool

Namespace: Euresys.Open_eVision.EasyDeepLearning

Properties

Capacity of the EUnsupervisedSegmenter.
A higher capacity makes the unsupervised segmenter capable of learning more information at the cost of a slower processing speed.
Classification threshold for the score of an image (See EUnsupervisedSegmenterResult::ClassificationScore).
An image with a score smaller than or equal to the classification threshold is classified as good.
Forces the EUnsupervisedSegmenter to convert all images to grayscale (default: true).
Setting this property to false makes the underlying neural network operate with the same number of channels as the images in the dataset used for training. Otherwise, color images are converted to grayscale before processing.
Name of the good label in the dataset that is used for training (default value: empty).
The effective scale applied when performing inference.
If EUnsupervisedSegmenter::ScaleDisabledAtInference is true, it is equal to 1.0. Otherwise, it is equal to EUnsupervisedSegmenter::Scale.
Patch size (width and height of the patches processed by the neural network).
Sampling density (default value: 2.0; its value must be greater than or equal to 1).
The sampling density is the parameter of the sliding window algorithm used for processing a whole image using subwindows of size EUnsupervisedSegmenter::PatchSize. It indicates how much overlap there will be between the image patches: the stride between two consecutive patches is EUnsupervisedSegmenter::PatchSize / EUnsupervisedSegmenter::SamplingDensity.
Down-scaling applied to images before processing them. Its value is between 0 and 1 (default value: 1).
Whether to apply the scale parameter at inference or not.
If you disable the scale at inference, make sure that your images are already scaled. For example, if you trained with 1024 x 1024 images and a scale of 0.5, the inference images should be 512 x 512 images with the same field of view as the training images.
Type of the deep learning tool.

Methods

Applies the unsupervised segmenter to the given image and its mask region. We recommend using the pointer-based versions of this method to avoid unnecessary image and result copies.
Number of patches that will be extracted from an input image to perform inference.
For EasyClassify and EasyLocate, this will always be equal to 1.
For EasySegment, the number of patches will depend on the scale, patch size and sampling density parameters.
Training metrics at the given iteration.
Validation metrics at the given iteration.
Assignment operator.
Serializes the settings of the unsupervised segmenter.