ESupervisedSegmenter Class

Supervised segmentation tool.

The supervised segmentation tool segments the pixels of an image into various labels by learning a deep learning model on a dataset of segmented images.
The tool can work with images of any resolution higher than ESupervisedSegmenter::PatchSize by merging the results obtained by applying the deep neural network using a sliding window algorithm. The overlap between the sliding windows is controlled by ESupervisedSegmenter::SamplingDensity.
For defect detection and foreground blob detection, the supervised segmenter offers a trade-off between a high good detection rate and a high defect detection rate through a classification threshold that can be configured after training (ESupervisedSegmenter::ClassificationThreshold).

Base Class: EDeepLearningTool

Namespace: Euresys::Open_eVision::EasyDeepLearning (C++) / Euresys.Open_eVision.EasyDeepLearning (.NET)

Methods

Applies the supervised segmenter to the given image and its mask region.
We recommend using the pointer-based versions of this method to avoid unnecessary copies of images and results.
Evaluates the dataset.
Capacity of the ESupervisedSegmenter.
A higher capacity makes the supervised segmenter capable of learning more information at the cost of a slower processing speed.
Classification threshold for determining whether an image contains blobs with a label other than the background.
By default, its value is set during training to maximize the weighted accuracy (see ESupervisedSegmenterMetrics).
Forces the ESupervisedSegmenter to convert all images to grayscale (default: false).
When this property is false, the underlying neural network operates with the same number of channels as the images in the dataset used for training. When it is true, color images are converted to grayscale before being used.
The effective scale applied when performing inference.
If ESupervisedSegmenter::ScaleDisabledAtInference is true, it is equal to 1.0. Otherwise, it is equal to ESupervisedSegmenter::Scale.
Number of patches that will be extracted from an input image to perform inference.
For EasyClassify and EasyLocate, this will always be equal to 1.
For EasySegment, the number of patches will depend on the scale, patch size and sampling density parameters.
Patch size (width and height of the patches processed by the neural network).
Sampling density (default value: 1.25; the value must be greater than or equal to 1).
The sampling density is the parameter of the sliding window algorithm used to process a whole image with subwindows of size ESupervisedSegmenter::PatchSize. It indicates how much overlap there is between image patches: the stride between two consecutive patches is ESupervisedSegmenter::PatchSize / ESupervisedSegmenter::SamplingDensity.
Down-scaling applied to images before processing them. Its value must be greater than 0 and at most 1 (default value: 1).
Whether to apply the scale parameter at inference or not.
If you disable the scale at inference, make sure your images are already scaled. For example, if you trained with 1024 x 1024 images and a scale of 0.5, the inference images should be 512 x 512 with the same field of view as the training images.
Type of the deep learning tool.
Training metrics at the given iteration.
Validation metrics at the given iteration.
Assignment operator.
Serializes the settings of the supervised segmenter.
