ESupervisedSegmenter Class
Supervised segmentation tool.
The supervised segmentation tool segments the pixels of an image into various labels by learning a deep learning model on a dataset of segmented images.
The tool can work with images of any resolution larger than ESupervisedSegmenter::PatchSize by merging the results obtained by applying the deep neural network with a sliding window algorithm. The overlap between the sliding windows is controlled by ESupervisedSegmenter::SamplingDensity.
For defect detection and foreground blob detection, the supervised segmenter offers a trade-off between a high good-detection rate and a high defect-detection rate through a classification threshold that can be configured after training (ESupervisedSegmenter::ClassificationThreshold).
Base Class: EDeepLearningTool
Namespace: Euresys::Open_eVision::EasyDeepLearning (C++) / Euresys.Open_eVision.EasyDeepLearning (.NET)
Methods
We recommend using the pointer-based versions of these methods to avoid unnecessary copies of images and results.
A higher capacity makes the supervised segmenter capable of learning more information at the cost of a slower processing speed.
By default, its value is set during training to maximize the weighted accuracy (see ESupervisedSegmenterMetrics).
Setting this property to false makes the underlying neural network operate with the same number of channels as the images in the dataset used for training. Otherwise, color images are converted to grayscale before use.
If ESupervisedSegmenter::ScaleDisabledAtInference is true, it is equal to 1.0. Otherwise, it is equal to ESupervisedSegmenter::Scale.
For EasyClassify and EasyLocate, the number of patches is always equal to 1.
For EasySegment, the number of patches will depend on the scale, patch size and sampling density parameters.
The sampling density is the parameter of the sliding window algorithm used for processing a whole image using subwindows of size ESupervisedSegmenter::PatchSize. It indicates how much overlap there will be between the image patches: the stride between two consecutive patches is ESupervisedSegmenter::PatchSize / ESupervisedSegmenter::SamplingDensity.
If you disable the scale at inference, make sure your images are already scaled. For example, if you trained with 1024 x 1024 images and a scale of 0.5, the inference images should be 512 x 512 with the same field of view as the training images.