EUnsupervisedSegmenter Class
Unsupervised segmentation tool.
The unsupervised segmentation tool learns a model of what a good product looks like. It can be used both for classifying whether an image shows a good or a defective product and for segmenting the defects within the image.
To learn a model of what a good product looks like, the tool is trained using only the images with the good label (EUnsupervisedSegmenter::GoodLabel). The tool labels (EDeepLearningTool::GetLabel) will always be empty.
The tool can work with images of any resolution larger than EUnsupervisedSegmenter::PatchSize by merging the results obtained by applying the deep neural network with a sliding window algorithm. The overlap between the sliding windows is controlled by EUnsupervisedSegmenter::SamplingDensity.
The unsupervised segmenter offers a tradeoff between a high good detection rate and a high bad detection rate through a classification threshold that can be configured after training (EUnsupervisedSegmenter::ClassificationThreshold).
Base Class: EDeepLearningTool
Namespace: Euresys::Open_eVision::EasyDeepLearning
Methods
A higher capacity makes the unsupervised segmenter capable of learning more information at the cost of a slower processing speed.
An image with a score smaller than or equal to the classification threshold is classified as good; an image with a higher score is classified as defective.
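The threshold rule above can be sketched as follows. This is an illustrative Python sketch, not the Open eVision API; the `classify` helper, the threshold value, and the sample scores are hypothetical:

```python
def classify(score, threshold):
    """Scores at or below the threshold are 'good'; higher scores are 'defective'."""
    return "good" if score <= threshold else "defective"

# Raising the threshold classifies more images as good (higher good detection
# rate); lowering it flags more images as defective (higher bad detection rate).
threshold = 0.35
scores = [0.10, 0.35, 0.62]
labels = [classify(s, threshold) for s in scores]
```

Tuning the threshold after training moves along the tradeoff described above without retraining the model.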
Setting this property to false makes the underlying neural network operate with the same number of channels as the images in the dataset used for training. Otherwise, color images are converted to grayscale before being processed.
If EUnsupervisedSegmenter::ScaleDisabledAtInference is true, it is equal to 1.0. Otherwise, it is equal to EUnsupervisedSegmenter::Scale.
For EasyClassify and EasyLocate, this will always be equal to 1.
For EasySegment, the number of patches will depend on the scale, patch size and sampling density parameters.
The sampling density is the parameter of the sliding window algorithm used for processing a whole image using subwindows of size EUnsupervisedSegmenter::PatchSize. It indicates how much overlap there will be between the image patches: the stride between two consecutive patches is EUnsupervisedSegmenter::PatchSize / EUnsupervisedSegmenter::SamplingDensity.
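The stride formula above can be illustrated with a short sketch. This is illustrative Python, not the library API; the helper names are hypothetical, and the exact tiling at image borders may differ in the actual implementation:

```python
import math

def stride(patch_size, sampling_density):
    # Stride between two consecutive sliding windows: PatchSize / SamplingDensity.
    # A sampling density of 1 gives non-overlapping patches; higher densities
    # increase the overlap between neighboring patches.
    return patch_size / sampling_density

def patches_per_axis(length, patch_size, sampling_density):
    # Number of window positions needed to cover one image axis (illustrative).
    s = stride(patch_size, sampling_density)
    return max(1, math.ceil((length - patch_size) / s) + 1)
```

For example, a 1024-pixel axis with a 256-pixel patch size and a sampling density of 2 gives a stride of 128 pixels and 7 window positions per axis, which is consistent with the patch count depending on the patch size and sampling density parameters.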
If you disable the scale at inference, make sure your images are already scaled. For example, if you trained with 1024 x 1024 images and a scale of 0.5, the inference images should be 512 x 512 images with the same field of view as the training images.
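The sizing rule above can be sketched as a small helper. This is an illustrative Python sketch, not the Open eVision API; the function name and signature are hypothetical:

```python
def expected_inference_size(training_size, scale, scale_disabled_at_inference):
    """Return the image size (width, height) expected at inference.

    If scaling is disabled at inference, the images must already be at the
    scaled resolution; otherwise the tool applies the scale itself and the
    images should match the training resolution.
    """
    w, h = training_size
    if scale_disabled_at_inference:
        return (round(w * scale), round(h * scale))
    return (w, h)
```

With the example from the text, training at 1024 x 1024 with a scale of 0.5 and the scale disabled at inference gives an expected input size of 512 x 512.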
Namespace (.NET): Euresys.Open_eVision.EasyDeepLearning