EClassificationDataset Class

EClassificationDataset manages a dataset of images.

A dataset is a collection of images with different types of labeling: labeling for the classification of images, labeling for the segmentation of pixels, and/or labeling for the detection of objects (classification and localization).
The dataset maintains 3 sets of labels, one for each type of labeling:
- the classification labels that characterize an entire image;
- the segmentation labels that characterize the pixels of an image; and
- the object labels that characterize axis-aligned rectangular regions of an image.
The classification and segmentation labels are entirely user defined. The set of segmentation labels will always contain at least the "Background" label representing pixels of the images that have no relevant information for your task (for example, in a defect segmentation application, the "Background" pixels would be the pixels without any defects).

For each type of labeling, an image can be either labeled or unlabeled. When an image is unlabeled for a given type of labeling, it won't be used for training a deep learning tool that requires this type of labeling.
An image in the dataset can be stored as a path to an image file or as an Open eVision image structure. Supported structures are 8-bit monochrome (EImageBW8), 16-bit monochrome (EImageBW16), and 24-bit color (EROIC24, EImageC24).
The dataset associates with each image a region of interest and a mask/don't care area. By default, the region of interest of an image is its full extent and its mask is empty.
An EClassificationDataset object is also responsible for providing tools for data augmentation. Data augmentation is the process of generating new images on-the-fly by applying affine transformations to those already in the dataset. Data augmentation allows a deep neural network to become invariant to the applied transformations without having to capture and label real-world images containing those transformations.

A dataset can contain images of different sizes (width and height of their region of interest). However, the dataset has a default resolution that is used by deep learning tools that require a fixed input image size, such as the EClassifier. When the images have different sizes, the default resolution is the resolution of the region of interest of the first image added to the dataset.
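As an illustration of the affine augmentations mentioned above, the following stand-alone sketch samples a random transform within bounds analogous to MaxRotationAngle, MaxHorizontalShift, and MaxVerticalShift. All names and the matrix convention are assumptions for illustration, not the library's implementation:

```cpp
#include <cmath>
#include <random>

// Row-major 2x3 affine matrix: (x, y) -> (a*x + b*y + tx, c*x + d*y + ty).
struct Affine { double a, b, tx, c, d, ty; };

// Samples one augmentation transform within the given bounds (hypothetical
// stand-ins for MaxRotationAngle, MaxHorizontalShift, MaxVerticalShift).
Affine sampleAugmentation(double maxRotationDeg, double maxShiftX,
                          double maxShiftY, bool flipHorizontal,
                          std::mt19937& rng)
{
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> rot(-maxRotationDeg, maxRotationDeg);
    std::uniform_real_distribution<double> sx(-maxShiftX, maxShiftX);
    std::uniform_real_distribution<double> sy(-maxShiftY, maxShiftY);
    const double th = rot(rng) * kPi / 180.0;
    const double f = flipHorizontal ? -1.0 : 1.0;  // mirror the x axis
    return { f * std::cos(th), -std::sin(th), sx(rng),
             f * std::sin(th),  std::cos(th), sy(rng) };
}

// Applies the transform to one pixel coordinate.
void applyAffine(const Affine& m, double x, double y, double& ox, double& oy)
{
    ox = m.a * x + m.b * y + m.tx;
    oy = m.c * x + m.d * y + m.ty;
}
```

With all bounds set to zero and no flip, the sampled transform is the identity, which is what makes such augmentations easy to disable per parameter.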

Namespace: Euresys::Open_eVision::EasyDeepLearning

Methods

Adds an image to the dataset.
The image can be specified by its path on the filesystem (parameter imagePath) or by an Open eVision image buffer (parameter img).
By default, an image has no classification label and no segmentation. A label can be directly specified when adding the image to the dataset; if the given label is not among the classification labels of the dataset, it is automatically added to them.
If no region of interest and/or mask is specified, the region of interest of the image is its full extent and its mask is empty.
The method returns -1 if there was an error when inserting the image into the dataset, or a numeric identifier (greater than or equal to 0) that can be used to access and manipulate the image in the dataset.
Adds an object to the image.
Adds all the images present in the directory specified by the parameter path and whose filenames match the filter.
By default, the images have no classification label and no segmentation. However, a label can be directly specified for all of the images.
The method returns the number of images added to the dataset.
Adds a label to the dataset.
Adds an object label.
Adds a region (see ERegion) to the given segment of an image. If, before this call, the image had no segmentation, the pixels not in the given region will be set to the background segmentation label.
Adds a segmentation label. The index of the new segmentation labels is returned by the method.
Clears the dataset.
Exports the dataset and its images to the given directory:
- A new EClassificationDataset object with relative paths to the images is saved into the given directory.
- The exported images are placed in a subdirectory named "Images".
By default, the images are saved under the filename "Image_[id].[ext]", where [id] is the index of the image in the dataset and [ext] is the image extension. If fileType is set to EImageFileType_Auto, [ext] is the same as the original image extension; otherwise, [ext] is deduced from the specified file type. If keepFilename is set to true, the images keep their original filenames (except for their extensions). If two images have the same filename, an index is automatically appended to avoid any conflict. A width, height, and number of channels can optionally be specified to reformat all the images in the dataset. Note that, in this case, only the ROI of each image is exported.
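The collision handling described above can be sketched as follows; `exportName` and its bookkeeping map are an illustrative reconstruction of the described behavior, not the actual export code:

```cpp
#include <map>
#include <string>

// Sketch of the naming scheme when keepFilename is true: keep the original
// base name, and append an index when two images share the same base name.
std::string exportName(const std::string& base, const std::string& ext,
                       std::map<std::string, int>& seen)
{
    const int n = seen[base]++;  // 0 the first time this base name appears
    if (n == 0)
        return base + "." + ext;
    return base + "_" + std::to_string(n) + "." + ext;
}
```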
Creates a new EClassificationDataset containing only the images of the given split type.
Absolute maximum overlap between objects in the dataset.
Image annotation file formats that are available next to the image file.
Number of channels of the first image added to the dataset.
Enable data augmentation.
Enable horizontal flipping in data augmentation.
Enable vertical flipping in data augmentation.
The Gaussian noise maximum standard deviation.
The Gaussian noise is an additive noise sampled from a Gaussian (normal) distribution with a standard deviation between EClassificationDataset::GaussianNoiseMinimumStandardDeviation and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Its value must be greater than or equal to EClassificationDataset::GaussianNoiseMinimumStandardDeviation.
The Gaussian noise minimum standard deviation.
The Gaussian noise is an additive noise sampled from a Gaussian (normal) distribution with a standard deviation between EClassificationDataset::GaussianNoiseMinimumStandardDeviation and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Its value must be between 0 and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
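A minimal stand-alone sketch of this augmentation, assuming pixel values normalized to [0, 1]; the function and parameter names are illustrative, not the library API:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Additive Gaussian noise: the standard deviation is itself sampled between
// the configured minimum and maximum, then zero-mean noise is added per pixel.
void addGaussianNoise(std::vector<double>& pixels, double minStd, double maxStd,
                      std::mt19937& rng)
{
    std::uniform_real_distribution<double> stdDist(minStd, maxStd);
    std::normal_distribution<double> noise(0.0, stdDist(rng));
    for (double& p : pixels)
        p = std::min(1.0, std::max(0.0, p + noise(rng)));  // stay in [0, 1]
}
```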
Height of the region of interest of the first image added to the dataset.
Gets a copy of the i-th image of the dataset. For the variant that returns a pointer, the user is responsible for releasing the memory of the returned pointer.
Generates a new image from the i-th image of the dataset. For the variant that returns a pointer, the user is responsible for releasing the memory of the returned pointer.
Gets the label of the i-th image of the dataset. An exception is thrown if the image has no classification label.
Number of objects for the specified image.
Gets the specified object for the given image.
Gets all the objects of the specified image.
Gets the path of the i-th image of the dataset. If the image was not given using a path, the method will throw an exception.
Gets a copy of all images in the dataset. If data augmentation is enabled, the returned images will be augmented versions of the ones in the dataset.
The caller is responsible for clearing the memory allocated for each image.
Gets the list of indexes of the images associated with the given label.
Gets the list of indexes of the images with no classification label.
Gets the i-th label of the dataset (from 0 to EClassificationDataset::NumLabels - 1).
Gets the list of indexes of the images that have a classification label.
Gets the weight associated with the i-th label of the dataset (from 0 to EClassificationDataset::NumLabels - 1).
Gets the mask region/don't care area for the given image. The mask is defined with respect to the region of interest of the image.
Maximum absolute brightness offset. Its value must be between 0 and 1.
Maximum contrast gain. Its value must be strictly positive and greater than EClassificationDataset::MinContrastGain.
Maximum gamma for gamma correction. Its value must be higher than EClassificationDataset::MinGamma.
Maximum absolute horizontal shear.
It is represented as an angle from the vertical direction. Its value must be between 0 and 90 degrees.
Maximum horizontal shift for data augmentation.
The horizontal shift will be between -EClassificationDataset::MaxHorizontalShift and +EClassificationDataset::MaxHorizontalShift.
Maximum absolute hue offset. Its value must be between 0 and 180 degrees.
Maximum number of objects for an image in the dataset. If no image has object labeling (see EClassificationDataset::HasObjectLabeling), the method returns -1.
Maximum rotation angle for data augmentation.
The rotation angle will be between -EClassificationDataset::MaxRotationAngle and +EClassificationDataset::MaxRotationAngle.
Maximum saturation gain. Its value must be greater than or equal to EClassificationDataset::MinSaturationGain.
Maximum scaling allowed for data augmentation.
Maximum absolute vertical shear.
It is represented as an angle from the horizontal direction. Its value must be between 0 and 90 degrees.
Maximum vertical shift for data augmentation.
The vertical shift will be between -EClassificationDataset::MaxVerticalShift and +EClassificationDataset::MaxVerticalShift.
Minimum contrast gain. Its value must be strictly positive and below EClassificationDataset::MaxContrastGain.
Minimum gamma for gamma correction. Its value must be strictly positive and below EClassificationDataset::MaxGamma.
Minimum saturation gain. Its value must be strictly positive.
Minimum scaling allowed for data augmentation.
Number of different image files contained in the dataset.
Number of images in the dataset.
Gets the number of images in the dataset that are associated with the given image file.
Number of images containing the given segmentation label.
Number of images that have a ground truth segmentation that has foreground segments and is thus not entirely composed of background pixels.
Number of images that contain an object with the given label.
Number of images in the dataset that are labelled for object detection.
Number of images in the dataset that are labelled for object detection and have 1 or more objects.
Number of images that have a ground truth segmentation that has no foreground segments and is thus entirely composed of background pixels.
Number of images in the dataset that are not labelled for object detection.
Number of images in the dataset that are labelled for object detection but have been associated with no object.
Number of images that don't have a ground truth segmentation.
Number of images that have a ground truth segmentation.
Number of images that have a classification label.
Number of labels in the dataset.
Number of object labels.
Number of objects with the given label.
Number of pixels assigned to the given segmentation label.
Number of segmentation labels. A dataset always has at least one segmentation label: "Background".
Number of segmented blobs in the image for all non-background labels or for a specific label.
Number of images that don't have any classification label.
Object label.
Gets the weight of an object label.
Object size for EasyLocate Interest point.
Gets a region corresponding to the pixels of the given segment.
Region of interest height for the specified image.
Region of interest origin abscissa for the specified image.
Region of interest origin ordinate for the specified image.
Region of interest width for the specified image.
The maximum density of the salt and pepper noise.
The salt and pepper noise sets randomly selected pixels (a fraction between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and EClassificationDataset::SaltAndPepperNoiseMaximumDensity) to their minimum or maximum value.
Its value must be between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and 1.
The minimum density of the salt and pepper noise.
The salt and pepper noise sets randomly selected pixels (a fraction between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and EClassificationDataset::SaltAndPepperNoiseMaximumDensity) to their minimum or maximum value.
Its value must be between 0 and EClassificationDataset::SaltAndPepperNoiseMaximumDensity.
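A stand-alone sketch of this effect, assuming pixel values normalized to [0, 1]; names are illustrative, not the library API:

```cpp
#include <random>
#include <vector>

// Salt and pepper noise: a fraction of pixels, drawn between the minimum and
// maximum density, is forced to the extreme values (half salt, half pepper).
void saltAndPepper(std::vector<double>& pixels, double minDensity,
                   double maxDensity, std::mt19937& rng)
{
    std::uniform_real_distribution<double> densityDist(minDensity, maxDensity);
    std::bernoulli_distribution corrupt(densityDist(rng));
    std::bernoulli_distribution salt(0.5);
    for (double& p : pixels)
        if (corrupt(rng))
            p = salt(rng) ? 1.0 : 0.0;  // salt = maximum, pepper = minimum
}
```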
Maximum overlap between objects with the same label in the dataset.
Gets the segmentation label.
Gets the segmentation label weights.
Gets the segmentation map of an image. The segmentation map is a 16-bit EROIBW16 image where the value of each pixel is equal to the corresponding segmentation label index (see EClassificationDataset::GetSegmentationLabel).
If an image has no segmentation (EClassificationDataset::HasSegmentation), the getter will throw an exception.
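Since each pixel of the map holds a segmentation label index (0 being the background label), per-label pixel statistics can be reproduced with a single pass over the buffer. This is an illustrative sketch over a flat 16-bit buffer, not the library's implementation:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Counts how many pixels are assigned to each segmentation label index in a
// segmentation map stored as a flat 16-bit buffer (index 0 = background).
std::map<uint16_t, std::size_t>
countPixelsPerLabel(const std::vector<uint16_t>& segMap)
{
    std::map<uint16_t, std::size_t> counts;
    for (uint16_t labelIndex : segMap)
        ++counts[labelIndex];
    return counts;
}
```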
The speckle noise maximum standard deviation.
The speckle noise is a multiplicative noise sampled from a Gamma distribution with a mean of 1 and with its standard deviation between EClassificationDataset::SpeckleNoiseMinimumStandardDeviation and EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Its value must be strictly higher than EClassificationDataset::SpeckleNoiseMinimumStandardDeviation.
The speckle noise minimum standard deviation.
The speckle noise is a multiplicative noise sampled from a Gamma distribution with a mean of 1 and with its standard deviation between EClassificationDataset::SpeckleNoiseMinimumStandardDeviation and EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Its value must be strictly positive and lower than EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
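A gamma distribution with mean 1 and standard deviation sigma has shape 1/sigma^2 and scale sigma^2, which gives the following stand-alone sketch; names are illustrative, not the library API:

```cpp
#include <random>
#include <vector>

// Multiplicative speckle noise: each pixel is multiplied by a sample from a
// gamma distribution with mean 1 (shape * scale) and the requested deviation.
void addSpeckleNoise(std::vector<double>& pixels, double sigma,
                     std::mt19937& rng)
{
    std::gamma_distribution<double> noise(1.0 / (sigma * sigma),  // shape
                                          sigma * sigma);         // scale
    for (double& p : pixels)
        p *= noise(rng);
}
```

Because the noise has mean 1, the average image intensity is preserved on average while local contrast fluctuates.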
Generates a split.
Width of the region of interest of the first image added to the dataset.
Whether the image segmentation contains segments that are not background. If the image has no segmentation, this method throws an exception.
Whether an image has a classification label.
Whether the image is labelled for object detection and for use with ELocator.
Whether the image has a segmentation.
Imports Pascal VOC XML annotations (for EasyLocate Bounding Box).
This method may add new labels to the object labels.
The version that does not take an image index attempts to import the annotations for all images currently in the dataset.
Imports YOLO TXT annotations (for EasyLocate Bounding Box).
You need to specify the list of labels associated with the YOLO TXT annotations through an array of strings or through a file containing this list. This method may add new labels to the object labels.
The version that does not take an image index attempts to import the annotations for all images currently in the dataset.
Whether the image is embedded into the dataset and is not a reference towards an image file.
Whether the image is stored as a file path towards an image.
Loads a classification dataset from disk.
Assignment operator.
Removes the image at the given index. All annotations (label, segmentation, objects) will be lost.
Removes an object from an image.
Removes a label from the dataset.
All the images associated with this label will be set to unlabeled (see EClassificationDataset::HasLabel).
Removes an object label.
Remove a segmentation label.
This method sets all the pixels in the dataset assigned to this label to the background label.
Resets the object labeling for the specified image. This sets the image as having been labelled for object detection with no object present in the image.
Resets the segmentation of an image by setting all pixels to background.
Saves a classification dataset to disk, containing the file paths to the images in the dataset and their associated label.
Sets the base path for the images specified with a relative path.
Enable data augmentation.
Enable horizontal flipping in data augmentation.
Enable vertical flipping in data augmentation.
The Gaussian noise maximum standard deviation.
The Gaussian noise is an additive noise sampled from a Gaussian (normal) distribution with a standard deviation between EClassificationDataset::GaussianNoiseMinimumStandardDeviation and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Its value must be greater than or equal to EClassificationDataset::GaussianNoiseMinimumStandardDeviation.
The Gaussian noise minimum standard deviation.
The Gaussian noise is an additive noise sampled from a Gaussian (normal) distribution with a standard deviation between EClassificationDataset::GaussianNoiseMinimumStandardDeviation and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Its value must be between 0 and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Sets the label of images in the dataset.
Sets the specified object for the given image.
Sets the i-th label of the dataset (starting from 0 to EClassificationDataset::NumLabels - 1). This operation does not add a new label to the dataset but simply renames an existing label.
Sets the weight associated to the i-th label of the dataset (starting from 0 to EClassificationDataset::NumLabels - 1). This operation does not add a new label.
Sets the mask region/don't care area for the given image. The mask is defined with respect to the region of interest of the image.
Maximum absolute brightness offset. Its value must be between 0 and 1.
Maximum contrast gain. Its value must be strictly positive and greater than EClassificationDataset::MinContrastGain.
Maximum gamma for gamma correction. Its value must be higher than EClassificationDataset::MinGamma.
Maximum absolute horizontal shear.
It is represented as an angle from the vertical direction. Its value must be between 0 and 90 degrees.
Maximum horizontal shift for data augmentation.
The horizontal shift will be between -EClassificationDataset::MaxHorizontalShift and +EClassificationDataset::MaxHorizontalShift.
Maximum absolute hue offset. Its value must be between 0 and 180 degrees.
Maximum rotation angle for data augmentation.
The rotation angle will be between -EClassificationDataset::MaxRotationAngle and +EClassificationDataset::MaxRotationAngle.
Maximum saturation gain. Its value must be greater than or equal to EClassificationDataset::MinSaturationGain.
Maximum scaling allowed for data augmentation.
Maximum absolute vertical shear.
It is represented as an angle from the horizontal direction. Its value must be between 0 and 90 degrees.
Maximum vertical shift for data augmentation.
The vertical shift will be between -EClassificationDataset::MaxVerticalShift and +EClassificationDataset::MaxVerticalShift.
Minimum contrast gain. Its value must be strictly positive and below EClassificationDataset::MaxContrastGain.
Minimum gamma for gamma correction. Its value must be strictly positive and below EClassificationDataset::MaxGamma.
Minimum saturation gain. Its value must be strictly positive.
Minimum scaling allowed for data augmentation.
Sets an object label.
Sets the weight of an object label.
Object size for EasyLocate Interest point.
Sets the region of interest for the specified image.
The maximum density of the salt and pepper noise.
The salt and pepper noise sets randomly selected pixels (a fraction between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and EClassificationDataset::SaltAndPepperNoiseMaximumDensity) to their minimum or maximum value.
Its value must be between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and 1.
The minimum density of the salt and pepper noise.
The salt and pepper noise sets randomly selected pixels (a fraction between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and EClassificationDataset::SaltAndPepperNoiseMaximumDensity) to their minimum or maximum value.
Its value must be between 0 and EClassificationDataset::SaltAndPepperNoiseMaximumDensity.
Sets the segmentation labels.
Sets the segmentation label weight.
Sets the segmentation map of an image. The segmentation map is a 16-bit EROIBW16 image where the value of each pixel is equal to the corresponding segmentation label index (see EClassificationDataset::GetSegmentationLabel).
If an image has no segmentation (EClassificationDataset::HasSegmentation), the method will throw an exception.
The speckle noise maximum standard deviation.
The speckle noise is a multiplicative noise sampled from a Gamma distribution with a mean of 1 and with its standard deviation between EClassificationDataset::SpeckleNoiseMinimumStandardDeviation and EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Its value must be strictly higher than EClassificationDataset::SpeckleNoiseMinimumStandardDeviation.
The speckle noise minimum standard deviation.
The speckle noise is a multiplicative noise sampled from a Gamma distribution with a mean of 1 and with its standard deviation between EClassificationDataset::SpeckleNoiseMinimumStandardDeviation and EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Its value must be strictly positive and lower than EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Splits the dataset into two parts, to be used for training and validation respectively.
Splits the dataset into two parts for training and validation of an ELocator tool. Images without object labeling are excluded from the split.
Splits the dataset into two parts for a supervised segmenter. The two parts are to be used for training and validation respectively.
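The splitting methods above can be approximated by a simple shuffled split over image indexes; this sketch is a conceptual illustration, not the library's splitting logic:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Randomly splits image indexes [0, numImages) into a training part holding
// roughly trainRatio of the images and a validation part holding the rest.
void splitDataset(std::size_t numImages, double trainRatio, std::mt19937& rng,
                  std::vector<std::size_t>& train,
                  std::vector<std::size_t>& validation)
{
    std::vector<std::size_t> idx(numImages);
    std::iota(idx.begin(), idx.end(), 0);
    std::shuffle(idx.begin(), idx.end(), rng);
    // Round to the nearest image count to avoid floating-point truncation.
    const std::size_t cut =
        static_cast<std::size_t>(numImages * trainRatio + 0.5);
    train.assign(idx.begin(), idx.begin() + cut);
    validation.assign(idx.begin() + cut, idx.end());
}
```

Shuffling before cutting ensures both parts are drawn from the same distribution of images rather than from insertion order.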
Unsets the label of the given image of the dataset. After this operation, the given image has no classification label.
Unsets the object labeling for the specified image. This sets the image as not having been labelled for object detection. The image won't be used for training an ELocator tool.
Unsets the segmentation of an image. After this operation, the image has no segmentation.

EClassificationDataset Class


Namespace: Euresys.Open_eVision.EasyDeepLearning

Properties

Absolute maximum overlap between objects in the dataset.
Sets the base path for the images specified with a relative path.
Number of channels of the first image added to the dataset.
Enable data augmentation.
Enable horizontal flipping in data augmentation.
Enable vertical flipping in data augmentation.
The Gaussian noise maximum standard deviation.
The Gaussian noise is an additive noise sampled from a Gaussian (normal) distribution with a standard deviation between EClassificationDataset::GaussianNoiseMinimumStandardDeviation and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Its value must be greater than or equal to EClassificationDataset::GaussianNoiseMinimumStandardDeviation.
The Gaussian noise minimum standard deviation.
The Gaussian noise is an additive noise sampled from a Gaussian (normal) distribution with a standard deviation between EClassificationDataset::GaussianNoiseMinimumStandardDeviation and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Its value must be between 0 and EClassificationDataset::GaussianNoiseMaximumStandardDeviation.
Height of the region of interest of the first image added to the dataset.
Gets the list of indexes of the images with no classification label.
Gets the list of indexes of the images that have a classification label.
Maximum absolute brightness offset. Its value must be between 0 and 1.
Maximum contrast gain. Its value must be strictly positive and greater than EClassificationDataset::MinContrastGain.
Maximum gamma for gamma correction. Its value must be higher than EClassificationDataset::MinGamma.
Maximum absolute horizontal shear.
It is represented as an angle from the vertical direction. Its value must be between 0 and 90 degrees.
Maximum horizontal shift for data augmentation.
The horizontal shift will be between -EClassificationDataset::MaxHorizontalShift and +EClassificationDataset::MaxHorizontalShift.
Maximum absolute hue offset. Its value must be between 0 and 180 degrees.
Maximum number of objects for an image in the dataset. If no image has object labeling (see EClassificationDataset::HasObjectLabeling), the method returns -1.
Maximum rotation angle for data augmentation.
The rotation angle will be between -EClassificationDataset::MaxRotationAngle and +EClassificationDataset::MaxRotationAngle.
Maximum saturation gain. Its value must be greater than or equal to EClassificationDataset::MinSaturationGain.
Maximum scaling allowed for data augmentation.
Maximum absolute vertical shear.
It is represented as an angle from the horizontal direction. Its value must be between 0 and 90 degrees.
Maximum vertical shift for data augmentation.
The vertical shift will be between -EClassificationDataset::MaxVerticalShift and +EClassificationDataset::MaxVerticalShift.
Minimum contrast gain. Its value must be strictly positive and below EClassificationDataset::MaxContrastGain.
Minimum gamma for gamma correction. Its value must be strictly positive and below EClassificationDataset::MaxGamma.
Minimum saturation gain. Its value must be strictly positive.
Minimum scaling allowed for data augmentation.
Number of different image files contained in the dataset.
Number of images in the dataset.
Number of images that have a ground truth segmentation that has foreground segments and is thus not entirely composed of background pixels.
Number of images in the dataset that are labelled for object detection.
Number of images in the dataset that are labelled for object detection and have 1 or more objects.
Number of images that have a ground truth segmentation that has no foreground segments and is thus entirely composed of background pixels.
Number of images in the dataset that are not labelled for object detection.
Number of images in the dataset that are labelled for object detection but have been associated with no object.
Number of images that don't have a ground truth segmentation.
Number of images that have a ground truth segmentation.
Number of images that have a classification label.
Number of labels in the dataset.
Number of object labels.
Number of segmentation labels. A dataset always has at least one segmentation label: "Background".
Number of images that don't have any classification label.
Object size for EasyLocate Interest point.
The maximum density of the salt and pepper noise.
The salt and pepper noise sets randomly selected pixels (a fraction between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and EClassificationDataset::SaltAndPepperNoiseMaximumDensity) to their minimum or maximum value.
Its value must be between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and 1.
The minimum density of the salt and pepper noise.
The salt and pepper noise sets randomly selected pixels (a fraction between EClassificationDataset::SaltAndPepperNoiseMinimumDensity and EClassificationDataset::SaltAndPepperNoiseMaximumDensity) to their minimum or maximum value.
Its value must be between 0 and EClassificationDataset::SaltAndPepperNoiseMaximumDensity.
Maximum overlap between objects with the same label in the dataset.
The speckle noise maximum standard deviation.
The speckle noise is a multiplicative noise sampled from a Gamma distribution with a mean of 1 and with its standard deviation between EClassificationDataset::SpeckleNoiseMinimumStandardDeviation and EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Its value must be strictly higher than EClassificationDataset::SpeckleNoiseMinimumStandardDeviation.
The speckle noise minimum standard deviation.
The speckle noise is a multiplicative noise sampled from a Gamma distribution with a mean of 1 and with its standard deviation between EClassificationDataset::SpeckleNoiseMinimumStandardDeviation and EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Its value must be strictly positive and lower than EClassificationDataset::SpeckleNoiseMaximumStandardDeviation.
Width of the region of interest of the first image added to the dataset.

Methods

Adds an image to the dataset.
The image can be specified by its path on the filesystem (parameter imagePath) or by an Open eVision image buffer (parameter img).
By default, an image has no classification label and no segmentation. A label can be directly specified when adding the image to the dataset; if the given label is not among the classification labels of the dataset, it is automatically added to them.
If no region of interest and/or mask is specified, the region of interest of the image is its full extent and its mask is empty.
The method returns -1 if there was an error when inserting the image into the dataset, or a numeric identifier (greater than or equal to 0) that can be used to access and manipulate the image in the dataset.
Adds an object to the image.
Adds all the images present in the directory specified by the parameter path and whose filenames match the filter.
By default, the images have no classification label and no segmentation. However, a label can be directly specified for all of the images.
The method returns the number of images added to the dataset.
Adds a label to the dataset.
Adds an object label.
Adds a region (see ERegion) to the given segment of an image. If, before this call, the image had no segmentation, the pixels not in the given region will be set to the background segmentation label.
Adds a segmentation label. The index of the new segmentation labels is returned by the method.
Clears the dataset.
Exports the dataset and its images to the given directory:
- A new EClassificationDataset object with relative paths to the images is saved into the given directory.
- The exported images are placed in a subdirectory named "Images".
By default, the images are saved under the filename "Image_[id].[ext]", where [id] is the index of the image in the dataset and [ext] is the image extension. If fileType is set to EImageFileType_Auto, [ext] is the same as the original image extension; otherwise, [ext] is deduced from the specified file type. If keepFilename is set to true, the images keep their original filenames (except for their extensions). If two images have the same filename, an index is automatically appended to avoid any conflict. A width, height, and number of channels can optionally be specified to reformat all the images in the dataset. Note that, in this case, only the ROI of each image is exported.
Creates a new EClassificationDataset containing only the images of the given split type.
Gets the image annotation file formats that are available next to the image file.
Gets a copy of the i-th image of the dataset. For the variant where a pointer is returned, the user is responsible for clearing the memory of the returned pointer.
Generates a new image from the i-th image of the dataset. For the variant where a pointer is returned, the user is responsible for clearing the memory of the returned pointer.
Gets the label of the i-th image of the dataset. An exception is thrown if the image has no classification label.
Number of objects for the specified image.
Gets the specified object for the given image.
Gets all the objects of the specified image.
Gets the path of the i-th image of the dataset. If the image was not given using a path, the method will throw an exception.
Gets a copy of all images in the dataset. If data augmentation is enabled, the returned images will be augmented versions of the ones in the dataset.
The caller is responsible for clearing the memory allocated for each image.
Gets a list of indexes corresponding to the images associated with the given label.
Gets the i-th label of the dataset (i ranging from 0 to EClassificationDataset::NumLabels - 1).
Gets the weight associated with the i-th label of the dataset (i ranging from 0 to EClassificationDataset::NumLabels - 1).
Gets the mask region/don't care area for the given image. The mask is defined with respect to the region of interest of the image.
Gets the number of images in the dataset that are associated with the given image file.
Number of images containing the given segmentation label.
Number of images that contain an object with the given label.
Number of objects with the given label.
Number of pixels assigned to the given segmentation label.
Number of segmented blobs in the image for all non-background labels or for a specific label.
Object label.
Gets the weight of an object label.
Gets a region corresponding to the pixels of the given segment.
Region of interest height for the specified image.
Region of interest origin abscissa for the specified image.
Region of interest origin ordinate for the specified image.
Region of interest width for the specified image.
Gets the segmentation label.
Gets the segmentation label weights.
Gets the segmentation map of an image. The segmentation map is a 16-bit EROIBW16 image where the value of each pixel is equal to the corresponding segmentation label index (see EClassificationDataset::GetSegmentationLabel).
If an image has no segmentation (see EClassificationDataset::HasSegmentation), the getter throws an exception.
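Because each pixel of the segmentation map holds a label index (with index 0 conventionally being the "Background" label), per-label pixel statistics reduce to a single pass over the map. The sketch below is illustrative only, using a plain 16-bit buffer rather than the library's EROIBW16 type:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch (not library code): count the pixels of a segmentation
// map that are assigned to a given segmentation label index. The map is a
// 16-bit buffer where each pixel's value is its label index.
std::size_t CountLabelPixels(const std::vector<std::uint16_t>& segMap,
                             std::uint16_t labelIndex) {
    std::size_t count = 0;
    for (std::uint16_t v : segMap)
        if (v == labelIndex)
            ++count;
    return count;
}
```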
Generates a split.
Whether the image segmentation contains segments that are not background. If the image has no segmentation, this method throws an exception.
Whether an image has a classification label.
Whether the image is labelled for object detection and usable with an ELocator tool.
Whether the image has a segmentation.
Imports Pascal VOC XML annotations (for EasyLocate Bounding Box).
This method may add new labels to the object labels.
The variant that does not take an image index attempts to import the annotations for all images currently in the dataset.
Imports YOLO TXT annotations (for EasyLocate Bounding Box).
You need to specify the list of labels associated with the YOLO TXT annotations through an array of string or through a file containing this list. This method may add new labels to the object labels.
The variant that does not take an image index attempts to import the annotations for all images currently in the dataset.
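A YOLO TXT file stores one object per line as "&lt;classIndex&gt; &lt;xCenter&gt; &lt;yCenter&gt; &lt;width&gt; &lt;height&gt;", with coordinates normalized to [0, 1] relative to the image size. The standalone sketch below (not the library's importer) parses such lines and converts each box to pixel coordinates:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Pixel-space box: class index plus center coordinates and size.
struct Box { int cls; double x, y, w, h; };

// Parse YOLO TXT annotation lines and denormalize the coordinates using
// the image width and height. Malformed lines are skipped.
std::vector<Box> ParseYolo(const std::string& text, double imgW, double imgH) {
    std::vector<Box> boxes;
    std::istringstream lines(text);
    std::string line;
    while (std::getline(lines, line)) {
        std::istringstream fields(line);
        Box b{};
        double nx, ny, nw, nh;
        if (fields >> b.cls >> nx >> ny >> nw >> nh) {
            b.x = nx * imgW;
            b.y = ny * imgH;
            b.w = nw * imgW;
            b.h = nh * imgH;
            boxes.push_back(b);
        }
    }
    return boxes;
}
```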
Whether the image is embedded into the dataset rather than referenced by a path to an image file.
Whether the image is stored as a path to an image file.
Loads a classification dataset from disk.
Assignment operator.
Removes the image at the given index. All annotations (label, segmentation, objects) will be lost.
Removes an object from an image.
Removes a label from the dataset.
All the images associated with this label will be set to unlabeled (see EClassificationDataset::HasLabel).
Removes an object label.
Removes a segmentation label.
This method sets all the pixels in the dataset assigned to this label to the background label.
Resets the object labeling for the specified image. This sets the image as having been labelled for object detection with no object present in the image.
Resets the segmentation of an image by setting all pixels to background.
Saves a classification dataset to disk, containing the file paths to the images in the dataset and their associated label.
Sets the label of images in the dataset.
Sets the specified object for the given image.
Sets the i-th label of the dataset (i ranging from 0 to EClassificationDataset::NumLabels - 1). This operation does not add a new label to the dataset but simply renames an existing label.
Sets the weight associated with the i-th label of the dataset (i ranging from 0 to EClassificationDataset::NumLabels - 1). This operation does not add a new label.
Sets the mask region/don't care area for the given image. The mask is defined with respect to the region of interest of the image.
Sets an object label.
Sets the weight of an object label.
Sets the region of interest for the specified image.
Sets the segmentation labels.
Sets the segmentation label weight.
Sets the segmentation map of an image. The segmentation map is a 16-bit EROIBW16 image where the value of each pixel is equal to the corresponding segmentation label index (see EClassificationDataset::GetSegmentationLabel).
If an image has no segmentation (see EClassificationDataset::HasSegmentation), the getter throws an exception.
Splits the dataset in two parts to be used for training and validation respectively.
Splits the dataset in two parts for training and validation of an ELocator tool. Images without object labeling are excluded from the split.
Splits the dataset in two parts for a supervised segmenter. The two parts are to be used for training and validation respectively.
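A train/validation split of the kind described above can be sketched as a seeded shuffle of the image indexes followed by a cut at the requested training ratio. This is illustrative only; the actual Split* methods may additionally stratify by label or exclude unlabeled images, as noted above for the ELocator variant.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Illustrative train/validation split: shuffle the image indexes with a
// fixed seed, then assign the first trainRatio fraction to the training
// set and the remainder to the validation set.
void SplitIndexes(int numImages, double trainRatio, unsigned seed,
                  std::vector<int>& train, std::vector<int>& validation) {
    std::vector<int> idx(numImages);
    std::iota(idx.begin(), idx.end(), 0);  // 0, 1, ..., numImages - 1
    std::mt19937 rng(seed);
    std::shuffle(idx.begin(), idx.end(), rng);
    auto cut = static_cast<std::size_t>(trainRatio * numImages);
    train.assign(idx.begin(), idx.begin() + cut);
    validation.assign(idx.begin() + cut, idx.end());
}
```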
Unsets the label of the given image of the dataset. After this operation, the given image will have no classification label.
Unsets the object labeling for the specified image. This sets the image as not having been labelled for object detection. This image won't be used for training with an ELocator tool.
Unsets the segmentation of an image. After this operation, the image will have no segmentation.