EDeepLearningTool Class

EDeepLearningTool represents the common operations of deep learning tools.

The class is responsible for handling CPU/GPU settings and training.
Computing the result for an image is called inference and is the responsibility of the actual deep learning tools (see EClassifier).

Derived Class(es): EClassifier, ELocator, ESupervisedSegmenter, EUnsupervisedSegmenter

Namespace: Euresys::Open_eVision_2_16::EasyDeepLearning

Methods

Factory method from a serializer stream: allocates and reads the deep learning tool from the given serializer.
Returns the corresponding deep learning tool; it must be released by the caller.
Batch size. The batch size is the number of images that are processed together during training and batch inference.
When using multi-GPU processing (see EDeepLearningTool::GPUIndexes), the batch size is the number of images that each GPU processes at once.
A large batch size increases the processing speed on a GPU but also increases the memory requirements.
The batch size must be greater than or equal to 1 and is commonly chosen to be a power of 2.
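For illustration, configuring the batch size on a derived tool might look like the following sketch. It assumes the Get/Set accessor convention used throughout Open eVision (SetBatchSize, SetOptimizeBatchSize); check the per-method reference for the exact signatures.

```cpp
#include <Open_eVision_2_16.h>  // Open eVision SDK header (name may vary per installation)

using namespace Euresys::Open_eVision_2_16::EasyDeepLearning;

void ConfigureBatchSize(EClassifier& classifier)
{
    // Disable automatic batch-size optimization to choose the value manually
    // (by default, the tool optimizes the batch size itself).
    classifier.SetOptimizeBatchSize(false);

    // Powers of 2 are the usual choice; larger batches run faster on a GPU
    // but require more memory.
    classifier.SetBatchSize(32);
}
```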
Computes the batch size that will maximize the inference speed on a GPU.
Iteration at which the minimum validation error was reached. After training, the classifier is in the state it was at this best iteration.
Number of iterations that are already finished in the current training.
Total number of training iterations that will be performed during the current training of the deep learning tool.
After the training ends, this number is the actual number of iterations performed, which can be lower than the number of iterations given to the EDeepLearningTool::Train method (for example, if the user called EDeepLearningTool::StopTraining).
Current training progress as a fraction between 0 and 1.
Enables the use of a GPU for training and inference of the deep learning tool.
Indexes of the GPUs to use for computations.
Using multiple GPUs is only possible when multiple images are processed at once, i.e. during training (EDeepLearningTool::Train) or batch inference.
By default, all the detected GPUs will be used.
Size in bytes of the image cache. The cache is used during training to store reformatted and normalized images. A correctly sized cache can reduce the hard drive accesses and the preprocessing time for each image.
Number of detected GPUs.
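The GPU-related settings above might be combined as in this sketch. The accessor names (SetEnableGPU, SetGPUIndexes, SetImageCacheSize) are assumed from the Get/Set convention and should be verified against the per-method reference.

```cpp
#include <vector>
#include <Open_eVision_2_16.h>  // Open eVision SDK header (name may vary per installation)

using namespace Euresys::Open_eVision_2_16::EasyDeepLearning;

void ConfigureGPU(EClassifier& classifier)
{
    // Use GPU(s) for training and inference.
    classifier.SetEnableGPU(true);

    // Restrict computation to the first two detected GPUs
    // (by default, all detected GPUs are used).
    std::vector<int> gpuIndexes = { 0, 1 };
    classifier.SetGPUIndexes(gpuIndexes);

    // 2 GiB image cache to reduce hard drive accesses and per-image
    // preprocessing during training.
    classifier.SetImageCacheSize(2ull * 1024 * 1024 * 1024);
}
```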
Number of iterations that were performed to train this deep learning tool.
Indicates whether to optimize the batch size (see EDeepLearningTool::BatchSize) to maximize the training and inference speed according to EDeepLearningTool::EnableGPU and the available memory.
Default value is true.
Tells whether the deep learning tool has been trained.
Indicates whether the object is currently training.
Loads a deep learning tool. The given ESerializer must have been created for reading.
Saves a deep learning tool. The given ESerializer must have been created for writing.
Serializes the deep learning tool.
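A save/reload round trip might be sketched as follows. The ESerializer factory names (CreateFileWriter, CreateFileReader) are assumptions taken from the Open eVision serialization pattern; the reference states only that the serializer must be created for writing (Save) or for reading (Load).

```cpp
#include <string>
#include <Open_eVision_2_16.h>  // Open eVision SDK header (name may vary per installation)

using namespace Euresys::Open_eVision_2_16;
using namespace Euresys::Open_eVision_2_16::EasyDeepLearning;

void SaveAndReload(EClassifier& classifier, const std::string& path)
{
    // Save: the serializer must be created for writing.
    ESerializer* writer = ESerializer::CreateFileWriter(path);
    classifier.Save(*writer);
    delete writer;

    // Load: the serializer must be created for reading.
    ESerializer* reader = ESerializer::CreateFileReader(path);
    classifier.Load(*reader);
    delete reader;
}
```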
Stops training and returns the last completed iteration. If the parameter 'wait' is set to true, the method will wait for the training thread to completely stop. Otherwise, the method will return immediately.
Trains the EDeepLearningTool with the given dataset for the specified number of iterations.
At the end of the training, the deep learning tool is in the state it was at the iteration that gave the minimum validation error. See EDeepLearningTool::BestIteration.
Waits until an iteration is complete. A call to this method will block the calling thread until a training iteration in the training thread is finished. This method returns the number of trained iterations.
Waits until the training is complete or the timeout expires. A call to this method blocks the calling thread for the shorter of the timeout and the time it takes for the training to complete.
A negative timeout means that the method will wait until the training is complete.
The default value is set to -1.
The method returns the number of trained iterations.
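Putting the training methods together, a typical workflow might look like this sketch. The Train signature and the names WaitForTrainingCompletion, StopTraining, and GetBestIteration are inferred from the descriptions above and should be checked against the per-method reference.

```cpp
#include <Open_eVision_2_16.h>  // Open eVision SDK header (name may vary per installation)

using namespace Euresys::Open_eVision_2_16::EasyDeepLearning;

void TrainAndWait(EClassifier& classifier, EClassificationDataset& dataset)
{
    // Train() runs in a separate training thread.
    classifier.Train(dataset, 50 /* iterations */);

    // Block until training completes; a negative timeout waits indefinitely.
    // The return value is the number of trained iterations.
    int iterations = classifier.WaitForTrainingCompletion(-1);
    (void)iterations;

    // To interrupt early instead, request a stop and wait for the
    // training thread to finish:
    //   classifier.StopTraining(true);

    // After training, the tool is in the state of the iteration with the
    // minimum validation error.
    int best = classifier.GetBestIteration();
    (void)best;
}
```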

EDeepLearningTool Class

EDeepLearningTool represents the common operations of deep learning tools.

The class is responsible for handling CPU/GPU settings and training.
Computing the result for an image is called inference and is the responsibility of the actual deep learning tools (see EClassifier).

Derived Class(es): EClassifier, ELocator, ESupervisedSegmenter, EUnsupervisedSegmenter

Namespace: Euresys.Open_eVision_2_16.EasyDeepLearning

Properties

Batch size. The batch size is the number of images that are processed together during training and batch inference.
When using multi-GPU processing (see EDeepLearningTool::GPUIndexes), the batch size is the number of images that each GPU processes at once.
A large batch size increases the processing speed on a GPU but also increases the memory requirements.
The batch size must be greater than or equal to 1 and is commonly chosen to be a power of 2.
Computes the batch size that will maximize the inference speed on a GPU.
Iteration at which the minimum validation error was reached. After training, the classifier is in the state it was at this best iteration.
Number of iterations that are already finished in the current training.
Total number of training iterations that will be performed during the current training of the deep learning tool.
After the training ends, this number is the actual number of iterations performed, which can be lower than the number of iterations given to the EDeepLearningTool::Train method (for example, if the user called EDeepLearningTool::StopTraining).
Current training progress as a fraction between 0 and 1.
Enables the use of a GPU for training and inference of the deep learning tool.
Indexes of the GPUs to use for computations.
Using multiple GPUs is only possible when multiple images are processed at once, i.e. during training (EDeepLearningTool::Train) or batch inference.
By default, all the detected GPUs will be used.
Size in bytes of the image cache. The cache is used during training to store reformatted and normalized images. A correctly sized cache can reduce the hard drive accesses and the preprocessing time for each image.
Number of detected GPUs.
Number of iterations that were performed to train this deep learning tool.
Indicates whether to optimize the batch size (see EDeepLearningTool::BatchSize) to maximize the training and inference speed according to EDeepLearningTool::EnableGPU and the available memory.
Default value is true.

Methods

Factory method from a serializer stream: allocates and reads the deep learning tool from the given serializer.
Returns the corresponding deep learning tool; it must be released by the caller.
Tells whether the deep learning tool has been trained.
Indicates whether the object is currently training.
Loads a deep learning tool. The given ESerializer must have been created for reading.
Saves a deep learning tool. The given ESerializer must have been created for writing.
Serializes the deep learning tool.
Stops training and returns the last completed iteration. If the parameter 'wait' is set to true, the method will wait for the training thread to completely stop. Otherwise, the method will return immediately.
Trains the EDeepLearningTool with the given dataset for the specified number of iterations.
At the end of the training, the deep learning tool is in the state it was at the iteration that gave the minimum validation error. See EDeepLearningTool::BestIteration.
Waits until an iteration is complete. A call to this method will block the calling thread until a training iteration in the training thread is finished. This method returns the number of trained iterations.
Waits until the training is complete or the timeout expires. A call to this method blocks the calling thread for the shorter of the timeout and the time it takes for the training to complete.
A negative timeout means that the method will wait until the training is complete.
The default value is set to -1.
The method returns the number of trained iterations.