Training a Deep Learning Tool

In the API, to train a deep learning tool, call the EDeepLearningTool::Train(trainingDataset, validationDataset, numberOfIterations) method.

An Iteration corresponds to going through all the images in the training dataset once.
The training process requires a large number of iterations to obtain good results.
The default number of iterations is 50.
The larger the number of iterations, the longer the training takes and the better the results you obtain.
Multiple iterations:
Calling the EDeepLearningTool::Train method several times with the same training and validation dataset is equivalent to calling it once but with a larger number of iterations.
Call EDeepLearningTool::GetNumTrainedIterations to get the total number of iterations used to train the classifier.
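The following minimal sketch shows a training run and how successive calls add up. The header and namespace names, and the EClassifier / EClassificationDataset types used as an example tool and dataset, are assumptions to adapt to your own tool type and installation.

```cpp
// Minimal sketch of a training run (assumed headers, namespace and tool type;
// adapt EClassifier / EClassificationDataset to the tool you actually use).
#include <iostream>
#include "Easy.h"              // assumed Open eVision header name
#include "EasyDeepLearning.h"  // assumed Deep Learning header name

using namespace Euresys::Open_eVision::EasyDeepLearning;  // assumed namespace

void TrainTool(EClassifier& tool,
               EClassificationDataset& trainingDataset,
               EClassificationDataset& validationDataset)
{
    // First training run: 50 iterations (the default number).
    tool.Train(trainingDataset, validationDataset, 50);
    tool.WaitForTrainingCompletion();

    // A second call with the same datasets continues the training; it is
    // equivalent to a single call with a larger number of iterations.
    tool.Train(trainingDataset, validationDataset, 25);
    tool.WaitForTrainingCompletion();

    // Total number of iterations used to train the tool (75 in this sketch).
    std::cout << "Trained iterations: " << tool.GetNumTrainedIterations() << std::endl;
}
```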
In successive calls to EDeepLearningTool::Train:

- You can add images to the training and validation datasets to train the tool to recognize new instances of your problem (see the sketch after this list).

- We do not recommend that you remove images from the dataset as the tool might forget about these images during the new training phase.
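The sketch below extends the training dataset before a new call to Train. It reuses the assumed types and headers from the previous sketch; the AddImage call, its label argument, and the EImageBW8 image type are assumptions, so check the dataset class of your tool for the exact method used to add labeled images.

```cpp
// Continuing the training after adding images (same assumed types and headers
// as the previous sketch; AddImage and its label argument are hypothetical).
void RetrainWithNewImages(EClassifier& tool,
                          EClassificationDataset& trainingDataset,
                          EClassificationDataset& validationDataset,
                          EImageBW8& newImage)  // assumed image type
{
    // Add new instances of the problem to the training dataset. Do not remove
    // the existing images: the tool might forget them during the new training.
    trainingDataset.AddImage(newImage, "defective");  // hypothetical method and label

    // Continue the training with the extended dataset.
    tool.Train(trainingDataset, validationDataset, 50);
    tool.WaitForTrainingCompletion();
}
```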

The training process is asynchronous:
EDeepLearningTool::Train launches a new thread that does the training in the background.
EDeepLearningTool::WaitForTrainingCompletion suspends the program until the whole training is completed.
EDeepLearningTool::WaitForIterationCompletion suspends the program until the current iteration is completed.
During the training, EDeepLearningTool::GetCurrentTrainingProgression returns the current progression of the training.
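The sketch below illustrates this asynchronous behaviour with the methods listed above. The per-iteration progress loop, and the assumption that the progression value can be printed directly, are illustrative only; it reuses the assumed types from the previous sketches.

```cpp
// Asynchronous training: Train() returns immediately and the work happens in a
// background thread (same assumed types as above; printing the progression
// value directly is an assumption).
#include <iostream>

void TrainAsynchronously(EClassifier& tool,
                         EClassificationDataset& trainingDataset,
                         EClassificationDataset& validationDataset,
                         int numIterations)
{
    // Launch the training in a background thread.
    tool.Train(trainingDataset, validationDataset, numIterations);

    // Report progress after each iteration; WaitForIterationCompletion suspends
    // the program until the current iteration is completed.
    for (int i = 0; i < numIterations; ++i)
    {
        tool.WaitForIterationCompletion();
        std::cout << "Progress: " << tool.GetCurrentTrainingProgression() << std::endl;
    }

    // Or simply block until the whole training is completed:
    tool.WaitForTrainingCompletion();
}
```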
The batch size corresponds to the number of image patches that are processed together.
The training is influenced by the batch size.
A large batch size increases the processing speed of a single iteration on a GPU but requires more memory.
The training process is not able to learn a good model when the batch size is too small.
By default, the batch size is determined automatically during the training to optimize the training speed with respect to the available memory.

- Use EDeepLearningTool::SetOptimizeBatchSize(false) to disable this behavior.

- Use EDeepLearningTool::SetBatchSize to change the batch size.

EDeepLearningTool::GetBatchSizeForMaximumInferenceSpeed returns the batch size that maximizes the batch classification speed on a GPU according to the available memory.
It is common to choose powers of 2 as the batch size for performance reasons.
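A short sketch of manual batch size control using the methods above (same assumed tool type as the previous sketches; the value 32 is only an example).

```cpp
// Manual batch size control (same assumed tool type as above; the value 32 is
// only an example).
void ConfigureBatchSize(EClassifier& tool)
{
    // Disable the automatic batch size optimization.
    tool.SetOptimizeBatchSize(false);

    // Either pick a fixed batch size (powers of 2 are a common choice)...
    tool.SetBatchSize(32);

    // ...or use the batch size that maximizes the batch classification speed on
    // a GPU according to the available memory.
    tool.SetBatchSize(tool.GetBatchSizeForMaximumInferenceSpeed());
}
```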