EDeepLearningTool::GetBatchSizeForMaximumInferenceSpeed
Computes the batch size that maximizes the inference speed on an NVIDIA GPU. For all other devices, it returns 1.
Namespace: Euresys::Open_eVision::EasyDeepLearning
[C++]
int GetBatchSizeForMaximumInferenceSpeed() const
Remarks
This value is given as an indication only and does not have to be used in practice.
You must choose a tradeoff between the overall inference speed (also called the throughput), which is maximized by this batch size, and the time it takes to compute the result of a whole batch (also called the latency), which is minimized by running inference image by image (i.e. with a batch size of 1).
The right tradeoff depends on your particular application.
Throws EError_DeepLearningToolNotTrained if the tool is not trained.
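A minimal usage sketch follows. PrintRecommendedBatchSize is a hypothetical helper; how the trained tool is obtained is outside the scope of this page, and any trained tool deriving from EDeepLearningTool can be passed in.
[C++]
#include <cstdio>

// Sketch only: assumes `tool` has already been trained (or loaded).
void PrintRecommendedBatchSize(const Euresys::Open_eVision::EasyDeepLearning::EDeepLearningTool& tool)
{
    // Throws EError_DeepLearningToolNotTrained if the tool is not trained.
    int batchSize = tool.GetBatchSizeForMaximumInferenceSpeed();
    // Returns 1 on all devices other than NVIDIA GPUs.
    std::printf("Batch size for maximum throughput: %d\n", batchSize);
}
In practice, you would benchmark batch sizes between 1 and this value to find the latency/throughput balance that suits your application.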
EDeepLearningTool.BatchSizeForMaximumInferenceSpeed
Computes the batch size that maximizes the inference speed on an NVIDIA GPU. For all other devices, it returns 1.
Namespace: Euresys.Open_eVision.EasyDeepLearning
[C#]
int BatchSizeForMaximumInferenceSpeed
{ get; }
Remarks
This value is given as an indication only and does not have to be used in practice.
You must choose a tradeoff between the overall inference speed (also called the throughput), which is maximized by this batch size, and the time it takes to compute the result of a whole batch (also called the latency), which is minimized by running inference image by image (i.e. with a batch size of 1).
The right tradeoff depends on your particular application.
Throws EError_DeepLearningToolNotTrained if the tool is not trained.
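A minimal usage sketch follows. PrintRecommendedBatchSize is a hypothetical helper; how the trained tool is obtained is outside the scope of this page, and any trained tool deriving from EDeepLearningTool can be passed in.
[C#]
using System;
using Euresys.Open_eVision.EasyDeepLearning;

static class BatchSizeExample
{
    // Sketch only: assumes `tool` has already been trained (or loaded).
    static void PrintRecommendedBatchSize(EDeepLearningTool tool)
    {
        // Throws EError_DeepLearningToolNotTrained if the tool is not trained.
        int batchSize = tool.BatchSizeForMaximumInferenceSpeed;
        // Returns 1 on all devices other than NVIDIA GPUs.
        Console.WriteLine($"Batch size for maximum throughput: {batchSize}");
    }
}
In practice, you would benchmark batch sizes between 1 and this value to find the latency/throughput balance that suits your application.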