EDeepLearningTool::GetBatchSizeForMaximumInferenceSpeed

Computes the batch size that will maximize the inference speed on a GPU.

Namespace: Euresys::Open_eVision_2_16::EasyDeepLearning

[C++]

OEV_INT32 GetBatchSizeForMaximumInferenceSpeed() const

Remarks

This value is given as an indication only; it is not necessarily the batch size to use in practice.
You must choose a tradeoff between the overall inference speed (also called the throughput), which is maximized by this batch size, and the time needed to obtain the result of a whole batch (also called the latency), which is minimized by performing inference image by image (i.e. with a batch size of 1).
The right tradeoff depends on your particular application.
Throws EError_DeepLearningToolNotTrained if the classifier is not trained.
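
Example

A minimal sketch of a throughput-oriented loop that groups images into batches of the returned size. Only GetBatchSizeForMaximumInferenceSpeed is taken from this page; ClassifyBatch is an assumed stand-in for the tool's actual batched inference entry point, and the EImageC24 pointers stand for any supported image type.

[C++]

#include <Open_eVision_2_16.h>
#include <vector>

using namespace Euresys::Open_eVision_2_16;
using namespace Euresys::Open_eVision_2_16::EasyDeepLearning;

// 'classifier' must already be trained, otherwise
// GetBatchSizeForMaximumInferenceSpeed throws
// EError_DeepLearningToolNotTrained.
void ProcessImages(EClassifier& classifier, std::vector<EImageC24*>& images)
{
    // Throughput-optimal batch size for the current GPU.
    OEV_INT32 batchSize = classifier.GetBatchSizeForMaximumInferenceSpeed();

    std::vector<EImageC24*> batch;
    for (EImageC24* image : images)
    {
        batch.push_back(image);
        if (static_cast<OEV_INT32>(batch.size()) == batchSize)
        {
            classifier.ClassifyBatch(batch); // assumed batched inference call
            batch.clear();
        }
    }
    if (!batch.empty())
        classifier.ClassifyBatch(batch);     // remaining images

    // For a latency-oriented application, classify image by image
    // (a batch size of 1) instead.
}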

EDeepLearningTool.BatchSizeForMaximumInferenceSpeed

Computes the batch size that will maximize the inference speed on a GPU.

Namespace: Euresys.Open_eVision_2_16.EasyDeepLearning

[C#]

int BatchSizeForMaximumInferenceSpeed { get; }

Remarks

This value is given as an indication only; it is not necessarily the batch size to use in practice.
You must choose a tradeoff between the overall inference speed (also called the throughput), which is maximized by this batch size, and the time needed to obtain the result of a whole batch (also called the latency), which is minimized by performing inference image by image (i.e. with a batch size of 1).
The right tradeoff depends on your particular application.
Throws EError_DeepLearningToolNotTrained if the classifier is not trained.
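
Example

A minimal sketch mirroring the C++ example above. Only BatchSizeForMaximumInferenceSpeed is taken from this page; ClassifyBatch is an assumed stand-in for the tool's actual batched inference entry point, and EImageC24 stands for any supported image type.

[C#]

using System.Collections.Generic;
using Euresys.Open_eVision_2_16;
using Euresys.Open_eVision_2_16.EasyDeepLearning;

static class BatchInferenceExample
{
    // 'classifier' must already be trained, otherwise reading
    // BatchSizeForMaximumInferenceSpeed throws
    // EError_DeepLearningToolNotTrained.
    static void ProcessImages(EClassifier classifier, List<EImageC24> images)
    {
        // Throughput-optimal batch size for the current GPU.
        int batchSize = classifier.BatchSizeForMaximumInferenceSpeed;

        var batch = new List<EImageC24>();
        foreach (var image in images)
        {
            batch.Add(image);
            if (batch.Count == batchSize)
            {
                classifier.ClassifyBatch(batch); // assumed batched inference call
                batch.Clear();
            }
        }
        if (batch.Count > 0)
            classifier.ClassifyBatch(batch);     // remaining images

        // For a latency-oriented application, classify image by image
        // (a batch size of 1) instead.
    }
}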