EImageStitcher::LearnC24

Computes the transformations to apply to each input image. Subsequent calls to the Stitch methods use these transformations instead of recomputing them and are thus expected to be faster.

It is assumed that each set of input images (in Learn and in subsequent Stitch calls) is correctly ordered and that all images have the same dimensions.

Learning the stitching on a pattern with strong feature points (to maximize precision) and then applying it to images in production is a valid use case. See the New Open eVision Studio example, where the EImageStitcher is learned on a pattern (many reliable feature points) and then applied to a piece of tissue (few reliable feature points).

Throws an exception if the learning fails (for the same reasons as a failing stitching operation).

Namespace: Euresys::Open_eVision

[C++]

void LearnC24(
   const std::vector<EImageC24>& images,
   uint32_t rowCount,
   uint32_t colCount
)

void LearnC24(
   const std::vector<EImageC24>& images,
   uint32_t rowCount
)

void LearnC24(
   const std::vector<EImageC24>& images
)

void LearnC24(
   const std::vector<EImageC24>& images,
   const std::vector<ERegion>& regions,
   uint32_t rowCount,
   uint32_t colCount
)

void LearnC24(
   const std::vector<EImageC24>& images,
   const std::vector<ERegion>& regions,
   uint32_t rowCount
)

void LearnC24(
   const std::vector<EImageC24>& images,
   const std::vector<ERegion>& regions
)

Parameters

images

Input images. All images must lie in the same plane.

rowCount

Number of rows in the grid of input images.

colCount

Number of columns in the grid of input images.

regions

Regions returned by the ECameraCalibration::Unwarp or EWorldShape::Unwarp methods. They indicate which pixels belong to the original images during blending.

EImageStitcher.LearnC24

Computes the transformations to apply to each input image. Subsequent calls to the Stitch methods use these transformations instead of recomputing them and are thus expected to be faster.

It is assumed that each set of input images (in Learn and in subsequent Stitch calls) is correctly ordered and that all images have the same dimensions.

Learning the stitching on a pattern with strong feature points (to maximize precision) and then applying it to images in production is a valid use case. See the New Open eVision Studio example, where the EImageStitcher is learned on a pattern (many reliable feature points) and then applied to a piece of tissue (few reliable feature points).

Throws an exception if the learning fails (for the same reasons as a failing stitching operation).

Namespace: Euresys.Open_eVision

[C#]

void LearnC24(
   EImageC24[] images,
   uint rowCount,
   uint colCount
)

void LearnC24(
   EImageC24[] images,
   uint rowCount
)

void LearnC24(
   EImageC24[] images
)

void LearnC24(
   EImageC24[] images,
   ERegion[] regions,
   uint rowCount,
   uint colCount
)

void LearnC24(
   EImageC24[] images,
   ERegion[] regions,
   uint rowCount
)

void LearnC24(
   EImageC24[] images,
   ERegion[] regions
)

Parameters

images

Input images. All images must lie in the same plane.

rowCount

Number of rows in the grid of input images.

colCount

Number of columns in the grid of input images.

regions

Regions returned by the ECameraCalibration.Unwarp or EWorldShape.Unwarp methods. They indicate which pixels belong to the original images during blending.

EImageStitcher.LearnC24

Computes the transformations to apply to each input image. Subsequent calls to the Stitch methods use these transformations instead of recomputing them and are thus expected to be faster.

It is assumed that each set of input images (in Learn and in subsequent Stitch calls) is correctly ordered and that all images have the same dimensions.

Learning the stitching on a pattern with strong feature points (to maximize precision) and then applying it to images in production is a valid use case. See the New Open eVision Studio example, where the EImageStitcher is learned on a pattern (many reliable feature points) and then applied to a piece of tissue (few reliable feature points).

Throws an exception if the learning fails (for the same reasons as a failing stitching operation).

Module: open_evision

[Python]

LearnC24(
    images: list[EImageC24],
    rowCount: int,
    colCount: int
) -> None

LearnC24(
    images: list[EImageC24],
    rowCount: int
) -> None

LearnC24(
    images: list[EImageC24]
) -> None

LearnC24(
    images: list[EImageC24],
    regions: list[ERegion],
    rowCount: int,
    colCount: int
) -> None

LearnC24(
    images: list[EImageC24],
    regions: list[ERegion],
    rowCount: int
) -> None

LearnC24(
    images: list[EImageC24],
    regions: list[ERegion]
) -> None

Parameters

images

Input images. All images must lie in the same plane.

rowCount

Number of rows in the grid of input images.

colCount

Number of columns in the grid of input images.

regions

Regions returned by the ECameraCalibration.Unwarp or EWorldShape.Unwarp methods. They indicate which pixels belong to the original images during blending.
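The learn-then-stitch workflow described above can be sketched as follows. This is a hedged illustration, not a runnable sample: it assumes the open_evision module is installed, that images are loaded by some means not shown here, and that the production-time method is one of the Stitch overloads mentioned in the remarks (its exact name and signature are not documented on this page and are used here only as placeholders).

```python
# Sketch only: module layout, image loading, and the Stitch call
# are assumptions; consult the EImageStitcher reference for the
# actual Stitch overloads.
import open_evision as oe

stitcher = oe.EImageStitcher()

# Learn on a calibration pattern with many reliable feature points,
# laid out as a 2 x 3 grid of tiles. LearnC24 raises an exception
# if the learning fails, just like a failing stitching operation.
pattern_tiles: list[oe.EImageC24] = load_pattern_tiles()  # hypothetical helper
stitcher.LearnC24(pattern_tiles, 2, 3)

# In production, the images (e.g. a piece of tissue with few reliable
# feature points) must come in the same order and have the same
# dimensions as the learning set. The transformations computed by
# LearnC24 are reused instead of being recomputed.
production_tiles: list[oe.EImageC24] = load_production_tiles()  # hypothetical helper
result = stitcher.Stitch(production_tiles)  # placeholder for the actual Stitch overload
```

The design intent is the one stated in the remarks: pay the feature-detection cost once, on images chosen for their strong feature points, and amortize it over many production stitches.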