Manages a complete matching context in EasyMatch.
A matching context consists of a learned pattern and of the parameters required to locate one or more instances of the pattern in a search field.
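A typical matching session follows the learn-then-match flow described by the members below. The sketch uses C++-style pseudocode; the identifier names (LearnPattern, Match, NumPositions, GetPosition, and the setter names) are assumptions inferred from the member descriptions, not verified signatures:

```
// Pseudocode sketch — identifier names are assumed, not verified.
EMatcher matcher;

// Configure the search space before matching.
matcher.SetMinAngle(-10.0f);      // in the current angle unit
matcher.SetMaxAngle(+10.0f);
matcher.SetMaxPositions(4);       // report at most four occurrences

// Learn the reference pattern from a template image.
matcher.LearnPattern(&patternImage);

// Locate occurrences of the pattern in a search field.
matcher.Match(&searchImage);

// Read back the results.
for (int i = 0; i < matcher.NumPositions(); ++i) {
    EMatchPosition p = matcher.GetPosition(i);  // coordinates, angle, score...
}
```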
Releases the pointer to the image object that has been passed to the EMatcher object.
Copies the learnt pattern into the supplied image. If no pattern has been learnt, an exception with code EError_NoPatternLearnt is thrown.
Copies all the data of the current EMatcher object into another EMatcher object and returns it.
Draws a graphical representation of a given occurrence of the pattern in the image, using a rectangle and possibly a small line segment in the upper-left corner.
Draws a graphical representation of all occurrences of the pattern in the image, using a rectangle and possibly a small line segment in the upper-left corner.
Constructs a matching context.
Flag indicating whether advanced learning is enabled.
Index of the last reduction.
Minimum score applied as a selection criterion in the early stages of the matching process.
Flag indicating whether isotropic (as opposed to anisotropic) scaling is used.
Maximum angle, in the current angle unit.
Maximum number of positions at the first stage of the matching process.
Maximum number of positions.
Maximum scale factor for isotropic scaling.
Maximum horizontal scale factor for anisotropic scaling.
Maximum vertical scale factor for anisotropic scaling.
Minimum angle, in the current angle unit.
Minimum reduced area parameter.
Minimum scale factor for isotropic scaling.
Minimum horizontal scale factor for anisotropic scaling.
Minimum vertical scale factor for anisotropic scaling.
Number of reduction steps used in the matching process.
Returns TRUE after a learning operation has been successfully performed, indicating that the EMatcher object is ready for matching, and FALSE otherwise.
Pixel type of the learnt pattern.
Gets the physical pixel dimensions.
Returns a vector of EMatchPosition objects, each containing the position coordinates and other matching results.
Current value of scale step.
Current value of scale X step.
Current value of scale Y step.
Learns a pattern to subsequently match in an image.
Matches the pattern against an image.
Copies all the data from another EMatcher object into the current EMatcher object.
Enables or disables advanced learning.
Sets the extension of the matching ROI, i.e. sets the horizontal and vertical distances, in pixels, by which found pattern occurrences may fall outside the matching ROI. Occurrences partially outside the ROI have their score corrected by the ratio between the pattern area outside the ROI and the pattern area inside it.
Sets the index of the last reduction.
Sets the minimum score applied as a selection criterion in the early stages of the matching process.
Sets the maximum angle, in the current angle unit.
Sets the maximum number of positions at the first stage of the matching process.
Sets the maximum number of positions.
Sets the maximum scale factor for isotropic scaling.
Sets the maximum horizontal scale factor for anisotropic scaling.
Sets the maximum vertical scale factor for anisotropic scaling.
Sets the minimum angle, in the current angle unit.
Sets the minimum reduced area parameter.
Sets the minimum scale factor for isotropic scaling.
Sets the minimum horizontal scale factor for anisotropic scaling.
Sets the minimum vertical scale factor for anisotropic scaling.
Sets the physical pixel dimensions.