US20100067806A1 - System and method for pleographic recognition, matching, and identification of images and objects - Google Patents

System and method for pleographic recognition, matching, and identification of images and objects

Info

Publication number
US20100067806A1
US20100067806A1
Authority
US
United States
Prior art keywords
image
images
matching
unknown
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/558,520
Inventor
Michael Shutt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Halberd Match Corp
Original Assignee
Halberd Match Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Halberd Match Corp filed Critical Halberd Match Corp
Priority to US12/558,520 priority Critical patent/US20100067806A1/en
Publication of US20100067806A1 publication Critical patent/US20100067806A1/en
Priority to PCT/US2010/048758 priority patent/WO2011032142A2/en
Priority to US13/040,335 priority patent/US20110188707A1/en
Priority to US14/822,979 priority patent/US20150347832A1/en
Priority to US14/822,974 priority patent/US9542618B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G06V 20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/758 Involving statistics of pixels or of feature values, e.g. histogram matching

Abstract

The inventive data processing system and method enable recognition, matching, and/or identification of images and/or objects, utilizing at least one novel pleographic data processing technique.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present patent application claims priority from the commonly assigned co-pending U.S. Provisional Patent Application Ser. No. 61/191,836, entitled “SYSTEM AND METHOD FOR PLEOGRAPHIC RECOGNITION, MATCHING, AND IDENTIFICATION OF IMAGES AND OBJECTS”, filed Sep. 12, 2008.
  • FIELD OF THE INVENTION
  • The present invention relates generally to data processing systems and methods for automatic recognition of images and objects, and more particularly to a system and method for recognizing, matching, and/or identifying images and/or objects utilizing at least one novel pleographic data processing technique.
  • BACKGROUND AND SUMMARY OF THE INVENTION
  • Image analysis techniques are utilized in a vast array of everyday applications ranging from consumer systems to industrial, scientific, medical, law enforcement systems and solutions. However, current image analysis systems suffer from many disadvantages and drawbacks.
  • The inventive data processing system and method advantageously provide a number of scalable novel image/object recognition and processing techniques that are capable of dramatically improving the efficacy, reliability, and accuracy of conventional and future surveillance, detection, identification, verification, matching, navigation, and similar types of systems utilizing image acquisition/analysis of any kind. Moreover, the use of the novel data processing system and method of the present invention advantageously provides unequaled levels of tolerance to degradation in quality, decrease in available portion, increased noise, and variation in positioning and/or orientation in the images being analyzed as compared to the corresponding reference image(s).
  • The techniques of the inventive data processing system and method are especially effective under conditions of strong real-time limitations, shock/vibration overloading, atmospheric noise, arbitrary signal distortions, etc., because they rely on utilization of a newly developed novel type of spatial response surface functions, hereinafter referred to as “pleograms”, that are unique for each initially observed image, and that are later used for further matching thereto. Due to their inherent redundancy, comparable to holographic-based measurements, pleograms are extremely informative and stable. As a result of this property, utilization of pleograms in image analysis (e.g., identification, recognition, matching, etc.) applications creates an environment that is practically immune to strong signal/image distortions, arbitrary noises, etc., thereby providing a high level of reliability in comparison to conventional raw image matching. Moreover, the novel pleographic approach is based on 3-dimensional object identification procedures that have substantially higher discriminative power than previously known image analysis techniques. The novel techniques based on the inventive pleographic approach and method provide, in particular, measurement accuracy and factual continuity due to an efficient data restoration between discrete reference points that results in higher effective resolution. The novel pleographic technique also meets the criteria of the minimal mean risk of real-time decision-making and is readily suitable for actual implementation including by way of simple real-time computations.
  • In summary, the novel pleographic image analysis (PIA) system and method of the present invention include, but are not limited to, the following distinct unique features and advantages over previously known solutions:
      • Utilization of all available information about the observations for a full likelihood estimate;
      • No specific assumptions regarding the noise distributions are required; arbitrary random distortions are admissible for statistically optimal algorithm synthesis;
      • Significant tolerance to imperfect images and conditions; the matching reliability remains satisfactory even if the images are strongly damaged;
      • Simple pipeline computations that provide high-speed processing; and
      • The technology can be deployed on a variety of platforms.
  • The inventive PIA system and method are advantageous in a wide variety of commercial applications such as biomedical diagnostics support, biometric access control, law enforcement, security, navigation, etc. The inventive PIA system and method are also particularly effective in military applications, such as automatic target detection (ATD), automatic target recognition (ATR), synthetic aperture radar (SAR), and so on. In such applications, the use of novel PIA techniques substantially increases the reliability/accuracy of target identification, increases the noise-stability, reduces processing time, and lowers the minimum requirements of the sensors and images captured in the course of system operation.
  • Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings, wherein like reference characters denote corresponding or similar elements throughout the various figures:
  • FIG. 1A shows a diagram of a 3-D pleographic response function generated in accordance with various embodiments of the inventive pleographic image analysis technique;
  • FIG. 1B shows a block diagram of an exemplary hierarchical infrastructure for optimizing the implementation of various embodiments of the inventive pleographic image analysis technique in an image analysis system;
  • FIG. 2A shows a block diagram of an exemplary embodiment of an inventive pleographic image analysis system of the present invention;
  • FIG. 2B shows logic flow diagrams of exemplary processes of enrollment and analysis stages of operation of the inventive pleographic image analysis system of FIG. 2A;
  • FIG. 3A shows a diagram of exemplary utilization of a pleographic response function in connection with comparing a reference image to an observed image during utilization of the inventive pleographic image analysis system of FIG. 2A;
  • FIG. 3B shows a diagram of exemplary formation of a reference image and a corresponding pleogram, during the operation of the inventive pleographic image analysis system of FIG. 2A; and
  • FIGS. 4 to 26 show various exemplary images, corresponding templates and exemplary user interface screens of various embodiments of exemplary data processing systems utilizing the inventive pleographic image analysis system of FIG. 2A.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • The data processing system and method of the present invention remedy the disadvantages of previously known image analysis solutions by providing a platform-independent image analysis system architecture and technology platform comprising a plurality of scalable novel image/object recognition and processing techniques that are capable of dramatically improving the efficacy, reliability, and accuracy of conventional and future surveillance, detection, identification, verification, matching, navigation, and similar types of systems utilizing image acquisition/analysis of any kind. Moreover, the use of the novel data processing system and method of the present invention advantageously provides unequaled levels of tolerance to degradation in quality, decrease in available portion, increased noise, and variation in positioning and/or orientation in the images being analyzed as compared to the corresponding reference image(s). Advantageously, the novel system and method utilize a revolutionary “pleographic” image analysis approach that is based on 3-dimensional object identification procedures that have substantially higher discriminative power than previously known image analysis techniques, and that provide many additional advantages.
  • The inventive pleographic image analysis (PIA) system and method are also modular, and may include at least a portion of the following analytic “modules”, as part of their multi-level structure:
      • a) semi-empirical algorithms used as a rule for linear objects;
      • b) statistical binary algorithms used in Biometrics and for other objects of a physically black-and-white nature;
      • c) Distribution-Free Estimation (DFE) non-parametric algorithms;
      • d) optimal Maximum Likelihood Estimation (MLE) algorithms;
      • e) quasi-optimal zonal MLE algorithms; and/or
      • f) 3D pleographic algorithms.
  • As is noted above, the novel PIA technique improves the efficacy of any observation, identification, and verification system, and, when implemented in an appropriate image analysis system, is capable of substantially increasing image identification reliability/accuracy, reducing system processing time, and lowering the minimum requirements for analyzed images in the course of system operation.
  • Essentially, the core novel PIA technique employs a new image comparison approach based on a 3-dimensional object identification procedure that comprises replacement of conventional “image-to-image” matching with “surface-to-surface” identification, which inter alia provides higher reliability and accuracy when compared with traditional approaches. PIA technique image matching utilizes a novel type of response functions called “pleograms”, such as shown, by way of example, in FIG. 1A, hereof, and is preferably conducted in accordance with a dynamically determined threshold that is individually tailored for the reference image during a prior enrollment stage. As is discussed in greater detail below, the novel PIA technique provides, in particular, a high degree of measurement accuracy and factual continuity due to an efficient data restoration between discrete reference points that results in a practically infinite resolution.
  • Prior to describing the various embodiments of the inventive PIA techniques, it would be useful to provide an overview of the analytic and algorithmic background thereof. At the outset, it should be noted that the various embodiments of the PIA technique meet the general statistical criterion of Bayesian Mean Risk and in most situations can be reduced to Maximum Likelihood Estimates (MLE)—an overview of these principles and related factors is provided below:
      • Optimal decision making: The most general mathematical approach to making the best possible decisions based on a given collection of input data is known as the minimum risk method. For simplicity, we henceforth consider its special case referred to as the maximum likelihood method.
      • The observed input data is supposed to be a set of random variables, d=(d1, d2, . . . ). The distribution of each variable di depends on the adopted hypothesis H about the actual origin of the input data: f(di)=f(di|H). A typical hypothesis might be formulated as “H: This data was produced by the template pattern #1.” The total distribution of the entire data set is thus also conditional with respect to the adopted (or tested) hypothesis: f(d)=f(d|H). The maximum likelihood method consists in calculating the conditional probabilities of the observed data set that correspond to all admissible hypotheses, i.e., f(d|Hi), i=0, 1, . . . , M, and then deciding in favor of that hypothesis which yields the highest conditional probability.
  • The typical pair of hypotheses in an identification system is “H0: This data belongs to the template pattern #1” and “H1: This data does not belong to the template pattern #1”. In this case, the system accepts hypothesis H0 only if its probability greatly exceeds that of its alternative, H1. To this end, the system establishes a decision threshold with which it compares the ratio of these two probabilities. If this likelihood ratio exceeds the threshold, a positive decision is taken; otherwise, the hypothesis is rejected. This condition can be written as
  • $L(d) = \frac{\mathrm{Prob}(H_0 \mid d)}{\mathrm{Prob}(H_1 \mid d)} = \frac{\mathrm{Prob}(H_0 \mid d)}{1 - \mathrm{Prob}(H_0 \mid d)} > T$  (1)
  • where d is the data set and T is the threshold.
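  • A minimal Python sketch of decision rule (1) is given below (not part of the patent text; the posterior probability and threshold values are purely illustrative): hypothesis H0 is accepted only when the likelihood ratio exceeds the threshold T.

```python
def likelihood_ratio_decision(p_h0_given_d: float, threshold: float) -> bool:
    """Accept H0 ("data belongs to template pattern #1") only if the likelihood
    ratio L(d) = Prob(H0|d) / (1 - Prob(H0|d)) exceeds the threshold T (eq. 1)."""
    ratio = p_h0_given_d / (1.0 - p_h0_given_d)
    return ratio > threshold

# Illustrative values (assumed): a posterior of 0.95 against T = 10 gives
# L(d) = 19 > 10, so H0 is accepted; a posterior of 0.80 gives L(d) = 4 and is rejected.
print(likelihood_ratio_decision(0.95, threshold=10.0))   # True
print(likelihood_ratio_decision(0.80, threshold=10.0))   # False
```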
  • Mean Risk Estimate and 3D Response Function Matching. As is noted above, it is well known that the most general criterion in hypothesis testing is the Bayesian mean risk
  • $R(H \mid Z) = \sum_{v} C(H \mid H_v)\, P(Z \mid H_v)\, P(H_v)$  (2)
  • where C(H|Hv) is the loss function, defining the loss upon accepting hypothesis H if the actually valid hypothesis is Hv; P(Z|Hv) is the a posteriori probability that the observation Z originates from hypothesis Hv; P(Hv) is the a priori probability to observe hypothesis Hv.
  • Usually, the a posteriori probability is calculated based on the true distributions, which are available only in the simplest cases. However, it could be shown that for a wide range of observed images an analytical high-precision approximation method might be built defining a work density f(z), where z is some vector-image with independent components. In this case the logarithmic maximum likelihood function for the multidimensional probability P(Z|Hv) may be written as a sum of one-dimensional functions
  • $\varphi(z) = \sum_{\mu} \phi(z_\mu)$  (3)
    $\phi = \ln f$  (4)
  • reducing the procedure to the summation of independent contributions from all pixel pairs over the images.
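  • The following short sketch (illustrative only; the loss matrix, likelihoods, and priors are assumed values) evaluates the Bayesian mean risk (2) for each candidate hypothesis and picks the minimum-risk one; with a 0-1 loss and equal priors this reduces to the maximum likelihood rule discussed above.

```python
import numpy as np

def mean_risk_decision(loss, likelihood, prior):
    """Bayesian mean risk per eq. (2): R(H|Z) = sum_v C(H|Hv) P(Z|Hv) P(Hv).
    loss[i, v]    -- loss of accepting hypothesis i when hypothesis v is valid
    likelihood[v] -- a posteriori probability P(Z|Hv)
    prior[v]      -- a priori probability P(Hv)
    Returns the minimum-risk hypothesis index and the risk vector."""
    risk = loss @ (np.asarray(likelihood) * np.asarray(prior))
    return int(np.argmin(risk)), risk

# Two hypotheses, 0-1 loss, equal priors (all values assumed for illustration).
loss = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
best, risk = mean_risk_decision(loss, likelihood=[0.7, 0.3], prior=[0.5, 0.5])
print(best, risk)   # 0 [0.15 0.35]
```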
  • Referring now to FIG. 1B, in preferable practice, the novel PIA technique may be implemented in a technical solution having a hierarchical structure in which pleographic (i.e., PIA) solutions, are 3D generalizations of traditional statistical estimations to provide the highest Algorithm Discriminative Power (ADP). The PIA-based algorithms are highly suited to the most challenging observation cases under conditions of strong image distortions including random noise, speckle, arbitrary rotations and stretches, poor resolution, low contrast, etc.
  • In cases where statistical features of the images under observation are known, optimal algorithms are synthesized on the basis of an acting measurement distribution and associated numerical parameters or their estimations. If an analytical representation of the distribution is unknown but some reasonable assumptions regarding the noise character (additive, multiplicative, etc.) can be made, adaptive zonal algorithms must be used. They are designed on the basis of homogeneous zone selection and a procedure that provides high ADP. For completely uncertain random measurements, self-learning quasi-optimal algorithms are applicable and their creation requires a teaching step.
  • For some specific measurements with a dual-zone physical nature, binary algorithms provide the highest ADP. Correlation algorithms are applicable for sea surface imaging and are optimal in the case of Gaussian (normal) distributions of observations. Although normal distributions are rarely met in the cases of interest, empirical utilization of correlation algorithms can also potentially provide useful results.
  • All image recognition (and related image analysis) algorithms work as decision-making procedures that are each based on a so-called corresponding “Response Function” (RF). The inventive PIA technique employs a novel global image comparison process that first acquires a reference image during an enrollment stage, and then forms a corresponding “comparison” RF surface relief comprising multiple reference segments from the reference image. Later, during an identification stage, the algorithm forms a similar RF surface over the identification image, and then compares it, pixel by pixel, with the enclosed reference RF surface. Matching runs in accordance with a dynamically determined threshold that is individually tailored for the reference image during the enrollment stage. When the threshold is exceeded, the identification image is considered identified; otherwise identification is rejected. The method includes the capability of determining multiple most likely positions for the multiple reference RF segments over the identification image. Such an approach, based on 3-dimensional RF surface comparison, has been developed for the first time.
  • Experimentation has demonstrated that the aforementioned novel pleography technique is particularly effective in highly challenging recognition cases. The technique consists of constructing multi-dimensional response surfaces, or pleograms, that are formed for each analyzed image and are then used for its further identification. Due to their inherent redundancy, pleograms are extremely conservative and stable, making this technique practically immune to strong image distortions and providing the highest identification reliability when compared to raw image matching. The response function forms a 3D surface that looks like a mountain terrain relief; a typical example is shown in FIG. 1A.
  • Referring now to FIGS. 2A and 2B, the PIA system 10 operates in two separate stages—a Template Formation (Enrollment) stage 30 and a Template Identification (Recognition) stage 40. The system 10 includes a control unit for executing an enrollment control program during the enrollment stage and an identification control program during the recognition stage, a memory for storing data and the control programs, an image acquisition system for acquiring the reference and observed images, and other components.
  • Throughout the enrollment stage, a reference image is observed and obtained. The reference image is then filtered and preprocessed into a formation template (FT). After that, the control program matches the FT against the same reference image, forming a 3D Response Function surface. A dynamic threshold is tailored as a function of the uniqueness of the reference template. An exemplary process of reference image formation is illustrated in FIG. 3B.
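  • The patent does not disclose how the dynamic threshold is computed from the uniqueness of the reference template. The sketch below is only one plausible illustration of the idea: it places the threshold between the main peak of the reference pleogram and its largest secondary response, so a sharper, more isolated peak (a more unique template) yields a higher threshold. The margin parameter and the peak-to-runner-up rule are assumptions.

```python
import numpy as np

def dynamic_threshold(reference_pleogram: np.ndarray, margin: float = 0.5) -> float:
    """Illustrative (assumed) rule: set the acceptance threshold between the
    global peak of the reference pleogram and its largest secondary value."""
    flat = np.sort(reference_pleogram, axis=None)
    peak, runner_up = flat[-1], flat[-2]
    return float(runner_up + margin * (peak - runner_up))

# Synthetic reference pleogram: one dominant peak over a noisy floor (assumed data).
rng = np.random.default_rng(0)
pleogram = rng.random((50, 50))
pleogram[25, 25] = 5.0
print(round(dynamic_threshold(pleogram), 3))
```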
  • The majority of image recognition algorithms work as decision-making procedures based on so-called Response Function (RF) analysis. Referring now to FIG. 3A, the location of the extreme RF value (or level) defines a true matching point on the reference image 52 and the observed image 50. The inventive PIA methodology employs a novel image recognition approach where the algorithm forms an array of RF values from the reference image 52. Plotting this RF array creates a 3D surface that looks like a mountain terrain relief, where each point is the RF level for given X-Y coordinates on the image; this is the pleogram and, correspondingly, the reference pleogram 54 for the reference image 52. At the recognition stage for the observed image 50, the algorithm forms an RF surface/pleogram to be compared with the reference pleogram 54.
  • For the verification task, the location of the extreme MLE value on the image is taken as the true match point. For the detection task, the MLE peak is compared with the given threshold. When the threshold is exceeded, the target is considered captured; otherwise the identification is rejected.
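  • The sketch below forms an RF surface (pleogram) by scanning a template over an image, then illustrates both uses described above: the location of the extreme value for verification, and comparison of the peak with a threshold for detection. Normalized cross-correlation is used only as a stand-in response function here; the statistically optimal RF of the patent follows from the noise model (eq. (7) below), and the threshold value is assumed.

```python
import numpy as np

def response_surface(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Compute an RF value for every admissible template position, producing the
    3D 'pleogram' relief.  Normalized cross-correlation is an illustrative
    stand-in for the patent's statistically derived response functions."""
    M, N = image.shape
    m, n = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    rf = np.empty((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            patch = image[i:i + m, j:j + n]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            rf[i, j] = float((p * t).mean())
    return rf

# Verification: the RF extremum gives the match point.
# Detection: the peak is additionally compared with a threshold (0.8 is assumed).
rng = np.random.default_rng(1)
scene = rng.normal(size=(64, 64))
ref = scene[20:36, 30:46].copy()
pleogram = response_surface(scene, ref)
peak_pos = tuple(int(i) for i in np.unravel_index(np.argmax(pleogram), pleogram.shape))
print(peak_pos, bool(pleogram.max() > 0.8))   # (20, 30) True
```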
  • This innovative pleographic approach, which replaces “image-to-image” matching by “surface-to-surface” comparison, brings significant advantages. Each pleogram level is obtained as a result of separate pixel matching contributions and strong neighbor/area fragment overlapping, so these surfaces tend to be much more stable than raw images. Hence, an image recognition procedure based on the novel PIA technique will provide higher reliability and accuracy as compared to traditional approaches. Such a technique should withstand very powerful distortions such as image damage, rotations, speckle, geometric distortions, and overall image amplitude trend, among others.
  • 3-Dimensional RF Identification. Throughout the identification stage, in the course of scanning, an input image is obtained. From it, a new 3D template is formed in exactly the same way as during the enrollment stage, and the program matches this template against the input image, forming a new RF surface. The control unit then retrieves the reference RF surface (see FIG. 3), matching it against the new RF surface of the observed image to determine at least one best match position. This results in a secondary RF surface (see FIG. 4), obtained by “surface-to-surface” joint matching, that should possess a narrow and high RF peak. This, in turn, provides high reliability and accuracy of object identification.
  • If the dynamic threshold is exceeded, the input image is considered identified; otherwise identification is rejected. To speed up the determination of the best match positions, the observed image and the reference template may be temporarily coarsened by a predetermined factor. Optimal statistical algorithms for joint matching of the reference and observed RF surfaces have been developed, simulated, and tested on real-world images.
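  • A minimal sketch of the “surface-to-surface” joint matching step is given below. The optional coarsening factor mirrors the speed-up mentioned in the text; the block-averaging coarsening and the sum-of-products score are assumptions, since the patent only states that optimal statistical algorithms are used.

```python
import numpy as np

def coarsen(surface: np.ndarray, factor: int) -> np.ndarray:
    """Temporarily coarsen a surface by an integer factor (block averaging assumed)."""
    h = (surface.shape[0] // factor) * factor
    w = (surface.shape[1] // factor) * factor
    s = surface[:h, :w]
    return s.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def surface_to_surface_match(reference_rf: np.ndarray, observed_rf: np.ndarray, factor: int = 2):
    """Slide the coarsened reference RF surface over the coarsened observed RF
    surface, scoring every offset; the result is the 'secondary RF surface'
    whose peak marks the best match position."""
    ref, obs = coarsen(reference_rf, factor), coarsen(observed_rf, factor)
    M, N = obs.shape
    m, n = ref.shape
    secondary = np.empty((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            secondary[i, j] = float((obs[i:i + m, j:j + n] * ref).sum())
    best = tuple(int(i) for i in np.unravel_index(np.argmax(secondary), secondary.shape))
    return best, secondary

# Synthetic RF surfaces (assumed data): the reference is a fragment cut at offset (8, 8),
# so the expected best offset after 2x coarsening is (4, 4).
rng = np.random.default_rng(2)
observed_rf = rng.normal(size=(40, 40))
reference_rf = observed_rf[8:24, 8:24].copy()
print(surface_to_surface_match(reference_rf, observed_rf)[0])
```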
  • A number of previous results, designs, and applications have led to the creation of an entirely new multi-dimensional approach and method based on the use of spatial response functions and possessing a fundamentally different level of reliability: these functional surfaces are very informative and conservative due to the high redundancy gained in every pixel. Moreover, this is not an abstract theory: first, most of its particular algorithms are already in use; second, as frequently happens, a far from simple and rather sophisticated approach leads to quite plain computational procedures that are highly stable against various distortions, for instance strong image deterioration. The approach is applicable to any physical objects, not only to images: there are no serious restrictions on utilization of the technique.
  • In terms of biomedical image identification, this means that even a strong lowering of image quality does not lead to any noticeable reduction of the method's discriminative power; it works for any possible images, and not for images only. Indeed, it results in a novel scientific direction and in new technological and technical solutions. Although its physical basis is quite different, its philosophical analogy is holography (from the Greek ‘holos’, ‘whole’), where all signal parameters (amplitudes and phases) are recorded “in whole”. This technique was accordingly named ‘pleography’, from the Greek ‘pleonasmos’, ‘pleonazon’ (‘redundancy’, ‘redundant’), and the obtained surfaces ‘pleograms’. In the same manner as for holograms, deleting a tangible part of the original information (contrast, brightness, even entire image areas) does not drive the pleogram-based decision to a fatal mismatch.
  • Image Recognition Algorithms
  • Let the dimensions of the observed image z be M×N pixels and those of the reference a be m×n pixels, with M>m and N>n. Then the number of possible positions of a on z is (M−m+1)×(N−n+1). For example, a 512×512 observed image and a 64×64 reference give 449×449 = 201,601 candidate positions. The task of a recognition procedure is then to test each of the (M−m+1)×(N−n+1) hypotheses concerning the matching location of a and select the one corresponding to the highest similarity between a and the respective fragments of z.
  • The general recognition procedure, understood as hypothesis testing, can be formulated as follows. The measured image z is compared by a certain rule with the reference a. This comparison yields a quantity φ that is a measure of similarity between the two images. After all the hypotheses have been tested, a two-dimensional field φ is obtained. The global extremum of this field is the point of highest similarity. The magnitude φ is called the decision function (or response function, mentioned above).
  • In the framework of the statistical approach, the decision function is constructed from the posterior probability P(z|a) corresponding to the event that the signal z is actually the reference a affected by noise. If the statistics of the noise are known, this allows one to derive an optimal recognition procedure for each particular case.
  • Imaging devices are usually designed so that the noise can be assumed to be independent in different image pixels (at most, there may exist some correlation between immediate neighbors that can be disregarded without much loss of estimate accuracy.)
  • If the assumption about noise independence is valid, the multidimensional probability P(.|.) becomes a product of one-dimensional functions:
  • $P(z \mid a) = \prod_{\mu} f(z_\mu \mid a_\mu)$  (1)
  • where zμ, aμ are the brightness values in the μth pixel of the respective image. For the sake of computational convenience, this expression is usually replaced by its logarithm
  • $\varphi(z \mid a) = \sum_{\mu} \phi(z_\mu \mid a_\mu)$  (2)
    $\varphi = \ln P, \qquad \phi = \ln f$  (3)
  • reducing the procedure to the summation of independent contributions from all image pixel pairs. It may be shown that the image pixel intensity z also undergoes unknown variations with power α. After the transform
  • $\zeta = b\left(\frac{z}{b}\right)^{\alpha}$  (4)
  • the new magnitude ζ is distributed as
  • $f(\zeta \mid b, L, \alpha) = \frac{L^{L}}{\Gamma(L)}\,\frac{\alpha}{b}\left(\frac{\zeta}{b}\right)^{\alpha L - 1} e^{-L(\zeta/b)^{\alpha}}$  (5)
  • The distribution obtained here will be called the generalized Weibull density (GWD); Weibull considered its simplified form in the context of reliability engineering.
  • The generalized Weibull density (GWD) moments are defined as
  • $\langle \zeta^{r} \rangle = \frac{b^{r}}{L^{r/\alpha}}\,\frac{\Gamma(L + r/\alpha)}{\Gamma(L)}$  (6)
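  • As a quick numerical check of the moment formula (6), GWD samples can be drawn by transforming standard Gamma variates (ζ = b·(u/L)^(1/α) with u ~ Gamma(L, 1) has the density (5)); this sampling route is an observation about the density, not a procedure from the patent, and all parameter values below are assumed.

```python
import numpy as np
from math import gamma

def gwd_moment(r: int, b: float, L: float, alpha: float) -> float:
    """Theoretical r-th moment of the GWD per eq. (6)."""
    return b**r * gamma(L + r / alpha) / (L**(r / alpha) * gamma(L))

def gwd_sample(size: int, b: float, L: float, alpha: float, rng) -> np.ndarray:
    """Draw GWD samples: zeta = b * (u / L)**(1/alpha), with u ~ Gamma(L, 1)."""
    u = rng.gamma(shape=L, scale=1.0, size=size)
    return b * (u / L) ** (1.0 / alpha)

rng = np.random.default_rng(0)
b, L, alpha = 2.0, 3.0, 1.5
samples = gwd_sample(200_000, b, L, alpha, rng)
for r in (1, 2):
    # Theoretical and empirical moments should agree to a few decimal places.
    print(r, round(gwd_moment(r, b, L, alpha), 4), round(float((samples**r).mean()), 4))
```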
  • Now the distribution (5) may be used as a density in (2) or, more exactly, logarithm of (5) in (3). Then (2) transforms to
  • $\varphi(z \mid a) = -\sum_{\mu}\left[\left(\frac{z_\mu}{a_\mu}\right)^{\alpha} - \alpha \ln \frac{z_\mu}{a_\mu}\right] = \max$  (7)
  • Thus, the RF value can be directly found for the set of all pixels of the observed image z, the reference image a, and the pre-calculated α.
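  • The sketch below evaluates the RF of eq. (7), as reconstructed above from the GWD (5), for a single candidate position; the test values are assumed. The per-pixel term (z/a)^α − α·ln(z/a) is minimized when z_μ = a_μ, so the negated sum is largest for a perfect match.

```python
import numpy as np

def gwd_response(z_fragment: np.ndarray, a: np.ndarray, alpha: float) -> float:
    """RF value for one candidate position per eq. (7) as reconstructed here:
    phi(z|a) = -sum_mu [ (z_mu/a_mu)**alpha - alpha * ln(z_mu/a_mu) ]."""
    ratio = z_fragment.astype(float) / a.astype(float)
    return float(-np.sum(ratio**alpha - alpha * np.log(ratio)))

# A perfect match contributes -1 per pixel (the maximum); mismatches score lower.
rng = np.random.default_rng(3)
a = rng.uniform(10.0, 200.0, size=(8, 8))
print(gwd_response(a, a, alpha=2.0))                                     # -64.0
print(gwd_response(a * rng.uniform(0.5, 2.0, size=a.shape), a, 2.0))     # < -64
```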
  • Statistical Comprehension of GWD
  • The GWD covers a large number of real-world situations. For instance, for the particular case of α=1 it transforms to the Laplace distribution that is typical for the statistical description of electromagnetic power:
  • $f(\zeta \mid b, L, 1) = \frac{L^{L}}{\Gamma(L)}\,\frac{1}{b}\left(\frac{\zeta}{b}\right)^{L-1} e^{-L(\zeta/b)}$  (8)
  • For α=2 the GWD becomes the Rayleigh distribution applicable for electromagnetic amplitudes:
  • $f(\zeta \mid b, L, 2) = \frac{L^{L}}{\Gamma(L)}\,\frac{2}{a}\left(\frac{\zeta}{a}\right)^{2L-1} e^{-L(\zeta/a)^{2}}, \qquad a = b$  (9)
  • For α=2, L=½, b=σ, ζ>0 the GWD coincides with the normal distribution mentioned above in conjunction with incoherent systems:
  • $N(\zeta \mid \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\zeta^{2}/(2\sigma^{2})}$  (10)
  • And, finally, for α=1 and L=a and after a linear substitution
  • $x = \frac{L\zeta}{b}$
  • the Gamma density occurs:
  • $G(x, a) = \frac{1}{\Gamma(a)}\, x^{a-1} e^{-x}$  (11)
  • Thus, use of the GWD provides a statistical description for most of the practically interesting cases in imagery.
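  • Two of the reductions listed above can be checked numerically against standard reference densities, as in the sketch below (scipy is assumed to be available; parameter values are illustrative). With α=1, eq. (8) coincides with a Gamma density of shape L and scale b/L; with α=2, L=1/2, b=σ it coincides with the half-normal density on ζ>0, i.e., the normal law restricted to positive amplitudes.

```python
import numpy as np
from math import gamma
from scipy import stats

def gwd_pdf(zeta, b, L, alpha):
    """Generalized Weibull density, eq. (5)."""
    return (L**L / gamma(L)) * (alpha / b) * (zeta / b)**(alpha * L - 1) * np.exp(-L * (zeta / b)**alpha)

zeta = np.linspace(0.1, 5.0, 50)

# alpha = 1: eq. (8) is a Gamma density with shape L and scale b/L.
b, L = 2.0, 3.0
assert np.allclose(gwd_pdf(zeta, b, L, 1.0), stats.gamma.pdf(zeta, a=L, scale=b / L))

# alpha = 2, L = 1/2, b = sigma: eq. (10), the (half-)normal density for zeta > 0.
sigma = 1.5
assert np.allclose(gwd_pdf(zeta, sigma, 0.5, 2.0), stats.halfnorm.pdf(zeta, scale=sigma))
print("special cases verified")
```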
  • Experimental Data
  • The basic response function (RF) surface approach consists in exploiting the inherent properties of the surface itself. Indeed, each level of function (3) is obtained as a result of strong overlapping of neighboring hypotheses, so these surfaces are much more stable and conservative than the raw images. Hence, an image recognition procedure based on “RF surface identification” should provide the highest reliability and accuracy in comparison with traditional approaches.
  • Space Observation Recognition
  • This approach is quite applicable to aerospace observations. Let us consider two radar images of New Jersey (the Passaic River area), received from different satellites, Orbimage and SPIN-2, shown in FIG. 20 and FIG. 21.
  • The images are slightly shifted with respect to one another, but they contain mutual areas. Despite the fact that the images look quite different, the 3D RF-based fragment recognition is exactly right; see FIG. 22. Practically, there are no restrictions regarding the origin of the images with which this method can be used.
  • CONCLUSIONS
  • The inventive PIA System and Method are aimed at optimizing the performance of identification systems, and comprise at least the following basic features that are advantageous over, and distinct from, previously known solutions, systems and methodologies:
      • Feature independence: Matching is carried out between two images in their entirety, rather than between some specific shapes, or features, detected in the images. This approach has at least three significant advantages:
        • all available information contained in the images is fully utilized
        • no extra errors resulting from incorrect feature detection can occur
        • the absence or presence of any particular shapes in the images has no influence on the result
      • Ability to withstand severe image distortions: The technology works well even when the input image is degraded by various distortion factors such as noise, surface pollution, bruises, etc.
      • Ability to jointly use dissimilar image features: the input data mix may include multiple image areas, all such information pieces complementing each other and comprising a highly redundant data set.
      • Compatibility with existing cameras/readers: the technology has been successfully tested on off-the-shelf market sensor products.
  • These features make this technology the best in its class with respect to the attainable error rates and the scope of applications. The technology provides a uniform framework for implementing a wide variety of observation/identification systems customized each for its particular application.
  • The technology is superior to others in that it allows one to employ any amounts of input data, including physically dissimilar measurements, which practically removes the limitation on the attainable FAR and FRR values.
  • At the same time, a high degree of redundancy requires a corresponding processing time. An obvious solution is to retain only the most typical and small-sized shapes in the image, discarding all the rest. This approach gave rise to various skeleton techniques, including reference point algorithms. In fact, that was the usual contradiction between the necessary data amounts and the need to process them in a very limited time. The inventor, employing advanced mathematical techniques, was able to resolve the contradiction mentioned above. The algorithms and software can process huge amounts of data in just a fraction of a second. This is accomplished with the aid of sophisticated pipeline procedures and compact storage of the reference data.
  • Exemplary PIA Utilization
  • An exemplary PIA procedure is the so-called zonal model of terrestrial and sea-surface images. Usually, the Earth's surface contains extended regions in which the surface reflectivity is approximately uniform. This is due to the same physical nature of the surface cover in the region (grass, forest, open soil, water, rough sea, flat sea, and so forth). Such regions are usually shown in maps, making it possible to use maps for preparing references. Such regions will be called homogeneous zones.
  • Homogeneous Zone Selection
  • An observed wake is shown in FIG. 4 as an example of a maritime image with a V-shaped structure. In order to form a reference object, this image should be broken down into homogeneous zones. Since there is only one structure of interest—the wake to be selected—two homogeneous zones should be identified: the “wake” zone and “non-wake” zone. See—FIG. 4. Reference Wake Radar Image
  • A special selecting procedure locates the wake pattern in the top-left location on the original image (indicated by the yellow square). After this reference area is extracted (FIG. 5), a zoning algorithm breaks down the selected fragment into two homogeneous zones developing an Area Coded Template (ACT).
  • See FIG. 5. Reference Area and Area Coded Template
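  • The patent's zoning algorithm is proprietary and not disclosed; the sketch below is only an assumed illustration of the idea of splitting a fragment into two homogeneous zones, using a simple Otsu-style brightness threshold to produce a binary Area Coded Template (ACT).

```python
import numpy as np

def area_coded_template(fragment: np.ndarray) -> np.ndarray:
    """Split an image fragment into two homogeneous zones ('wake' / 'non-wake'),
    returning a binary Area Coded Template.  An Otsu-style threshold that
    maximizes the between-zone separation is assumed here."""
    edges = np.histogram_bin_edges(fragment, bins=64)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_sep = centers[0], -1.0
    for t in centers:
        lo, hi = fragment[fragment <= t], fragment[fragment > t]
        if lo.size == 0 or hi.size == 0:
            continue
        sep = lo.size * hi.size * (lo.mean() - hi.mean()) ** 2
        if sep > best_sep:
            best_sep, best_t = sep, t
    return (fragment > best_t).astype(np.uint8)

# Assumed test fragment: a bright V/X-shaped structure over a darker background.
rng = np.random.default_rng(4)
frag = rng.normal(50.0, 10.0, size=(32, 32))
frag[np.eye(32, dtype=bool) | np.fliplr(np.eye(32, dtype=bool))] += 120.0
act = area_coded_template(frag)
print(int(act.sum()), "pixels assigned to the bright zone")
```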
  • The same procedure can be applied to the low-contrast figure-8 images shown below in FIG. 6; the figure-8 signature is located within the yellow rectangle where the white dashed lines show signature axes. The same image, pre-processed to increase contrast, is shown in the image on the right in FIG. 6.
      • See FIG. 6. SAR Figure-8 Signature Image (left) and Pre-Processed Image (right)
  • After processing, the figure-8 ACT appears as shown in FIG. 7:
      • See FIG. 7. Figure-8 Signature ACT
  • The obtained figure-8 ACT accurately matches the original reference image. This template should be used further for recognition and identification purposes.
  • SAR Image Signature Recognition. Identification Reliability and Accuracy
  • The primary goal of the example presented here was to estimate matching feasibility for sea surface images as a part of Maritime Moving Target Identification (MMTI) solutions; mass statistical simulation was not the objective. Computational experiments have been carried out in four steps:
      • 1. Image visual analysis
      • 2. Automated program signature identification
      • 3. Formation of randomly distorted observed images
      • 4. Matching of observed images to their reference prototype
  • The original figure-8 image from FIG. 6 (enlarged) is shown below in FIG. 8 where the visually observed signature area is indicated by the yellow rectangle. For further matching purposes, the ACT is turned into a clarified figure-8 binary template (FIG. 8):
      • See FIG. 8. Original (Reference) SAR Observed Image and Figure-8 Binary Template
  • HMC proprietary algorithms and programs have been used for the automated signature match. The highest likelihood template position is selected by the program and shown by the yellow dashed rectangle in FIG. 9 below. As can be seen, the image fragment found by the program is the same one that was visually selected in FIG. 8.
      • See FIG. 9. Joint Binary ACT and SAR Image Matching
  • The pleogram for this matching process is shown in FIG. 10 (view from above) and FIG. 11 (isometric view).
      • See FIG. 10. Figure-8 Pleogram (view from above); FIG. 11. Figure-8 Pleogram (isometric view)
  • The extreme value on the pleogram is easily seen as a bright point in the grayscale MLE field (FIG. 10) and as a high, narrow peak on the pleogram relief (FIG. 11). In the course of matching, the binary ACT moves along the reference image seeking its most likely location. The true figure-8 location immediately gives a strong response (the pleogram peak), which provides high reliability; the peak has a narrow cross-section, which delivers good accuracy. This result is very promising. Indeed, the figure-8 object in the reference image (FIG. 9) has an intermittent structure, low contrast with respect to the background, and is barely distinguishable to the naked eye. Nonetheless, the applied algorithms and procedures have successfully identified the required signature location, providing acceptable reliability and accuracy. This example supports the assertion that HMC proprietary software can be utilized to search for, detect, and classify the objects of interest to the proposed effort.
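  • The pleogram itself can be pictured as a field of matching scores obtained by sliding the binary ACT over the reference image and scoring every candidate position; the peak of that field is the most likely template location (the bright point in FIG. 10 and the narrow peak in FIG. 11). The sketch below uses normalized cross-correlation as a stand-in score; the proprietary maximum-likelihood scoring is not reproduced here.

    import numpy as np

    def pleogram(image: np.ndarray, template: np.ndarray) -> np.ndarray:
        """Score every template position over the image (brute force).

        Normalized cross-correlation is used as a stand-in for the
        proprietary maximum-likelihood score; values lie roughly in [-1, 1]."""
        th, tw = template.shape
        H, W = image.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        scores = np.zeros((H - th + 1, W - tw + 1))
        for y in range(H - th + 1):
            for x in range(W - tw + 1):
                patch = image[y:y + th, x:x + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-9)
                scores[y, x] = float((p * t).mean())
        return scores

    # The pleogram peak gives the most likely template location:
    #   field = pleogram(reference_image, binary_act)
    #   peak_yx = np.unravel_index(np.argmax(field), field.shape)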
  • To further demonstrate the power of the proposed matching approach, observed images were modified using a statistical model that includes the following random and deterministic components:
      • Unknown gain
      • Receiver additive noise
      • Amplitude (brightness) fluctuations (speckle)
      • Contrast degradation
      • Geometrical stretch
      • Rotations
  • Image pre-processing/improvement included procedures such as Overall Trend Removal (OTR), Linear Range Adjustment (LRA), Histogram Range Stretch (HRS), and others.
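  • The named procedures (OTR, LRA, HRS) are proprietary; the sketch below shows only generic analogues under that assumption: least-squares plane detrending, linear rescaling to the full brightness range, and a percentile-based histogram stretch.

    import numpy as np

    def overall_trend_removal(img: np.ndarray) -> np.ndarray:
        """Subtract a least-squares plane fitted to the whole image."""
        H, W = img.shape
        yy, xx = np.mgrid[0:H, 0:W]
        A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(H * W)])
        coef, *_ = np.linalg.lstsq(A, img.ravel().astype(float), rcond=None)
        return img - (A @ coef).reshape(H, W)

    def linear_range_adjustment(img: np.ndarray, lo=0.0, hi=255.0) -> np.ndarray:
        """Rescale linearly so the image spans [lo, hi]."""
        mn, mx = float(img.min()), float(img.max())
        return lo + (img - mn) * (hi - lo) / (mx - mn + 1e-9)

    def histogram_range_stretch(img: np.ndarray, p_lo=2.0, p_hi=98.0) -> np.ndarray:
        """Clip to the [p_lo, p_hi] percentiles, then stretch to [0, 255]."""
        a, b = np.percentile(img, [p_lo, p_hi])
        return linear_range_adjustment(np.clip(img, a, b))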
  • The figure-8 image has been distorted in accordance with this model, with the receiver additive noise variance σ² = 128 (the entire pixel brightness range is [0, 255]); such a σ² value leads to strong suppression of the useful signal. The image was also rotated by 30 degrees and its contrast degraded. Exponentially distributed multiplicative speckle is clearly visible in the picture (see FIG. 12) as a grained structure. Altogether, this makes the output image unrecognizable to the human eye (a minimal sketch of this distortion model follows FIG. 12).
      • See FIG. 12. Distorted Observed Image
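  • One plausible composition of the distortion model described above is sketched next, assuming an unknown gain factor, additive Gaussian receiver noise with variance σ² = 128, exponentially distributed multiplicative speckle, a mid-gray contrast degradation, and a 30-degree rotation. The geometrical stretch component is omitted for brevity, and the exact parameterization and ordering of the steps are assumptions, not the patent's stated procedure.

    import numpy as np
    from scipy.ndimage import rotate

    def distort(img: np.ndarray, sigma2=128.0, angle_deg=30.0,
                gain=0.9, contrast=0.5, rng=None) -> np.ndarray:
        """Apply the distortion components listed above to a [0, 255] image."""
        rng = np.random.default_rng() if rng is None else rng
        out = gain * img.astype(float)                               # unknown gain
        out = 128.0 + contrast * (out - 128.0)                       # contrast degradation
        out *= rng.exponential(scale=1.0, size=out.shape)            # multiplicative speckle
        out += rng.normal(0.0, np.sqrt(sigma2), out.shape)           # additive receiver noise
        out = rotate(out, angle_deg, reshape=False, mode="nearest")  # rotation
        return np.clip(out, 0.0, 255.0)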
  • The central part of the distorted area (shown by the white frame) has been extracted for further matching as an observed image. Matching was performed using HMC proprietary image processing software in Relative Measurement (RM) research mode. The RM interface and matching results are shown in FIG. 13. The Match Engine (ME) program selects a particular image area (green square set) on the reference Image A.
  • The selected fragment is matched against the extracted observed image, Image B, on the right. Joint image recognition is considered positive, with a "SUCCESSFUL MATCH" outcome in the "Results" box at the top of the screen, if the matching Score (see green arrow in FIG. 13) is higher than a given Threshold value (circled in red). Otherwise, recognition is considered false and an "UNCERTAIN MATCH" outcome is reported (FIG. 14); this decision rule is sketched below. All scores are normalized to 1: the closer the score is to unity, the higher the matching reliability. Additionally, the program measures the reciprocal Shifts and Rotation angle between the images, and the total Displacement.
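  • The score-versus-threshold decision reduces to a one-line rule. The outcome labels are those shown in the Results box; the threshold value and the example scores below are illustrative assumptions.

    def match_outcome(score: float, threshold: float = 0.6) -> str:
        """Scores are normalized to 1; the threshold used here is an assumption."""
        return "SUCCESSFUL MATCH" if score > threshold else "UNCERTAIN MATCH"

    print(match_outcome(0.82))   # FIG. 13 case -> SUCCESSFUL MATCH
    print(match_outcome(0.31))   # background-only case (illustrative score) -> UNCERTAIN MATCH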
  • As seen in FIG. 13, the observed image, despite strong distortions and noise, has been matched to the original one correctly. The deviation from the initial coordinates on the reference image (Template Position, X=190 and Y=120, indicated by the cyan rectangle in FIG. 13) is small: only 2 and (−2) pixel reciprocal Shifts (shown by the brown rectangle).
      • See FIG. 13. Matching Results for Original and Distorted Images
  • This demonstrates high accuracy and, owing to the strong (~0.82) final Score, high reliability as well. The 30-degree rotation angle has been measured with high precision (Rotation window in FIG. 13, circled in blue).
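  • One plausible way to recover such a rotation angle is sketched below, assuming the matcher is simply re-run over a discrete set of candidate template rotations and the angle with the best peak score is kept. This is an illustration, not the patent's stated measurement procedure; it presumes a pleogram-style scoring function such as the one sketched earlier.

    import numpy as np
    from scipy.ndimage import rotate

    def estimate_rotation(image, template, score_field,
                          angles=np.arange(-45.0, 46.0, 1.0)):
        """Return the candidate angle whose rotated template scores best.

        score_field(image, template) must return a 2-D array of matching
        scores, e.g. the pleogram() sketch shown earlier. Rotating with
        reshape=False clips the template corners, which is acceptable here."""
        best_angle, best_score = 0.0, -np.inf
        for a in angles:
            t = rotate(template.astype(float), a, reshape=False, order=1)
            s = float(score_field(image, t).max())
            if s > best_score:
                best_angle, best_score = a, s
        return best_angle, best_score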
  • If the observed image fragment (Image B in FIG. 14) does not include the signature of interest, the same reference image fragment (Template Position, X=190 and Y=120, indicated by the cyan rectangle) does not positively match any area of Image B:
      • See FIG. 14. Matching Results for Reference and Background Images
  • For this example of a step in the scanning process, the matching attempt settles on some random area with wrong Shifts, a low Score, and a wrongly determined Rotation. This computational experiment was conducted under identical conditions: the same reference image, reference template position, threshold, and noise realization. It shows that our proprietary algorithms and software accurately identify the signature if it really exists in the analyzed image, and reject identification if the image contains only background.
  • Linear and V-shaped structures were utilized for further identification experiments. Images B in FIG. 15 and FIG. 16 have been strongly distorted, including nonlinear geometrical stretch and powerful clutter.
      • See FIG. 15. Linear Trace Matching
  • Despite these distortions, the correct matching has been accomplished by the program. The displayed experimental results demonstrate our software's capability under conditions of strong distortion and noise, making it reasonable to utilize this solution approach for the detection and identification of small and low-contrast targets.
  • Thus, while there have been shown and described and pointed out fundamental novel features of the invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.

Claims (10)

1. An image matching method and system used to determine or verify the identity of an unknown pattern by comparing one or more unknown images to known images stored in a database, the system comprising at least one of the following components:
(a) Pre-Matching Image Filtration
(b) Pleographic Spatial Reference Data Matching
(c) Phase Space Critical Path Hypothesis Testing
(d) Speed-Up Coarsened Reference Field Processing
(e) Pre-Calculated Data Stream Treatment
(f) Inter-Resolution Measurement Estimation
(g) Dissimilar Data Joint Utilization
(h) Image input subsystem, operable to create a digital image of the unknown image and to store said image in a memory of said input subsystem;
(i) Template creation subsystem, coupled to said image input subsystem, operable to receive said image of said unknown image and to enroll it by creating a digital template based on locations of image features in said bit-map;
(j) Classification subsystem, coupled to said image input subsystem and said template creation subsystem, operable to receive said image and said template and to assign said unknown image to a primary image category based on distinct patterns present in said image and digital information extracted from said template;
(k) Image storage subsystem, coupled to said classification subsystem, said template creation subsystem, and said image input subsystem, operable to store said image and said template in memory locations that correspond to the primary image category and also to store known bit-maps and templates of known bit-maps in memory locations that correspond to the primary image category; and
(l) Search subsystem, coupled to the image storage subsystem, operable to compare said template of said unknown image to templates of said known images that are of the same primary image category as said unknown image formation, and to produce a result indicating a probability that said unknown image is identical to one of said known image formations.
2. A method for verifying the identity of a subject, implemented in the system of claim 1, comprising the steps of:
(a) Acquiring an enrollment image of at least a substantial portion of the subject's formation;
(b) Identifying areas of said enrollment image having at least a predetermined uniqueness level and having dissimilar features;
(c) Generating and storing a plurality of templates based on said identified areas, said templates comprising redundant data sets representative of different unique areas of said enrollment image;
(d) At a later time, acquiring a verification image of at least a substantial portion of the subject's formation; and
(e) Determining whether said verification image corresponds to said enrollment image utilizing said plural templates.
3. In an automated image classification and identification system for determining or verifying the identity of an unknown image formation by comparing one or more unknown images to known images stored in a database, a method comprising the steps of:
(a) Storing the known images in the database, the database having a plurality of memory locations, each one of said memory locations corresponding to a primary category of an image classification system;
(b) Receiving one of the unknown images;
(c) Automatically determining to which primary category of the image classification system the unknown image corresponds, and determining whether a match exists between said unknown image and one of the known images of the same primary category.
4. An image matching system of claim 1 implemented in an identification device supplied to, and preferably secured on, a user and equipped with the remote car keyless option.
5. An image matching system of claim 1 implemented in an identification device supplied to, and preferably secured on, a user and equipped with the remote home keyless option.
6. An image matching system of claim 1 implemented in an identification device supplied to, and preferably secured on, a user and equipped with the remote gun control option.
7. An image matching system of claim 1 implemented in an identification device supplied to, and preferably secured on, a user and equipped with all required programmable combinations of the options of claims 4 to 7.
8. An image matching system of claim 1 implemented in an identification device supplied to, and preferably secured on, a user and equipped with the POS account access option.
9. A method and system for image identification and for storing identified images in a machine searchable image database, comprising:
(a) means for scanning said image in a predetermined scan area;
(b) means for determining the image match score;
(c) means for assigning said image with a particular image type;
(d) means for storing said type for each known image in a machine searchable database.
10. An image matching system of claim 3 combined with an image structure analysis and enhancement software tool kit, jointly aimed at pathology study and identification against a pre-selected calibrated database.
US12/558,520 2008-09-12 2009-09-14 System and method for pleographic recognition, matching, and identification of images and objects Abandoned US20100067806A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US12/558,520 US20100067806A1 (en) 2008-09-12 2009-09-14 System and method for pleographic recognition, matching, and identification of images and objects
PCT/US2010/048758 WO2011032142A2 (en) 2009-09-14 2010-09-14 System and method for pleographic recognition, matching, and identification of images and objects
US13/040,335 US20110188707A1 (en) 2008-09-12 2011-03-04 System and Method for Pleographic Subject Identification, Targeting, and Homing Utilizing Electromagnetic Imaging in at Least One Selected Band
US14/822,979 US20150347832A1 (en) 2008-09-12 2015-08-11 System and method for pleographic subject identification, targeting, and homing utilizing electromagnetic imaging in at least one selected band
US14/822,974 US9542618B2 (en) 2008-09-12 2015-08-11 System and method for pleographic recognition, matching, and identification of images and objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US19183608P 2008-09-12 2008-09-12
US12/558,520 US20100067806A1 (en) 2008-09-12 2009-09-14 System and method for pleographic recognition, matching, and identification of images and objects

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/040,335 Continuation-In-Part US20110188707A1 (en) 2008-09-12 2011-03-04 System and Method for Pleographic Subject Identification, Targeting, and Homing Utilizing Electromagnetic Imaging in at Least One Selected Band
US14/822,974 Continuation US9542618B2 (en) 2008-09-12 2015-08-11 System and method for pleographic recognition, matching, and identification of images and objects

Publications (1)

Publication Number Publication Date
US20100067806A1 true US20100067806A1 (en) 2010-03-18

Family

ID=42007274

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/558,520 Abandoned US20100067806A1 (en) 2008-09-12 2009-09-14 System and method for pleographic recognition, matching, and identification of images and objects
US14/822,974 Expired - Fee Related US9542618B2 (en) 2008-09-12 2015-08-11 System and method for pleographic recognition, matching, and identification of images and objects

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/822,974 Expired - Fee Related US9542618B2 (en) 2008-09-12 2015-08-11 System and method for pleographic recognition, matching, and identification of images and objects

Country Status (2)

Country Link
US (2) US20100067806A1 (en)
WO (1) WO2011032142A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358258B (en) * 2017-07-07 2020-07-07 西安电子科技大学 SAR image target classification based on NSCT double CNN channels and selective attention mechanism
CN111222571B (en) * 2020-01-06 2021-12-14 腾讯科技(深圳)有限公司 Image special effect processing method and device, electronic equipment and storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173068B1 (en) * 1996-07-29 2001-01-09 Mikos, Ltd. Method and apparatus for recognizing and classifying individuals based on minutiae
US5917928A (en) * 1997-07-14 1999-06-29 Bes Systems, Inc. System and method for automatically verifying identity of a subject
US7016539B1 (en) * 1998-07-13 2006-03-21 Cognex Corporation Method for fast, robust, multi-dimensional pattern recognition
JP2003346153A (en) * 2002-05-29 2003-12-05 Tsubakimoto Chain Co Pattern matching method, pattern matching device, computer program, and recording medium
US20050054910A1 (en) * 2003-07-14 2005-03-10 Sunnybrook And Women's College Health Sciences Centre Optical image-based position tracking for magnetic resonance imaging applications
WO2005024562A2 (en) * 2003-08-11 2005-03-17 Eloret Corporation System and method for pattern recognition in sequential data
US7697026B2 (en) * 2004-03-16 2010-04-13 3Vr Security, Inc. Pipeline architecture for analyzing multiple video streams
WO2006034366A1 (en) * 2004-09-21 2006-03-30 Siemens Medical Solutions Usa, Inc. Hierarchical medical image view determination
US7844081B2 (en) * 2006-05-15 2010-11-30 Battelle Memorial Institute Imaging systems and methods for obtaining and using biometric information
US8090166B2 (en) * 2006-09-21 2012-01-03 Surgix Ltd. Medical image analysis
US20090082637A1 (en) * 2007-09-21 2009-03-26 Michael Galperin Multi-modality fusion classifier with integrated non-imaging factors
US9131128B2 (en) * 2011-09-28 2015-09-08 The United States Of America As Represented By The Secretary Of The Army System and processor implemented method for improved image quality and generating an image of a target illuminated by quantum particles
US8385688B2 (en) * 2008-08-27 2013-02-26 International Business Machines Corporation System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans
US8996570B2 (en) * 2010-09-16 2015-03-31 Omnyx, LLC Histology workflow management system
US9599461B2 (en) * 2010-11-16 2017-03-21 Ectoscan Systems, Llc Surface data acquisition, storage, and assessment system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5579471A (en) * 1992-11-09 1996-11-26 International Business Machines Corporation Image query system and method
US5751286A (en) * 1992-11-09 1998-05-12 International Business Machines Corporation Image query system and method
US5636292A (en) * 1995-05-08 1997-06-03 Digimarc Corporation Steganography methods employing embedded calibration data
US5636292C1 (en) * 1995-05-08 2002-06-18 Digimarc Corp Steganography methods employing embedded calibration data
US5913205A (en) * 1996-03-29 1999-06-15 Virage, Inc. Query optimization for visual information retrieval system
US6721463B2 (en) * 1996-12-27 2004-04-13 Fujitsu Limited Apparatus and method for extracting management information from image
US6628808B1 (en) * 1999-07-28 2003-09-30 Datacard Corporation Apparatus and method for verifying a scanned image
US7054509B2 (en) * 2000-10-21 2006-05-30 Cardiff Software, Inc. Determining form identification through the spatial relationship of input data
US7130466B2 (en) * 2000-12-21 2006-10-31 Cobion Ag System and method for compiling images from a database and comparing the compiled images with known images
US7324711B2 (en) * 2004-02-26 2008-01-29 Xerox Corporation Method for automated image indexing and retrieval

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923711A (en) * 2010-07-16 2010-12-22 西安电子科技大学 SAR (Synthetic Aperture Radar) image change detection method based on neighborhood similarity and mask enhancement
CN101923711B (en) * 2010-07-16 2012-06-20 西安电子科技大学 SAR (Synthetic Aperture Radar) image change detection method based on neighborhood similarity and mask enhancement
CN102567744A (en) * 2011-12-29 2012-07-11 中国科学院自动化研究所 Method for determining quality of iris image based on machine learning
US20150098607A1 (en) * 2013-10-07 2015-04-09 Hong Kong Applied Science and Technology Research Institute Company Limited Deformable Surface Tracking in Augmented Reality Applications
US9147113B2 (en) * 2013-10-07 2015-09-29 Hong Kong Applied Science and Technology Research Institute Company Limited Deformable surface tracking in augmented reality applications
US20180372862A1 (en) * 2017-06-22 2018-12-27 The Boeing Company Synthetic aperture radar mapping and registration systems and methods
CN109116350A (en) * 2017-06-22 2019-01-01 波音公司 Synthetic aperture radar mapping and registration arrangement and method
US11131767B2 (en) * 2017-06-22 2021-09-28 The Boeing Company Synthetic aperture radar mapping and registration systems and methods
CN108256413A (en) * 2017-11-27 2018-07-06 科大讯飞股份有限公司 It can traffic areas detection method and device, storage medium, electronic equipment
CN108491753A (en) * 2018-01-26 2018-09-04 西安电子科技大学 The Classification of Polarimetric SAR Image method of the non-stationary modeling of Polarization scattering
CN110188707A (en) * 2019-06-03 2019-08-30 西安工业大学 A kind of SAR target identification system and method based on transfer learning
CN112507315A (en) * 2021-02-05 2021-03-16 红石阳光(北京)科技股份有限公司 Personnel passing detection system based on intelligent brain

Also Published As

Publication number Publication date
WO2011032142A2 (en) 2011-03-17
US20150347868A1 (en) 2015-12-03
US9542618B2 (en) 2017-01-10
WO2011032142A3 (en) 2011-07-21

Similar Documents

Publication Publication Date Title
US9542618B2 (en) System and method for pleographic recognition, matching, and identification of images and objects
Ye et al. Robust registration of multimodal remote sensing images based on structural similarity
US9785819B1 (en) Systems and methods for biometric image alignment
Hassanein et al. A survey on Hough transform, theory, techniques and applications
US20200285959A1 (en) Training method for generative adversarial network, image processing method, device and storage medium
EP2294531B1 (en) Scale robust feature-based identifiers for image identification
US8280196B2 (en) Image retrieval apparatus, control method for the same, and storage medium
Pflug et al. A comparative study on texture and surface descriptors for ear biometrics
US20100014755A1 (en) System and method for grid-based image segmentation and matching
US20080031524A1 (en) Increasing Accuracy of Discrete Curve Transform Estimates for Curve Matching in Higher Dimensions
Jeong et al. Semi-local structure patterns for robust face detection
US8452078B2 (en) System and method for object recognition and classification using a three-dimensional system with adaptive feature detectors
CN101147159A (en) Fast method of object detection by statistical template matching
EP2177898A1 (en) Method for selecting an optimized evaluation feature subset for an inspection of free-form surfaces and method for inspecting a free-form surface
Almqvist et al. Learning to detect misaligned point clouds
US20230147685A1 (en) Generalized anomaly detection
Kowkabi et al. Hybrid preprocessing algorithm for endmember extraction using clustering, over-segmentation, and local entropy criterion
Tralic et al. Copy-move forgery detection using cellular automata
Choi et al. Similarity analysis of actual fake fingerprints and generated fake fingerprints by dcgan
US6694059B1 (en) Robustness enhancement and evaluation of image information extraction
CN111354038B (en) Anchor detection method and device, electronic equipment and storage medium
US20230069960A1 (en) Generalized anomaly detection
JP2001243465A (en) Method and device for matching fingerprint image
Boshra et al. Predicting an upper bound on SAR ATR performance
Cheng The distinctiveness of a curve in a parameterized neighborhood: extraction and applications

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION