US20150310459A1 - System and method for video-based detection of drive-arounds in a retail setting - Google Patents
System and method for video-based detection of drive-arounds in a retail setting Download PDFInfo
- Publication number
- US20150310459A1 (U.S. application Ser. No. 14/280,863)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
Definitions
- a method for detection of drive-arounds in a retail setting comprises acquiring images of a retail establishment, analyzing the images to detect entry of a customer onto the premises of the retail establishment, tracking a detected customer's location as the customer traverses the premises of the retail establishment, analyzing the images to detect exit of the detected customer from the premises of the retail establishment, and generating a drive-around notification if the customer does not enter a prescribed area or remain on the premises of the retail location for at least a prescribed minimum period of time.
- the customer can include a customer within a vehicle, and the analyzing and tracking can include analyzing and tracking a vehicle on the premises of the retail establishment.
- the images of the retail establishment can include images of a parking lot of the retail establishment including one or more parking spaces for vehicles, and the prescribed area can include the one or more parking spaces.
- the method can further comprise monitoring the one or more parking spaces to determine whether the detected customer enters a parking space, and discarding the customer as a drive-around candidate if the detected customer enters a parking space.
- the one or more parking spaces can be previously identified regions within a camera's field-of-view. Alternatively, automated or semi-automated methods for identification of parking space locations can be implemented, as taught in U.S.
- the images of the retail establishment can include images of a drive-thru queue of the retail establishment, and the prescribed area can include the drive-thru queue.
- the method can further comprise monitoring the drive-thru queue to determine whether the detected customer enters the drive-thru queue, and discarding the customer as a drive-around candidate if the detected customer enters the drive-thru queue.
- the drive-thru queue can include a static queue region previously identified within a camera's field of view.
- the drive-thru queue can include an automatically detected dynamic queue region.
- the method can further comprise calculating the total time the detected customer is on the premises, comparing the calculated time to a threshold time, and generating the notification only if the calculated time does not exceed the threshold time.
- the steps of the method can be performed in real-time, and/or using one or more computer vision techniques.
- a system for detection of customer drive-arounds in a retail setting comprises a device for monitoring customers including a memory in communication with a processor configured to acquire a series of images of a retail establishment, analyze the images to detect entry of a customer onto the premises of the retail establishment, track a detected customer's location as the customer traverses the premises of the retail establishment, analyze the images to detect exit of the detected customer from the premises of the retail establishment, and generate a drive-around notification if the customer does not enter a prescribed area or remain on the premises of the retail location for at least a prescribed minimum period of time.
- the customer can include a vehicle, and the analyzing and tracking can include analyzing and tracking the vehicle on the premises of the retail establishment.
- the images of the retail establishment can include images of a parking lot of the retail establishment including one or more parking spaces for vehicles, and the prescribed area can include the one or more parking spaces.
- the processor can be further configured to monitor the one or more parking spaces to determine whether the detected customer enters a parking space, and discard the customer as a drive-around candidate if the detected customer enters a parking space.
- the one or more parking spaces can be previously identified regions within a camera's field-of-view.
- the images of the retail establishment can include images of a drive-thru queue of the retail establishment, and the prescribed area can include the drive-thru queue.
- the processor can be further configured to monitor the drive-thru queue to determine whether the detected customer enters the drive-thru queue, and discard the customer as a drive-around candidate if the detected customer enters the drive-thru queue.
- the drive-thru queue can include a static queue region previously identified within a camera's field of view.
- the drive-thru queue can include an automatically detected dynamic queue region.
- the processor can be further configured to calculate the total time the detected customer is on the premises, compare the calculated time to a threshold time, and generate the notification only if the calculated time does not exceed the threshold time.
- the processor can be configured to generate the notification in real-time as images are acquired and/or perform at least one computer vision technique.
- FIG. 1 is a block diagram of a drive-around determination system according to an exemplary embodiment of the present disclosure.
- FIG. 2 shows an example scenario of a multi-lane queue being monitored by two cameras of a multi-camera network, where vehicles are supposed to form a single file before the ‘split point’ and then split into two separate queues, one for each order point.
- FIG. 3 shows an example scenario of an entrance point into the retail premises being monitored.
- FIG. 4 shows an example set of camera views of the retail premises being monitored with a network of cameras.
- FIGS. 5A and 5B show two video frames where parking occupancy is marked.
- FIG. 5A shows a vehicle being tracked, while FIG. 5B shows the vehicle occupying a parking spot and the parking indicator for that parking spot having been updated.
- FIG. 6 illustrates one example output of a queue monitoring module in accordance with the present disclosure.
- FIG. 7 illustrates a flowchart of an exemplary method in accordance with the present disclosure.
- the present disclosure relates to a system and method for video-based detection of drive-arounds in vehicular queues performed on an ensemble of images or a video sequence.
- drive-around refers to instances when a customer drives into a retail setting with the intention of making a purchase and, upon examination of the existing customer volume (e.g., from parking occupancy or drive-thru queue length), decides to leave the premises.
- incoming frame and/or “current frame” refer to a video frame that is currently being processed for foreground/motion detection, tracking, object recognition, and other video analytics and computer vision processes. This processing can be focused on a region of interest (“ROI”) within the frame.
- the present disclosure describes a method and system 2 for automated video-based detection of vehicular drive-arounds in retail settings.
- the system 2 and method can be integrated with drive-thru and in-store customer tracking and timing systems, or can be provided as a stand-alone system.
- the system 2 includes a CPU 4 that is adapted for controlling an analysis of video data received by the system 2 .
- An I/O interface 6 is provided, such as a network interface for communicating with external devices.
- the I/O interface 6 may include, for example, a modem, a router, a cable, and/or an Ethernet port, etc.
- the system 2 includes a memory containing a number of modules containing computer executable instructions.
- the modules include:
- a video acquisition module 12 which acquires video from the retail setting of interest;
- a customer entry detection module 14 which determines when a customer enters the retail store premises;
- a customer tracking module 16 which determines the location of detected customers as they traverse the retail store premises;
- a parking lot monitoring module 18 which determines whether the customer occupies a parking spot;
- a drive-thru queue monitoring module 20 which determines whether the customer enters the queue based on the queue configuration and tracking information output by the customer tracking module 16;
- a customer timing module 22 which determines the length of the stay of the customer in the premises;
- a customer exit detection module 24 which determines when a customer exits the retail store premises; and
- a drive-around determination module 26 which detects the occurrence of a customer drive-around and issues an alert.
- the memory 8 may represent any type of tangible computer readable medium such as random access memory (RAM), read only memory (ROM), magnetic disk or tape, optical disk, flash memory, or holographic memory. In one embodiment, the memory 8 comprises a combination of random access memory and read only memory.
- the CPU 4 can be variously embodied, such as by a single-core processor, a dual-core processor (or more generally by a multiple-core processor), a digital processor and cooperating math coprocessor, a digital controller, or the like.
- the CPU 4, in addition to controlling the operation of the system 2, executes instructions stored in memory 8 for performing the parts of the system and method outlined in FIG. 1. In some embodiments, the CPU 4 and memory 8 may be combined in a single chip. It will be appreciated that aspects of the present disclosure can be performed, executed, or otherwise carried out using a wide range of hardware.
- Modules 14 , 16 , 18 , 20 , 22 , and 24 generally process the video acquired by module 12 to compute their respective outputs.
- Module 26 uses the outputs from those previous modules to make the determination regarding a drive-around. Each module is discussed in detail below.
- the video acquisition module 12 includes at least one, but possibly multiple video cameras that acquire video of the region of interest, including the retail store premises and its surroundings.
- the cameras can be any of a variety of surveillance cameras suitable for viewing the region of interest and operating at frame rates sufficient to capture the queue events of interest, such as common RGB cameras that may also have a “night mode” and operate at 30 frames/sec, for example.
- the cameras can include near infrared (NIR) capabilities at the low end of the near-infrared spectrum (700 nm-1000 nm). No specific spatial or temporal resolution requirements are imposed.
- the image source in one embodiment, can include a surveillance camera with a video graphics array size that is about 1280 pixels wide and 720 pixels tall with a frame rate of thirty (30) or more frames per second.
- the video acquisition module 12 can be a device adapted to relay and/or transmit the video captured by the camera to the customer entry detection tracking module 14 .
- the video acquisition module 12 can include a camera sensitive to visible light or having specific spectral sensitivities, a network of such cameras, a line-scan camera, a computer, a hard drive, or other image sensing and storage devices.
- the video acquisition module 12 may acquire input from any suitable source, such as a workstation, a database, a memory storage device, such as a disk, or the like.
- the video acquisition module 12 is in communication with the CPU 4 , and memory 8 .
- FIG. 2 shows two sample frames of videos captured with two cameras which include a vehicular multi-lane queue Q multi at a fast-food restaurant, and are used in the following exemplary disclosure.
- FIG. 4 shows images from 8 cameras used to cover most of the real estate associated with the exemplary retail premises. Image data from these cameras can be fed to the image acquisition module, and processed in accordance with the present disclosure.
- the customer entry detection module 14 determines whether a customer (e.g., a vehicle, a pedestrian, etc.) has entered the retail business premises.
- This module generally includes a video- or image-based object detector placed adjacent to the entrance points to the retail business premises.
- FIG. 3 shows a sample frame of a camera pointed towards one of the entrances E of the exemplary restaurant being monitored, along with the manually selected image area (dashed line box on left identified generally by reference numeral 40 ) where entrance events are detected.
- Customer entry detection module 14 operates by detecting an initial instance of a vehicle V entering the monitored premises.
- a background estimation method that allows for foreground detection to be performed is used.
- a pixel-wise statistical model of historical pixel behavior is constructed for the boxed entrance detection area in FIG. 3 , for instance in the form of a pixel-wise Gaussian Mixture Model (GMM).
- Other statistical models can be used, including running averages and medians, non-parametric models, and parametric models having different distributions.
- the GMM describes statistically the historical behavior of the pixels in the highlighted area; for each new incoming frame, the pixel values in the area are compared to their respective GMM and a determination is made as to whether their values correspond to the observed history.
- a foreground detection signal is triggered.
- a vehicle detection signal is triggered.
- Morphological operations usually accompany pixel-wise decisions in order to filter out noise and to fill holes in detections.
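The pixel-wise background model described above can be sketched as follows. This is a deliberately simplified stand-in: it keeps a single running Gaussian (mean and variance) per pixel rather than the full Gaussian Mixture Model named in the text, and it omits the morphological post-filtering; the class name, parameter values, and frame sizes are illustrative, not the patent's implementation.

```python
# Simplified pixel-wise background model: one running Gaussian per pixel
# (a reduced stand-in for the per-pixel Gaussian Mixture Model).
class PixelBackgroundModel:
    def __init__(self, width, height, alpha=0.05, k=2.5):
        self.alpha = alpha        # learning rate for the running statistics
        self.k = k                # foreground threshold in standard deviations
        self.mean = [[0.0] * width for _ in range(height)]
        self.var = [[15.0 ** 2] * width for _ in range(height)]  # initial variance

    def apply(self, frame):
        """Update the model with `frame` and return a binary foreground mask."""
        mask = [[0] * len(row) for row in frame]
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                m, v = self.mean[y][x], self.var[y][x]
                d = value - m
                if d * d > (self.k ** 2) * v:
                    mask[y][x] = 1              # pixel deviates from its history
                # running update of mean and variance
                self.mean[y][x] = m + self.alpha * d
                self.var[y][x] = (1 - self.alpha) * v + self.alpha * d * d
        return mask

# Train on a static background, then present a frame with a bright object.
model = PixelBackgroundModel(8, 8)
background = [[50] * 8 for _ in range(8)]
for _ in range(100):
    model.apply(background)
frame = [row[:] for row in background]
for y in range(2, 5):
    for x in range(2, 5):
        frame[y][x] = 200                      # "vehicle" pixels
mask = model.apply(frame)
foreground_pixels = sum(sum(row) for row in mask)
```

Only the 3x3 block of changed pixels triggers the foreground decision, illustrating how a sustained deviation from the historical pixel statistics yields a detection signal.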
- motion detection algorithms that detect significant motion in the detection area can alternatively be used. Motion detection is usually performed via temporal frame differencing and morphological filtering. In contrast to foreground detection, which also detects stationary foreground objects, motion detection only detects objects moving at a speed determined by the frame rate of the video and the video acquisition geometry. In other embodiments, computer vision techniques for object recognition and localization can be used on still frames.
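A minimal sketch of the temporal frame-differencing approach, without the usual morphological filtering; the threshold and toy frames are illustrative:

```python
# Temporal frame differencing: flag pixels whose intensity changes by more
# than a threshold between consecutive frames (detects only moving objects).
def motion_mask(prev_frame, curr_frame, threshold=30):
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

# A 1x6 "scanline": an object of intensity 200 moves one pixel to the right.
prev = [[200, 200, 50, 50, 50, 50]]
curr = [[50, 200, 200, 50, 50, 50]]
mask = motion_mask(prev, curr)
```

Motion is flagged only at the trailing and leading edges of the object; the overlapping pixel is unchanged, which illustrates why stationary foreground objects vanish under frame differencing.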
- These techniques typically entail a training stage where the appearance of multiple labeled sample objects in a given feature space (e.g., Harris Corners, SIFT, HOG, LBP, etc.) is fed to a classifier (e.g., support vector machine—SVM, neural network, decision tree, expectation-maximization—EM, k nearest neighbors—k-NN, other clustering algorithms, etc.) that is trained on the available feature representations of the labeled samples.
- the classifier can be trained on features of vehicles (positive samples) as well as features of asphalt, grass, pedestrians, etc. (negative features). Upon operation of the trained classifier, a classification score on an image test area of interest is issued indicating a matching score of the test area relative to the positive samples. A high matching score would indicate detection of a vehicle.
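The train-then-score pipeline above can be sketched with a toy stand-in: two hand-picked features (mean intensity and a crude gradient measure) substitute for the HOG/SIFT-type feature spaces, and a nearest-centroid rule substitutes for the SVM-type classifiers named in the text. All sample patches and names are illustrative.

```python
import math

# Toy train/score pipeline: reduce each image region to a feature vector,
# fit per-class centroids on labeled samples, then score a test region.
def features(region):
    flat = [v for row in region for v in row]
    mean = sum(flat) / len(flat)
    # crude "texture" feature: mean absolute horizontal gradient
    grad = sum(abs(row[i + 1] - row[i]) for row in region for i in range(len(row) - 1))
    grad /= sum(len(row) - 1 for row in region)
    return (mean, grad)

def train_centroids(samples):
    """samples: list of (region, label) with label 1=vehicle, 0=background."""
    cents = {}
    for label in (0, 1):
        feats = [features(r) for r, l in samples if l == label]
        cents[label] = tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
    return cents

def vehicle_score(region, cents):
    f = features(region)
    d = {l: math.dist(f, c) for l, c in cents.items()}
    return d[0] / (d[0] + d[1])   # closer to the vehicle centroid -> score near 1

# Vehicles: bright, textured patches; background (asphalt): dark, flat patches.
vehicle = [[180, 60, 180], [60, 180, 60]]
asphalt = [[80, 82, 81], [79, 80, 80]]
cents = train_centroids([(vehicle, 1), (asphalt, 0)])
score = vehicle_score([[170, 70, 175], [65, 175, 70]], cents)  # vehicle-like patch
```

A high matching score on the test area indicates a vehicle detection, mirroring the classification-score decision described in the text.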
- video processing is used to track customers within the area of interest.
- the customer tracking module 16 Upon the detection of entry of a customer by the customer entry detection module 14 , the customer tracking module 16 initiates a tracker for the customer and tracks its location in motion across the field of view of the camera(s). Tracking algorithms including point- and global feature-based, silhouette/contour, and particle filter trackers can be used. In one embodiment, a cloud-point-based tracker is used that tracks sets of interest points per vehicle.
- the customer tracking module 16 outputs spatio-temporal information describing the location of the customers across the range of frames in which they are present within the field of view of the camera(s). Specifically, for each object being tracked, the local tracking module outputs its location in pixel coordinates and the corresponding frame number across the range of frames for which the object remains within the field of view of the camera(s).
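The tracker output format described above — pixel coordinates plus frame number per tracked object — can be sketched with a minimal nearest-centroid tracker. This greedy association is a simplified stand-in for the point-feature and particle-filter trackers named in the text, and the distance threshold is illustrative.

```python
import math

# Minimal nearest-centroid tracker: associates per-frame detections with
# existing tracks by distance and records (frame, x, y) per track ID.
class CentroidTracker:
    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist
        self.tracks = {}          # track id -> list of (frame, x, y)
        self.next_id = 0

    def update(self, frame_no, detections):
        # Note: this greedy pass can mis-associate crossing objects; a real
        # tracker would use appearance features or global assignment.
        for x, y in detections:
            best_id, best_d = None, self.max_dist
            for tid, hist in self.tracks.items():
                _, lx, ly = hist[-1]
                d = math.dist((x, y), (lx, ly))
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:           # no nearby track: start a new one
                best_id = self.next_id
                self.next_id += 1
                self.tracks[best_id] = []
            self.tracks[best_id].append((frame_no, x, y))

tracker = CentroidTracker()
tracker.update(0, [(10, 10)])
tracker.update(1, [(14, 11), (200, 40)])   # first car moves; a new car enters
tracker.update(2, [(18, 12), (205, 41)])
```

Each track's history is exactly the spatio-temporal record (frame number and pixel location) that downstream modules such as parking and queue monitoring consume.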
- the acquired video frame(s) is a projection of a three-dimensional space onto a two-dimensional plane
- ambiguities can arise when the subjects are represented in the pixel domain (i.e., pixel coordinates). These ambiguities are introduced by perspective distortion, which is intrinsic to the video data.
- apparent discontinuities in motion patterns can exist when a subject moves between the different coordinate systems. These discontinuities make it more difficult to interpret the data.
- these ambiguities can be resolved by performing a geometric transformation by converting the pixel coordinates to real-world coordinates.
- the coordinate systems of each individual camera are mapped to a single, common coordinate system.
- the spatial coordinates corresponding to the tracking data from a first camera can be mapped to the coordinate system of a second camera.
- the spatial coordinates corresponding to the tracking data from multiple cameras can be mapped to an arbitrary common coordinate system. Any existing camera calibration process can be used to perform the estimated geometric transformation.
- One approach is described in the disclosure of co-pending and commonly assigned U.S. application Ser. No. 13/868,267, entitled “Traffic Camera Calibration Update Utilizing Scene Analysis,” filed Apr.
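The pixel-to-real-world geometric transformation discussed above can be sketched as a planar homography estimated from four known ground-plane correspondences. This is a stand-in for a full camera calibration procedure; the marked points and coordinates are hypothetical.

```python
# Map pixel coordinates to ground-plane coordinates via a homography
# estimated from four point correspondences (direct linear transform,
# with the h33 element fixed to 1).
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(pixel_pts, world_pts):
    """Estimate H from exactly four correspondences."""
    A, b = [], []
    for (u, v), (X, Y) in zip(pixel_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -u * X, -v * X]); b.append(X)
        A.append([0, 0, 0, u, v, 1, -u * Y, -v * Y]); b.append(Y)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def to_world(H, u, v):
    d = H[2][0] * u + H[2][1] * v + H[2][2]
    return ((H[0][0] * u + H[0][1] * v + H[0][2]) / d,
            (H[1][0] * u + H[1][1] * v + H[1][2]) / d)

# Four marked points on the lot (pixels) and their ground-plane positions (m).
pixels = [(100, 400), (500, 400), (420, 250), (180, 250)]
world = [(0.0, 0.0), (10.0, 0.0), (10.0, 15.0), (0.0, 15.0)]
H = fit_homography(pixels, world)
x, y = to_world(H, 300, 325)   # a tracked vehicle's pixel location
```

Applying the same transformation to every camera's tracks maps them into a single common coordinate system, removing the perspective-distortion ambiguities noted above.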
- Re-identification refers to the task of establishing correspondences of images of a given object across multiple camera views.
- color-based re-identification is used because it has been found to be most robust to the drastic changes in perspective and distortion a vehicle experiences as it traverses a typical camera network, as illustrated in FIG. 4 , which shows images 42 from eight (8) cameras used to cover most of the real estate associated with the exemplary retail premises.
- Other features that remain unchanged across different camera views can be used to track subjects across different camera views. These features can include biometric features, clothing patterns or colors, license plates, vehicle make and model, etc. Further details of tracking across multiple camera views are set forth in commonly-assigned U.S.
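Color-based re-identification as described above can be sketched with a coarse histogram comparison: compute a normalized histogram per detection in each view, and declare the candidate in the second view with the highest histogram-intersection score the same vehicle. The pixel lists and track names are illustrative, and a real system would add illumination normalization across cameras.

```python
# Color-based re-identification via normalized histogram intersection.
def histogram(pixels, bins=8):
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in h]

def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

def reidentify(query_pixels, candidates):
    """candidates: dict of track_id -> pixel list from the second camera."""
    hq = histogram(query_pixels)
    scores = {tid: intersection(hq, histogram(px)) for tid, px in candidates.items()}
    return max(scores, key=scores.get), scores

# A dark vehicle seen in camera 1, and two candidate tracks in camera 2.
dark_car_cam1 = [30, 40, 35, 45, 38, 32, 41, 36]
candidates = {
    "track_a": [33, 42, 37, 44, 36, 30, 43, 39],          # dark vehicle
    "track_b": [210, 220, 205, 215, 225, 208, 212, 218],  # light vehicle
}
best, scores = reidentify(dark_car_cam1, candidates)
```

The intersection score is highest for the candidate with a similar color distribution, which is why histogram features are comparatively robust to the perspective changes between views.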
- the parking lot monitoring module 18 determines whether the detected/tracked customer enters a parking spot.
- parking monitoring is performed by manually labeling areas (see hatched areas without cars present, labeled P) corresponding to parking spots and detecting, via the output of the tracking module, whether a vehicle enters one of these areas (see cross-hatched areas with cars present, labeled V).
- FIGS. 5A and 5B show two video frames 44 and 46 where parking occupancy is indicated.
- FIG. 5A shows a vehicle being tracked, while FIG. 5B shows the vehicle occupying a parking spot and the parking indicator for that parking spot having been updated.
- Approaches for determining parking occupancy from video and images are provided in co-pending and commonly assigned U.S. Ser. No. 13/922,336, entitled “A Method for Detecting Large Size and Passenger Vehicles from Fixed Cameras,” filed Jun. 20, 2013, by Orhan Bulan, et al.; U.S. patent application Ser. No. 13/835,386, entitled “Two-Dimensional and Three-Dimensional Sliding Window-Based Methods and Systems for Detecting Vehicles,” filed Mar.
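The parking monitoring step above — hand-labeled stall regions plus a containment test on the tracked vehicle location — can be sketched with a ray-casting point-in-polygon check. The stall polygons and coordinates are hypothetical image coordinates.

```python
# Parking occupancy: a tracked vehicle centroid falling inside a labeled
# stall polygon marks the stall occupied and discards the vehicle as a
# drive-around candidate. Containment uses the ray-casting test.
def point_in_polygon(x, y, poly):
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Two hand-labeled stalls (hypothetical image coordinates).
stalls = {
    "P1": [(10, 10), (60, 10), (60, 100), (10, 100)],
    "P2": [(70, 10), (120, 10), (120, 100), (70, 100)],
}
occupied = {name: False for name in stalls}
drive_around_candidate = True

vehicle_centroid = (85, 55)              # from the tracking module
for name, poly in stalls.items():
    if point_in_polygon(*vehicle_centroid, poly):
        occupied[name] = True
        drive_around_candidate = False   # vehicle parked: discard candidate
```

This mirrors the update shown in FIGS. 5A and 5B, where a vehicle entering a labeled stall flips that stall's occupancy indicator.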
- Another criterion to discard a vehicle as a candidate for a drive-around is to determine that the vehicle enters the drive-thru queue.
- the drive-thru monitoring module 20 determines whether the customer enters the queue.
- One approach for determining a queue configuration is provided in co-pending and commonly assigned U.S. patent application Ser. No. 14/261,013, entitled “System and Method for Video-Based Determination of Queue Configuration,” filed Apr. 24, 2014, by Edgar A. Bernal, et al., the content of which is totally incorporated herein by reference.
- vehicles that are located in that queue configuration can be discarded.
- FIG. 6 illustrates one example output of a queue monitoring module 20 in accordance with the present disclosure.
- the cross-hatched areas comprise the drive-thru queue.
- the drive-thru queue monitoring module 20 extracts queue-related parameters.
- the queue-related parameters can vary between applications.
- the parameters which the module 20 extracts can be based on their relevance to the task at hand. Examples of queue-related parameters include, but are not limited to, a split point location, queue length, queue width, any imbalance in side-by-side order point queues, queue outline/boundaries, and statistics of queue times/wait periods, etc.
- the module 20 can extract the boundary of the queue configuration. Based on the estimated queue boundary, determinations as to whether a given subject (e.g., vehicle) forms part of a queue or leaves a queue can be made. Detection of a vehicle (e.g., customer) entering a queue can be performed by comparing the location of the vehicle given its corresponding tracking information with the detected queue outline. If the estimated vehicle location is inside the confines of the queue as defined by the estimated outline, a vehicle may be deemed to have joined the queue. In some embodiments, a decision as to whether a vehicle joined a queue can be made after a certain number of frames the vehicle position is detected to be within the confines of the queue.
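The queue-join decision above — requiring the tracked position to remain inside the queue outline for a number of frames — can be sketched as follows. The queue outline is simplified here to an axis-aligned rectangle, and the trajectories and frame threshold are illustrative.

```python
# A vehicle is deemed to have joined the drive-thru queue only after its
# tracked position stays inside the detected queue region for a minimum
# number of consecutive frames, suppressing vehicles that merely cut across.
def in_region(pt, region):
    (xmin, ymin), (xmax, ymax) = region
    return xmin <= pt[0] <= xmax and ymin <= pt[1] <= ymax

def joined_queue(trajectory, queue_region, min_frames=5):
    run = 0
    for pt in trajectory:
        run = run + 1 if in_region(pt, queue_region) else 0
        if run >= min_frames:
            return True
    return False

queue_region = ((100, 200), (400, 260))   # simplified rectangular outline

# Vehicle A crosses the region in two frames; vehicle B stays inside it.
traj_a = [(90, 230), (150, 230), (300, 230), (450, 230), (500, 230)]
traj_b = [(90, 230)] + [(120 + 5 * i, 230) for i in range(8)]
```

Vehicle B would be discarded as a drive-around candidate; vehicle A, which only traverses the queue region briefly, would not.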
- One aspect of automatically detecting queue dynamics is that the data can aid businesses in making informed decisions aimed at improving customer throughput rates, whether in real-time or for the future.
- histories of queue-related parameters, and their correlation with abnormal events, can shed light into measures the business can take to avoid a reoccurrence of these undesired events.
- Non-limiting examples include banks (indoor and drive-thru teller lanes), grocery and retail stores (check-out lanes), airports (security check points, ticketing kiosks, boarding areas and platforms), road routes (e.g., construction, detours, etc.), restaurants (such as fast food counters and drive-thrus), theaters, and the like.
- the queue configuration and queue-related parameter information computed by the present disclosure can aid these applications.
- the drive-thru queue monitoring module 20 in concert with the customer tracking module 16 detects events of customers joining a queue through tracking of a customer. If a vehicle is detected entering a queue, then it can be discarded as a drive-around candidate.
- Another criterion to discard a vehicle as a candidate for a drive-around is to determine that the length of stay of the vehicle in the premises exceeds a predetermined threshold. It will be appreciated that a decision regarding a drive-around can be confidently made based on the outputs of the parking and queue monitoring modules 18 and 20.
- the customer timing module 22 can serve as an additional indicator to decrease false positives.
- timing information can be seamlessly extracted from tracking information, so this module generally does not impose a significant added computational load. Specifically, if the frame rate f in frames per second of the video acquired by the video acquisition module 12 and the number of frames N a vehicle has been tracked are known, then the length of time the vehicle has been tracked can be computed as N/f.
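The N/f computation above amounts to a one-line function; the 60-second threshold here is an illustrative value, not one specified in the text.

```python
# Dwell time falls out of the tracker for free: with frame rate fps
# (frames per second) and N tracked frames, time on premises is N / fps.
def dwell_seconds(num_tracked_frames, fps):
    return num_tracked_frames / fps

# A vehicle tracked for 540 frames at 30 fps has been on site 18 seconds;
# below a hypothetical 60-second threshold it remains a drive-around candidate.
seconds = dwell_seconds(540, 30)
still_candidate = seconds < 60
```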
- the customer exit detection module 24 makes this determination. Similar to the customer entry detection module, this module detects the presence of a vehicle in a region associated with an exit point (see dashed line polygon X in FIG. 3 , for example). Unlike the entry detection module 14 , however, module 24 makes a decision based on the vehicle tracking information provided by the tracking module 16 . In that sense, its operation is more closely related to that of the parking monitoring module 18 . Specifically, as a trajectory is detected to enter or traverse the exit region, a notification of an exit event associated with the customer being tracked is issued.
- the drive-around determination module 26 takes the outputs of modules 14 , 16 , 18 , 20 , 22 and 24 to make a determination as to whether a vehicle drove around the premises. Specifically, if a detected vehicle was tracked around the premises from an entrance to an exit point, was not determined to join a queue or to occupy a parking spot, and, optionally, stayed in the premises of the retail business for a short length of time relative to a predetermined threshold, a drive-around notification is triggered.
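The determination logic above fuses the module outputs into a single decision. The function below is a sketch of that rule; the argument names and the dwell-time threshold are illustrative, not the patent's data format.

```python
# Drive-around decision: a tracked vehicle that went from entrance to exit
# without parking or joining the queue (optionally with a short dwell time)
# triggers a notification.
def drive_around(entered, exited, parked, joined_queue,
                 dwell_seconds=None, max_dwell=60.0):
    if not (entered and exited):
        return False                # must be tracked entrance-to-exit
    if parked or joined_queue:
        return False                # discarded as a candidate
    if dwell_seconds is not None and dwell_seconds >= max_dwell:
        return False                # long stay: likely not a drive-around
    return True

alerts = [
    drive_around(True, True, False, False, dwell_seconds=25.0),  # drive-around
    drive_around(True, True, True, False),                       # parked
    drive_around(True, True, False, True),                       # joined queue
    drive_around(True, False, False, False),                     # still on site
]
```

Only the first scenario fires a notification, matching the flowchart logic of FIG. 7 in which parking, queue entry, or continued presence each divert the method away from the alert.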
- the system and method described here perform video processing.
- a primary application is notification of drive-arounds as they happen (in real time) so that the cause and effects can be mitigated in real time.
- such a system and method utilizes real-time processing where alerts can be given within seconds of the event.
- An alternative approach implements a post-operation review, where an analyst or store manager can review information on the anomalies at a later time to understand store performance.
- a post-operation review would not utilize real-time processing and could be performed on the video data at a later time or place, as desired.
- Referring to FIG. 7, an exemplary method 50 in accordance with the present disclosure is illustrated in flowchart form.
- the method includes acquiring video from an area of interest in process step 52 .
- a customer entry is detected in process step 54 , and the customer is tracked in process step 56 .
- a parking lot is monitored in process step 58 and if, in process step 60 , the customer enters a parking space, then the method diverts to process step 62 and that customer is discarded as a drive-around candidate. If the customer does not enter a parking space, then it is determined in process steps 64 and 66 whether the customer entered a drive-thru queue. If so, the method diverts to process step 68 and that customer is discarded as a drive-around candidate.
- the method continues to process step 70 whereat the exits are monitored to determine whether the customer exits the monitored premises at process step 74 . If the customer exits the monitored premises, the method diverts to process step 76 where a drive-around notification is generated. Otherwise, the method loops back to the track customer process step 56 and continues until either the customer parks, enters the drive-thru queue, or exits the premises. It will be appreciated that this method is exemplary and that aspects of the method can be carried out in different sequences and/or simultaneously depending on a particular implementation. In addition, the method can be used to monitor a multitude of customers simultaneously.
Abstract
Description
- This application claims priority to and the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 61/984,421, filed Apr. 25, 2014, which application is hereby incorporated by reference.
- Advances and increased availability of surveillance technology over the past few decades have made it increasingly common to capture and store video footage of retail settings for the protection of companies, as well as for the security and protection of employees and customers. This data has also been of interest to retail markets for its potential for data-mining and estimating consumer behavior and experience to aid both real-time decision making and historical analysis. For some large companies, slight improvements in efficiency or customer experience can have a large financial impact.
- Several efforts have been made at developing retail-setting applications for surveillance video beyond well-known security and safety applications. For example, one such application counts detected people and records the count according to the direction of movement of the people. In other applications, vision equipment is used to monitor queues, and/or groups of people within queues. Still other applications attempt to monitor various behaviors within a reception setting.
- One industry that is particularly heavily data-driven is fast food restaurants. Accordingly, fast food companies and/or other restaurant businesses tend to have a strong interest in numerous customer and/or store qualities and metrics that affect customer experience, such as dining area cleanliness, table usage, queue lengths, experience time in-store and drive-thru, specific order timing, order accuracy, and customer response.
- Modern retail processes are becoming heavily data-driven, and retailers therefore have a strong interest in numerous customer and store metrics such as queue lengths, experience time in-store and/or drive-thru, specific order timing, order accuracy, and customer response. Of particular interest is the detection and diagnosing of abnormal events, particularly those related to customer volumes exceeding the capacity of a store. Such events include queue lengths and waiting times exceeding certain desired thresholds, which may in turn lead to customer dissatisfaction, customer drive-offs (in vehicular queues) and walk-offs (in pedestrian queues), and customer drive-arounds.
- Drive-arounds occur when a customer drives into a retail setting with the intention of making a purchase and, upon examination of the existing customer volume (e.g., from parking occupancy or drive-thru queue length), decides to leave the premises. Drive-arounds result in lost sales and in unnecessary added traffic in the premises, as well as potential losses in repeat business. There is currently no automated solution to the detection of these events, since current solutions for operations analytics involve manual annotation often carried out by contractors. Furthermore, other events of interest may not currently be detected at all.
- The following references, the disclosures of which are incorporated herein in their entireties by reference, are mentioned:
- U.S. patent application Ser. No. 13/964,652, filed Aug. 12, 2013, by Shreve et al., and entitled “Heuristic-Based Approach for Automatic Payment Gesture Classification and Detection”;
- U.S. patent application Ser. No. 13/933,194, filed Jul. 2, 2013, by Mongeon et al., and entitled “Queue Group Leader Identification”;
- U.S. patent application Ser. No. 13/973,330, filed Aug. 22, 2013, by Bernal et al., and entitled “System and Method for Object Tracking and Timing Across Multiple Camera Views”;
- U.S. patent application Ser. No. 14/261,013, filed Apr. 24, 2014, by Bernal et al., and entitled “System and Method for Video-Based Determination of Queue Configuration Parameters”;
- U.S. patent application Ser. No. 14/195,036, filed Mar. 3, 2014, by Li et al., and entitled “Method and Apparatus for Processing Image of Scene of Interest”;
- U.S. patent application Ser. No. 14/089,887, filed Nov. 26, 2013, by Bernal et al., and entitled “Method and System for Video-Based Vehicle Tracking Adaptable to Traffic Conditions”;
- U.S. patent application Ser. No. 14/078,765, filed Nov. 13, 2013, by Bernal et al., and entitled “System and Method for Using Apparent Size and Orientation of an Object to improve Video-Based Tracking in Regularized Environments”;
- U.S. patent application Ser. No. 14/068,503, filed Oct. 31, 2013, by Bulan et al., and entitled “Bus Lane Infraction Detection Method and System”;
- U.S. patent application Ser. No. 14/050,041, filed Oct. 9, 2013, by Bernal et al., and entitled “Video Based Method and System for Automated Side-by-Side Traffic Load Balancing”;
- U.S. patent application Ser. No. 14/017,360, filed Sep. 4, 2013, by Bernal et al. and entitled “Robust and Computationally Efficient Video-Based Object Tracking in Regularized Motion Environments”;
- U.S. Patent Application Publication No. 2014/0063263, published Mar. 6, 2014, by Bernal et al. and entitled “System and Method for Object Tracking and Timing Across Multiple Camera Views”;
- U.S. Patent Application Publication No. 2013/0106595, published May 2, 2013, by Loce et al., and entitled “Vehicle Reverse Detection Method and System via Video Acquisition and Processing”;
- U.S. Patent Application Publication No. 2013/0076913, published Mar. 28, 2013, by Xu et al., and entitled “System and Method for Object Identification and Tracking”;
- U.S. Patent Application Publication No. 2013/0058523, published Mar. 7, 2013, by Wu et al., and entitled “Unsupervised Parameter Settings for Object Tracking Algorithms”;
- U.S. Patent Application Publication No. 2009/0002489, published Jan. 1, 2009, by Yang et al., and entitled “Efficient Tracking Multiple Objects Through Occlusion”;
- Azari, M.; Seyfi, A.; Rezaie, A. H., “Real Time Multiple Object Tracking and Occlusion Reasoning Using Adaptive Kalman Filters”, Machine Vision and Image Processing (MVIP), 2011, 7th Iranian, pages 1-5, Nov. 16-17, 2011.
- According to one aspect, a method for detection of drive-arounds in a retail setting comprises acquiring images of a retail establishment, analyzing the images to detect entry of a customer onto the premises of the retail establishment, tracking a detected customer's location as the customer traverses the premises of the retail establishment, analyzing the images to detect exit of the detected customer from the premises of the retail establishment, and generating a drive-around notification if the customer does not enter a prescribed area or remain on the premises of the retail location for at least a prescribed minimum period of time.
- The customer can include a customer within a vehicle, and the analyzing and tracking can include analyzing and tracking a vehicle on the premises of the retail establishment. The images of the retail establishment can include images of a parking lot of the retail establishment including one or more parking spaces for vehicles, and the prescribed area can include the one or more parking spaces. The method can further comprise monitoring the one or more parking spaces to determine whether the detected customer enters a parking space, and discarding the customer as a drive-around candidate if the detected customer enters a parking space. The one or more parking spaces can be previously identified regions within a camera's field-of-view. Alternatively, automated or semi-automated methods for identification of parking space locations can be implemented, as taught in U.S. patent application Ser. No. 13/433,809 entitled “Method of Determining Parking Lot Occupancy from Digital Camera Images,” filed Mar. 29, 2012, by Diana Delibaltov et al.
- The images of the retail establishment can include images of a drive-thru queue of the retail establishment, and the prescribed area can include the drive-thru queue. The method can further comprise monitoring the drive-thru queue to determine whether the detected customer enters the drive-thru queue, and discarding the customer as a drive-around candidate if the detected customer enters the drive-thru queue. The drive-thru queue can include a static queue region previously identified within a camera's field of view. The drive-thru queue can include an automatically detected dynamic queue region.
- The method can further comprise calculating the total time the detected customer is on the premises, comparing the calculated time to a threshold time, and generating the notification only if the calculated time does not exceed the threshold time. The steps of the method can be performed in real-time, and/or using one or more computer vision techniques.
- In accordance with another aspect, a system for detection of customer drive-arounds in a retail setting comprises a device for monitoring customers including a memory in communication with a processor configured to acquire a series of images of a retail establishment, analyze the images to detect entry of a customer onto the premises of the retail establishment, track a detected customer's location as the customer traverses the premises of the retail establishment, analyze the images to detect exit of the detected customer from the premises of the retail establishment, and generate a drive-around notification if the customer does not enter a prescribed area or remain on the premises of the retail location for at least a prescribed minimum period of time.
- The customer can include a vehicle, and the analyzing and tracking can include analyzing and tracking the vehicle on the premises of the retail establishment. The images of the retail establishment can include images of a parking lot of the retail establishment including one or more parking spaces for vehicles, and the prescribed area can include the one or more parking spaces. The processor can be further configured to monitor the one or more parking spaces to determine whether the detected customer enters a parking space, and discard the customer as a drive-around candidate if the detected customer enters a parking space. The one or more parking spaces can be previously identified regions within a camera's field-of-view.
- The images of the retail establishment can include images of a drive-thru queue of the retail establishment, and the prescribed area can include the drive-thru queue. The processor can be further configured to monitor the drive-thru queue to determine whether the detected customer enters the drive-thru queue, and discard the customer as a drive-around candidate if the detected customer enters the drive-thru queue. The drive-thru queue can include a static queue region previously identified within a camera's field of view. The drive-thru queue can include an automatically detected dynamic queue region.
- The processor can be further configured to calculate the total time the detected customer is on the premises, compare the calculated time to a threshold time, and generate the notification only if the calculated time does not exceed the threshold time. The processor can be configured to generate the notification in real-time as images are acquired and/or perform at least one computer vision technique.
-
FIG. 1 is a block diagram of a drive-around determination system according to an exemplary embodiment of the present disclosure. -
FIG. 2 shows an example scenario of a multi-lane queue being monitored by two cameras of a multi-camera network, where vehicles are supposed to form a single file before the ‘split point’ and then split into two separate queues, one for each order point. -
FIG. 3 shows an example scenario of an entrance point into the retail premises being monitored. -
FIG. 4 shows an example set of camera views of the retail premises being monitored with a network of cameras. -
FIGS. 5A and 5B show two video frames where parking occupancy is marked. FIG. 5A shows a vehicle being tracked, while FIG. 5B shows the vehicle occupying a parking spot and the parking indicator for that parking spot having been updated. -
FIG. 6 illustrates one example output of a queue monitoring module in accordance with the present disclosure. -
FIG. 7 illustrates a flowchart of an exemplary method in accordance with the present disclosure. - The present disclosure relates to a system and method for video-based detection of drive-arounds in vehicular queues performed on an ensemble of images or a video sequence. Herein, the term “drive-around” refers to instances when a customer drives into a retail setting with the intention of making a purchase and, upon examination of the existing customer volume (e.g., from parking occupancy or drive-thru queue length), decides to leave the premises. Additionally, the terms “incoming frame” and/or “current frame” refer to a video frame that is currently being processed for foreground/motion detection, tracking, object recognition, and other video analytics and computer vision processes. This processing can be focused on a region of interest (“ROI”) within the frame.
- With reference to
FIG. 1, the present disclosure describes a method and system 2 for automated video-based detection of vehicular drive-arounds in retail settings. The system 2 and method can be integrated with drive-thru and in-store customer tracking and timing systems, or can be provided as a stand-alone system. - The
system 2 includes a CPU 4 that is adapted for controlling an analysis of video data received by the system 2. An I/O interface 6 is provided, such as a network interface for communicating with external devices. The I/O interface 6 may include, for example, a modem, a router, a cable, and/or an Ethernet port, etc. The system 2 includes a memory 8 containing a number of modules comprising computer executable instructions. The modules include: - a
video acquisition module 12 which acquires video from the retail setting of interest; - a customer
entry detection module 14 which determines when a customer enters the retail store premises; - a
customer tracking module 16 which determines the location of detected customers as they traverse the retail store premises; - a parking
lot monitoring module 18 which determines whether the customer occupies a parking spot; - a drive-thru
queue monitoring module 20 which determines whether the customer enters the queue based on the queue configuration and tracking information output by the customer tracking module 16; - a
customer timing module 22 which determines the length of the stay of the customer in the premises; - a customer
exit detection module 24 which determines when a customer exits the retail store premises; and - a drive-around
determination module 26 which detects the occurrence of a customer drive-around and issues an alert. - It will be appreciated that the
memory 8 may represent any type of tangible computer readable medium such as random access memory (RAM), read only memory (ROM), magnetic disk or tape, optical disk, flash memory, or holographic memory. In one embodiment, the memory 8 comprises a combination of random access memory and read only memory. The CPU 4 can be variously embodied, such as by a single-core processor, a dual-core processor (or more generally by a multiple-core processor), a digital processor and cooperating math coprocessor, a digital controller, or the like. The CPU 4, in addition to controlling the operation of the system 2, executes instructions stored in memory 8 for performing the parts of the system and method outlined in FIG. 1. In some embodiments, the CPU 4 and memory 8 may be combined in a single chip. It will be appreciated that aspects of the present disclosure can be performed, executed, or otherwise carried out using a wide range of hardware. -
Modules 14-24 process the video acquired by module 12 to compute their respective outputs. Module 26 uses the outputs from those previous modules to make the determination regarding a drive-around. Each module is discussed in detail below. -
Video Acquisition Module 12 - The
video acquisition module 12 includes at least one, and possibly multiple, video cameras that acquire video of the region of interest, including the retail store premises and its surroundings. The cameras can be any of a variety of surveillance cameras suitable for viewing the region of interest and operating at frame rates sufficient to capture the queue events of interest, such as common RGB cameras that may also have a “night mode” and operate at 30 frames/sec, for example. The cameras can include near infrared (NIR) capabilities at the low-end portion of the near-infrared spectrum (700 nm-1000 nm). No specific requirements are imposed regarding spatial or temporal resolution. The image source, in one embodiment, can include a surveillance camera with a video graphics array size that is about 1280 pixels wide and 720 pixels tall with a frame rate of thirty (30) or more frames per second. - In one embodiment, the
video acquisition module 12 can be a device adapted to relay and/or transmit the video captured by the camera to the customer entry detection module 14. The video acquisition module 12 can include a camera sensitive to visible light or having specific spectral sensitivities, a network of such cameras, a line-scan camera, a computer, a hard drive, or other image sensing and storage devices. In another embodiment, the video acquisition module 12 may acquire input from any suitable source, such as a workstation, a database, a memory storage device, such as a disk, or the like. The video acquisition module 12 is in communication with the CPU 4 and memory 8. -
FIG. 2 shows two sample frames of videos captured with two cameras which include a vehicular multi-lane queue Qmulti at a fast-food restaurant, and are used in the following exemplary disclosure. FIG. 4 shows images from 8 cameras used to cover most of the real estate associated with the exemplary retail premises. Image data from these cameras can be fed to the video acquisition module, and processed in accordance with the present disclosure. - Customer
Entry Detection Module 14 - The customer
entry detection module 14 determines whether a customer (e.g., a vehicle, a pedestrian, etc.) has entered the retail business premises. This module generally includes a video- or image-based object detector placed adjacent to the entrance points to the retail business premises. FIG. 3 shows a sample frame of a camera pointed towards one of the entrances E of the exemplary restaurant being monitored, along with the manually selected image area (dashed line box on left identified generally by reference numeral 40) where entrance events are detected. Customer entry detection module 14 operates by detecting an initial instance of a vehicle V entering the monitored premises. - In one embodiment, a background estimation method that allows for foreground detection to be performed is used. According to this approach, a pixel-wise statistical model of historical pixel behavior is constructed for the boxed entrance detection area in
FIG. 3 , for instance in the form of a pixel-wise Gaussian Mixture Model (GMM). Other statistical models can be used, including running averages and medians, non-parametric models, and parametric models having different distributions. The GMM describes statistically the historical behavior of the pixels in the highlighted area; for each new incoming frame, the pixel values in the area are compared to their respective GMM and a determination is made as to whether their values correspond to the observed history. If they don't, which happens, for example, when a car traverses the detection area, a foreground detection signal is triggered. When a foreground detection signal is triggered for a large enough number of pixels, a vehicle detection signal is triggered. Morphological operations usually accompany pixel-wise decisions in order to filter out noises and to fill holes in detections. - Alternative implementations for this module include motion detection algorithms that detect significant motion in the detection area. Motion detection is usually performed via temporal frame differencing and morphological filtering. This, in contrast to foreground detection, which also detects stationary foreground objects, motion detection only detects objects in motion at a speed determined by the frame rate of the video and the video acquisition geometry. In other embodiments, computer vision techniques for object recognition and localization can be used on still frames.
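The pixel-wise background-model approach described above can be sketched in a few lines. The sketch below is a hedged illustration rather than the patent's implementation: it substitutes a single running Gaussian per pixel for a full Gaussian Mixture Model, omits the morphological filtering step, and uses made-up intensities, thresholds, and ROI size.

```python
class PixelBackgroundModel:
    """Running Gaussian (mean/variance) per pixel of the entrance-detection ROI."""

    def __init__(self, n_pixels, alpha=0.05, z_thresh=3.0):
        self.alpha = alpha        # learning rate for the running statistics
        self.z_thresh = z_thresh  # z-score beyond which a pixel is foreground
        self.mean = None
        self.var = [25.0] * n_pixels

    def update(self, frame):
        """Return the per-pixel foreground mask, then absorb the frame."""
        if self.mean is None:     # seed the model with the first observed frame
            self.mean = list(frame)
            return [False] * len(frame)
        mask = []
        for i, v in enumerate(frame):
            std = max(self.var[i] ** 0.5, 1.0)
            is_fg = abs(v - self.mean[i]) > self.z_thresh * std
            mask.append(is_fg)
            if not is_fg:         # only background pixels update the statistics
                d = v - self.mean[i]
                self.mean[i] += self.alpha * d
                self.var[i] = max((1 - self.alpha) * self.var[i]
                                  + self.alpha * d * d, 1.0)
        return mask


def vehicle_detected(mask, min_fraction=0.3):
    """Trigger a vehicle detection when enough ROI pixels are foreground."""
    return sum(mask) / len(mask) >= min_fraction


# Observe an empty entrance area (intensity ~100) for 50 frames, then a
# bright vehicle covering half of the ROI; all values are fabricated.
model = PixelBackgroundModel(n_pixels=16)
for _ in range(50):
    model.update([100.0] * 16)
empty = model.update([100.0] * 16)
car = model.update([220.0] * 8 + [100.0] * 8)
```

In a production system the per-pixel decision would feed the morphological noise filtering and hole filling described above before the detection signal is raised.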
- These techniques typically entail a training stage where the appearance of multiple labeled sample objects in a given feature space (e.g., Harris Corners, SIFT, HOG, LBP, etc.) is fed to a classifier (e.g., support vector machine—SVM, neural network, decision tree, expectation-maximization—EM, k nearest neighbors—k-NN, other clustering algorithms, etc.) that is trained on the available feature representations of the labeled samples. The trained classifier is then applied to features extracted from image areas adjacent to the entrance points to the retail business premises from frames of interest and outputs the parameters of bounding boxes (e.g., location, width and height) surrounding the matching candidates. In one embodiment, the classifier can be trained on features of vehicles (positive samples) as well as features of asphalt, grass, pedestrians, etc. (negative features). Upon operation of the trained classifier, a classification score on an image test area of interest is issued indicating a matching score of the test area relative to the positive samples. A high matching score would indicate detection of a vehicle.
-
Customer Tracking Module 16 - Using the acquired video as input information, video processing is used to track customers within the area of interest. Upon the detection of entry of a customer by the customer
entry detection module 14, thecustomer tracking module 16 initiates a tracker for the customer and tracks its location in motion across the field of view of the camera(s). Tracking algorithms including point- and global feature-based, silhouette/contour, and particle filter trackers can be used. In one embodiment, a cloud-point-based tracker is used that tracks sets of interest points per vehicle. Thecustomer tracking module 16 outputs spatio-temporal information describing the location of the customers across the range of frames in which they are present within the field of view of the camera(s). Specifically, for each object being tracked, the local tracking module outputs its location in pixel coordinates and the corresponding frame number across the range of frames for which the object remains within the field of view of the camera(s). - Since the area traversed by a vehicle driving around is extensive, customer tracking may need to be performed across multiple camera views. Because the acquired video frame(s) is a projection of a three-dimensional space onto a two-dimensional plane, ambiguities can arise when the subjects are represented in the pixel domain (i.e., pixel coordinates). These ambiguities are introduced by perspective distortion, which is intrinsic to the video data. In the embodiments where video data is acquired from more than one camera (each associated with its own coordinate system), apparent discontinuities in motion patterns can exist when a subject moves between the different coordinate systems. These discontinuities make it more difficult to interpret the data.
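The tracking output described above, a pixel location plus frame number per tracked object over the frames in which it remains in view, can be illustrated with a minimal nearest-centroid tracker. Production trackers use interest-point clouds, silhouettes, or particle filters, so treat this as a toy sketch with made-up coordinates and thresholds.

```python
def nearest(track_last, detections, max_dist=50.0):
    """Return the detection closest to the track's last position, or None."""
    best, best_d = None, max_dist
    for d in detections:
        dist = ((d[0] - track_last[0]) ** 2 + (d[1] - track_last[1]) ** 2) ** 0.5
        if dist < best_d:
            best, best_d = d, dist
    return best


def track(detections_per_frame):
    """detections_per_frame: list (indexed by frame) of lists of (x, y)
    centroids. Returns {track_id: [(frame, x, y), ...]}."""
    tracks, next_id = {}, 0
    for frame_no, detections in enumerate(detections_per_frame):
        detections = list(detections)
        for tid, history in tracks.items():
            if history[-1][0] != frame_no - 1:
                continue  # track was already lost in an earlier frame
            match = nearest(history[-1][1:], detections)
            if match is not None:
                history.append((frame_no, match[0], match[1]))
                detections.remove(match)
        for d in detections:  # unmatched detections start new tracks
            tracks[next_id] = [(frame_no, d[0], d[1])]
            next_id += 1
    return tracks


# A single vehicle moving right across the field of view, then leaving.
frames = [[(100, 200)], [(112, 204)], [(125, 207)], []]
trajectories = track(frames)
```

Each trajectory is exactly the spatio-temporal record the later modules consume: frame number plus pixel coordinates for the object's time in view.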
- In one embodiment, these ambiguities can be resolved by performing a geometric transformation by converting the pixel coordinates to real-world coordinates. Particularly in a case where multiple cameras cover the entire queue area, the coordinate systems of each individual camera are mapped to a single, common coordinate system. For example, the spatial coordinates corresponding to the tracking data from a first camera can be mapped to the coordinate system of a second camera. In another embodiment, the spatial coordinates corresponding to the tracking data from multiple cameras can be mapped to an arbitrary common coordinate system. Any existing camera calibration process can be used to estimate the geometric transformation. One approach is described in the disclosure of co-pending and commonly assigned U.S. application Ser. No. 13/868,267, entitled “Traffic Camera Calibration Update Utilizing Scene Analysis,” filed Apr. 13, 2013, by Wencheng Wu, et al., the content of which is totally incorporated herein by reference. While calibrating a camera can require knowledge of the intrinsic parameters of the camera, the calibration required herein need not be exhaustive to eliminate ambiguities in the tracking information. For example, a magnification parameter may not need to be estimated.
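A common way to realize such a mapping is a 3x3 planar homography applied with a projective division; the matrix below is fabricated for illustration and would in practice come from the calibration process referenced above.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H into the common coordinate
    system, performing the projective division by the third row."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w


# Illustrative homography: scale pixels by 0.1 and translate, with no
# perspective terms (bottom row [0, 0, 1]); real matrices are estimated
# per camera during calibration.
H = [[0.1, 0.0, 5.0],
     [0.0, 0.1, 2.0],
     [0.0, 0.0, 1.0]]

world = apply_homography(H, 640, 360)
```

Applying each camera's homography to its trajectories places all tracking data in one coordinate system, removing the apparent discontinuities when a subject crosses camera boundaries.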
- In addition to within-camera tracking techniques, tracking across cameras requires object re-identification to be performed. Re-identification refers to the task of establishing correspondences of images of a given object across multiple camera views. In one embodiment, color-based re-identification is used because it has been found to be most robust to the drastic changes in perspective and distortion a vehicle experiences as it traverses a typical camera network, as illustrated in
FIG. 4, which shows images 42 from eight (8) cameras used to cover most of the real estate associated with the exemplary retail premises. Other features that remain unchanged across different camera views can be used to track subjects across different camera views. These features can include biometric features, clothing patterns or colors, license plates, vehicle make and model, etc. Further details of tracking across multiple camera views are set forth in commonly-assigned U.S. patent application Ser. No. 13/973,330, filed on Aug. 22, 2013, entitled SYSTEM AND METHOD FOR OBJECT TRACKING AND TIMING ACROSS MULTIPLE CAMERA VIEWS, which application is incorporated by reference in its entirety. - Parking
Lot Monitoring Module 18 - One of the criteria to discard a detected vehicle as a candidate for drive-around is to determine that it enters a parking spot. The parking
lot monitoring module 18 determines whether the detected/tracked customer enters a parking spot. With reference to FIGS. 5A and 5B, in one embodiment parking monitoring is performed by manually labeling areas (see hatched areas without cars present, labeled P) corresponding to parking spots and detecting, via the output of the tracking module, whether a vehicle enters one of these areas (see cross-hatched areas with cars present, labeled V). -
FIGS. 5A and 5B show two video frames where parking occupancy is marked. FIG. 5A shows a vehicle being tracked, while FIG. 5B shows the vehicle occupying a parking spot and the parking indicator for that parking spot having been updated. Approaches for determining parking occupancy from video and images are provided in co-pending and commonly assigned U.S. patent application Ser. No. 13/922,336, entitled “A Method for Detecting Large Size and Passenger Vehicles from Fixed Cameras,” filed Jun. 20, 2013, by Orhan Bulan, et al.; U.S. patent application Ser. No. 13/835,386, entitled “Two-Dimensional and Three-Dimensional Sliding Window-Based Methods and Systems for Detecting Vehicles,” filed Mar. 5, 2013, by Orhan Bulan, et al.; “A System and Method for Available Parking Space Estimation for Multispace On-Street Parking,” filed Apr. 6, 2012, by Orhan Bulan, et al.; U.S. patent application Ser. No. 13/433,809, entitled “Method of Determining Parking Lot Occupancy from Digital Camera Images,” filed Mar. 29, 2012, by Diana Delibaltov et al.; and U.S. patent application Ser. No. 13/441,294, entitled “Video-based detector and notifier for short-term parking violation enforcement,” filed Apr. 6, 2012, all of which are totally incorporated herein by reference. - Drive-Thru
Queue Monitoring Module 20 - Another criterion for discarding a vehicle as a drive-around candidate is a determination that the vehicle enters the drive-thru queue. The drive-thru
monitoring module 20 determines whether the customer enters the queue. One approach for determining a queue configuration is provided in co-pending and commonly assigned U.S. patent application Ser. No. 14/261,013, entitled “System and Method for Video-Based Determination of Queue Configuration,” filed Apr. 24, 2014, by Edgar A. Bernal, et al., the content of which is totally incorporated herein by reference. In one contemplated embodiment, vehicles that are located in that queue configuration can be discarded. FIG. 6 illustrates one example output of a queue monitoring module 20 in accordance with the present disclosure. The cross-hatched areas comprise the drive-thru queue. - The drive-thru
queue monitoring module 20 extracts queue-related parameters. Generally, the queue-related parameters can vary between applications. The parameters which the module 20 extracts can be selected based on their relevance to the task at hand. Examples of queue-related parameters include, but are not limited to, a split point location, queue length, queue width, any imbalance in side-by-side order point queues, queue outline/boundaries, and statistics of queue times/wait periods. - In one contemplated embodiment, for example, the
module 20 can extract the boundary of the queue configuration. Based on the estimated queue boundary, determinations can be made as to whether a given subject (e.g., vehicle) forms part of a queue or leaves a queue. Detection of a vehicle (e.g., customer) entering a queue can be performed by comparing the location of the vehicle, given its corresponding tracking information, with the detected queue outline. If the estimated vehicle location is inside the confines of the queue as defined by the estimated outline, the vehicle may be deemed to have joined the queue. In some embodiments, a decision as to whether a vehicle has joined a queue can be made only after the vehicle position has been detected within the confines of the queue for a certain number of frames. In addition to purely spatial information, other types of data can be used to make the determination process more robust. For example, if the speed and the direction of motion of a vehicle are significantly different from those of vehicles still in the queue, then the vehicle can be deemed not to be part of the queue.
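The spatial test underlying both the parking-spot check and the queue-membership check reduces to point-in-region tests on tracked locations, with queue joining additionally requiring the position to persist inside the outline for several frames. In the sketch below, the outlines, trajectories, and frame threshold are illustrative assumptions, not values from the disclosure.

```python
def inside(polygon, x, y):
    """Ray-casting point-in-polygon test for a region outline."""
    hit, n = False, len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit


def joined_queue(queue_outline, trajectory, min_frames=3):
    """Deem the vehicle to have joined the queue once its tracked position
    stays inside the outline for min_frames consecutive frames."""
    run = 0
    for _, x, y in trajectory:  # trajectory entries are (frame, x, y)
        run = run + 1 if inside(queue_outline, x, y) else 0
        if run >= min_frames:
            return True
    return False


# Illustrative queue outline and two trajectories: one vehicle drives into
# the queue region and stays; the other passes by without entering.
queue_outline = [(0, 0), (100, 0), (100, 40), (0, 40)]
joins = [(0, 120, 20), (1, 80, 20), (2, 60, 20), (3, 40, 20), (4, 20, 20)]
passes_by = [(0, 120, 60), (1, 80, 60), (2, 40, 60)]
```

The same `inside` test against a labeled parking-spot polygon is enough for the parking lot monitoring module's enters-a-spot decision.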
- There is no limitation made herein to the type of business or the subject (such as customers and/or vehicles) being monitored in the queue area. The embodiments contemplated herein are amenable to any application where subjects can wait in queues to reach a goods/service point. Non-limiting examples, for illustrative purposes only, include banks (indoor and drive-thru teller lanes), grocery and retail stores (check-out lanes), airports (security check points, ticketing kiosks, boarding areas and platforms), road routes (i.e., construction, detours, etc.), restaurants (such as fast food counters and drive-thrus), theaters, and the like, etc. The queue configuration and queue-related parameter information computed by the present disclosure can aid these applications.
- The drive-thru
queue monitoring module 20, in concert with the customer tracking module 16, detects events of customers joining a queue through tracking of a customer. If a vehicle is detected entering a queue, then it can be discarded as a drive-around candidate. -
Customer Timing Module 22 - Another criterion for discarding a vehicle as a drive-around candidate is a determination that the length of stay of the vehicle in the premises exceeds a predetermined threshold. It will be appreciated that a decision regarding a drive-around can be confidently made based on the outputs of the parking and
queue monitoring modules 18 and 20; however, the Customer Timing Module 22 can serve as an additional indicator to decrease false positives. Also, timing information can be seamlessly extracted from tracking information, so this module generally does not impose a significant added computational load. Specifically, if the frame rate f in frames per second of the video acquired by the Video Acquisition Module 12 and the number of frames N over which a vehicle has been tracked are known, then the length of time the vehicle has been tracked can be computed as N/f. - Customer
Exit Detection Module 24 - A determination that a vehicle drove around cannot be made until the customer exits the premises. The customer
exit detection module 24 makes this determination. Similar to the customer entry detection module, this module detects the presence of a vehicle in a region associated with an exit point (see dashed line polygon X in FIG. 3, for example). Unlike the entry detection module 14, however, module 24 makes a decision based on the vehicle tracking information provided by the tracking module 16. In that sense, its operation is more closely related to that of the parking monitoring module 18. Specifically, as a trajectory is detected to enter or traverse the exit region, a notification of an exit event associated with the customer being tracked is issued. - Drive-
Around Determination Module 26 - The drive-around
determination module 26 takes the outputs of modules 14-24 and issues a drive-around notification when a tracked customer exits the premises without having entered a parking space or the drive-thru queue and without remaining on the premises for at least the prescribed minimum period of time. - It will be appreciated that the system and method described here perform video processing. A primary application is notification of drive-arounds as they happen (in real-time) so that their causes and effects can be mitigated in real-time. Accordingly, such a system and method utilizes real-time processing, where alerts can be given within seconds of the event. An alternative approach implements a post-operation review, where an analyst or store manager can review information on the anomalies at a later time to understand store performance. A post-operation review would not utilize real-time processing and could be performed on the video data at a later time or place as desired.
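The final determination can be sketched as a simple predicate over the module outputs: a customer who exits without having parked or joined the drive-thru queue, and whose stay was shorter than a threshold, is flagged as a drive-around. The argument names, frame rate, and stay threshold below are illustrative assumptions, not the patent's interface.

```python
def is_drive_around(entered, exited, parked, joined_queue,
                    frames_tracked, fps=30.0, min_stay_s=120.0):
    """Return True when a drive-around notification should be issued."""
    if not (entered and exited):
        return False  # no decision until the customer has left the premises
    if parked or joined_queue:
        return False  # discarded as a drive-around candidate
    stay_s = frames_tracked / fps  # customer timing: N / f
    return stay_s < min_stay_s


# A vehicle that circles the lot for 45 seconds and leaves without parking
# or queuing is flagged; one that joined the drive-thru queue is not.
alert = is_drive_around(True, True, False, False, frames_tracked=1350)
no_alert = is_drive_around(True, True, False, True, frames_tracked=1350)
```

Keeping the timing test last mirrors the discussion above: the parking and queue outputs alone usually suffice, with the dwell-time threshold reducing false positives.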
- Turning to FIG. 7, an exemplary method 50 in accordance with the present disclosure is illustrated in flowchart form. The method includes acquiring video from an area of interest in process step 52. Next, a customer entry is detected in process step 54, and the customer is tracked in process step 56. A parking lot is monitored in process step 58 and if, in process step 60, the customer enters a parking space, then the method diverts to process step 62 and that customer is discarded as a drive-around candidate. If the customer does not enter a parking space, then it is determined in process steps 64 and 66 whether the customer entered a drive-thru queue. If so, the method diverts to process step 68 and that customer is discarded as a drive-around candidate. If not, the method continues to process step 70, whereat the exits are monitored to determine whether the customer exits the monitored premises at process step 74. If the customer exits the monitored premises, the method diverts to process step 76, where a drive-around notification is generated. Otherwise, the method loops back to the track customer process step 56 and continues until the customer parks, enters the drive-thru queue, or exits the premises. It will be appreciated that this method is exemplary and that aspects of the method can be carried out in different sequences and/or simultaneously depending on a particular implementation. In addition, the method can be used to monitor a multitude of customers simultaneously.
- Although the method is illustrated and described above in the form of a series of acts or events, it will be appreciated that the various methods or processes of the present disclosure are not limited by the illustrated ordering of such acts or events. In this regard, except as specifically provided hereinafter, some acts or events may occur in a different order and/or concurrently with other acts or events apart from those illustrated and described herein in accordance with the disclosure.
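The decision loop of FIG. 7 (track the customer until a park, queue-entry, or exit event resolves the candidate) can be sketched as a walk over a per-customer event stream. The event names and return strings here are illustrative assumptions, not taken from the patent.

```python
def classify_customer(events):
    """Walk a stream of observations for one tracked customer, mirroring
    process steps 56-76 of the flowchart: a park or queue-entry event
    discards the drive-around candidate, while an exit without either
    triggers a drive-around notification."""
    for event in events:
        if event == "parked":
            return "discard: customer parked"          # steps 60/62
        if event == "queued":
            return "discard: customer entered queue"   # steps 64-68
        if event == "exited":
            return "notify: drive-around detected"     # steps 74/76
        # any other observation: keep tracking (loop back to step 56)
    return "tracking"  # customer still on the premises
```

In practice one such loop would run concurrently for each tracked customer, which is how the method monitors a multitude of customers simultaneously.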
It is further noted that not all illustrated steps may be required to implement a process or method in accordance with the present disclosure, and one or more such acts may be combined. The illustrated methods and other methods of the disclosure may be implemented in hardware, software, or combinations thereof, in order to provide the control functionality described herein, and may be employed in any system including but not limited to the above illustrated system, wherein the disclosure is not limited to the specific applications and embodiments illustrated and described herein.
- It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
Claims (22)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/280,863 US9940633B2 (en) | 2014-04-25 | 2014-05-19 | System and method for video-based detection of drive-arounds in a retail setting |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461984421P | 2014-04-25 | 2014-04-25 | |
US14/280,863 US9940633B2 (en) | 2014-04-25 | 2014-05-19 | System and method for video-based detection of drive-arounds in a retail setting |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150310459A1 true US20150310459A1 (en) | 2015-10-29 |
US9940633B2 US9940633B2 (en) | 2018-04-10 |
Family
ID=54335161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/280,863 Active 2037-02-09 US9940633B2 (en) | 2014-04-25 | 2014-05-19 | System and method for video-based detection of drive-arounds in a retail setting |
Country Status (1)
Country | Link |
---|---|
US (1) | US9940633B2 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080018738A1 (en) * | 2005-05-31 | 2008-01-24 | Objectvideo, Inc. | Video analytics for retail business process monitoring |
US8224028B1 (en) * | 2008-05-02 | 2012-07-17 | Verint Systems Ltd. | System and method for queue analysis using video analytics |
US20120319597A1 (en) * | 2010-03-10 | 2012-12-20 | Young Suk Park | Automatic lighting control system |
US20130258107A1 (en) * | 2012-03-29 | 2013-10-03 | Xerox Corporation | Method of determining parking lot occupancy from digital camera images |
US20130265419A1 (en) * | 2012-04-06 | 2013-10-10 | Xerox Corporation | System and method for available parking space estimation for multispace on-street parking |
US20140161315A1 (en) * | 2012-12-10 | 2014-06-12 | Verint Systems Ltd. | Irregular Event Detection in Push Notifications |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0700623A1 (en) | 1993-05-14 | 1996-03-13 | Rct Systems, Inc. | Video traffic monitor for retail establishments and the like |
US5581625A (en) | 1994-01-31 | 1996-12-03 | International Business Machines Corporation | Stereo vision system for counting items in a queue |
US5953055A (en) | 1996-08-08 | 1999-09-14 | Ncr Corporation | System and method for detecting and analyzing a queue |
JP2000200357A (en) | 1998-10-27 | 2000-07-18 | Toshiba Tec Corp | Method and device for collecting human movement line information |
US7688349B2 (en) | 2001-12-07 | 2010-03-30 | International Business Machines Corporation | Method of detecting and tracking groups of people |
2014
- 2014-05-19 US US14/280,863 patent/US9940633B2/en active Active
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10185965B2 (en) * | 2013-09-27 | 2019-01-22 | Panasonic Intellectual Property Management Co., Ltd. | Stay duration measurement method and system for measuring moving objects in a surveillance area |
US9846811B2 (en) * | 2014-04-24 | 2017-12-19 | Conduent Business Services, Llc | System and method for video-based determination of queue configuration parameters |
US20150312529A1 (en) * | 2014-04-24 | 2015-10-29 | Xerox Corporation | System and method for video-based determination of queue configuration parameters |
US11068966B2 (en) * | 2016-05-05 | 2021-07-20 | Conduent Business Services, Llc | System and method for lane merge sequencing in drive-thru restaurant applications |
US11295270B2 (en) | 2017-08-07 | 2022-04-05 | Standard Cognition, Corp. | Deep learning-based store realograms |
US11195146B2 (en) | 2017-08-07 | 2021-12-07 | Standard Cognition, Corp. | Systems and methods for deep learning-based shopper tracking |
US10474992B2 (en) | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Machine learning-based subject tracking |
US10474988B2 (en) | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Predicting inventory events using foreground/background processing |
US10474991B2 (en) | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Deep learning-based store realograms |
US11810317B2 (en) | 2017-08-07 | 2023-11-07 | Standard Cognition, Corp. | Systems and methods to check-in shoppers in a cashier-less store |
US10650545B2 (en) | 2017-08-07 | 2020-05-12 | Standard Cognition, Corp. | Systems and methods to check-in shoppers in a cashier-less store |
US11270260B2 (en) | 2017-08-07 | 2022-03-08 | Standard Cognition Corp. | Systems and methods for deep learning-based shopper tracking |
US10853965B2 (en) | 2017-08-07 | 2020-12-01 | Standard Cognition, Corp | Directional impression analysis using deep learning |
US11023850B2 (en) | 2017-08-07 | 2021-06-01 | Standard Cognition, Corp. | Realtime inventory location management using deep learning |
US11544866B2 (en) | 2017-08-07 | 2023-01-03 | Standard Cognition, Corp | Directional impression analysis using deep learning |
US10445694B2 (en) * | 2017-08-07 | 2019-10-15 | Standard Cognition, Corp. | Realtime inventory tracking using deep learning |
US11538186B2 (en) | 2017-08-07 | 2022-12-27 | Standard Cognition, Corp. | Systems and methods to check-in shoppers in a cashier-less store |
US20190156276A1 (en) * | 2017-08-07 | 2019-05-23 | Standard Cognition, Corp | Realtime inventory tracking using deep learning |
US11200692B2 (en) | 2017-08-07 | 2021-12-14 | Standard Cognition, Corp | Systems and methods to check-in shoppers in a cashier-less store |
US10474993B2 (en) | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Systems and methods for deep learning-based notifications |
US11232687B2 (en) | 2017-08-07 | 2022-01-25 | Standard Cognition, Corp | Deep learning-based shopper statuses in a cashier-less store |
US11250376B2 (en) | 2017-08-07 | 2022-02-15 | Standard Cognition, Corp | Product correlation analysis using deep learning |
JP2020035238A (en) * | 2018-08-30 | 2020-03-05 | Zホールディングス株式会社 | Information processor, information processing method, and information processing program |
US11594079B2 (en) * | 2018-12-18 | 2023-02-28 | Walmart Apollo, Llc | Methods and apparatus for vehicle arrival notification based on object detection |
US20200258241A1 (en) * | 2019-02-13 | 2020-08-13 | Adobe Inc. | Representation learning using joint semantic vectors |
US11836932B2 (en) * | 2019-02-13 | 2023-12-05 | Adobe Inc. | Classifying motion in a video using detected visual features |
US20210319566A1 (en) * | 2019-02-13 | 2021-10-14 | Adobe Inc. | Classifying motion in a video using detected visual features |
US11062460B2 (en) * | 2019-02-13 | 2021-07-13 | Adobe Inc. | Representation learning using joint semantic vectors |
US20220092597A1 (en) * | 2019-03-28 | 2022-03-24 | Ncr Corporation | Frictionless and unassisted return processing |
US11232575B2 (en) | 2019-04-18 | 2022-01-25 | Standard Cognition, Corp | Systems and methods for deep learning-based subject persistence |
US11948313B2 (en) | 2019-04-18 | 2024-04-02 | Standard Cognition, Corp | Systems and methods of implementing multiple trained inference engines to identify and track subjects over multiple identification intervals |
US11361468B2 (en) | 2020-06-26 | 2022-06-14 | Standard Cognition, Corp. | Systems and methods for automated recalibration of sensors for autonomous checkout |
US11818508B2 (en) | 2020-06-26 | 2023-11-14 | Standard Cognition, Corp. | Systems and methods for automated design of camera placement and cameras arrangements for autonomous checkout |
US11303853B2 (en) | 2020-06-26 | 2022-04-12 | Standard Cognition, Corp. | Systems and methods for automated design of camera placement and cameras arrangements for autonomous checkout |
Also Published As
Publication number | Publication date |
---|---|
US9940633B2 (en) | 2018-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9940633B2 (en) | System and method for video-based detection of drive-arounds in a retail setting | |
US20150310365A1 (en) | System and method for video-based detection of goods received event in a vehicular drive-thru | |
US10176384B2 (en) | Method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification | |
US10262328B2 (en) | System and method for video-based detection of drive-offs and walk-offs in vehicular and pedestrian queues | |
US9779331B2 (en) | Method and system for partial occlusion handling in vehicle tracking using deformable parts model | |
US10552687B2 (en) | Visual monitoring of queues using auxillary devices | |
US9641763B2 (en) | System and method for object tracking and timing across multiple camera views | |
US9471889B2 (en) | Video tracking based method for automatic sequencing of vehicles in drive-thru applications | |
US10346688B2 (en) | Congestion-state-monitoring system | |
US9536153B2 (en) | Methods and systems for goods received gesture recognition | |
US7801330B2 (en) | Target detection and tracking from video streams | |
US9576371B2 (en) | Busyness defection and notification method and system | |
US8682036B2 (en) | System and method for street-parking-vehicle identification through license plate capturing | |
US20200226523A1 (en) | Realtime video monitoring applied to reduce customer wait times | |
US20060291695A1 (en) | Target detection and tracking from overhead video streams | |
US20120075450A1 (en) | Activity determination as function of transaction log | |
US9865056B2 (en) | Video based method and system for automated side-by-side drive thru load balancing | |
US20190385173A1 (en) | System and method for assessing customer service times | |
KR102260123B1 (en) | Apparatus for Sensing Event on Region of Interest and Driving Method Thereof | |
Denman et al. | Automatic surveillance in transportation hubs: No longer just about catching the bad guy | |
JP6708368B2 (en) | Method and system for partial concealment processing in vehicle tracking using deformable partial model | |
Sabnis et al. | Video Monitoring System at Fuel Stations | |
Djeraba et al. | Flow Estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: XEROX CORPORATION, CONNECTICUT Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BERNAL, EDGAR A.;LI, QUN;REEL/FRAME:032922/0280 Effective date: 20140516 |
|
AS | Assignment |
Owner name: CONDUENT BUSINESS SERVICES, LLC, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:041542/0022 Effective date: 20170112 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:050326/0511 Effective date: 20190423 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: CONDUENT HEALTH ASSESSMENTS, LLC, NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: CONDUENT CASUALTY CLAIMS SOLUTIONS, LLC, NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: CONDUENT BUSINESS SOLUTIONS, LLC, NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: CONDUENT COMMERCIAL SOLUTIONS, LLC, NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: ADVECTIS, INC., GEORGIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: CONDUENT TRANSPORT SOLUTIONS, INC., NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: CONDUENT STATE & LOCAL SOLUTIONS, INC., NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 Owner name: CONDUENT BUSINESS SERVICES, LLC, NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180 Effective date: 20211015 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., NORTH CAROLINA Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057970/0001 Effective date: 20211015 Owner name: U.S. BANK, NATIONAL ASSOCIATION, CONNECTICUT Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057969/0445 Effective date: 20211015 |