US20070118805A1 - Virtual environment capture - Google Patents

Virtual environment capture

Info

Publication number
US20070118805A1
Authority
US
United States
Prior art keywords
room
rangefinder
environment
points
data
Prior art date
Legal status
Abandoned
Application number
US11/624,593
Inventor
Matthew Kraus
Benito Graniela
Mary Pigora
Current Assignee
Science Applications International Corp SAIC
Original Assignee
Science Applications International Corp SAIC
Priority date
Filing date
Publication date
Application filed by Science Applications International Corp (SAIC)
Priority to US11/624,593
Publication of US20070118805A1
Status: Abandoned

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00: Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/08: Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • the data capture hardware may be separated into two systems: a mobile data capture unit and a data assembly station.
  • FIG. 4 shows an example of how environment data 101 may be used by capture and modeling hardware 102 and converted into a modeled environment 103 .
  • Environment data 101 includes a variety of information including distance information from a mobile data capture unit 104 to the surrounding environment, image information regarding how the environment appears, and other additional information including light intensity, heat, sound, materials, and the like. These additional sensors may be important to create an immersive environment for mission rehearsal or telepresence.
  • the distance information may be measured by a laser rangefinder 105 .
  • Image information of the environment data may be captured by camera 106 .
  • Sensor platform location and orientation information may be captured by sensor 114 .
  • additional information capture sensors 107 may be used to capture the other additional information of environment data 101 .
  • the mobile data capture unit 104 transmits information from sensors 105 - 107 and 114 to data assembly station 108 .
  • the mobile data capture unit 104 may export the information to data assembly station 108 without additional processing or may attempt to perform additional processing to minimize the need for downstream processing.
  • Data assembly station 108 then creates the modeled environment 103 .
  • the data assembly station 108 may use the raw data from the sensors or may use created objects that were created at the mobile data capture unit 104 .
  • the data assembly station may also perform checks on the information received from the mobile data capture unit 104 to ensure that objects do not overlap or occupy the same physical space. Further, multiple data capture units (three, for example, are shown here) 104 , 112 , and 113 may be used to capture information for the data assembly station 108 .
  • the data assembly station 108 may coordinate and integrate information from the multiple data capture units 104 , 112 , and 113 .
  • the data assembly station 108 may organize the mobile data capture units 104 , 112 , and 113 to capture an environment by, for instance, parsing an environment into sub-environments for each mobile data capture unit to handle independently. Also, the data assembly station 108 may coordinate the mobile data capture units 104 , 112 , and 113 to capture an environment with the mobile data capture units repeatedly scanning rooms, and then integrate the information into a single dataset. Using information from multiple data capture units may help minimize errors by averaging the models from each unit.
  • the integration of data at the data assembly station 108 from one or more mobile data capture units may include a set of software tools that check and address issues with modeled geometry as well as images. For instance, the data assembly station 108 may determine that a room exists between the coverage areas from two mobile data capture units and instruct one or more of the mobile data capture units to capture that room. The data assembly station 108 may include conversion or export to a target runtime environment.
  • the mobile data capture unit 104 and the data assembly station 108 may be combined or may be separate from each other. If separate, the mobile data capture unit 104 may collect and initially test the collected data to ensure completeness of capture of an initial environment (for example, all the walls of a room). The data assembly station 108 may assemble data from the mobile data capture unit 104 , build a run-time format database and test the assembled data to ensure an environment has been accurately captured.
  • each mobile data capture unit may capture data and test for completeness the captured data.
  • the data assembly station 108 collects data from the mobile data capture units. The received data is integrated and tested. Problem areas may be communicated back to the operators of the mobile data capture units (wired or wirelessly) so they can resample, fix, or sample additional data.
  • Data integration software (for instance, Multi-Gen Creator by Computer Associates, Inc. and OTB-Recompile) may be running on the data assembly station. It is appreciated that other software may be run in conjunction with or in place of the listed software on the data assembly station.
  • the testing may be performed at any level.
  • One advantage of performing testing at the mobile data capture units is that it provides the operators the feedback to know which areas of captured data need to be corrected.
  • the objects created at the mobile data capture unit may have been created with the guidance of a user who guided the laser rangefinder 105 about the environment, capturing points of the environment, or may be inferred from the order of capture or from previously captured knowledge of an environment. For example, one may position a door frame in a first room. Moving into the second room, the door frame may be placed in the second room based on 1) the locations of the rooms next to each other, 2) a predefined order in which rooms were to be captured, and/or 3) a determination of the walls being in alignment and the door frame being common to both rooms.
  • the captured points may then be instantiated as objects from a predefined object library 109 .
  • These object models from object library 109 may be selected through user interface 111 as controlled by processor 110 .
  • FIG. 5 shows an illustrative example of mobile data capture unit 104 .
  • Mobile data capture unit 104 may include a movable tripod 201 that supports an articulated arm 202 , which, in turn, supports laser rangefinder 105 and camera 106 . Additional sensors 107 may or may not be included. Additional sensors 107 , if included, may be located at a variety of different locations, not necessarily directly attached to laser rangefinder 105 or camera 106 .
  • Mobile data capture unit 104 may also include a portable computer 203 to receive and temporarily store information from sensors 105 - 107 .
  • the portable computer 203 may include a generic library of objects or may include a geotypical or geospecific object library. For instance, it may include objects from South East Asia if modeling a home in South East Asia. Further, geospecific or geotypical objects may be used in some rooms while only generic objects are used in other rooms.
  • the laser rangefinder 105 may be separate from the camera 106 .
  • the laser rangefinder 105 and the camera 106 may be mounted together and aligned so that the camera 106 will see the image surrounding a spot illuminated by the laser rangefinder 105 .
  • the camera 106 can be a digital camera that captures a visual image.
  • the camera 106 may also be or include an infrared/thermal camera so as to see the contents of and what is located behind a wall. Further, camera 106 may also include a night vision camera to accurately capture night vision images.
  • the laser rangefinder 105 determines distance between an object illuminated by its laser spot and itself. Using articulated arm 202 , one may determine the position of the illuminated spot compared to tripod 201 . With tripod 201 fixed at a position in a room, the laser rangefinder 105 may be moved about on articulated arm 202 and have all distances from illuminated objects to tripod 201 determined. These distances may be temporarily stored in portable computer 203 . Additionally or alternatively, the portable computer may transmit received information over connection 204 to data assembly station 108 . Connection 204 may be a wired or wireless connection.
  • Articulated arm 202 may be a multi-jointed arm that includes sensors throughout the arm, where the sensors determine the angular offset of each segment of the arm.
  • the combination of segments may be modeled as a transform matrix and applied to the information from the laser rangefinder 105 .
  • the result provides the location of tripod 201 and the pitch, yaw, and roll of laser rangefinder 105 and camera 106 .
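  • A minimal sketch of this transform-matrix composition, assuming each arm segment reports a length along its local x axis plus yaw and pitch joint angles (the segment structure and names are illustrative, not specified here):
```python
# Illustrative only: compose per-segment joint readings of the articulated arm
# into one transform giving the rangefinder pose relative to the tripod base.
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def segment_transform(length, yaw, pitch):
    """Rotate at the joint, then advance `length` along the segment's local x axis."""
    T = np.eye(4)
    R = rot_z(yaw) @ rot_y(pitch)
    T[:3, :3] = R
    T[:3, 3] = R @ np.array([length, 0.0, 0.0])
    return T

def rangefinder_pose(segments):
    """segments: list of (length_m, yaw_rad, pitch_rad) read from the arm's joint sensors."""
    T = np.eye(4)
    for length, yaw, pitch in segments:
        T = T @ segment_transform(length, yaw, pitch)
    return T  # T[:3, 3] is the position; T[:3, :3] carries pitch, yaw, and roll

pose = rangefinder_pose([(0.30, 0.20, -0.10), (0.25, 0.35, 0.15)])
```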
  • Multi-jointed arms are available as the Microscribe arms by Immersion.com of San Jose, Calif. and from Faro Technologies of Florida.
  • the specific arm 202 to be used may be chosen based on the precision required in the modeling of an environment. While the laser rangefinder 105 may not benefit from a determination of roll, the camera 106 benefits in that images captured by the camera may be normalized based on the amount of roll experienced by the camera.
  • Camera 106 may include a digital camera. Any digital camera may be used, with the resulting resolution of the camera affecting the clarity of resultant modeled environments. The images from camera 106 may be mapped onto the surfaces of objects to provide a more realistic version of the object.
  • articulated arm 202 may be replaced by a handheld laser and camera combination.
  • the handheld combination may include a location determining device (including GPS, differential GPS, time multiplexed ultra wideband, and other location determining systems).
  • the handheld unit may transmit its location relative to the tripod 201 , another location, or may associate its position with the data points captured with the laser rangefinder.
  • a system modeling the environment would be able to use the points themselves of an environment, rather than using an encoder arm. For instance, one may use GPS or ultra wide band radio location systems to generate location information without having to be physically connected to the mobile data capture unit 104 .
  • different range finding techniques may be used, including physical measurements with a probe or the like, to determine points in the environment.
  • the mobile data capture unit 104 may be controlled by a keyboard and/or a mouse.
  • mobile data capture unit 104 may also include a headset 205 including a microphone 206 for receiving voice commands from an operator. The operator may use the microphone 206 to select and instruct the portable computer 203 regarding an object being illuminated by laser rangefinder 105 and captured by camera 106 .
  • the camera 106 does not need to be attached to the laser rangefinder 105 . While attaching the two devices eliminates processing to correlate a spot from the laser rangefinder 105 with an image captured by camera 106 , one may separate the laser rangefinder from the camera and have the camera view an illuminated object and apply an offset matrix to the camera 106 and the laser rangefinder 105 to correlate the camera image with the location of the illuminated object.
  • data assembly station 108 may receive transmitted information over connection 204 from portable computer 203 .
  • Data assembly station 108 may assemble the received information into a modeled environment.
  • Data assembly station 108 may include a computer capable of processing significant data sets.
  • data assembly station 108 may include a 3 gigahertz processor with 1 GB of RAM, a 128 MB video card, and an 80 GB hard drive, for instance.
  • the data assembly station 108 may capture rooms or may only assemble rooms from one or more mobile data capture units 104 . Further, data assembly station 108 may perform an error checking operation to determine if any walls, doors, ceilings, and/or floors were not created or need to be resampled.
  • FIG. 6 shows a room to be modeled.
  • a mobile data capture unit is shown at position A 301 .
  • a user may direct the laser rangefinder 105 at wall 302 and determine the locations of spots 302 ′ and 302 ′′ relative to position A.
  • To accurately model a plane one needs at least three data points. However, one may accurately model a plane with fewer than three data points by making certain assumptions about, or relying on global characteristics of, the environment (these may be specified before, during or after data capture). For example, if the environment to be modeled includes walls perpendicular to floors, then only two data points are needed to model a wall. It is appreciated that sometimes multiple data points may be needed for a single wall or wall segment based on errors in readings. From position A 301, a user may determine the three dimensional position of walls 303, 304, and 305. Of course, one may differ from these assumptions or global characteristics when needed to accurately model an environment.
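  • A minimal sketch of the two-point wall case under the stated assumption that walls are perpendicular to the floor (the coordinate convention and names are illustrative):
```python
# Sketch only: with a vertical-wall assumption (z is up), two captured points
# on a wall fix its plane; the normal is horizontal and perpendicular to the
# horizontal line between the two points.
import numpy as np

def vertical_wall_plane(p1, p2):
    """Return (normal, d) for the plane normal.x = d through p1 and p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    along = p2 - p1
    along[2] = 0.0                       # project onto the floor plane
    if np.linalg.norm(along) < 1e-9:
        raise ValueError("points are vertically aligned; need horizontal separation")
    normal = np.array([-along[1], along[0], 0.0])
    normal /= np.linalg.norm(normal)
    return normal, float(normal @ p1)

# Two spots captured on the same wall
n, d = vertical_wall_plane([2.0, 0.1, 1.2], [2.0, 3.0, 0.8])
# n = [-1, 0, 0], d = -2: the wall is the plane x = 2
```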
  • FIG. 7 shows additional objects being modeled from position 301 .
  • a user determines the three dimensional points of doors 401 and 402 .
  • the user determines three dimensional points for desk 403 and chair 404 .
  • the three dimensional points for picture 405 may be determined as well.
  • the visual likeness of picture 405 may also be captured by mobile data capture unit 104 at position 301 .
  • the visual likeness of picture 405 may then be added into the modeled environment.
  • FIGS. 8A and 8B show a number of data points that may be captured to determine the size of an object to be placed in a modeled environment.
  • FIGS. 8A and 8B show a rectangular desk whose dimensions are about to be captured.
  • Using laser rangefinder 105, a user illuminates a few points on desk 501 and effectively captures the dimensions of desk 501.
  • In FIG. 8A, a user may start with the bottom right corner of the desk at position 502, capture position 503 at the bottom left corner of the desk, capture position 504 at the top left corner of the desk, and then capture position 505 at the top left back corner of the desk 501.
  • FIG. 8B shows the same dimensions of desk 501 being captured but in a different order.
  • back right corner 506 is determined, followed by the front right bottom corner 507 followed by the front right top corner 508 , then followed by the top front left corner 509 . It is appreciated that various paths may be chosen to determine the dimensions of objects and the locations of objects from mobile data capture unit 104 .
  • the paths shown in FIGS. 8A and 8B are for illustrative purposes only. Other paths may be used to determine the dimensions and location of an object.
  • the user may follow a predefined path or series of locations on an object and have a processor match an instance of an object to the obtained locations.
  • the user may obtain the locations in a random order and have the processor attempt to orient the instance of the object to fit the obtained locations.
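  • A minimal sketch of turning four corner spots captured in the FIG. 8A order into a desk's position, heading, and dimensions; the point roles and the returned structure are assumptions for illustration:
```python
# Sketch only: derive a desk instance from four corner spots captured in the
# order front-bottom-right, front-bottom-left, front-top-left, back-top-left.
import math

def desk_from_corners(front_bottom_right, front_bottom_left,
                      front_top_left, back_top_left):
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def norm(v): return math.sqrt(sum(c * c for c in v))

    front_edge = sub(front_bottom_left, front_bottom_right)
    width  = norm(front_edge)
    height = norm(sub(front_top_left, front_bottom_left))
    depth  = norm(sub(back_top_left, front_top_left))
    heading = math.atan2(front_edge[1], front_edge[0])   # yaw of the front edge
    return {"position": front_bottom_right, "heading_rad": heading,
            "size": (width, depth, height)}

desk = desk_from_corners((1.0, 0.0, 0.0), (1.0, 1.6, 0.0),
                         (1.0, 1.6, 0.75), (1.8, 1.6, 0.75))
# size -> (1.6, 0.8, 0.75): a 1.6 m wide, 0.8 m deep, 0.75 m tall desk
```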
  • FIGS. 9A and 9B show objects in a room to be modeled.
  • the objects include a desk 601 , a chair 602 , and a picture 603 on wall 604 in FIG. 9A .
  • a user may start with any of the objects. For instance, a user may start by modeling chair 602 and locating spots 602 ′ and 602 ′′ to accurately position a chair in relation to a mobile data capture unit 104 . Next, a user may highlight spots 601 ′, 601 ′′, 601 ′′′, and 601 ′′′′ to determine the position of desk 601 in relation to a mobile data capture unit 104 .
  • a user may record the location of spots 604 ′ and 604 ′′ to determine the location of wall 604 .
  • the user may record the location of spots 603 ′ and 603 ′′ to determine the dimensions of rectangle picture 603 .
  • one may start with modeling the walls, ceiling, and floor, prior to modeling objects in the room.
  • One advantage to modeling walls, ceilings and floors before modeling other objects is that the other objects may be aligned with the walls, ceilings, and floors as they are instantiated.
  • FIG. 9B shows an example of how the image of picture 603 may be captured as well.
  • a model of a rectangular picture may be obtained from the object library 109 .
  • the instance of the object may use two locations to define itself in an environment.
  • a user may obtain the locations of spot 603 ′ and 603 ′′ as the two locations for the instance of picture 603 .
  • Camera 106 may be aligned with laser rangefinder 105 so that the spot illuminated by laser rangefinder 105 falls roughly in the center of the field of view of camera 106 .
  • Image 603 ′ I is the image captured by camera 106 when the laser rangefinder 105 is illuminating spot 603 ′.
  • image 603 ′′ I is the image captured by camera 106 when the laser rangefinder 105 is illuminating spot 603 ′′. Because of an order of capturing locations as specified by the instance of the picture, the dimensions of the picture may be specified. The instance of the picture may also include an indication of what visual information is relevant to it. For location 603 ′, the relevant image information from image 603 ′ I is the bottom right quadrant of the image.
  • the relevant image information from image 603 ′′ I is the top left quadrant of the image. These two quadrants of the images may be correlated until an overlap is found (shown here as region 603 O).
  • the two images 603 ′ I and 603 ′′ I may be merged using this overlapping region 603 O as a guide. Image portions lying outside the picture may be eliminated. Further, the remaining image may be further parsed, cropped, skewed, stretched, and otherwise manipulated to rectify its image in association with a modeled version of picture 603 . This may be necessary if the angle capturing picture 603 resulted in a foreshortened image of the picture 603 .
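  • A rough sketch of the overlap search described above, using brute-force normalized correlation over candidate offsets; this is an illustration of the idea, not the specific merging algorithm used here:
```python
# Sketch only: slide one grayscale image over the other and keep the offset
# whose overlapping region correlates best, then merge using that offset.
import numpy as np

def best_overlap_offset(img_a, img_b, min_overlap=16):
    """Return (dy, dx) placing img_b relative to img_a with maximal correlation."""
    ha, wa = img_a.shape
    hb, wb = img_b.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-hb + min_overlap, ha - min_overlap + 1):
        for dx in range(-wb + min_overlap, wa - min_overlap + 1):
            ya0, yb0 = max(0, dy), max(0, -dy)
            xa0, xb0 = max(0, dx), max(0, -dx)
            h = min(ha - ya0, hb - yb0)
            w = min(wa - xa0, wb - xb0)
            if h < min_overlap or w < min_overlap:
                continue
            a = img_a[ya0:ya0 + h, xa0:xa0 + w].astype(float)
            b = img_b[yb0:yb0 + h, xb0:xb0 + w].astype(float)
            a -= a.mean(); b -= b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom == 0:
                continue
            score = (a * b).sum() / denom   # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```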
  • FIG. 10 shows a variety of spots that may be selected to model walls of a room.
  • the corner of the room as shown in FIG. 10 includes walls 701 and 702 , ceiling 703 , and floor 704 .
  • the floors, ceilings, and walls may be considered objects and/or planes.
  • Wall 701 may be determined by capturing spots 706 and 707 .
  • Wall 702 needed additional spots 708 - 712 to accurately determine its location. This may be because of textures on wall 702 that affected the reflectivity of a spot or other reasons.
  • a user may then repeatedly obtain locations of spots on an object until that object has been correctly located in a room.
  • the height of ceiling 703 may be determined by spot 713 and depth of floor 704 below the mobile capturing device 104 may be determined by spot 714 .
  • a user may select objects corresponding to walls, floors, and ceilings. These objects may be grouped. For example, objects representing walls that are perpendicular to both floors and ceilings may be grouped together. Objects representing walls that are not perpendicular to floors and ceilings may be grouped together. Further, slanted ceilings and floors may also be specified as objects. Alternatively, a user may set defaults in the system to start with all walls being perpendicular to floors and ceilings and all floors and ceilings being parallel to one another. Corners may be determined by the intersections of these objects.
  • the intersecting edge between the plane of the ceiling and those of the walls may be determined from the multiple scans between the points by obtaining the maximum/minimum values of the points and using these values to determine the location of the edge.
  • This technique may be used for other objects or intersection of objects as well. This technique uses some degree of automation to speed up data capture, thereby capturing knowledge of the environment at a higher level.
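  • A minimal sketch of recovering such an edge as the intersection line of the ceiling plane and a wall plane, once each is expressed as n.x = d (plain plane-intersection math with illustrative names):
```python
# Sketch only: the shared edge of two modeled planes is their intersection line.
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersect planes n1.x = d1 and n2.x = d2; return (point_on_line, direction)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-9:
        raise ValueError("planes are parallel; no single edge")
    # One point on the line solves [n1; n2; direction] x = [d1, d2, 0]
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)

# Ceiling z = 2.6 m meeting wall x = 4.0 m: edge runs along y through (4, 0, 2.6)
edge_point, edge_dir = plane_intersection([0, 0, 1], 2.6, [1, 0, 0], 4.0)
```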
  • FIG. 11 shows three rooms 801 - 803 having been captured.
  • the organization of rooms 801 - 803 into a single unified building may be accomplished by data assembly station 108 .
  • the data assembly station 108 may use a variety of techniques of locating rooms 801 - 803 in relation to each other.
  • a user may indicate to data assembly station 108 which room follows a current room.
  • a mobile data capture unit 104 may include a sensing device or transceiver that identifies its location.
  • the mobile data capture unit 104 may include a global positioning system (GPS) locator that determines its position absolutely.
  • the mobile data capture unit 104 may include a differential GPS locator system.
  • the mobile data capture unit 104 may include a time multiplexed ultra wideband locator.
  • FIG. 12 shows a stepwise approach used to determine the position of rooms.
  • Rooms 901 - 903 are being modeled.
  • a user may locate the rooms in relation to one another.
  • a user at location 904 may determine the location of an object common to its current location and the next location to be captured.
  • a doorframe is used as the object between various rooms.
  • a mobile device may also be used as the object, for instance a T-shaped reflector in which the laser rangefinder determines the locations of each arm of the T.
  • it is appreciated that almost any object that is viewable from two or more locations may be used to coordinate room location determination.
  • a user locates doorframe 908 and determines its location with respect to position 904 .
  • the user moves to position 905 and determines the location of doorframe 908 from the perspective of position 905 .
  • the user determines the location of doorframe 909 from perspective of position 905 , then moves to position 906 and again determines the location of doorframe 909 from the perspective of position 906 .
  • the user locates doorframe 910 from both of positions 906 and 907 . Using this stepwise approach, the locations of the mobile data capture unit as it moves between positions 904 - 907 may be determined.
  • This stepwise approach as shown in FIG. 12 may be used also to eliminate measurement errors as shown in FIG. 13 .
  • a user attempts to map rooms 1001 - 1010 .
  • a user starts at location 1011 and captures each room 1002 - 1010 in succession or in any order.
  • the data assembly station 108 may use the fact that the start and stop locations at position 1011 are the same. Accordingly any error in measurement between the start and stop locations may be sequentially removed from the various measurements of the rooms 1002 - 1010 .
  • the error value may be divided by the number of rooms and the apportioned error correction sequentially subtracted from each room.
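  • A minimal sketch of this closure-error distribution, assuming the traverse is recorded as an ordered list of room origins that ends back at the starting position (the names and the 2D simplification are illustrative):
```python
# Sketch only: the gap between the final and initial positions is split evenly
# and subtracted cumulatively from the chain of room origins.
def distribute_closure_error(room_origins, start):
    """room_origins: ordered (x, y) origins measured while walking the circuit;
    start: the known (x, y) of the starting position, revisited at the end."""
    end = room_origins[-1]
    err = (end[0] - start[0], end[1] - start[1])
    n = len(room_origins)
    corrected = []
    for i, (x, y) in enumerate(room_origins, start=1):
        share = i / n                        # error assumed to grow along the traverse
        corrected.append((x - err[0] * share, y - err[1] * share))
    return corrected

# After correction, the last origin coincides with the start again
rooms = [(3.0, 0.1), (6.1, 0.2), (6.2, 4.1), (0.3, 4.2), (0.4, 0.3)]
fixed = distribute_closure_error(rooms, start=(0.0, 0.0))
```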
  • FIG. 14 shows a user interface displayable on portable computer 203 .
  • Region 1101 shows a virtual room 1102 having been constructed from objects whose locations were determined from position 1103. The locations were then used to specify the locations of objects as shown in region 1101.
  • Field of view 1109 shows the field of view of camera 106 from position 1103 .
  • Region 1104 shows a perspective view of a region of virtual room 1102 as currently modeled from the field of view 1109 .
  • Region 1104 is beneficial as it permits a user who may be standing in an actual room facing in a direction corresponding to field of view 1109 to compare his view of the room with the view of virtual room 1102 as shown in region 1104 .
  • the data assembly station 108 may keep a list of those items that need to be captured and inform the mobile data capture units 104 , 112 , and 113 which items need to be captured.
  • the list may contain information and specify if the items are to be geospecific or geotypical to a location. Additionally, attributes needed for those items may be filled in and added to the object library for future retrieval.
  • Region 1105 shows a user interface similar to that of user interface 21 of FIG. 2 .
  • Region 1105 shows various models of objects to be instantiated in a rendered environment.
  • One may select a model library and perform other operations (including opening, closing, saving, and other general operations) using interfaces shown in region 1106 .
  • other operations may include correlating objects to one another (including placing objects like desks and chairs on a floor as opposed to floating in the middle of a room), aligning objects, rotating objects, flipping objects, and deleting objects.
  • one may manipulate the objects placed in region 1101 and 1104 to make them appear as they actually are in an actual room.
  • Region 1107 displays a variety of objects to be selected. For example, various types of office desks may be shown for placement as the desk in FIGS. 5A, 5B, and 6 (or various types of chairs as shown in FIG. 2).
  • the system may indicate which points may be used to locate the object. For example, an image of a desk with various points at the corners of the desk may be shown to a user, to prompt the user to select the various points.
  • the points may be selected in any order, or may be selected in a specified order.
  • region 1108 may show various model attributes of a selected object.
  • These attributes may include color, texture, reflectivity, material composition, part number, and other options to be associated with the object (for example, an empty bookshelf or a full bookshelf, an empty desk or a cluttered desk, or the tracking identification of a computer on a desk).
  • a user may guide the data capture process so as to quickly and efficiently model a room.
  • FIG. 15 shows a process for constructing an object in accordance with aspects of the present invention.
  • a user identifies an object to be modeled.
  • the user adds a point using the rangefinder 105 to the object selected in step 1201 .
  • the system determines if the point was accurately determined (for example, within a tolerance of sub inches to inches). If no, the system returns to step 1202 where another point is obtained. If yes from step 1203 , the system determines in step 1204 if the object selected in step 1201 may be positioned, oriented, and scaled with the current number of points (with some objects requiring more points than others). If no, an additional point is obtained in step 1202 .
  • the designation of which points to obtain may be left to the user or may be identified by the system. If yes from step 1204 , the system instantiates the object into the virtual environment with specific points of the object corresponding to measured values from actual captured locations from laser rangefinder 105 in step 1205 . It is appreciated that the object may be instantiated in step 1205 prior to determination of its orientation and scale, using a default orientation and scale from the object library. Alternatively, one may construct an object from the captured points, not relying on the objects from the object library.
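  • A minimal sketch of this capture loop, with a stand-in read_point callback and a simple points_required field indicating how many points an object needs; both are assumptions for illustration, not interfaces defined here:
```python
# Sketch only: request points until the selected template has enough
# well-measured points to be positioned, oriented, and scaled, then instantiate.
def capture_object(template, read_point, tolerance_m=0.05):
    """template: dict with 'name' and 'points_required' (1 = position,
    2 = +orientation, 3 = +scale).  read_point() returns ((x, y, z), error_m)."""
    points = []
    while len(points) < template["points_required"]:
        point, error = read_point()          # step 1202: add a point
        if error > tolerance_m:              # step 1203: accuracy check
            continue                         # discard and re-measure
        points.append(point)                 # step 1204: enough points yet?
    return {"object": template["name"], "points": points}   # step 1205

# Canned measurements standing in for the operator and rangefinder
readings = iter([((1.0, 2.0, 0.4), 0.20),    # rejected: error too large
                 ((1.0, 2.0, 0.4), 0.01),
                 ((1.4, 2.0, 0.4), 0.02)])
chair = capture_object({"name": "chair", "points_required": 2},
                       read_point=lambda: next(readings))
```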
  • In step 1206, testing may be performed.
  • Various tests may be performed to attempt to complete the capture data of an environment. For instance, one may test to see if a captured environment encloses 3D space. If not, then the missing pieces may be obtained.
  • One may test to see if all polygons face the interior of a room. While one may specify that both sides of polygons are to be mapped with graphics, a savings of run-time processing power is achieved when only the visible sides of polygons are mapped.
  • Testing the polygons permits one to ensure that the sides of the polygons that are to have graphics are facing inward toward the inside of a room.
  • One may test to ensure that polygons share vertices. The polygons may be adjusted so that vertices match.
  • the specification of these items mentioned above may occur before, during, and/or after data capture.
  • the height of the ceiling may be specified when a project is first started, along with other requirements including maximum error tolerances and the like.
  • FIG. 16 shows how environments may be combined with additional information and forwarded to additional applications.
  • maps of an environment 1301 , textures and/or images 1302 , resources 1303 including weather information, environmental information, civil engineering structures, and the like, three-dimensional models 1304 , and additional information regarding an environment (internal or external) 1305 may be combined in database 1306 .
  • This modeling information as stored in database 1306 may be provided to a data exchange system that interacts with a number of applications 1308 - 1309 .
  • these applications may include training simulators for military and police units, as well as tools for architects, real estate agents, interior and exterior decorators, civil planners and civil engineers, and the like.
  • Revenue may be generated from the ability to exchange information as shown in FIG. 16 .
  • source data 1306 may be supplied for a fixed price to exchange system 1307.
  • applications may only require certain sets of information from source data 1306 .
  • applications 1308 - 1309 may only wish to pay for specific data sets that they require.
  • These and other approaches may be used to generate income based on the modeling techniques described herein.
  • the system may support a variety of runtime formats. By adding in a new driver for each format, one may export a modeled environment to a desired format (including VRML, CAD, binary separate planes, CTDB, OpenFlight, GIS, SEDRIS, and the like).
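  • A minimal sketch of such a per-format driver scheme, with placeholder writer functions rather than real VRML or OpenFlight exporters:
```python
# Sketch only: each runtime format registers a small exporter, so new formats
# can be added without touching the core model.  Writers are placeholders.
EXPORTERS = {}

def exporter(fmt):
    def register(func):
        EXPORTERS[fmt] = func
        return func
    return register

@exporter("vrml")
def export_vrml(environment, path):
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")          # header only; body is a placeholder
        for obj in environment["objects"]:
            f.write(f"# {obj['name']} at {obj['position']}\n")

@exporter("csv")
def export_csv(environment, path):
    with open(path, "w") as f:
        for obj in environment["objects"]:
            x, y, z = obj["position"]
            f.write(f"{obj['name']},{x},{y},{z}\n")

def export(environment, fmt, path):
    EXPORTERS[fmt](environment, path)

export({"objects": [{"name": "chair", "position": (1.0, 2.0, 0.0)}]},
       "csv", "room.csv")
```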
  • a customer may be interviewed to determine the extent of the modeling requirements. For instance, a single house may only need a single modeler. However, a complex may require two or more mobile data capture units. Further, these mobile data capture units may have redundant capabilities or may have complementary data capturing capabilities (for instance, one mobile data capturing unit may have a visual camera associated with a rangefinder and a second mobile data capturing unit may have a thermal camera associated with the rangefinder). The requirements may also specify the accuracy needed for the project.
  • if captured data does not meet the required accuracy, the mobile data capture unit may be alerted to this issue and instructed to reacquire data points to conform to the accuracy tolerance.
  • the data assembly station may coordinate mobile data capture units to work together as a team to capture environments and not duplicate efforts. To this end, the data assembly station may assign tasks to the mobile data capture units and monitor their completion of each task.

Abstract

A system and method for capturing and modeling an environment is described. A user may direct a rangefinder within an environment to capture data points and correlate the data points with predefined objects. One may also capture images of the environment to be used in the modeling environment. Using the present system, one may easily and quickly capture an environment using relatively few data points.

Description

    RELATED APPLICATION
  • This application is a divisional of U.S. patent application Ser. No. 10/441,121, entitled “Virtual Environment Capture”, filed May 20, 2003, which claims priority to U.S. Provisional Application Ser. No. 60/432,009, entitled “Virtual Environment Capture System,” filed Dec. 10, 2002, whose contents are expressly incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • Aspects of the present invention relate to computer modeling of environments. More specifically, aspects of the present invention relate to accurately capturing virtual environments.
  • 2. Description of Related Art
  • A need exists for three-dimensional models of interior environments. From the gaming community to the military, renderings of building interiors based on actual buildings are being sought. For example, applications are being developed to provide training at the individual human level to facilitate improved military operations and capabilities as well as situational awareness. The training of individuals improves with a greater level of realism during the training exercise. For instance, benefits gained from mission rehearsal increase with the realism of the environment in which the mission is rehearsed.
  • Numerous applications exist for modeling outdoor terrain including satellite mapping and traditional surveying techniques. However, interior modeling techniques have been constrained to using pre-existing computer aided design drawings, building layout drawings, and a person's recollection of an environment. These modeling approaches may rely on out-of-date information (for example, blueprints of a 50-year-old building). Further, these modeling approaches fail to accurately define furniture types and furniture placement. In some situations, one may need to orient oneself based on the furniture and wall coverings of a room. Without this information, one may become lost and have to spend valuable time reorienting oneself with the environment. Further, one may want to know what is movable and what is not movable. Accordingly, realistic interiors are needed for simulation and training purposes.
  • Traditional methods exist for rendering building interiors including the use of computer aided design (CAD) drawings or laser scanning for the development of 3-D building models. However, as above, these approaches have not been helpful when attempting to interact with a rendered environment in real-time. In some cases, the overload of information has made interaction with these environments difficult. For instance, original CAD drawings are not always current and laser scanning requires significant post processing. In particular, laser scanning about a point in space generates a cloud of data points which need to be processed to eliminate redundant points and convert them into easily-handled polygons. Further, laser scanning from one position is sub-optimal as points in a room are generally occluded by furniture. Blocking access to these points makes subsequent image processing difficult as a computer would need to determine whether non-planar points represent furniture or a curve in the wall of a room. To accurately determine the walls and contents of a room, multiple passes may be needed to attempt to eliminate blind spots caused by furniture and other objects. Further, these techniques do not capture data with knowledge of what is being captured.
  • Some techniques have been used to capture photographs of building interiors. For example, spherical photographic images have been used to capture photographic information surrounding a point in space in a room. While these photographic images may be assembled into a video stream representing a path through the room, deviation from the predefined path is not possible.
  • Accordingly, an improved system is needed for capturing and modeling interior spaces.
  • BRIEF SUMMARY
  • Aspects of the present invention are directed to solving at least one of the issues identified above, thereby providing an enhanced image capture and modeling system for creating virtual environments from building interiors. In one aspect, a user is provided with a system for capturing photorealistic images of a building's interior and associating the images with specific data points. The data points may be obtained through use of a rangefinder. In some aspects, an imaging system may be attached to the rangefinder, thereby capturing both data points and photorealistic images at the same time. The user may use the combination of the rangefinder, imaging system, and a system that monitors the vector to the rangefinder from a position in the room and orientation of the rangefinder to determine distances from a central location to various objects about the user. Because of the user's interaction and selection of specific points of objects, objects may be selected and modeled using few data points. In other aspects, other sensors may be used in addition to or in place of the rangefinder and the imaging system. Information generated from the sensors may be combined into a virtual environment corresponding to the actual scanned environment. The rangefinder may include a laser rangefinder or other type of rangefinder. These and other aspects are described in greater detail below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Aspects of the present invention are illustrated by way of example and not limited in the accompanying figures.
  • FIG. 1 shows a process for modeling objects in accordance with aspects of the present invention.
  • FIG. 2 shows a process for positioning a chair in accordance with aspects of the present invention.
  • FIGS. 3A and 3B show relationships between a data capturing unit's location and an illuminated spot in accordance with aspects of the present invention.
  • FIG. 4 shows a system for capturing and modeling an environment in accordance with aspects of the present invention.
  • FIG. 5 shows an illustrative data capturing system in accordance with aspects of the present invention.
  • FIG. 6 shows the scanning of walls in accordance with aspects of the present invention.
  • FIG. 7 shows the scanning of objects in accordance with aspects of the present invention.
  • FIGS. 8A and 8B show illustrative data points used to determine the location, orientation, and scale of an object in accordance with aspects of the present invention.
  • FIGS. 9A and 9B show the capturing points of multiple objects in accordance with aspects of the present invention.
  • FIG. 10 shows various data capturing techniques for determining points associated with objects in accordance with aspects of the present invention.
  • FIG. 11 shows rooms to be assembled into an environment in accordance with aspects of the present invention.
  • FIG. 12 shows rooms to be assembled into an environment using common data points in accordance with aspects of the present invention.
  • FIG. 13 shows an illustrative process for minimizing and distributing measurement errors by traversing a circuit in accordance with aspects of the present invention.
  • FIG. 14 shows a user interface for modeling an environment in accordance with aspects of the present invention.
  • FIG. 15 shows a process for positioning an object in an environment in accordance with aspects of the present invention.
  • FIG. 16 shows a conceptual set of relationships describing how information regarding an environment may be forwarded to applications in accordance with embodiments of the present invention.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • Aspects of the present invention permit rapid capture and modeling of interior environments. Instead of using clouds of data points, which can require significant post processing to determine individual objects and construct a virtual environment, aspects of the present invention relate to capturing specific points in an actual environment and coordinating the point capture with predefined objects. The coordination with objects simplifies or eliminates post processing requirements.
  • The following description is divided into headings to assist a user in understanding aspects of the present invention. The headings include: modeling; data capture hardware; data gathering processes; data synthesis software; and applications of modeled environments.
  • Modeling
  • Interiors of buildings may be used by different entities. For example, architects may use virtual environments during construction of a building so as to properly define locations of walls, HVAC systems and pillars. Architects may also use virtual environments to experiment with remodeling spaces including moving walls and windows. Real estate agents and interior designers may use virtual environments for providing tours to prospective buyers and for suggesting modifications to existing spaces. Police and military units may further use virtual models of interior spaces for rehearsing delicate missions, including rescuing hostages, in real time. For example, if a unit needed to rescue hostages from an embassy and the embassy was modeled, the unit would be able to rehearse in real time from a variety of different angles how different missions may be accomplished. Also, using a virtual rendering of a building allows experimentation into blowing holes in the walls to permit entry into adjoining spaces as well as determining potential angles from which enemy combatants may be firing. Further, the modeled environment may be used in distributed simulations in which two or more users or groups of users may interact within the environment. Further, the modeled environment may be used for military training.
  • To capture and model interior space, the system uses a combination of hardware and software to capture and model an environment.
  • The following provides a brief overview of how, in one example, one may model a room. Referring to FIG. 1, in step 10, a user positions a data capture unit in a room. Next, in step 11, the user selects an object from an object library or object database corresponding to an actual object in the room. The object library or object database stores one or more groups of objects that may be placed in a virtual environment. The object library or database may include walls, floors, ceilings, chairs, office furniture, pictures and other wall hangings, bookshelves, cabinets, lamps, doors, office equipment, sofas, rugs, tables, and the like.
  • The selection of the object from the object library or object database may include a list of points that need to be obtained to locate a virtual instance of the object so that it corresponds to the actual location of the object in the actual room. This may be done by navigating a user interface (using a mouse or voice commands) to have the system retrieve the virtual object from the object library or database. The contents of the object library may be established based on information from a customer requesting the modeled environment. For example, the customer may specify that the environment to be captured may be an office (for which an object library of office furniture is obtained), a residential house (for which an object library of typical home furnishings is obtained), an industrial complex (for which industrial machines and other industrial objects are obtained), and the like. In other situations, a generic library of objects may be used.
  • To ease capturing of the points, the points may be gathered in a predefined order as shown in step 12. The order of the points to be gathered may be explicitly shown in the object from the object library or database. Once the system knows which object will be identified next by the user, the user manually guides a rangefinder to illuminate spots on the actual object in the room. The rangefinder described herein may be any suitable rangefinder including a laser rangefinder, a sonar-based rangefinder, a system that determines the position of a probe (wired or wirelessly, a ruler, touch probe, GPS, ultra-wideband radio positioning, for instance), and the like. For purposes of simplicity, the rangefinder is described as a laser rangefinder.
  • The laser from the laser rangefinder may be visible light or may be invisible to the unaided eye. When the user has positioned the illuminated spot on a desired position of the object, the user then indicates to the system (manually, verbally, or the like) that the current distance from the illuminated spot on the object to the data capture unit should be read, as shown in step 13. Because the system knows the vector from the data capture unit to the laser rangefinder and the orientation of the laser rangefinder, the vector from the data capture unit to an illuminated spot may be determined. The location of the captured spot may be used as a three-dimensional reference point for the virtual object from the object library.
  • Next, in step 14, the virtual object may be placed in the modeled environment. Next, in step 15, another point may be captured. In step 16, the instance of the object may be oriented in the modeled environment. In step 17, another point may be captured and, in step 18, the instance of the object is scaled using the third point. It is appreciated that, alternatively, the placement of the object instance into the virtual environment may occur only after the scale of the object has been determined in step 18, rather than in step 14.
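  • As an illustration of the place/orient/scale sequence in steps 14-18, the following Python sketch fits a library object to up to three captured points: the first point positions the instance, the second orients it about the vertical axis, and the third sets its scale. The function, the point ordering, and the vertical-axis assumption are illustrative only and are not taken from the patent.

```python
import math

def place_orient_scale(template_points, captured_points):
    """Fit a library object to up to three captured points (sketch only).

    template_points -- reference points defined on the library object, in
                       the object's local frame, in a prescribed order
    captured_points -- the same points measured in the room, in room
                       coordinates, in the same order
    Returns (origin, yaw, scale), assuming orientation is a rotation
    about the vertical (z) axis only.
    """
    # Step 14: the first captured point positions the instance.
    origin = captured_points[0]

    yaw, scale = 0.0, 1.0
    if len(captured_points) >= 2:
        # Steps 15-16: the second point orients the instance about z.
        tx = template_points[1][0] - template_points[0][0]
        ty = template_points[1][1] - template_points[0][1]
        cx = captured_points[1][0] - captured_points[0][0]
        cy = captured_points[1][1] - captured_points[0][1]
        yaw = math.atan2(cy, cx) - math.atan2(ty, tx)

    if len(captured_points) >= 3:
        # Steps 17-18: the third point sets the scale as the ratio of the
        # measured distance to the template distance from the first point.
        t_len = math.dist(template_points[0], template_points[2])
        c_len = math.dist(captured_points[0], captured_points[2])
        if t_len > 0:
            scale = c_len / t_len

    return origin, yaw, scale

# Hypothetical chair template with reference points 0.5 m and 0.4 m from
# its origin, matched to three spots captured in the room.
origin, yaw, scale = place_orient_scale(
    [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.4, 0.0)],
    [(2.1, 3.4, 0.0), (2.1, 3.9, 0.0), (1.7, 3.4, 0.0)])
print(origin, math.degrees(yaw), scale)
```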
  • FIG. 2 shows a version of the process of FIG. 1 modeling a chair. A chair 20 exists in an actual room. A user selects a category of a chair most resembling chair 20 from user interface 21. Here, two versions of chairs are shown in user interface 21 including standard chair 22 and desk chair 23. With each version of chair 22 and 23, at least one location is defined with respect to the instance of the chair, indicating where a received location will correlate to a virtual chair. For instance, both of chairs 22 and 23 show locations “1” and “2” that indicate where received locations will be matched to the chairs.
  • Next, a user directs a laser rangefinder from location A 25 to illuminate spots 1 and 2, sequentially, on chair 24 (which is chair 20 but being illuminated by the laser rangefinder). The user may press a button on the laser rangefinder, may press a button on a mouse, or may speak into a headset to have the system accept the current location illuminated by the laser rangefinder as a location for chair 24. Additional points may be used to further orient and/or scale each chair.
  • The two locations 1 and 2 scanned from chair 24 are used to orient an instance of the chair in virtual environment 26. The virtual instance of the chair 28 is registered by the two locations 1 and 2 from position A 27, corresponding to position A 25 in the actual room. One of the two locations may be used to position chair 28, and the other may be used to specify the angle of chair 28. It is appreciated that some objects may only need to be positioned in an environment. For example, one may need only to specify that a plant exists on a desk, rather than determine the orientation and size of the plant. Here, the size of the plant may be predefined in the library of objects.
  • FIGS. 3A and 3B show the relationship between position A 30 of a mobile data capture unit and location 1 on chair 33. As shown in FIG. 3A, a vector 36 is determined from the floor to position A of the mobile data capture unit. This vector may simply be the height of a tripod perpendicular to the floor. Next, a vector is determined between position A and position B of the laser rangefinder 31. The determination of this vector may take into account angle 34 and the distance between locations A and B. With the distance from the laser rangefinder 31 to the spot and the angle 35 known (from the orientation of the laser rangefinder 31), one may determine the vector to spot 1 (and spot 2) on chair 33. FIG. 3B shows a top-down view of the arrangement. A relative vector from tripod 30 to rangefinder 31 is determined. The orientation and angle of the platform upon which the rangefinder 31 rests is determined. Finally, a relative vector from rangefinder 31 to spot 1 of chair 33 is determined. Using these vectors, one may determine the coordinates of spot 1 relative to the mobile data capture unit.
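  • The vector chain of FIGS. 3A and 3B can be sketched numerically as follows: the tripod height, the offset from position A to the rangefinder, and the rangefinder reading along its pointing direction are summed to give the room coordinates of the illuminated spot. The yaw/pitch convention and all variable names here are assumptions made for illustration.

```python
import numpy as np

def spot_in_room(tripod_xy, tripod_height, arm_offset,
                 rf_yaw, rf_pitch, rf_range):
    """Compose the vectors of FIGS. 3A/3B (illustrative sketch only).

    tripod_xy        -- (x, y) of the tripod base on the floor
    tripod_height    -- vector 36: height of position A above the floor
    arm_offset       -- vector from position A to the rangefinder
                        (position B), expressed in room coordinates
    rf_yaw, rf_pitch -- orientation of the rangefinder, in radians
    rf_range         -- distance reported by the rangefinder to the spot
    """
    pos_a = np.array([tripod_xy[0], tripod_xy[1], tripod_height])
    pos_b = pos_a + np.asarray(arm_offset, dtype=float)

    # Unit pointing direction of the laser from yaw (about z) and pitch.
    direction = np.array([
        np.cos(rf_pitch) * np.cos(rf_yaw),
        np.cos(rf_pitch) * np.sin(rf_yaw),
        np.sin(rf_pitch),
    ])
    return pos_b + rf_range * direction

# Tripod at the origin, 1.5 m tall, arm reaching 0.3 m forward,
# laser aimed 10 degrees downward at a spot 2 m away.
spot = spot_in_room((0.0, 0.0), 1.5, (0.3, 0.0, 0.0),
                    rf_yaw=0.0, rf_pitch=np.radians(-10), rf_range=2.0)
print(spot)
```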
  • The following describes in greater detail the system used to create the modeled environment.
  • Data Capture Hardware
  • The data capture hardware may be separated into two systems: a mobile data capture unit and a data assembly station. FIG. 4 shows an example of how environment data 101 may be used by capture and modeling hardware 102 and converted into a modeled environment 103. Environment data 101 includes a variety of information, including distance information from a mobile data capture unit 104 to the surrounding environment, image information regarding how the environment appears, and other additional information including light intensity, heat, sound, materials, and the like. These additional sensors may be important for creating an immersive environment for mission rehearsal or telepresence. The distance information may be measured by a laser rangefinder 105. Image information of the environment data may be captured by camera 106. Sensor platform location and orientation information may be captured by sensor 114. Optionally, additional information capture sensors 107 may be used to capture the other additional information of environment data 101. The mobile data capture unit 104 transmits information from sensors 105-107 and 114 to data assembly station 108. The mobile data capture unit 104 may export the information to data assembly station 108 without additional processing or may perform additional processing to minimize the need for downstream processing. For example, the mobile data capture unit 104 may instantiate objects from the captured points locally and forward the created objects, rather than only raw sensor data, to the data assembly station 108.
  • Data assembly station 108 then creates the modeled environment 103. The data assembly station 108 may use the raw data from the sensors or may use objects that were created at the mobile data capture unit 104. The data assembly station may also perform checks on the information received from the mobile data capture unit 104 to ensure that objects do not overlap or occupy the same physical space. Further, multiple data capture units 104, 112, and 113 (three, for example, are shown here) may be used to capture information for the data assembly station 108. The data assembly station 108 may coordinate and integrate information from the multiple data capture units 104, 112, and 113. Further, the data assembly station 108 may organize the mobile data capture units 104, 112, and 113 to capture an environment by, for instance, parsing an environment into sub-environments for each mobile data capture unit to handle independently. Also, the data assembly station 108 may coordinate the mobile data capture units 104, 112, and 113 to capture an environment with the mobile data capture units repeatedly scanning rooms, and then integrate the information into a single dataset. Using information from multiple data capture units may help minimize errors by averaging the models from each unit. The integration of data at the data assembly station 108 from one or more mobile data capture units may include a set of software tools that check and address issues with modeled geometry as well as images. For instance, the data assembly station 108 may determine that a room exists between the coverage areas of two mobile data capture units and instruct one or more of the mobile data capture units to capture that room. The data assembly station 108 may also include conversion or export to a target runtime environment.
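  • One of the checks mentioned above, that objects do not occupy the same physical space, can be sketched as a pairwise bounding-box test run by the data assembly station. The axis-aligned box representation and the object names below are assumptions for illustration, not the patent's data structures.

```python
from itertools import combinations

def boxes_overlap(a, b):
    """True if two axis-aligned boxes, each given as (min_xyz, max_xyz),
    occupy some of the same space (illustrative sketch)."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def find_conflicts(objects):
    """Return pairs of object names whose bounding boxes intersect."""
    return [(name_a, name_b)
            for (name_a, box_a), (name_b, box_b) in combinations(objects.items(), 2)
            if boxes_overlap(box_a, box_b)]

# Hypothetical instantiated objects with invented bounding boxes (meters).
objects = {
    "desk":  ((0.0, 0.0, 0.0), (1.6, 0.8, 0.75)),
    "chair": ((1.5, 0.5, 0.0), (2.1, 1.1, 1.0)),   # nudged into the desk
    "lamp":  ((3.0, 3.0, 0.0), (3.2, 3.2, 1.5)),
}
print(find_conflicts(objects))   # [('desk', 'chair')]
```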
  • The mobile data capture unit 104 and the data assembly station 108 may be combined or may be separate from each other. If separate, the mobile data capture unit 104 may collect and initially test the collected data to ensure completeness of capture of an initial environment (for example, all the walls of a room). The data assembly station 108 may assemble data from the mobile data capture unit 104, build a run-time format database and test the assembled data to ensure an environment has been accurately captured.
  • Testing data among multiple mobile data capture units may occur at multiple levels. First, each mobile data capture unit may capture data and test the captured data for completeness. The data assembly station 108 then collects data from the mobile data capture units. The received data is integrated and tested. Problem areas may be communicated back to the operators of the mobile data capture units (wired or wirelessly) so that they may resample, fix, or capture additional data. Data integration software may be running on the data assembly station (for instance, Multi-Gen Creator by Computer Associates, Inc. and OTB-Recompile may be running). It is appreciated that other software may be run in conjunction with or in place of the listed software on the data assembly station.
  • The testing may be performed at any level. One advantage of performing testing at the mobile data capture units is that it provides the operators with feedback on which areas of captured data need to be corrected.
  • The objects created at the mobile data capture unit may have been created with the guidance of a user who guided the laser rangefinder 105 about the environment, capturing points of the environment, or they may be inferred from the capture order or from previously captured knowledge of an environment. For example, one may position a door frame in a first room. Moving into the second room, the door frame may be placed in the second room based on 1) the locations of the rooms next to each other, 2) a predefined order in which the rooms were to be captured, and/or 3) a determination that the walls are in alignment and the door frame is common to both rooms. The captured points may then be instantiated as objects from a predefined object library 109. These object models from object library 109 may be selected through user interface 111 as controlled by processor 110.
  • FIG. 5 shows an illustrative example of mobile data capture unit 104. Mobile data capture unit 104 may include a movable tripod 201 that supports an articulated arm 202, which, in turn, supports laser rangefinder 105 and camera 106. Additional sensors 107 may or may not be included. Additional sensors 107, if included, may be located at a variety of different locations, not necessarily directly attached to laser rangefinder 105 or camera 106. Mobile data capture unit 104 may also include a portable computer 203 to receive and temporarily store information from sensors 105-107. The portable computer 203 may include a generic library of objects or may include a geotypical or geospecific object library. For instance, it may include objects from South East Asia if modeling a home in South East Asia. Further, geospecific or geotypical objects may be used in some rooms while only generic objects are used in other rooms.
  • In one aspect, the laser rangefinder 105 may be separate from the camera 106. In another aspect, the laser rangefinder 105 and the camera 106 may be mounted together and aligned so that the camera 106 will see the image surrounding a spot illuminated by the laser rangefinder 105. The camera 106 can be a digital camera that captures a visual image. The camera 106 may also be or include an infrared/thermal camera so as to see the contents of a wall and what is located behind it. Further, camera 106 may also include a night vision camera to accurately capture night vision images.
  • The laser rangefinder 105 determines distance between an object illuminated by its laser spot and itself. Using articulated arm 202, one may determine the position of the illuminated spot compared to tripod 201. With tripod 201 fixed at a position in a room, the laser rangefinder 105 may be moved about on articulated arm 202 and have all distances from illuminated objects to tripod 201 determined. These distances may be temporarily stored in portable computer 203. Additionally or alternatively, the portable computer may transmit received information over connection 204 to data assembly station 108. Connection 204 may be a wired or wireless connection.
  • Articulated arm 202 may be a multi-jointed arm that includes sensors throughout the arm, where the sensors determine the angular offset of each segment of the arm. The combination of segments may be modeled as a transform matrix and applied to the information from the laser rangefinder 105. The result provides the location of the laser rangefinder 105 and camera 106 relative to tripod 201, as well as their pitch, yaw, and roll. Multi-jointed arms are available, such as the Microscribe arms by Immersion.com of San Jose, Calif., and arms from Faro Technologies of Florida. The specific arm 202 to be used may be chosen based on the precision required in the modeling of an environment. While the laser rangefinder 105 may not benefit from a determination of roll, the camera 106 benefits in that the images it captures may be normalized based on the amount of roll experienced by the camera.
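  • The transform-matrix treatment of the arm amounts to multiplying one homogeneous transform per segment to obtain the pose of the rangefinder and camera at the end of the arm. The sketch below assumes a simple yaw/pitch joint model per segment, which is an illustration only; a real arm would use its own joint geometry and encoder readings.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def trans(x, y, z):
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def arm_pose(joint_angles, segment_lengths):
    """Compose one homogeneous transform per arm segment (sketch only).

    Each segment is modeled here as a rotation about z, a rotation about y,
    and a translation along the segment; real arms differ, but the idea of
    multiplying per-segment transforms to obtain the end pose is the same.
    """
    pose = np.eye(4)
    for (yaw, pitch), length in zip(joint_angles, segment_lengths):
        pose = pose @ rot_z(yaw) @ rot_y(pitch) @ trans(length, 0, 0)
    return pose  # 4x4 pose of the rangefinder/camera relative to the tripod

pose = arm_pose([(0.2, -0.1), (0.4, 0.3), (-0.1, 0.0)], [0.3, 0.3, 0.15])
position = pose[:3, 3]    # where the rangefinder sits
rotation = pose[:3, :3]   # from which pitch, yaw, and roll can be read
print(position)
```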
  • Camera 106 may include a digital camera. Any digital camera may be used, with the resulting resolution of the camera affecting the clarity of resultant modeled environments. The images from camera 106 may be mapped onto the surfaces of objects to provide a more realistic version of the object.
  • Further, articulated arm 202 may be replaced by a handheld laser and camera combination. The handheld combination may include a location determining device (including GPS, differential GPS, time multiplexed ultra wideband, and other location determining systems). The handheld unit may transmit its location relative to the tripod 201 or another location, or may associate its position with the data points captured with the laser rangefinder. By associating its position with the information from the laser rangefinder, a system modeling the environment would be able to use the captured points of the environment themselves, rather than relying on an encoder arm. For instance, one may use GPS or ultra wideband radio location systems to generate location information without having to be physically connected to the mobile data capture unit 104. Further, different range finding techniques may be used, including physical measurements with a probe or the like, to determine points in the environment.
  • In one example, the mobile data capture unit 104 may be controlled by a keyboard and/or a mouse. Alternatively, mobile data capture unit 104 may also include a headset 205 including a microphone 206 for receiving voice commands from an operator. The operator may use the microphone 206 to select and instruct the portable computer 203 regarding an object being illuminated by laser rangefinder 105 and captured by camera 106.
  • It is noted that the camera 106 does not need to be attached to the laser rangefinder 105. While attaching the two devices eliminates processing to correlate a spot from the laser rangefinder 105 with an image captured by camera 106, one may separate the laser rangefinder from the camera, have the camera view an illuminated object, and apply an offset matrix between the camera 106 and the laser rangefinder 105 to correlate the camera image with the location of the illuminated object.
  • Referring to FIG. 4, data assembly station 108 may receive transmitted information over connection 204 from portable computer 203. Data assembly station 108 may assemble the received information into a modeled environment. Data assembly station 108 may include a computer capable of processing significant data sets. For example, data assembly station 108 may include a 3 gigahertz processor with 1 GB of RAM, a 128 MB video card, and an 80 GB hard drive. The data assembly station 108 may capture rooms or may only assemble rooms from one or more mobile data capture units 104. Further, data assembly station 108 may perform an error checking operation to determine if any walls, doors, ceilings, and/or floors were not created or need to be resampled.
  • Data Gathering Processes
  • FIG. 6 shows a room to be modeled. A mobile data capture unit is shown at position A 301. A user may direct the laser rangefinder 105 at wall 302 and determine the locations of spots 302′ and 302″ relative to position A. To accurately model a plane in the general case, one needs at least three data points. However, one may accurately model a plane with fewer than three data points by making certain assumptions about, or relying on global characteristics of, the environment (which may be specified before, during, or after data capture). For example, if the environment to be modeled includes walls perpendicular to floors, then only two data points are needed to model a wall. It is appreciated that sometimes multiple data points may be needed for a single wall or wall segment because of errors in readings. From position A 301, a user may determine the three dimensional position of walls 303, 304, and 305. Of course, one may depart from these assumptions or global characteristics when needed to accurately model an environment.
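  • Under the assumption that walls are perpendicular to the floor, two captured spots fix a vertical plane whose normal is horizontal. A minimal sketch of that construction follows (z is taken as "up"; the coordinates and names are illustrative).

```python
import numpy as np

def wall_from_two_spots(p1, p2):
    """Return (normal, d) for the plane n.x = d through two spots,
    assuming the wall is vertical (perpendicular to the floor, z up)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    along = p2[:2] - p1[:2]                     # horizontal direction along the wall
    normal = np.array([-along[1], along[0], 0.0])
    norm = np.linalg.norm(normal)
    if norm == 0:
        raise ValueError("spots are vertically aligned; capture another spot")
    normal /= norm
    return normal, float(normal @ p1)

# Spots 302' and 302'' measured on wall 302 (coordinates invented).
n, d = wall_from_two_spots((1.0, 4.0, 1.2), (3.0, 4.0, 0.9))
print(n, d)   # normal (0, 1, 0) (or its negation), plane y = 4
```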
  • FIG. 7 shows additional objects being modeled from position 301. A user determines the three dimensional points of doors 401 and 402. Next, the user determines three dimensional points for desk 403 and chair 404. Finally, the three dimensional points for picture 405 may be determined as well. The visual likeness of picture 405 may also be captured by mobile data capture unit 104 at position 301. The visual likeness of picture 405 may then be added into the modeled environment.
  • FIGS. 8A and 8B show a number of data points that may be captured to determine the size of an object to be placed in a modeled environment. FIGS. 8A and 8B, for example, show a rectangular desk whose dimensions are about to be captured. Using laser rangefinder 105, a user illuminates a few points on desk 501 and effectively captures the dimensions of desk 501. In FIG. 8A, a user may start with the bottom right corner of the desk at position 502, capture position 503 at the bottom left corner of the desk, capture position 504 at the top left corner of the desk, and then capture position 505 at the top left back corner of the desk 501. FIG. 8B shows the same dimensions of desk 501 being captured but in a different order. The location of back right corner 506 is determined, followed by the front right bottom corner 507, then the front right top corner 508, and finally the top front left corner 509. It is appreciated that various paths may be chosen to determine the dimensions of objects and the locations of objects from mobile data capture unit 104. The paths shown in FIGS. 8A and 8B are for illustrative purposes only. Other paths may be used to determine the dimensions and location of an object.
  • In one aspect, the user may follow a predefined path or series of locations on an object and have a processor match an instance of an object to the obtained locations. Alternatively, the user may obtain the locations in a random order and have the processor attempt to orient the instance of the object to fit the obtained locations.
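  • When the corners of a box-shaped object such as desk 501 are captured in a known order, recovering its dimensions reduces to measuring the distances between successive points. The sketch below assumes the FIG. 8A capture order and invented coordinates; it is an illustration, not the patent's fitting procedure.

```python
import math

def desk_dimensions(corners):
    """Given four corners captured in the FIG. 8A order --
    bottom right front, bottom left front, top left front, top left back --
    return (width, height, depth) of a rectangular desk (sketch only)."""
    width = math.dist(corners[0], corners[1])   # 502 -> 503: across the front
    height = math.dist(corners[1], corners[2])  # 503 -> 504: up the left edge
    depth = math.dist(corners[2], corners[3])   # 504 -> 505: front to back
    return width, height, depth

# Hypothetical captured corners of desk 501, in meters.
print(desk_dimensions([(0.0, 0.0, 0.0), (1.6, 0.0, 0.0),
                       (1.6, 0.0, 0.75), (1.6, 0.8, 0.75)]))
```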
  • FIGS. 9A and 9B show objects in a room to be modeled. The objects include a desk 601, a chair 602, and a picture 603 on wall 604 in FIG. 9A. To model the room as shown in FIG. 9A, a user may start with any of the objects. For instance, a user may start by modeling chair 602 and locating spots 602′ and 602″ to accurately position a chair in relation to a mobile data capture unit 104. Next, a user may highlight spots 601′, 601″, 601′″, and 601″″ to determine the position of desk 601 in relation to a mobile data capture unit 104. Next, a user may record the location of spots 604′ and 604″ to determine the location of wall 604. Finally, the user may record the location of spots 603′ and 603″ to determine the dimensions of rectangular picture 603. Alternatively, one may start with modeling the walls, ceiling, and floor, prior to modeling objects in the room. One advantage of modeling walls, ceilings, and floors before modeling other objects is that the other objects may be aligned with the walls, ceilings, and floors as they are instantiated.
  • FIG. 9B shows an example of how the image of picture 603 may be captured as well. A model of a rectangular picture may be obtained from the object library 109. The instance of the object may use two locations to define itself in an environment. A user may obtain the locations of spot 603′ and 603″ as the two locations for the instance of picture 603.
  • Camera 106 may be aligned with laser rangefinder 105 so that the spot illuminated by laser rangefinder 105 falls roughly in the center of the field of view of camera 106. Image 603′ I is the image captured by camera 106 when the laser rangefinder 105 is illuminating spot 603′. Also, image 603″ I is the image captured by camera 106 when the laser rangefinder 105 is illuminating spot 603″. Because the locations are captured in an order specified by the instance of the picture, the dimensions of the picture may be determined. The instance of the picture may also include an indication of what visual information is relevant to it. For location 603′, the relevant image information from image 603′ I is the bottom right quadrant of the image. For location 603″, the relevant image information from image 603″ I is the top left quadrant of the image. These two quadrants of the images may be correlated until an overlap is found (shown here as region 603 O). The two images 603′ I and 603″ I may be merged using this overlapping region 603 O as a guide. Image portions lying outside the picture may be eliminated. Further, the remaining image may be further parsed, cropped, skewed, stretched, and otherwise manipulated to rectify its image in association with a modeled version of picture 603. This may be necessary if the angle at which picture 603 was captured resulted in a foreshortened image of the picture 603.
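  • The overlap search between the two quadrants can be sketched as a brute-force template match: a patch from one image is slid over the other and the offset with the smallest difference is taken as the registration point. Real systems would more likely use normalized cross-correlation and handle rectification; the array sizes and patch choice below are illustrative assumptions.

```python
import numpy as np

def find_overlap(image_a, image_b, patch_size=32):
    """Locate a patch from image_a's bottom-right quadrant inside image_b
    by exhaustive sum-of-squared-differences search (illustrative only)."""
    h, w = image_a.shape
    patch = image_a[h - patch_size:, w - patch_size:].astype(float)

    best, best_off = None, (0, 0)
    bh, bw = image_b.shape
    for y in range(bh - patch_size + 1):
        for x in range(bw - patch_size + 1):
            window = image_b[y:y + patch_size, x:x + patch_size].astype(float)
            ssd = np.sum((window - patch) ** 2)
            if best is None or ssd < best:
                best, best_off = ssd, (y, x)
    return best_off   # offset of the overlapping region in image_b

# Two synthetic grayscale images sharing a shifted region, for demonstration.
rng = np.random.default_rng(0)
a = rng.integers(0, 255, (64, 64), dtype=np.uint8)
b = np.roll(np.roll(a, -16, axis=0), -16, axis=1)   # b overlaps a, shifted
print(find_overlap(a, b))   # (16, 16)
```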
  • FIG. 10 shows a variety of spots that may be selected to model walls of a room. The corner of the room as shown in FIG. 10 includes walls 701 and 702, ceiling 703, and floor 704. The floors, ceilings, and walls may be considered objects and/or planes. Wall 701 may be determined by capturing spots 706 and 707. Wall 702 needed additional spots 708-712 to accurately determine its location. This may be because of textures on wall 702 that affected the reflectivity of a spot, or for other reasons. A user may thus repeatedly obtain locations of spots on an object until that object has been correctly located in a room. The height of ceiling 703 may be determined by spot 713, and the depth of floor 704 below the mobile capturing device 104 may be determined by spot 714. A user may select objects corresponding to walls, floors, and ceilings. These objects may be grouped. For example, objects representing walls that are perpendicular to both floors and ceilings may be grouped together. Objects representing walls that are not perpendicular to floors and ceilings may be grouped together. Further, slanted ceilings and floors may also be specified as objects. Alternatively, a user may set defaults in the system to start with all walls being perpendicular to floors and ceilings and all floors and ceilings being parallel to one another. Corners may be determined by the intersections of these objects.
  • In an alternative approach, one may concentrate on the corners of the room to locate walls, ceilings, and floors. For instance, one may locate spots 715-717 to determine one corner and locate spots 718-720 to determine another corner. Locating corners as opposed to walls per se provides the benefit of being able to locate corners relatively easily when walls are occluded by multiple objects. Alternatively, one may locate walls separately and have the locations of corners determined by data assembly station 108 as described above. This alternative approach eliminates issues of corners being occluded by furniture and other objects. Further, one may specify points 715-717 and have the rangefinder repeatedly scan between the points, thereby constructing a number of points from which the planes of the walls and ceilings may be determined. Further, the intersecting edge between the plane of the ceiling and those of the walls may be determined from the multiple scans between the points by obtaining the maximum/minimum values of the points and using these values to determine the location of the edge. This technique may be used for other objects or intersections of objects as well. This technique uses some degree of automation to speed up data capture, thereby capturing knowledge of the environment at a higher level.
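  • Once the ceiling and a wall have each been fitted as planes, the edge between them can also be computed directly: its direction is the cross product of the two normals, and a point on it follows from solving the two plane equations. The sketch below uses illustrative plane coefficients and is not taken from the patent.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Return (point, direction) of the line where planes n1.x = d1 and
    n2.x = d2 meet, e.g. the ceiling/wall edge (illustrative sketch)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.allclose(direction, 0):
        raise ValueError("planes are parallel; no single edge")
    # Solve for a point satisfying both plane equations plus direction.x = 0.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

# Ceiling plane z = 2.7 intersected with wall plane y = 4.
point, direction = plane_intersection((0, 0, 1), 2.7, (0, 1, 0), 4.0)
print(point, direction)   # a point on the edge and its direction (along x)
```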
  • A user may move between multiple rooms to capture the environments of each room. FIG. 11 shows three rooms 801-803 having been captured. The organization of rooms 801-803 into a single unified building may be accomplished by data assembly station 108. The data assembly station 108 may use a variety of techniques of locating rooms 801-803 in relation to each other. For example, a user may indicate to data assembly station 108 which room follows a current room. Alternatively, a mobile data capture unit 104 may include a sensing device or transceiver that identifies its location. For example, the mobile data capture unit 104 may include a global positioning system (GPS) locator that determines its position absolutely. Alternatively, the mobile data capture unit 104 may include a differential GPS locator system. Further, the mobile data capture unit 104 may include a time multiplexed ultra wideband locator.
  • FIG. 12 shows a stepwise approach used to determine the position of rooms. Rooms 901-903 are being modeled. Before, during, or after modeling each room, a user may locate the rooms in relation to one another. First, for example, a user at location 904 may determine the location of an object common to the current location and the next location to be captured. In FIG. 12, a doorframe is used as the object between the various rooms. Alternatively, one may use a mobile device (for instance, a T-shaped reflector in which the laser rangefinder determines the locations of each arm of the T) placed between rooms so as to locate the rooms in relation to one another. It is appreciated that almost any object that is viewable from two or more locations may be used to coordinate room location determination. By indicating during capturing which points relate to other previously captured points, one may indicate how rooms interrelate.
  • From position 904, a user locates doorframe 908 and determines its location with respect to position 904. Next, the user moves to position 905 and determines the location of doorframe 908 from the perspective of position 905. Next, the user then determines the location of doorframe 909 from perspective of position 905, then moves to position 906 and again determines the location of doorframe 909 from the perspective of position 906. Finally, the user locates doorframe 910 from both of positions 906 and 907. Using this stepwise approach, the locations of the mobile data capture unit as it moves between positions 904-907 may be determined.
  • This stepwise approach as shown in FIG. 12 may also be used to eliminate measurement errors, as shown in FIG. 13. A user attempts to map rooms 1001-1010. The user starts at location 1011 and captures each room 1002-1010 in succession or in any order. Next, upon completing the last room, the data assembly station 108 may use the fact that the start and stop locations at position 1011 are the same. Accordingly, any error in measurement between the start and stop locations may be sequentially removed from the various measurements of the rooms 1002-1010. For instance, the error value may be divided by the number of rooms and the apportioned error correction sequentially subtracted from each room.
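  • The closure correction just described can be sketched as follows: the mismatch between the original and re-measured start position is divided by the number of legs in the loop, and each room's offset is corrected by its accumulated share. The room offsets and error values below are invented for illustration.

```python
import numpy as np

def distribute_closure_error(room_offsets, closure_error):
    """Sequentially remove a loop-closure error from room offsets (sketch).

    room_offsets  -- one (x, y, z) offset per room, in the order the rooms
                     were captured around the loop
    closure_error -- drift accumulated around the loop: the re-measured
                     start position minus the original start position
    """
    n = len(room_offsets)
    share = np.asarray(closure_error, float) / n
    corrected = []
    for i, offset in enumerate(room_offsets, start=1):
        # By the time room i was measured, roughly i shares of the drift
        # had accumulated, so subtract i shares from its offset.
        corrected.append(np.asarray(offset, float) - i * share)
    return corrected

# Nine rooms captured around a loop with 9 cm of total drift along x.
offsets = [(3.0 * i, 0.0, 0.0) for i in range(1, 10)]
for room in distribute_closure_error(offsets, (0.09, 0.0, 0.0)):
    print(room)
```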
  • Data Synthesis Software
  • FIG. 14 shows a user interface displayable on portable computer 203. Region 1101 shows a virtual room 1102 having been constructed from objects whose locations were determined from position 1103. The locations were then used to specify the locations of objects as shown in region 1101. Field of view 1109 shows the field of view of camera 106 from position 1103. Region 1104 shows a perspective view of a region of virtual room 1102 as currently modeled from the field of view 1109. Region 1104 is beneficial as it permits a user, who may be standing in an actual room facing in a direction corresponding to field of view 1109, to compare his view of the room with the view of virtual room 1102 as shown in region 1104. This way, the user would be able to immediately determine which objects, actually present in the room, are missing from the modeled environment as shown in top down view 1101 and perspective view 1104. Alternatively, the data assembly station 108 may keep a list of those items that need to be captured and inform the mobile data capture units 104, 112, and 113 which items need to be captured. The list may specify whether the items are to be geospecific or geotypical to a location. Additionally, attributes needed for those items may be filled in and added to the object library for future retrieval.
  • Region 1105 shows a user interface similar to that of user interface 21 of FIG. 2. Region 1105 shows various models of objects to be instantiated in a rendered environment. One may select a model library and perform other operations (including opening, closing, saving, and other general operations) using interfaces shown in region 1106. For example, other operations may include correlating objects to one another (including placing objects like desks and chairs on a floor as opposed to floating in the middle of a room), aligning objects, rotating objects, flipping objects, and deleting objects. In short, one may manipulate the objects placed in regions 1101 and 1104 to make them appear as they actually are in an actual room.
  • Region 1107 displays a variety of objects to be selected. For example, various types of office desks may be shown for placement as the desk in FIGS. 5A, 5B, and 6 (or various types of chairs as shown in FIG. 2). Upon selection of an object from region 1107, the system may indicate which points may be used to locate the object. For example, an image of a desk with various points at the corners of the desk may be shown to a user, to prompt the user to select the various points. The points may be selected in any order, or may be selected in a specified order. One advantage of selecting in a specified order is that it allows portable computer 203 to rotate and scale an object without substantial user interaction. Finally, region 1108 may show various model attributes of a selected object. These attributes may include color, texture, reflectivity, material composition, part number, and other options to be associated with the object (for example, an empty bookshelf or a full bookshelf, an empty desk or a cluttered desk, or the tracking identification of a computer on a desk). Using the interface as shown in FIG. 14, with additional user interface regions or without the shown regions, a user may guide the data capture process so as to quickly and efficiently model a room.
  • FIG. 15 shows a process for constructing an object in accordance with aspects of the present invention. In step 1201, a user identifies an object to be modeled. Next, in step 1202, the user adds a point to the object selected in step 1201 using the rangefinder 105. Next, in step 1203, the system determines whether the point was accurately determined (for example, within a tolerance ranging from fractions of an inch to several inches). If not, the system returns to step 1202, where another point is obtained. If so, the system determines in step 1204 whether the object selected in step 1201 may be positioned, oriented, and scaled with the current number of points (some objects requiring more points than others). If not, an additional point is obtained in step 1202. The designation of which points to obtain may be left to the user or may be identified by the system. If so, the system instantiates the object into the virtual environment in step 1205, with specific points of the object corresponding to values measured from actual captured locations by laser rangefinder 105. It is appreciated that the object may be instantiated in step 1205 prior to determination of its orientation and scale, using a default orientation and scale from the object library. Alternatively, one may construct an object from the captured points, without relying on the objects from the object library.
  • Finally, shown in broken line is step 1206, where testing may be performed. Various tests may be performed to attempt to complete the captured data of an environment. For instance, one may test whether a captured environment encloses 3D space. If not, then the missing pieces may be obtained. One may test whether all polygons face the interior of a room. While one may specify that both sides of polygons are to be mapped with graphics, a savings of run-time processing power is achieved when only the visible sides of polygons are mapped. Testing the polygons permits one to ensure that the sides of the polygons that are to have graphics are facing inward toward the inside of a room. One may test to ensure that polygons share vertices. The polygons may be adjusted so that vertices match. One may also test whether distinct rooms share the same rendered space (for instance, whether rooms intersect), whether there are excessive gaps between walls, whether walls are a constant thickness (where specified), and whether floors and ceilings are at the same level (where specified). The specification of these items may occur before, during, and/or after data capture. In one example, the height of the ceiling may be specified when a project is first started, along with other requirements including maximum error tolerances and the like.
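  • The test that every polygon faces the interior of its room can be sketched by comparing each polygon's winding-order normal with the direction toward a point known to lie inside the room. The mesh representation and the use of the room centroid as that interior point are assumptions made for this illustration.

```python
import numpy as np

def faces_interior(polygon, room_centroid):
    """True if the polygon's normal points toward the room interior (sketch).

    polygon       -- list of 3D vertices in winding order
    room_centroid -- any point known to lie inside the room
    """
    verts = np.asarray(polygon, float)
    # Normal implied by the winding order (from the first two edges).
    normal = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    to_interior = np.asarray(room_centroid, float) - verts.mean(axis=0)
    return float(np.dot(normal, to_interior)) > 0.0

# A floor quad with counter-clockwise winding and the room centroid above it.
floor = [(0, 0, 0), (4, 0, 0), (4, 5, 0), (0, 5, 0)]
print(faces_interior(floor, room_centroid=(2, 2.5, 1.3)))    # True
# Flipping the winding makes the same polygon face away from the room.
print(faces_interior(list(reversed(floor)), (2, 2.5, 1.3)))  # False
```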
  • Applications Of Modeled Environments
  • FIG. 16 shows how environments may be combined with additional information and forwarded to additional applications. For example, maps of an environment 1301, textures and/or images 1302, resources 1303 including weather information, environmental information, civil engineering structures, and the like, three-dimensional models 1304, and additional information regarding an environment (internal or external) 1305 may be combined in database 1306. This modeling information as stored in database 1306 may be provided to a data exchange system that interacts with a number of applications 1308-1309. For example, these applications may include training simulators for military and police units, architects, real estate agents, interior and exterior decorators, civil planners and civil engineers, and the like.
  • Revenue may be generated from the ability to exchange information as shown in FIG. 16. For example, source data 1306 may be supplied for a fixed price to exchange system 1307. Alternatively, applications may only require certain sets of information from source data 1306. Accordingly, applications 1308-1309 may only wish to pay for the specific data sets that they require. These and other approaches may be used to generate income based on the modeling techniques described herein. The system may support a variety of runtime formats. By adding a new driver for each format, one may export a modeled environment to a desired format (including VRML, CAD, binary separate planes, CTDB, OpenFlight, GIS, SEDRIS, and the like).
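  • The "new driver for each format" approach is essentially a registry of exporters keyed by format name. A minimal sketch of that pattern follows; the format strings come from the list above, while the registry, function names, and placeholder bodies are illustrative assumptions.

```python
EXPORTERS = {}

def exporter(fmt):
    """Register a driver function for one runtime format."""
    def register(func):
        EXPORTERS[fmt] = func
        return func
    return register

@exporter("VRML")
def export_vrml(environment, path):
    # Placeholder: a real driver would walk the modeled environment and
    # write a VRML node for each instantiated object.
    with open(path, "w") as out:
        out.write("#VRML V2.0 utf8\n")

@exporter("OpenFlight")
def export_openflight(environment, path):
    # Placeholder for an OpenFlight driver.
    raise NotImplementedError("OpenFlight driver not written yet")

def export(environment, fmt, path):
    """Export a modeled environment using the driver registered for fmt."""
    driver = EXPORTERS.get(fmt)
    if driver is None:
        raise ValueError(f"no driver registered for format {fmt!r}")
    driver(environment, path)

export({"objects": []}, "VRML", "room.wrl")
print(sorted(EXPORTERS))   # formats currently supported
```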
  • To establish the specific requirements, tolerances, type of object library to use, and the like, a customer may be interviewed to determine the extent of the modeling requirements. For instance, a single house may only need a single modeler. However, a complex may require two or more mobile data capture units. Further, these mobile data capture units may have redundant capabilities or may have complementary data capturing capabilities (for instance, one mobile data capturing unit may have a visual camera associated with a rangefinder and a second mobile data capturing unit may have a thermal camera associated with the rangefinder). The requirements may also specify the accuracy needed for the project. If, for example, a project requires a certain accuracy and a mobile data capture unit exceeds the predefined accuracy tolerance, then the mobile data capture unit may be alerted to this issue and instructed to reacquire data points to conform to the accuracy tolerance. Further, the data assembly station may coordinate mobile data capture units to work together as a team to capture environments without duplicating effort. To this end, the data assembly station may assign tasks to the mobile data capture units and monitor the completion of each task.
  • Aspects of the present invention have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure.

Claims (17)

1. A process for reducing errors in a rendered environment comprising the steps of:
modeling a first room from a first position within said first room;
modeling at least a second room connected to said first room from a second position;
returning to said first position within said first room and obtaining distance information from said first position;
determining an error value associated with the environment based on the distance information from said first position; and,
sequentially subtracting the error value from the modeling of said first and said at least second room.
2. The process according to claim 1, further comprising:
modeling at least a third room connected to said second room.
3. The process according to claim 1, wherein the modeling of the first room or the at least a second room includes a process for constructing a virtual object based on actual environment data comprising:
receiving an identification of an object to be created;
receiving at least one point to be associated with said object, said at least one point identified by a rangefinder;
instantiating said object based on said at least one point.
4. The process according to claim 3, wherein the process for constructing a virtual object based on actual environment data further comprises:
obtaining additional points; and
modifying said instantiated object based on said additional points.
5. The process according to claim 3, wherein the process for constructing a virtual object based on actual environment data further comprises:
instructing a user which points of said object a user is to capture with said rangefinder.
6. The process according to claim 5, wherein said points are from locations on a physical object in a room to be modeled.
7. The process according to claim 3, wherein the process for constructing a virtual object based on actual environment data further comprises:
capturing image information with a camera associated with said rangefinder; and
adding the image information to said object.
8. The process according to claim 3, wherein said rangefinder is a laser rangefinder.
9. A modeling and correction system configured to reduce errors in a rendered environment comprising:
a contextual data capture unit configured to receive environment data from a first position and at least a second position, said data capture unit including a rangefinder for measuring specific points selectable by a user;
a data assembly station configured to receive information from said data capture unit, and further having computer-readable instructions on a computer-readable medium that when executed model an environment based on environment data captured by said data capture unit, wherein the modeling comprises:
after receiving the environment data from the first position and at least the second position, receiving a second reading of environment data from the first position;
determining an error value associated with the environment based on the second reading from said first position; and
sequentially subtracting the error value from the modeling of said first position and said at least second position.
10. The system according to claim 9, wherein the first position is in a first room and the at least second position is in a second room connected to the first room.
11. The system according to claim 10, wherein the contextual data capture unit is configured to receive environment data from at least a third position.
12. The system according to claim 9, wherein the computer-executable instructions when executed further comprise a process to construct a virtual object based on actual environment data comprising:
receiving an identification of an object to be created;
receiving at least one point to be associated with said object, said at least one point identified by a rangefinder; and
instantiating said object based on said at least one point.
13. The system according to claim 12, wherein the process for constructing a virtual object based on actual environment data further comprises:
obtaining additional points; and
modifying said instantiated object based on said additional points.
14. The system according to claim 12, wherein the process for constructing a virtual object based on actual environment data further comprises:
instructing a user which points of said object a user is to capture with said rangefinder.
15. The system according to claim 14, wherein said points are from locations on a physical object in a room to be modeled.
16. The system according to claim 12, wherein the process for constructing a virtual object based on actual environment data further comprises:
capturing image information with a camera associated with said rangefinder; and
adding the image information to said object.
17. The system according to claim 9, wherein said rangefinder is a laser rangefinder.
US11/624,593 2002-12-10 2007-01-18 Virtual environment capture Abandoned US20070118805A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/624,593 US20070118805A1 (en) 2002-12-10 2007-01-18 Virtual environment capture

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US43200902P 2002-12-10 2002-12-10
US10/441,121 US7398481B2 (en) 2002-12-10 2003-05-20 Virtual environment capture
US11/624,593 US20070118805A1 (en) 2002-12-10 2007-01-18 Virtual environment capture

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/441,121 Division US7398481B2 (en) 2002-12-10 2003-05-20 Virtual environment capture

Publications (1)

Publication Number Publication Date
US20070118805A1 true US20070118805A1 (en) 2007-05-24

Family

ID=32474653

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/441,121 Expired - Fee Related US7398481B2 (en) 2002-12-10 2003-05-20 Virtual environment capture
US11/624,593 Abandoned US20070118805A1 (en) 2002-12-10 2007-01-18 Virtual environment capture

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/441,121 Expired - Fee Related US7398481B2 (en) 2002-12-10 2003-05-20 Virtual environment capture

Country Status (1)

Country Link
US (2) US7398481B2 (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080270945A1 (en) * 2007-04-24 2008-10-30 Fatdoor, Inc. Interior spaces in a geo-spatial environment
CN101363717A (en) * 2007-08-10 2009-02-11 莱卡地球系统公开股份有限公司 Method and measuring system for contactless coordinate measurement of the surface of an object
US20090046895A1 (en) * 2007-08-10 2009-02-19 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement on an object surface
US20090118846A1 (en) * 1999-05-17 2009-05-07 Invensys Systems, Inc. Control systems and methods with smart blocks
WO2009155483A1 (en) * 2008-06-20 2009-12-23 Invensys Systems, Inc. Systems and methods for immersive interaction with actual and/or simulated facilities for process, environmental and industrial control
US20100100851A1 (en) * 2008-10-16 2010-04-22 International Business Machines Corporation Mapping a real-world object in a personal virtual world
US20100138793A1 (en) * 2008-12-02 2010-06-03 Microsoft Corporation Discrete objects for building virtual environments
WO2011069112A1 (en) * 2009-12-03 2011-06-09 Military Wraps Research & Development Realistic immersive training environments
US20110171623A1 (en) * 2008-08-19 2011-07-14 Cincotti K Dominic Simulated structures for urban operations training and methods and systems for creating same
US8023500B2 (en) 1996-08-20 2011-09-20 Invensys Systems, Inc. Methods for process control with change updates
US8090452B2 (en) 1999-06-11 2012-01-03 Invensys Systems, Inc. Methods and apparatus for control using control devices that provide a virtual machine environment and that communicate via an IP network
US8127060B2 (en) 2009-05-29 2012-02-28 Invensys Systems, Inc Methods and apparatus for control configuration with control objects that are fieldbus protocol-aware
US20120075296A1 (en) * 2008-10-08 2012-03-29 Strider Labs, Inc. System and Method for Constructing a 3D Scene Model From an Image
US8368640B2 (en) 1999-05-17 2013-02-05 Invensys Systems, Inc. Process control configuration system with connection validation and configuration
US8463964B2 (en) 2009-05-29 2013-06-11 Invensys Systems, Inc. Methods and apparatus for control configuration with enhanced change-tracking
US8597026B2 (en) 2008-04-11 2013-12-03 Military Wraps, Inc. Immersive training scenario systems and related methods
US20150097828A1 (en) * 2013-10-09 2015-04-09 Trimble Navigation Limited Method and system for 3d modeling using feature detection
US9020240B2 (en) 2007-08-10 2015-04-28 Leica Geosystems Ag Method and surveying system for noncontact coordinate measurement on an object surface
CN105122304A (en) * 2012-11-14 2015-12-02 微软技术许可有限责任公司 Real-time design of living spaces with augmented reality
US9710958B2 (en) 2011-11-29 2017-07-18 Samsung Electronics Co., Ltd. Image processing apparatus and method
US9965837B1 (en) 2015-12-03 2018-05-08 Quasar Blu, LLC Systems and methods for three dimensional environmental modeling
DE102010043136B4 (en) * 2010-10-29 2018-10-31 Hilti Aktiengesellschaft Measuring device and method for a non-contact measurement of distances at a target object
WO2018199351A1 (en) * 2017-04-26 2018-11-01 라인 가부시키가이샤 Method and device for generating image file including sensor data as metadata
US10330441B2 (en) 2008-08-19 2019-06-25 Military Wraps, Inc. Systems and methods for creating realistic immersive training environments and computer programs for facilitating the creation of same
US10607328B2 (en) 2015-12-03 2020-03-31 Quasar Blu, LLC Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property
US11087445B2 (en) 2015-12-03 2021-08-10 Quasar Blu, LLC Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property

Families Citing this family (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8751950B2 (en) * 2004-08-17 2014-06-10 Ice Edge Business Solutions Ltd. Capturing a user's intent in design software
US20050131658A1 (en) * 2003-12-16 2005-06-16 Mei Hsaio L.S. Systems and methods for 3D assembly venue modeling
US7728833B2 (en) * 2004-08-18 2010-06-01 Sarnoff Corporation Method for generating a three-dimensional model of a roof structure
RU2452033C2 (en) * 2005-01-03 2012-05-27 Опсигал Контрол Системз Лтд. Systems and methods for night surveillance
US20070030348A1 (en) * 2005-08-04 2007-02-08 Sony Ericsson Mobile Communications Ab Wireless communication device with range finding functions
US20070218900A1 (en) 2006-03-17 2007-09-20 Raj Vasant Abhyanker Map based neighborhood search and community contribution
US8874489B2 (en) 2006-03-17 2014-10-28 Fatdoor, Inc. Short-term residential spaces in a geo-spatial environment
US9459622B2 (en) 2007-01-12 2016-10-04 Legalforce, Inc. Driverless vehicle commerce network and community
US8965409B2 (en) 2006-03-17 2015-02-24 Fatdoor, Inc. User-generated community publication in an online neighborhood social network
US8738545B2 (en) 2006-11-22 2014-05-27 Raj Abhyanker Map based neighborhood search and community contribution
US9070101B2 (en) 2007-01-12 2015-06-30 Fatdoor, Inc. Peer-to-peer neighborhood delivery multi-copter and method
US9071367B2 (en) 2006-03-17 2015-06-30 Fatdoor, Inc. Emergency including crime broadcast in a neighborhood social network
US9037516B2 (en) 2006-03-17 2015-05-19 Fatdoor, Inc. Direct mailing in a geo-spatial environment
US9098545B2 (en) 2007-07-10 2015-08-04 Raj Abhyanker Hot news neighborhood banter in a geo-spatial social network
US9002754B2 (en) 2006-03-17 2015-04-07 Fatdoor, Inc. Campaign in a geo-spatial environment
US9373149B2 (en) 2006-03-17 2016-06-21 Fatdoor, Inc. Autonomous neighborhood vehicle commerce network and community
US9064288B2 (en) 2006-03-17 2015-06-23 Fatdoor, Inc. Government structures and neighborhood leads in a geo-spatial environment
US8732091B1 (en) 2006-03-17 2014-05-20 Raj Abhyanker Security in a geo-spatial environment
US8060344B2 (en) * 2006-06-28 2011-11-15 Sam Stathis Method and system for automatically performing a study of a multidimensional space
FR2905962B1 (en) * 2006-09-15 2011-11-11 Chalets Et Maisons Bois Poirot METHOD OF CALCULATING THE STRAMMENT OF A WALL IN STACKED MADRIERS
US8863245B1 (en) 2006-10-19 2014-10-14 Fatdoor, Inc. Nextdoor neighborhood social network method, apparatus, and system
JP2008108246A (en) * 2006-10-23 2008-05-08 Internatl Business Mach Corp <Ibm> Method, system and computer program for generating virtual image according to position of browsing person
EP2252951B1 (en) 2008-03-11 2021-05-05 Ice Edge Business Solutions, Ltd. Automatically creating and modifying furniture layouts in design software
US9811893B2 (en) 2008-11-04 2017-11-07 The United States Of America, As Represented By The Secretary Of The Navy Composable situational awareness visualization system
DE102008054453A1 (en) * 2008-12-10 2010-06-17 Robert Bosch Gmbh Measuring system for measuring rooms and / or objects
US20100225490A1 (en) * 2009-03-05 2010-09-09 Leuthardt Eric C Postural information system and method including central determining of subject advisory information based on subject status information and postural influencer status information
US20100225491A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100228490A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100271200A1 (en) * 2009-03-05 2010-10-28 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method including determining response to subject advisory information
US20100228158A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method including device level determining of subject advisory information based on subject status information and postural influencer status information
US9024976B2 (en) * 2009-03-05 2015-05-05 The Invention Science Fund I, Llc Postural information system and method
US20100225473A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100228154A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method including determining response to subject advisory information
US20100228487A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100228153A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100228159A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100225498A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Postural information system and method
US20100228488A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100228494A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method including determining subject advisory information based on prior determined subject advisory information
US20100228492A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of State Of Delaware Postural information system and method including direction generation based on collection of subject advisory information
US20100225474A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100228495A1 (en) * 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method including determining subject advisory information based on prior determined subject advisory information
WO2011063034A1 (en) * 2009-11-17 2011-05-26 Rtp, Llc Systems and methods for augmented reality
AU2011252083B2 (en) * 2010-05-10 2013-06-13 Leica Geosystems Ag Surveying method
US9898862B2 (en) 2011-03-16 2018-02-20 Oldcastle Buildingenvelope, Inc. System and method for modeling buildings and building products
US8620079B1 (en) * 2011-05-10 2013-12-31 First American Data Tree Llc System and method for extracting information from documents
JP5269972B2 (en) * 2011-11-29 2013-08-21 株式会社東芝 Electronic device and three-dimensional model generation support method
US8965741B2 (en) 2012-04-24 2015-02-24 Microsoft Corporation Context aware surface scanning and reconstruction
US9372088B2 (en) * 2012-08-03 2016-06-21 Robotic Research, Llc Canine handler operations positioning system
AU2013203521B2 (en) * 2012-08-13 2016-05-26 Auto-Measure Pty Limited Building modelling system
US9589078B2 (en) * 2012-09-27 2017-03-07 Futurewei Technologies, Inc. Constructing three dimensional model using user equipment
US9070217B2 (en) * 2013-03-15 2015-06-30 Daqri, Llc Contextual local image recognition dataset
EP2973414B1 (en) 2013-03-15 2021-09-01 Robert Bosch GmbH Apparatus for generation of a room model
US9746330B2 (en) * 2013-08-03 2017-08-29 Robotic Research, Llc System and method for localizing two or more moving nodes
CN103593514B (en) * 2013-10-30 2016-06-01 中国运载火箭技术研究院 Multi-spectral-coversynthetic synthetic environment simulation system
US9439367B2 (en) 2014-02-07 2016-09-13 Arthi Abhyanker Network enabled gardening with a remotely controllable positioning extension
US9457901B2 (en) 2014-04-22 2016-10-04 Fatdoor, Inc. Quadcopter with a printable payload extension system and method
US9004396B1 (en) 2014-04-24 2015-04-14 Fatdoor, Inc. Skyteboard quadcopter and method
US9022324B1 (en) 2014-05-05 2015-05-05 Fatdoor, Inc. Coordination of aerial vehicles through a central server
US9441981B2 (en) 2014-06-20 2016-09-13 Fatdoor, Inc. Variable bus stops across a bus route in a regional transportation network
US9971985B2 (en) 2014-06-20 2018-05-15 Raj Abhyanker Train based community
US9451020B2 (en) 2014-07-18 2016-09-20 Legalforce, Inc. Distributed communication of independent autonomous vehicles to provide redundancy and performance
KR20160024143A (en) * 2014-08-25 2016-03-04 삼성전자주식회사 Method and Electronic Device for image processing
US9911232B2 (en) 2015-02-27 2018-03-06 Microsoft Technology Licensing, Llc Molding and anchoring physically constrained virtual environments to real-world environments
US10671066B2 (en) 2015-03-03 2020-06-02 PreNav, Inc. Scanning environments and tracking unmanned aerial vehicles
US9898864B2 (en) 2015-05-28 2018-02-20 Microsoft Technology Licensing, Llc Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
US9836117B2 (en) 2015-05-28 2017-12-05 Microsoft Technology Licensing, Llc Autonomous drones for tactile feedback in immersive virtual reality
US10617956B2 (en) * 2016-09-30 2020-04-14 Sony Interactive Entertainment Inc. Methods for providing interactive content in a virtual reality scene to guide an HMD user to safety within a real world space
US10643089B2 (en) 2016-10-13 2020-05-05 Ricoh Company, Ltd. Information processing system to obtain and manage images of a property
JP6809913B2 (en) * 2017-01-26 2021-01-06 パナソニック株式会社 Robots, robot control methods, and map generation methods
WO2018144396A1 (en) * 2017-02-02 2018-08-09 PreNav, Inc. Tracking image collection for digital capture of environments, and associated systems and methods
US20180330325A1 (en) 2017-05-12 2018-11-15 Zippy Inc. Method for indicating delivery location and software for same
WO2019014620A1 (en) * 2017-07-13 2019-01-17 Zillow Group, Inc. Capturing, connecting and using building interior data from mobile devices
CN107657609B (en) * 2017-09-29 2020-11-10 Xi'an Modern Chemistry Research Institute Method for obtaining perforation density of target plate based on laser scanning
US11393179B2 (en) 2020-10-09 2022-07-19 Open Space Labs, Inc. Rendering depth-based three-dimensional model with integrated image frames

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU4336300A (en) 1999-04-08 2000-10-23 Internet Pictures Corporation Virtual theater
US6590640B1 (en) * 2000-07-20 2003-07-08 Board Of Regents, The University Of Texas System Method and apparatus for mapping three-dimensional features
WO2002035909A2 (en) * 2000-11-03 2002-05-10 Siemens Corporate Research, Inc. Video-supported planning and design with physical marker objects sign
WO2003049455A1 (en) * 2001-11-30 2003-06-12 Zaxel Systems, Inc. Image-based rendering for 3d object viewing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330523B1 (en) * 1996-04-24 2001-12-11 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three-dimensional objects
US20020072881A1 (en) * 2000-12-08 2002-06-13 Tracker R&D, Llc System for dynamic and automatic building mapping
US20050046373A1 (en) * 2001-11-03 2005-03-03 Aldred Micheal David Autonomous machine

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8023500B2 (en) 1996-08-20 2011-09-20 Invensys Systems, Inc. Methods for process control with change updates
US8368640B2 (en) 1999-05-17 2013-02-05 Invensys Systems, Inc. Process control configuration system with connection validation and configuration
US20090118846A1 (en) * 1999-05-17 2009-05-07 Invensys Systems, Inc. Control systems and methods with smart blocks
US20090118845A1 (en) * 1999-05-17 2009-05-07 Invensys Systems, Inc. Control system configuration and methods with object characteristic swapping
US8229579B2 (en) 1999-05-17 2012-07-24 Invensys Systems, Inc. Control systems and methods with versioning
US8225271B2 (en) 1999-05-17 2012-07-17 Invensys Systems, Inc. Apparatus for control systems with objects that are associated with live data
US8028275B2 (en) 1999-05-17 2011-09-27 Invensys Systems, Inc. Control systems and methods with smart blocks
US8028272B2 (en) 1999-05-17 2011-09-27 Invensys Systems, Inc. Control system configurator and methods with edit selection
US8090452B2 (en) 1999-06-11 2012-01-03 Invensys Systems, Inc. Methods and apparatus for control using control devices that provide a virtual machine environment and that communicate via an IP network
US20080270945A1 (en) * 2007-04-24 2008-10-30 Fatdoor, Inc. Interior spaces in a geo-spatial environment
US9020240B2 (en) 2007-08-10 2015-04-28 Leica Geosystems Ag Method and surveying system for noncontact coordinate measurement on an object surface
US8244030B2 (en) * 2007-08-10 2012-08-14 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement of an object surface
CN101363717A (en) * 2007-08-10 2009-02-11 Leica Geosystems Ag Method and measuring system for contactless coordinate measurement of the surface of an object
US8036452B2 (en) * 2007-08-10 2011-10-11 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement on an object surface
US20110317880A1 (en) * 2007-08-10 2011-12-29 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement on an object surface
US20090046895A1 (en) * 2007-08-10 2009-02-19 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement on an object surface
US8597026B2 (en) 2008-04-11 2013-12-03 Military Wraps, Inc. Immersive training scenario systems and related methods
WO2009155483A1 (en) * 2008-06-20 2009-12-23 Invensys Systems, Inc. Systems and methods for immersive interaction with actual and/or simulated facilities for process, environmental and industrial control
US20090319058A1 (en) * 2008-06-20 2009-12-24 Invensys Systems, Inc. Systems and methods for immersive interaction with actual and/or simulated facilities for process, environmental and industrial control
US8594814B2 (en) * 2008-06-20 2013-11-26 Invensys Systems, Inc. Systems and methods for immersive interaction with actual and/or simulated facilities for process, environmental and industrial control
CN104407518A (en) * 2008-06-20 2015-03-11 Invensys Systems, Inc. Systems and methods for immersive interaction with actual and/or simulated facilities for process, environmental and industrial control
US20110171623A1 (en) * 2008-08-19 2011-07-14 Cincotti K Dominic Simulated structures for urban operations training and methods and systems for creating same
US8764456B2 (en) 2008-08-19 2014-07-01 Military Wraps, Inc. Simulated structures for urban operations training and methods and systems for creating same
US10330441B2 (en) 2008-08-19 2019-06-25 Military Wraps, Inc. Systems and methods for creating realistic immersive training environments and computer programs for facilitating the creation of same
US20120075296A1 (en) * 2008-10-08 2012-03-29 Strider Labs, Inc. System and Method for Constructing a 3D Scene Model From an Image
US20100100851A1 (en) * 2008-10-16 2010-04-22 International Business Machines Corporation Mapping a real-world object in a personal virtual world
US20100138793A1 (en) * 2008-12-02 2010-06-03 Microsoft Corporation Discrete objects for building virtual environments
US8463964B2 (en) 2009-05-29 2013-06-11 Invensys Systems, Inc. Methods and apparatus for control configuration with enhanced change-tracking
US8127060B2 (en) 2009-05-29 2012-02-28 Invensys Systems, Inc. Methods and apparatus for control configuration with control objects that are fieldbus protocol-aware
WO2011069112A1 (en) * 2009-12-03 2011-06-09 Military Wraps Research & Development Realistic immersive training environments
DE102010043136B4 (en) * 2010-10-29 2018-10-31 Hilti Aktiengesellschaft Measuring device and method for non-contact measurement of distances to a target object
US9710958B2 (en) 2011-11-29 2017-07-18 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN105122304A (en) * 2012-11-14 2015-12-02 Microsoft Technology Licensing, LLC Real-time design of living spaces with augmented reality
US9355495B2 (en) * 2013-10-09 2016-05-31 Trimble Navigation Limited Method and system for 3D modeling using feature detection
WO2015053954A1 (en) * 2013-10-09 2015-04-16 Trimble Navigation Limited Method and system for 3d modeling using feature detection
US20150097828A1 (en) * 2013-10-09 2015-04-09 Trimble Navigation Limited Method and system for 3d modeling using feature detection
US9965837B1 (en) 2015-12-03 2018-05-08 Quasar Blu, LLC Systems and methods for three dimensional environmental modeling
US10339644B2 (en) 2015-12-03 2019-07-02 Quasar Blu, LLC Systems and methods for three dimensional environmental modeling
US10607328B2 (en) 2015-12-03 2020-03-31 Quasar Blu, LLC Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property
US11087445B2 (en) 2015-12-03 2021-08-10 Quasar Blu, LLC Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property
US11798148B2 (en) 2015-12-03 2023-10-24 Echosense, Llc Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property
WO2018199351A1 (en) * 2017-04-26 2018-11-01 LINE Corporation Method and device for generating image file including sensor data as metadata

Also Published As

Publication number Publication date
US20040109012A1 (en) 2004-06-10
US7398481B2 (en) 2008-07-08

Similar Documents

Publication Publication Date Title
US7398481B2 (en) Virtual environment capture
US11408738B2 (en) Automated mapping information generation from inter-connected images
US11238652B2 (en) Presenting integrated building information using building models
US11480433B2 (en) Use of automated mapping information from inter-connected images
US11243656B2 (en) Automated tools for generating mapping information for buildings
US10262231B2 (en) Apparatus and method for spatially referencing images
Baillot et al. Authoring of physical models using mobile computers
JP4685905B2 (en) System for texturizing of electronic display objects
US7130774B2 (en) System for creating measured drawings
US20230032888A1 (en) Automated Determination Of Acquisition Locations Of Acquired Building Images Based On Determined Surrounding Room Data
CN114357598A (en) Automated tool for generating building mapping information
US20120253751A1 (en) Method, tool, and device for assembling a plurality of partial floor plans into a combined floor plan
US20040122628A1 (en) Method and device for generating two-dimensional floor plans
Khoshelham et al. Indoor mapping eyewear: geometric evaluation of spatial mapping capability of HoloLens
Barrile et al. Geomatics and augmented reality experiments for the cultural heritage
US11395102B2 (en) Field cooperation system and management device
KR20160027735A (en) Apparatus and method for building indoor map using point cloud
KR20210086837A (en) Interior simulation method using augmented reality (AR)
KR100757751B1 (en) Apparatus and method for creating an environment map of an indoor environment
El-Hakim et al. Sensor based creation of indoor virtual environment models
US20220130064A1 (en) Feature Determination, Measurement, and Virtualization From 2-D Image Capture
Kalantari et al. 3D indoor surveying–a low cost approach
Hub et al. Interactive localization and recognition of objects for the blind
EP2323051A1 (en) Method and system for detecting and displaying graphical models and alphanumeric data
CN219301516U (en) Measuring system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION