US20100177117A1 - Contextual templates for modifying objects in a virtual universe - Google Patents

Contextual templates for modifying objects in a virtual universe

Info

Publication number
US20100177117A1
Authority
US
United States
Prior art keywords
style
contextual
avatar
template
computer
Prior art date
Legal status
Abandoned
Application number
US12/353,656
Inventor
Peter George Finn
Rick Allen Hamilton II
Brian Marshall O'Connell
Clifford Alan Pickover
Keith Raymond Walker
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Priority to US12/353,656
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FINN, PETER GEORGE, HAMILTON, RICK ALLEN, II, O'CONNELL, BRIAN MARSHALL, PICKOVER, CLIFFORD ALAN, WALKER, KEITH RAYMOND
Publication of US20100177117A1
Priority to US13/531,265 (US8458603B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1438 Digital output to display device controlling a plurality of local displays using more than one graphics controller
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/14 Display of multiple viewports
    • G09G 5/22 Display of characters or indicia using display control signals derived from coded signals representing the characters or indicia, e.g. with a character-code memory
    • G09G 5/24 Generation of individual character patterns

Definitions

  • the present invention is related generally to a data processing system and in particular to a method and apparatus for managing objects in a virtual universe. More particularly, the present invention is directed to a computer implemented method, apparatus, and computer usable program code for modifying a style of an object using predefined contextual templates.
  • a virtual universe also referred to as a metaverse or “3-D Internet”, is a computer-based simulated environment.
  • Examples of virtual universes include Second Life®, Entropia Universe, The Sims Online®, There, and Red Light Center.
  • Other examples of virtual universes include multiplayer online games, such as EverQuest®, Ultima Online®, Lineage®, and World of Warcraft® (WoW).
  • Virtual universes are represented using three dimensional (3-D) graphics and landscapes.
  • the properties and elements of the virtual universe often resemble the properties of the real world, such as in terms of physics, houses, and landscapes.
  • Virtual universes may be populated by thousands of users simultaneously. In a virtual universe, users are sometimes referred to as “residents.”
  • The users in a virtual universe can interact, inhabit, and traverse the virtual universe through the use of avatars.
  • An avatar is a graphical representation of a user that other users in the virtual universe can see and interact with.
  • the avatar's appearance is typically selected by the user and often takes the form of a cartoon-like representation of a human.
  • avatars may also have non-human appearances, such as animals, elves, trolls, orcs, fairies, and other fantasy creatures.
  • a viewable field is the field of view for a particular user.
  • the viewable field for a particular user may include objects, as well as avatars belonging to other users.
  • An object is an element in a virtual universe that does not represent a user.
  • An object may be, for example, a building, a statue, a billboard, a sign, or an advertisement in the virtual universe. Objects are prevalent in virtual universes and may be used for various purposes. However, the creation and maintenance of high quality virtual universe objects is frequently expensive and time consuming.
  • a computer implemented method, apparatus, and computer usable program code for modifying object styles in a virtual universe.
  • An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates.
  • the first contextual style template comprises first geometric and texture data to display the object with a first style.
  • a second contextual style template is identified from the plurality of contextual style templates.
  • the set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style.
  • the object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
  • FIG. 1 is a pictorial representation of a network of data processing systems in which illustrative embodiments may be implemented;
  • FIG. 2 is a diagram of a data processing system in accordance with an illustrative embodiment of the present invention;
  • FIG. 3 is a block diagram illustrating a virtual universe grid server in accordance with an illustrative embodiment;
  • FIG. 4 is a block diagram illustrating a real world user identifier in accordance with an illustrative embodiment;
  • FIG. 5 is a block diagram of an object avatar rendering table in accordance with an illustrative embodiment;
  • FIG. 6 is a block diagram of a viewable area for an object in accordance with an illustrative embodiment;
  • FIG. 7 is a block diagram of a viewable area for an object having a focal point at a location other than the location of the object in accordance with an illustrative embodiment;
  • FIG. 8 is a block diagram of a restrictions table in accordance with an illustrative embodiment;
  • FIG. 9 is a flowchart illustrating a process for object based avatar tracking using object avatar rendering tables in accordance with an illustrative embodiment; and
  • FIG. 10 is a flowchart illustrating a process for modifying the style of an object in a virtual universe using contextual style templates in accordance with an illustrative embodiment.
  • the present invention may be embodied as a system, method or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
  • the computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
  • More specific examples of the computer-readable medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CDROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage device.
  • a computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
  • a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave.
  • the computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc.
  • Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Examples of Internet Service Providers include, without limitation, AT&T, MCI, Sprint, EarthLink, MSN, and GTE.
  • These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • FIGS. 1-2 are exemplary diagrams of data processing environments in which illustrative embodiments may be implemented. It should be appreciated that FIGS. 1-2 are only exemplary and are not intended to assert or imply any limitation with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made.
  • FIG. 1 depicts a pictorial representation of a network of data processing systems in which illustrative embodiments may be implemented.
  • Network data processing system 100 is a network of computers in which the illustrative embodiments may be implemented.
  • Network data processing system 100 contains network 102 , which is the medium used to provide communications links between various devices and computers connected together within network data processing system 100 .
  • Network 102 may include connections, such as wire, wireless communication links, or fiber optic cables.
  • server 104 and server 106 connect to network 102 along with storage unit 108 .
  • Servers 104 and 106 are servers associated with a virtual universe. Users of the virtual universe have agents on servers 104 and 106 .
  • An agent is a user's account.
  • a user uses an agent to build an avatar representing the user.
  • the agent is tied to the inventory of assets or possessions the user owns in the virtual universe.
  • a region in a virtual universe typically resides on a single server, such as, without limitation, server 104 .
  • a region is a virtual area of land within the virtual universe.
  • Clients 110 , 112 , and 114 connect to network 102 .
  • Clients 110 , 112 , and 114 may be, for example, personal computers or network computers.
  • server 104 provides data, such as boot files, operating system images, and applications to clients 110 , 112 , and 114 .
  • Clients 110 , 112 , and 114 are clients to server 104 in this example.
  • Geometric data is distributed to a user's client computer as textual coordinates.
  • Textures are distributed to a user's client computer as graphics files, such as Joint Photographic Experts Group (JPEG) files.
  • Effects data is typically rendered by the user's client according to the user's preferences and the user's client device capabilities.
  • network data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another.
  • network data processing system 100 also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN).
  • FIG. 1 is intended as an example, and not as an architectural limitation for the different illustrative embodiments.
  • Network data processing system 100 may include additional servers, clients, and other devices not shown.
  • Turning to FIG. 2, data processing system 200 includes communications fabric 202 , which provides communications between processor unit 204 , memory 206 , persistent storage 208 , communications unit 210 , input/output (I/O) unit 212 , and display 214 .
  • Processor unit 204 serves to execute instructions for software that may be loaded into memory 206 .
  • Processor unit 204 may be a set of one or more processors or may be a multi-processor core, depending on the particular implementation. Further, processor unit 204 may be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 204 may be a symmetric multi-processor system containing multiple processors of the same type.
  • Memory 206 may be, for example, a random access memory or any other suitable volatile or non-volatile storage device.
  • Persistent storage 208 may take various forms depending on the particular implementation.
  • persistent storage 208 may contain one or more components or devices.
  • persistent storage 208 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above.
  • the media used by persistent storage 208 also may be removable.
  • a removable hard drive may be used for persistent storage 208 .
  • Communications unit 210 , in these examples, provides for communications with other data processing systems or devices.
  • communications unit 210 is a network interface card.
  • Communications unit 210 may provide communications through the use of either or both physical and wireless communications links.
  • Input/output unit 212 allows for input and output of data with other devices that may be connected to data processing system 200 .
  • input/output unit 212 may provide a connection for user input through a keyboard and mouse. Further, input/output unit 212 may send output to a printer.
  • Display 214 provides a mechanism to display information to a user.
  • Instructions for the operating system and applications or programs are located on persistent storage 208 . These instructions may be loaded into memory 206 for execution by processor unit 204 .
  • the processes of the different embodiments may be performed by processor unit 204 using computer implemented instructions, which may be located in a memory, such as memory 206 .
  • These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and executed by a processor in processor unit 204 .
  • the program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as memory 206 or persistent storage 208 .
  • Program code 226 is located in a functional form on computer readable media 228 that is selectively removable and may be loaded onto or transferred to data processing system 200 for execution by processor unit 204 .
  • Program code 226 and computer readable media 228 form computer program product 220 in these examples.
  • computer readable media 228 may be in a tangible form, such as, for example, an optical or magnetic disc that is inserted or placed into a drive or other device that is part of persistent storage 208 for transfer onto a storage device, such as a hard drive that is part of persistent storage 208 .
  • computer readable media 228 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory that is connected to data processing system 200 .
  • the tangible form of computer readable media 228 is also referred to as computer recordable storage media. In some instances, computer readable media 228 may not be removable.
  • program code 226 may be transferred to data processing system 200 from computer readable media 228 through a communications link to communications unit 210 and/or through a connection to input/output unit 212 .
  • the communications link and/or the connection may be physical or wireless in the illustrative examples.
  • the computer readable media also may take the form of non-tangible media, such as communications links or wireless transmissions containing the program code.
  • The different components illustrated for data processing system 200 are not meant to provide architectural limitations to the manner in which different embodiments may be implemented.
  • the different illustrative embodiments may be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system 200 .
  • Other components shown in FIG. 2 can be varied from the illustrative examples shown.
  • a storage device in data processing system 200 is any hardware apparatus that may store data.
  • Memory 206 , persistent storage 208 , and computer readable media 228 are examples of storage devices in a tangible form.
  • a bus system may be used to implement communications fabric 202 and may be comprised of one or more buses, such as a system bus or an input/output bus.
  • the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system.
  • a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter.
  • a memory may be, for example, memory 206 or a cache such as found in an interface and memory controller hub that may be present in communications fabric 202 .
  • a virtual universe is a computer-simulated environment, such as, without limitation, Second Life®, Entropia Universe, The Sims Online®, There, Red Light Center, EverQuest®, Ultima Online®, Lineage®, and World of Warcraft®.
  • a virtual universe is typically represented using three dimensional (3-D) graphics and landscapes.
  • the users in the virtual universe interact, inhabit, and traverse the virtual universe through avatars.
  • Avatars represent users and are controlled or associated with users.
  • a user can view objects and other avatars within a given proximity of the user's avatar.
  • the virtual universe grid software determines which objects and other avatars are within the given proximity of the user's avatar according to the geometries and textures that are currently loaded in the user's virtual universe client.
  • the virtual universe grid determines the length of time that a user views an object or other avatar in proximity of the user based on processing the data sent to each virtual universe client.
  • The illustrative embodiments recognize that objects are prevalent in virtual universes and objects may be used for various purposes, including advertising, product placement, and information dissemination. However, an object created for one particular purpose, such as advertising, may not be appropriate for a different purpose, such as product placement. In addition, a single object style may not be suitable for placement throughout the virtual universe. For example, consider a corporate logo object. For advertising purposes, the corporate logo object should be clearly readable and stand out from the background. For other purposes, such as product placement, the logo should blend seamlessly with the background. A single corporate logo style for the object may be inappropriate to cover both these needs. The illustrative embodiments also recognize that the creation and maintenance of multiple, high quality versions of a virtual universe object may be expensive or cost prohibitive.
  • a computer implemented method, apparatus, and computer usable program code for modifying object styles in a virtual universe.
  • An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates.
  • the first contextual style template comprises first geometric and texture data to display the object with a first style.
  • In a virtual universe, assets, avatars, the environment, and anything visual consist of unique identifiers (UUIDs) tied to geometric data and texture data.
  • Geometric data is data associated with the form or shape of avatars and objects in the virtual universe. Geometric data may be used to construct a wire frame type model of an avatar or object. In other words, geometric data is used to create the shape or skeleton of the object.
  • Texture data refers to the surface detail and surface textures or color that is applied to wire-frame type geometric data to render the object or avatar.
  • the texture that is generated using texture data provides an outer appearance or “skin” of the object.
  • the geometric data and texture data may be created or modified to change the look and feel of the object.
  • For example, geometric data may be modified to change the shape of the object and make the object larger, while the texture data may be modified to change the object's colors to darker or bolder colors, giving the object increased visibility, a bolder look, and a more powerful feel.
  • Alternatively, the geometric data for the object may be changed to soften edges, and the texture data may be modified to change the object's colors to pastels or other soft colors, giving the object a soft or relaxing look and feel.
  • a second contextual style template is identified from the plurality of contextual style templates.
  • the term “set” refers to one or more.
  • the set of contextual changes may include a single contextual change, as well as two or more contextual changes.
  • a contextual change is a condition or event associated with the object.
  • a contextual change may include, without limitation, a change in a location of an object, a change in a number of avatars within a viewable field of the object, a real world identity of a user controlling an avatar within the viewable field of the object, characteristics of the real world user controlling the avatar within the viewable field of the object, and/or receiving a user selection to change the style of the object.
  • the set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style.
  • the object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object.
  • the modified object is displayed with the second style.
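  • The following illustrative, non-limiting Java sketch shows one way a second contextual style template could be selected and rendered when a set of contextual changes is detected; the class and method names (ContextualStyleTemplate, ContextChange, StyleTemplateManager) and the map-based trigger lookup are assumptions introduced here for illustration only and do not describe the embodiments themselves.
```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only; names and structure are assumptions, not the embodiments themselves.
class ContextualStyleTemplate {
    final String styleName;       // e.g. "classic", "athletic", "youth"
    final byte[] geometricData;   // shape/wire-frame data for the style
    final byte[] textureData;     // surface color/texture data for the style

    ContextualStyleTemplate(String styleName, byte[] geometricData, byte[] textureData) {
        this.styleName = styleName;
        this.geometricData = geometricData;
        this.textureData = textureData;
    }
}

enum ContextChange { LOCATION_CHANGED, AVATAR_ENTERED_VIEWABLE_FIELD, REAL_WORLD_IDENTITY_KNOWN, USER_SELECTION, DATE_CHANGED }

class StyleTemplateManager {
    private final Map<ContextChange, ContextualStyleTemplate> templateByTrigger;
    private ContextualStyleTemplate current;   // first contextual style template

    StyleTemplateManager(ContextualStyleTemplate initial,
                         Map<ContextChange, ContextualStyleTemplate> templateByTrigger) {
        this.current = initial;
        this.templateByTrigger = templateByTrigger;
    }

    // A set of contextual changes triggers implementation of a second template.
    void onContextChanges(List<ContextChange> changes) {
        for (ContextChange change : changes) {
            ContextualStyleTemplate second = templateByTrigger.get(change);
            if (second != null && second != current) {
                current = second;
                render(current);   // object re-rendered with the second style
                return;
            }
        }
    }

    private void render(ContextualStyleTemplate template) {
        // A real renderer would rebuild the wire frame from geometricData and
        // re-apply textureData; this sketch only reports the active style.
        System.out.println("Rendering object with style: " + template.styleName);
    }
}
```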
  • the geometric data is distributed to a user's client computer as textual coordinates.
  • Texture data may be distributed to a user's client computer as graphics files, such as JPEG files.
  • FIG. 3 is a block diagram illustrating a virtual universe grid server in accordance with an illustrative embodiment.
  • Server 300 is a server associated with a virtual universe.
  • Server 300 may be a single, stand-alone server, a server in a virtual universe grid computing system, or a server in a cluster of two or more servers.
  • server 300 is a server in a grid computing system for rendering and managing a virtual universe.
  • Virtual universe grid database 302 is a database on the grid computing system for storing data used by virtual universe grid software 306 to render objects in the virtual universe and manage the virtual universe.
  • Virtual universe grid database 302 includes object avatar rendering (OAR) table 304 .
  • Object avatar rendering table 304 stores object unique identifiers and avatar unique identifiers.
  • Object avatar rendering table 304 stores a unique identifier (UUID) for each selected object in the virtual universe.
  • a selected object is an object in a plurality of objects in the virtual universe that is tracked, monitored, managed, or associated with object avatar rendering table 304 .
  • object avatar rendering table 304 may be used to track all the objects in the virtual universe or track one or more objects in the virtual universe that are selected.
  • Object avatar rendering table 304 also stores unique identifiers and other data describing avatars within a viewable field of a selected object or within a selected zone or range associated with the selected object.
  • object avatar rendering table 304 stores object A unique identifier, object B unique identifier, avatar unique identifiers and other data for all avatars within the viewable field of object A, and avatar unique identifiers and other data describing all avatars within the viewable field of object B.
  • Virtual universe grid software 306 is software for rendering the virtual universe and the objects within the virtual universe.
  • Virtual universe grid software 306 includes object based avatar tracking controller 308 .
  • Object based avatar tracking controller 308 is software for tracking avatars within the viewable field of each selected object.
  • Object based avatar tracking controller 308 stores tracking data in object avatar rendering table 304 . Tracking data includes the unique identifiers and other data describing avatars within the viewable field of a selected object.
  • When object based avatar tracking controller 308 needs data from object avatar rendering table 304 to determine the number of avatars within the viewable field of an object, when an avatar enters the viewable field of an object, when an avatar leaves the viewable field of an object, or other tracking information associated with the object, object based avatar tracking controller 308 sends a query to object avatar rendering table 304 . In response to the query, virtual universe grid database 302 sends the tracking data for the object to virtual universe grid software 306 for utilization by object based avatar tracking controller 308 to track avatars and identify contextual style templates.
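  • For illustration only, the fragment below sketches the kind of lookup object based avatar tracking controller 308 might perform against object avatar rendering table 304 to count the avatars currently within an object's viewable field; the in-memory row class and stream-based filter are assumptions that stand in for the actual database query.
```java
import java.time.Instant;
import java.util.List;
import java.util.UUID;

// Illustrative stand-in for a query against the object avatar rendering table.
class OarRow {
    UUID objectUUID;        // unique identifier of the selected object
    UUID avatarUUID;        // unique identifier of an avatar in range of the object
    Instant zone1LeaveTime; // null while the avatar is still in the viewable field (zone 1)
}

class ObjectBasedAvatarTracking {
    long avatarsInViewableField(List<OarRow> table, UUID objectUUID) {
        return table.stream()
                .filter(row -> objectUUID.equals(row.objectUUID))
                .filter(row -> row.zone1LeaveTime == null)   // has not yet left zone 1
                .count();
    }
}
```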
  • Plurality of contextual templates 310 is a plurality of predefined templates for changing, modifying, or augmenting a style of an object.
  • the object style provides a particular look and feel for an object.
  • contextual style template A 312 may provide a classic style for a soft drink can object.
  • Contextual style template B 314 may provide an athletic or health conscious style.
  • For example, contextual style template B 314 may be implemented to render the soft drink can object with an hourglass-shaped can and slightly brighter or more vibrant colors, and to add the word “diet” or “natural” to the label to appeal to an athletic or health conscious consumer.
  • Contextual style template C 316 may provide a youth style.
  • contextual style template C 316 may be selected and implemented to render the object to form the shape of a drink bottle with images of popular fictional characters from children's entertainment and employ bolder colors to appeal to children.
  • Plurality of contextual templates 310 may include two or more different contextual style templates.
  • plurality of contextual templates 310 may include a blending style to permit the object to blend inconspicuously into the background, a high visibility style to make the object noticeable or stand out from the background, a corporate style, a professional style, a youth style, a senior style, a classic style, a nostalgic style, a modern style, a holiday style, a trendy style, a movie theme style, a television program theme style, a toy style, a back-to-school style, a political style, a healthy style, a science fiction style, a comedy style, a sport style, a hobby style, a pet style, or any other type of style.
  • a contextual style template is a template comprising instructions for modifying or augmenting the object to display the object with a different style.
  • the modifications in the set of instructions support a particular style.
  • the set of instructions may be implemented as a table.
  • the contextual style template may include, without limitation, geometric data, texture data, instructions to change a font type, instructions to change a font size, and/or instructions to modify a background of the object.
  • the geometric data is used to change the wireframe or shape of the object.
  • Texture data is used to change the color and texture of the outside or “skin” of the object.
  • the font and/or font size instructions are used to change the letters, numbers, symbols, and/or characters that are displayed as part of the object.
  • the contextual style template may include geometric and texture modifications to float the object, move the object along a fixed path, clone the object, change a size of the object, change one or more colors of the object, and/or change a shape of the object.
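  • The data class below is a minimal sketch of the instruction set a contextual style template could carry, using only the components listed above (geometric data, texture data, font type and size, background, and the optional geometric and texture modifications); the field names and Java types are assumptions for illustration.
```java
// Minimal sketch of a contextual style template's instruction set; field names are assumptions.
class StyleTemplateInstructions {
    byte[] geometricData;      // changes the wire frame or shape of the object
    byte[] textureData;        // changes the color and texture of the object's "skin"
    String fontType;           // instruction to change the font type of displayed characters
    float relativeFontSize;    // instruction to change the font size, e.g. 1.5 = 50% larger
    String backgroundChange;   // instruction to modify the background of the object

    // Optional geometric and texture modifications named above.
    boolean floatObject;       // float the object
    String fixedPath;          // move the object along a fixed path
    int cloneCount;            // clone the object
    double scaleFactor;        // change the size of the object
    int[] replacementColorRgb; // change one or more colors of the object
}
```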
  • a contextual style template may be selected to modify the object to a holiday themed style in response to a change of date. For example, when the date changes and it is St. Patrick's Day, a contextual style template may be implemented to display the object in green colors and add four leaf clovers to the object to create a St. Patrick's Day style. If it is the month of December, a Christmas style object may be selected to display the object in colors of red and green. In another example, and without limitation, a change in context may include a particular number of avatars entering a viewable field of the object.
  • a contextual style template may be selected and implemented to modify the object to create a “reward” style that will make the object glow and create fireworks type graphics as a reward for having the given number of avatars present.
  • a contextual style template may be selected and implemented to render the object with words like “all-natural,” “diet,” or “low-calorie,” to appeal to the health conscious consumer.
  • a contextual style template may be used on a single object or multiple objects.
  • Plurality of contextual templates 310 may include a set of contextual style templates that are assigned to each object or the contextual style templates may be assigned to multiple objects in the virtual universe.
  • a given contextual style template may only be used to modify a single object or the contextual style template may be capable of being implemented to modify multiple different objects.
  • the contextual style template may be used to modify the multiple different objects simultaneously or substantially at the same time or the contextual style template may be used to modify one or more objects at a given time.
  • the contextual style template may be used to modify a first object during a first time interval, modify a second and third object during a second time interval, and modify a fourth object during a third time interval.
  • Styles include geometric data, textures, fonts, relative font sizes, restrictions, and background.
  • Each style type created is mapped to a style supported by the virtual universe server 300 .
  • For example, a virtual universe may support a blend style and a high visibility style. If the newly defined style maps to the virtual universe's blend style, the associated textures, fonts, and relative font sizes are configured to permit the object to blend with most environments in which the object would be placed. Permitting multiple styles within a single object lowers the costs associated with maintaining the object. Maintenance may include minor updates to textures, fonts, geometric data, and other components of the object.
  • the creation of contextual style templates for an object may be performed with virtual universe specific objects creation and editing tools.
  • Objects are composed of components such as fonts, textures, backgrounds, font sizes, and geometry.
  • a user identifies each component within an object that is to be modified by the contextual style template.
  • Component identification may also be used by the rendering system to augment the component based on the selected rendering style. Identification may be completed with any virtual universe specific object creation and editing tool.
  • Contextual style template manager 318 is a software component for creating, managing, identifying, selecting, and/or implementing contextual style templates to modify the styles of the objects.
  • contextual template selection 320 identifies a first contextual template for implementation to display the object with an initial style.
  • the first contextual style template may be a default template.
  • the first contextual style template may be a template selected by a user via a user input/output device.
  • Context discovery 322 is a software component for monitoring changes in context associated with the object. Context changes may include, without limitation, an avatar entering a viewable field of the object, an avatar leaving a viewable field of the object, a real world identity of a user controlling an avatar that is within the viewable field of the object or within a detection area of the object, a real world characteristic of one or more users controlling one or more avatars within the viewable field of the object or the detection area of the object, a time of day, a day of the week, a holiday, or any other change.
  • a real world characteristic may include, without limitation, a profession of a user, a residence of a user, a citizenship of a user, a current location of a user, an age of the user, a sex of the user, a hobby of the user, a past purchase history of the user, a membership of the user in an organization, club, loyal customer group, or any other membership, a marital status of the user, an educational or employment status, a credit history or credit rating, or any other characteristic of the user.
  • Context discovery 322 may utilize tracking data and real world user information to determine when an avatar enters the viewable field or detection area of the object, when an avatar leaves the viewable field or detection area of the object, and/or determine real world characteristics of the users controlling the avatars that are within the viewable field or the detection area of the object.
  • Context discovery 322 samples and averages the colors in the environment immediately surrounding the object. For example, if an object is moving or changing location, the object may need to be modified in accordance with a blend style to blend in with the new environment. In such a case, context discovery 322 averages individual color values for the area surrounding the object's location.
  • For example, context discovery 322 may average the red, green, and blue color values surrounding the object. Additionally, the surrounding area to be sampled may be a predefined area or an area that is proportional to the size of the object being placed. For example, if the object is two centimeters in width and two centimeters in height, context discovery 322 samples an area equal to the object area squared; in this example, context discovery 322 samples sixteen square centimeters around the location where the object will be placed or is placed.
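  • A minimal sketch of the sampling and averaging just described is given below, assuming the samples are supplied as an array of {red, green, blue} values and that the sampled area is the object's area squared (so a 2 cm by 2 cm object samples 16 square centimeters); the class and method names are illustrative assumptions.
```java
// Illustrative sketch of context discovery color sampling; names are assumptions.
class ContextDiscoverySketch {
    /** Average the individual red, green, and blue values for the area surrounding the object. */
    int[] averageSurroundingColor(int[][] rgbSamples) {
        if (rgbSamples.length == 0) {
            return new int[] { 0, 0, 0 };
        }
        long r = 0, g = 0, b = 0;
        for (int[] sample : rgbSamples) {
            r += sample[0];
            g += sample[1];
            b += sample[2];
        }
        int n = rgbSamples.length;
        return new int[] { (int) (r / n), (int) (g / n), (int) (b / n) };
    }

    /** Sampled area proportional to the object: object area squared, e.g. 2 cm x 2 cm -> 16 cm^2. */
    double sampleAreaSquareCentimeters(double widthCm, double heightCm) {
        double objectArea = widthCm * heightCm;   // 4 cm^2 for a 2 cm x 2 cm object
        return objectArea * objectArea;           // 16 cm^2
    }
}
```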
  • the contextual style templates may require restrictions on all or parts of the object to restrict the automatic modification of the object in accordance with the instructions in a given contextual style template. Restrictions may be required to maintain corporate identity or personal taste. For example, some corporations only permit the corporate logo to be rendered in a specific font or colors. The restrictions placed on the object or components of the object affect the available object mutations for each renderable style.
  • Restriction controller 324 is software for restricting the changes or modifications made to a particular object in accordance with restrictions table 326 .
  • Restrictions table 326 is a table of limitations on the changes that are allowed to be made on a particular object.
  • For example, an object that is a soft drink logo may include a restriction that limits color changes on the logo to a range of red colors and white colors. The restrictions may prohibit the logo from being displayed in black, brown, purple, orange, or any color other than the approved range of reds and whites.
  • restriction controller 324 checks restrictions table 326 to determine whether the changes implemented by the contextual style template are permitted.
  • For example, if contextual style template C 316 is a Christmas style that includes instructions to change the letter “l” to a red and white candy cane, and a restriction in restrictions table 326 does not permit modification of the font style or font color, restriction controller 324 will not permit that change to the letter “l” to be made.
  • the selection of restrictions for an object may be performed with virtual universe specific object creation and editing tools to create restrictions table 326 .
  • the rendering restrictions may impact the color selection, textures, and other modifications to an object for each supported style that is available in plurality of contextual templates 310 .
  • When rendering an object, the colors, textures, font colors, font sizes, and other components are selected so as not to conflict with any of the object's rendering restrictions.
  • rendering restrictions in restrictions table 326 for a particular object component may have a cascading effect on the rendering selections for one or more other components of the object. For example, if a background component is restricted to one color which is not appropriate for the selected style, another unrestricted component may be modified to maintain adherence to the selected style.
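  • The fragment below is a hedged sketch of how restriction controller 324 might consult restrictions table 326 before applying a template change, including the cascading effect in which a restricted component (such as a background limited to one color) causes an unrestricted component (such as the font color) to be modified instead; the map-based table layout and method names are assumptions for illustration.
```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of restriction checking; the table layout is an assumption.
class RestrictionControllerSketch {
    // component name -> colors that component is permitted to take
    private final Map<String, Set<String>> restrictionsTable;

    RestrictionControllerSketch(Map<String, Set<String>> restrictionsTable) {
        this.restrictionsTable = restrictionsTable;
    }

    boolean isPermitted(String component, String proposedColor) {
        Set<String> allowed = restrictionsTable.get(component);
        return allowed == null || allowed.contains(proposedColor);   // unrestricted if not listed
    }

    /** Apply a style color to the preferred component, cascading to a fallback component if restricted. */
    String componentToModify(String preferred, String fallback, String proposedColor) {
        if (isPermitted(preferred, proposedColor)) {
            return preferred;   // e.g. change the background to red for a high visibility style
        }
        return fallback;        // e.g. background restricted to blue, so augment the font color instead
    }
}
```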
  • When a contextual object is initially placed in a virtual universe, the user chooses an initial style for the object by manually selecting a contextual style template from a set of contextual templates in plurality of contextual templates 310 that are available for utilization on the object.
  • the user may select the contextual style template using a context menu or responding to prompts presented by contextual style template manager 318 via a user input/output interface.
  • Context updates include, without limitation, changing the color of the wall on which an object is placed, moving the object to a new location, or any other context change.
  • Turning to FIG. 4, virtual universe server 400 is a server that includes virtual universe grid software for creating and managing a virtual universe and objects in a virtual universe.
  • Virtual universe server 400 includes user input/output 402 , which may be implemented in any type of user interface for receiving user input and providing output to the user, such as, without limitation, a graphical user interface, a command line interface, a menu driven interface, a keyboard, a mouse, a touch screen, a voice-recognition system, or any other type of input/output device.
  • a user may use user input/output 402 to select a default contextual style template for use in rendering an object and/or select a contextual style template for implementation to change a style of the object. For example, if the object is being displayed with a modern style, the user may manually select a classic or retro style for the object, rather than waiting for the contextual style template to be automatically selected when a context change is detected.
  • Real world user identifier 404 is software for determining a real world identity of a user controlling an avatar and/or determining real world characteristics of the user controlling the avatar.
  • Real world user identifier 404 sends query 406 to set of sources of user information 408 to obtain an identity and/or other information describing the real world identity and real world characteristics of the user.
  • Query 406 may be a single query to a single source of user information, as well as two or more queries to one or more different sources of user information.
  • Set of sources of user information 408 is a set of one or more sources of user information.
  • Set of sources 408 may include both online sources of information and offline sources of information.
  • Offline sources of information include, without limitation, information manually provided by a user and/or information retrieved from one or more local data storage devices.
  • Online sources of information may include information stored on one or more remote data storage devices that are accessed via a network connection and/or information provided by a user from a remote location.
  • Virtual universe server 400 obtains information from one or more remote information sources in set of sources of user information 408 via a network connection provided by network interface 410 .
  • Network interface 410 is any type of network access software known or available for allowing virtual universe server 400 to access a network.
  • Network interface 410 connects to a network connection, such as network 102 in FIG. 1 .
  • the network connection permits access to any type of network, such as a local area network (LAN), a wide area network (WAN), the Internet, an Ethernet, an intranet, a virtual private network (VPN), or any other type of network.
  • Virtual universe server 400 receives real world user information 412 from set of sources of user information 408 in response to query 406 .
  • Real world user information 412 may include a real world identity of the user and/or characteristics of the user.
  • the contextual style template manager in the virtual universe grid software may use real world user information 412 to select contextual style templates for implementation to change the style of one or more objects being displayed in the virtual universe.
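  • For illustration, the sketch below shows one way real world user identifier 404 could issue query 406 to set of sources of user information 408 and merge the returned real world user information 412 into a single profile for the contextual style template manager; the interface name, field names, and merge strategy are assumptions, not part of the described embodiments.
```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only; the source interface and merge strategy are assumptions.
interface UserInformationSource {
    Map<String, String> query(String avatarUUID);   // e.g. {"profession": "teacher", "age": "34"}
}

class RealWorldUserIdentifierSketch {
    private final List<UserInformationSource> sources;   // online and offline sources of user information

    RealWorldUserIdentifierSketch(List<UserInformationSource> sources) {
        this.sources = sources;
    }

    /** Merge the real world user information returned by each source into one profile. */
    Map<String, String> identify(String avatarUUID) {
        Map<String, String> profile = new HashMap<>();
        for (UserInformationSource source : sources) {
            profile.putAll(source.query(avatarUUID));
        }
        return profile;   // later used to select a contextual style template
    }
}
```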
  • FIG. 5 is a block diagram of an object avatar rendering table in accordance with an illustrative embodiment.
  • Object avatar rendering table 500 is an example of data in an object avatar rendering table, such as object avatar rendering table 304 in FIG. 3 .
  • RenderingUUID 502 is a primary key for object avatar rendering table 500 .
  • ObjectUUID 504 is a unique identifier for a selected object in a virtual universe.
  • Object UUID 504 is a foreign key to the existing object table.
  • AvatarUUID 506 is a foreign key to an existing avatar table.
  • AvatarUUID 506 includes a unique identifier for each avatar in the viewable field of the object associated with objectUUID 504 .
  • Zone 1 EnterTime 508 is a field of a date and/or time when an avatar enters a first zone within the viewable field of an object.
  • the zone 1 enter time is a time when an avatar entered the first zone, assuming a model with two or more zones.
  • Zone 1 LeaveTime 510 is a field for a date and/or time when the avatar leaves the first zone.
  • Zone 2 EnterTime 512 is a field in object avatar rendering table 500 for storing a date and/or time when an avatar enters a second zone.
  • the second zone may be an area that is outside the viewable field. In other words, the second zone is an area in which an avatar cannot see the selected object, but the area is in close proximity to the viewable field in which the avatar will be able to see the object.
  • the object avatar tracking controller software may begin preparing to display the object to the avatar when the avatar does eventually enter the viewable field.
  • Zone 2 LeaveTime 514 is a field for storing the date and/or time when a given avatar leaves the second zone.
  • NumberofZone 1 Enters 516 is a field for storing the number of times a particular avatar has entered the first zone. This information may be useful to determine whether the user has never viewed the object. If the user has never before viewed the object, it may be determined that content associated with an object should be displayed in full to the user associated with the avatar, rather than in an abbreviated or summarized form.
  • the information in NumberofZone 1 Enters 516 is also used to determine whether the user has viewed the object one or more times in the past, and therefore, the content associated with the object should be displayed in part, skip introductory material, be modified or abbreviated, or otherwise altered so that the exact same content is not displayed to the user every time the user is within the viewable field of the object.
  • NumberofZone 2 Enters 518 is a field for storing the number of times an avatar has entered the second zone.
  • LastCoordinates 520 is a field for storing the coordinate data describing where a given avatar is within the first zone or the second zone of a selected object. The coordinate data is typically given in xyz type coordinate data.
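  • The class below simply mirrors the fields of object avatar rendering table 500 described above as an in-memory record, which may make the table layout easier to follow; the Java types chosen for each field are assumptions.
```java
import java.time.Instant;
import java.util.UUID;

// In-memory mirror of one row of object avatar rendering table 500; types are assumptions.
class ObjectAvatarRenderingRow {
    UUID renderingUUID;        // 502: primary key for the table
    UUID objectUUID;           // 504: foreign key to the existing object table
    UUID avatarUUID;           // 506: foreign key to the existing avatar table
    Instant zone1EnterTime;    // 508: when the avatar entered the first zone (viewable field)
    Instant zone1LeaveTime;    // 510: when the avatar left the first zone
    Instant zone2EnterTime;    // 512: when the avatar entered the second zone (detection area)
    Instant zone2LeaveTime;    // 514: when the avatar left the second zone
    int numberOfZone1Enters;   // 516: used to decide between full and abbreviated content
    int numberOfZone2Enters;   // 518: number of times the avatar has entered the second zone
    double[] lastCoordinates;  // 520: xyz coordinate data for the avatar's last known position
}
```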
  • the illustrative embodiments recognize that a user may wish to change the style or look and feel of an object based on circumstances, such as, the number of avatars viewing an object, the real world identities or characteristics of users controlling the avatars within the viewable field of the object, whether the object is being displayed to avatars on a holiday, weekend, or workday, whether the object is being displayed during daylight hours or during the evening or night, and other information describing the context in which the object is being viewed.
  • FIG. 6 is a block diagram of a viewable area for an object in accordance with an illustrative embodiment.
  • Range 600 includes viewable field 604 and detection area 606 associated with object 602 in a virtual universe.
  • An object, such as object 602 , is an element in a virtual universe that is not directly controlled by a user or associated with a user's account.
  • An object may be, for example and without limitation, a building, a statue, a billboard, a sign, a drink bottle, a logo, a box, a container, or an advertisement in the virtual universe.
  • object 602 is an advertisement, such as a billboard or a sign.
  • Viewable field 604 has a focal point or center at a location that is the same as the location of object 602 .
  • Viewable field 604 may also be referred to as zone 1 or a first zone.
  • An avatar in viewable field 604 is able to see or view object 602 and/or content associated with object 602 .
  • object 602 may be associated with video and/or audio content.
  • Object 602 may also optionally be capable of some movement or animation.
  • object 602 is substantially limited to a single location in the virtual universe.
  • Object 602 is rendered on a user's screen when an avatar associated with the user is within viewable field 604 .
  • Object 602 is rendered using any perspective mode, including but not limited to, a first person perspective, a third person perspective, a bird's eye view perspective, or a map view perspective.
  • a map view perspective renders objects with labels rather than with extensive details and/or texturing.
  • Detection area 606 is an area adjacent to viewable field 604 within range 600 . Detection area 606 may also be referred to as a second zone or zone 2 . An avatar in detection area 606 cannot see object 602 or view content associated with object 602 . However, when an avatar enters detection area 606 , the object avatar tracking controller software can begin preparing to display object 602 and content associated with object 602 to the avatar when the avatar enters viewable field 604 .
  • avatar A 610 is within viewable field 604 . Therefore, avatar A 610 is able to view or see object 602 .
  • Avatar C 614 is within detection area 606 . Avatar C 614 is not able to see or view object 602 . However, the presence of avatar C 614 indicates that avatar C 614 may be about to enter viewable field 604 or that avatar C 614 has just left viewable field 604 .
  • Avatar B 612 is outside range 600 . Avatar B 612 is not able to see or view object 602 . In addition, avatar B 612 is not close enough to viewable field 604 to indicate that avatar B 612 may be preparing to enter viewable field 604 .
  • an object avatar tracking table for object 602 includes entries for avatar A 610 in zone 1 and avatar C 614 in zone 2 .
  • the record associated with object 602 in the object avatar rendering table does not include an avatar unique identifier or data for avatar B 612 because avatar B 612 is outside both viewable field 604 and detection area 606 .
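  • A minimal sketch of the zone membership illustrated by FIG. 6 is given below, assuming both the viewable field and the detection area are defined by radii measured from the object's focal point; the radii, the distance-based model, and the class names are assumptions for illustration.
```java
// Illustrative distance-based zone test; radii and names are assumptions.
class ZoneClassifier {
    enum Zone { VIEWABLE_FIELD, DETECTION_AREA, OUT_OF_RANGE }

    Zone classify(double[] avatarXyz, double[] focalPointXyz,
                  double viewableRadius, double detectionRadius) {
        double dx = avatarXyz[0] - focalPointXyz[0];
        double dy = avatarXyz[1] - focalPointXyz[1];
        double dz = avatarXyz[2] - focalPointXyz[2];
        double distance = Math.sqrt(dx * dx + dy * dy + dz * dz);

        if (distance <= viewableRadius)  return Zone.VIEWABLE_FIELD;  // avatar A 610: can see the object
        if (distance <= detectionRadius) return Zone.DETECTION_AREA;  // avatar C 614: cannot see it yet
        return Zone.OUT_OF_RANGE;                                     // avatar B 612: no table entry kept
    }
}
```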
  • FIG. 7 is a block diagram of a viewable area for an object having a focal point at a location other than the location of the object in accordance with an illustrative embodiment.
  • Viewable field 700 is a viewable field for object 702 .
  • object 702 is an advertisement in front of object 706 .
  • Viewable field 700 is a range in which an avatar, such as one of avatars 610 - 614 , may view object 702 .
  • An avatar can see object 702 if the avatar is within viewable field 700 .
  • Viewable field 700 has focal point 704 .
  • Focal point 704 is a point from which the range or area of viewable field 700 for object 702 is determined.
  • viewable field 700 is an area that is identified based on a predetermined radius or distance from focal point 704 .
  • focal point 704 is a location that is different from the location of object 702 because object 702 is adjacent to an obstructing object.
  • the obstructing object is object 706 .
  • When an avatar, such as avatar C 614 , comes within range of object 702 , an object based avatar tracking controller, such as object based avatar tracking controller 308 in FIG. 3 , makes a determination as to whether there is an existing session associated with the unique identifier of object 702 and the unique identifier of avatar C 614 .
  • This step may be implemented by making a query to the object avatar rendering table to determine if avatar C 614 has ever entered zone 2 or zone 1 previously. If there is not an existing session for avatar C 614 , the object based avatar tracking controller creates a record in the object avatar rendering table with the unique identifier of object 702 and the unique identifier of avatar C 614 .
  • the record in the object avatar rendering table may optionally include additional information, such as, without limitation, a date and time when avatar C 614 entered zone 2 , a date and time when avatar C 614 leaves zone 2 , a date and time when avatar C 614 enters zone 1 , a number of zone 2 enters, a number of zone 1 enters, coordinates of avatar C 614 , and any other data describing avatar C 614 .
  • This data is used by the virtual universe grid software for analysis, reporting, and billing purposes.
  • Object 702 may have an initiation process associated with object 702 .
  • an initiation process may include buffering the audio and/or video content, checking a cache for the audio and/or video content, caching the audio and/or video content, or any other initiation process.
  • the object-based avatar tracking controller triggers any object initiation process defined by object 702 .
  • the object based avatar tracking controller displays the buffered or cached content. If a user is viewing the object for the first time and object 702 has a video or audio file associated with viewing the object, the process starts playing the video or audio from the beginning.
  • the object based avatar tracking controller triggers any object re-initiation process defined by object 702 . For example, if the user is not viewing an object with an associated video for the first time, the process starts playing the video at a point in the video after the beginning, such as after an introduction, in a middle part, or near the end of the video to avoid replaying introductory material.
  • the object based avatar tracking controller makes a determination as to whether the position of avatar C 614 has changed. Changing position may include traveling, turning, walking, or disappearing, such as teleporting, logging off, or disconnecting.
  • the object based avatar tracking controller adds the user position data to the object avatar rendering table, such as at a field for LastCoordinates 420 in FIG. 4 .
  • the user position data includes angle of view coordinate data of the avatar relative to object 702 and the distance of avatar C 614 to object 702 .
  • the object based avatar tracking controller performs an analysis of the position data and modifies object 702 according to one or more geometric and texture modification methods (GTMs) to improve visibility of the object.
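  • The analysis of the position data is not spelled out in detail above; purely as a hedged sketch, one geometric and texture modification method might use the avatar's distance and angle of view to scale the object, as in the following Python fragment (the thresholds and function names are assumptions).

        import math

        def analyze_position(avatar_xy, avatar_heading_deg, object_xy):
            # Distance to the object and the angle between the avatar's heading
            # and the direction toward the object, in degrees.
            dx = object_xy[0] - avatar_xy[0]
            dy = object_xy[1] - avatar_xy[1]
            distance = math.hypot(dx, dy)
            bearing = math.degrees(math.atan2(dy, dx))
            angle_of_view = abs((bearing - avatar_heading_deg + 180) % 360 - 180)
            return distance, angle_of_view

        def scale_for_visibility(distance, angle_of_view):
            # Example modification: enlarge the object for distant or oblique viewers.
            scale = 1.0
            if distance > 20.0:
                scale *= 1.5
            if angle_of_view > 45.0:
                scale *= 1.2
            return scale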
  • the object based avatar tracking controller logs a session pause for the session associated with avatar C 614 .
  • the log may include the date and time of the session pause.
  • the object based avatar tracking controller terminates the session associated with avatar C 614 .
  • the process termination may include, without limitation, removing the records and data associated with avatar C 614 from the object avatar rendering table.
  • If the object based avatar tracking controller determines that an existing session associated with the unique identifier of object 702 and the unique identifier of avatar C 614 already exists, a new record for avatar C 614 will not be created. Instead, the data in the object avatar rendering table will be updated with new data regarding avatar C 614 in the range of object 702.
  • FIG. 8 is a block diagram illustrating a restrictions table in accordance with an illustrative embodiment.
  • Restrictions table 800 is a table of restrictions that limit the changes that are made to a particular object when each contextual style template is implemented to modify the object, such as restrictions table 326 in FIG. 3 .
  • Restrictions table 800 in this example limits changes to font colors white 802 , blue 804 , and red 806 based on the style rendered in accordance with the selected contextual style template. For example, if a first contextual style template is implemented to create a blend style, restriction table 800 limits changes of the font color white 802 to either white or grey. If a second contextual style template is selected to create a high visibility style, restriction table 800 limits changes of the font color white to blue or red.
  • The style modifications and the restriction data are represented with colors, as opposed to number ranges corresponding to the colors, for illustrative purposes only. In one embodiment, restrictions table 800 comprises number ranges corresponding to the colors.
  • Although restrictions table 800 lists textual associations for styles and colors, it should be noted that mathematical functions may be derived for color ranges, permitting selection of colors with explicit declaration.
  • Rendering restrictions in restrictions table 800 for a particular object component may have a cascading effect on the rendering selections for one or more other components of the object. For example, if a background component is restricted to one color which is not appropriate for the selected style, another unrestricted component may be modified to maintain adherence to the selected style. In this example, the object may be restricted to a blue background. If the object is placed in a predominantly blue context and the high visibility style is selected, the restrictions associated with the blue color 804 may prevent utilization of red or yellow as a background selection. Therefore, the background may remain blue, but the font color may be augmented to red or yellow to create a high visibility style rather than changing the background color.
  • Rendering restrictions table 800 may include absolute restrictions that enforce restrictions regardless of context. For example, a restriction record may prevent any context and any style from changing an object's color to green.
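  • The cascading behavior and the absolute restrictions described above might be expressed, for illustration only, as in the following Python sketch; the restriction data and component names are hypothetical and do not come from restrictions table 800.

        # Allowed replacement colors per (current color, selected style).
        RESTRICTIONS = {
            ("white", "blend"):           {"white", "grey"},
            ("white", "high_visibility"): {"blue", "red"},
            ("blue",  "high_visibility"): {"blue"},    # background locked to blue
        }
        ABSOLUTE_BANS = {"green"}                      # never permitted, regardless of style

        def restricted_choice(current, style, preferred):
            # Use the preferred color only if the restrictions permit it.
            allowed = RESTRICTIONS.get((current, style), {preferred})
            if preferred in allowed and preferred not in ABSOLUTE_BANS:
                return preferred
            return current

        def apply_style(components, style):
            # Cascading effect: if the background cannot change, adjust the font instead.
            background = restricted_choice(components["background"], style, "red")
            font = components["font"]
            if background == components["background"]:
                font = restricted_choice(components["font"], style, "red")
            return {"background": background, "font": font}

        print(apply_style({"background": "blue", "font": "white"}, "high_visibility"))
        # The background stays blue; the font becomes red to achieve the high visibility style.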
  • FIG. 9 is a flowchart illustrating a process for object based avatar tracking using object avatar rendering tables in accordance with an illustrative embodiment.
  • The process in FIG. 9 is implemented by software for tracking avatars in a range of an object, such as object based avatar tracking controller 308 in FIG. 3.
  • the process begins when an avatar comes in range of the object (step 902 ).
  • a determination is made as to whether there is an existing session associated with the unique identifier of the object and the unique identifier of the avatar (step 904 ). This step may be implemented by making a query to the object avatar rendering table for the object. If there is not an existing session, the process creates a record in the object avatar rendering table with the unique identifier of the object and the unique identifier of the avatar (step 906 ).
  • the record in the object avatar rendering table may include other information, such as, without limitation, a date and time, which can be used for analysis, reporting, and billing purposes.
  • the process triggers any object initiation process defined by the object (step 908 ). For example, if a user is viewing the object for the first time and the object has a video associated with viewing the object, the process starts playing the video from the beginning.
  • the process triggers any object re-initiation process defined by the object (step 910 ). For example, if the user is not viewing an object with an associated video for the first time, the process starts playing the video at a point in the video after the beginning, such as after an introduction, in a middle part, or near the end of the video to avoid replaying introductory material.
  • the process makes a determination as to whether the user's position has changed (step 912 ). Changing position may include traveling, turning, or disappearing, such as teleporting, logging off, or disconnecting. If the user's position has not changed, the process returns to step 912 . The process may return to step 912 if the user's position does not change within a specified amount of time.
  • the specified amount of time may be configured by the virtual universe grid administrator or object owner. The specified amount of time may occur very frequently, such as, without limitation, after a specified number of seconds or after a specified number of milliseconds.
  • the process adds the user position data to the object avatar rendering table (step 914 ).
  • the user position data includes angle of view coordinate data of the avatar relative to the object and distance of the avatar to the object.
  • the process then performs an analysis of the position data and modifies the object according to one or more geometric and texture modification methods (GTMs) (step 916 ) to improve visibility of the object.
  • the process then makes a determination as to whether the user is out of view (step 918 ).
  • the user may be out of view if the user or the user's avatar has disappeared or is no longer facing the object. If the user is not out of view, after a specified amount of time the process returns to step 912 .
  • the specified amount of time may be configured by the virtual universe grid administrator or object owner.
  • the specified amount of time may be, without limitation, a specified number of seconds or a specified number of milliseconds.
  • the process logs a session pause (step 920 ).
  • the log may include the date and time.
  • the process makes a determination as to whether the session has been paused for an amount of time that exceeds a threshold amount of time (step 922 ).
  • the threshold amount of time may be configured by a virtual universe administrator or object owner. If the pause does not exceed the threshold, the process returns to step 922 . When the pause exceeds the threshold, the process terminates thereafter.
  • The process termination may include, without limitation, removing the records of the avatar from the object avatar rendering table. If the record is not deleted, when the avatar comes back into range of the object at step 902, the process will make a determination at step 904 that an existing session associated with the unique identifier of the object and the unique identifier of the avatar already exists.
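  • For orientation only, the flowchart of FIG. 9 can be read as the control loop sketched below in Python. Every helper called here (existing_session, position_changed, apply_gtms, and so on) is a placeholder for behavior described in the text rather than an interface defined by this document, and the pause handling is simplified.

        import time

        def track_avatar(obj, avatar, table, poll_seconds=1.0, pause_threshold=300.0):
            session = table.existing_session(obj.uuid, avatar.uuid)        # step 904
            if session is None:
                session = table.create_record(obj.uuid, avatar.uuid)       # step 906
                obj.run_initiation(avatar)                                 # step 908
            else:
                obj.run_reinitiation(avatar)                               # step 910
            while True:
                if avatar.position_changed():                              # step 912
                    table.add_position(session, avatar.position())         # step 914
                    obj.apply_gtms(avatar)                                 # step 916
                if avatar.out_of_view(obj):                                # step 918
                    table.log_pause(session)                               # step 920
                    time.sleep(pause_threshold)                            # step 922 (simplified)
                    table.remove(session)                                  # terminate the session
                    return
                time.sleep(poll_seconds)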
  • FIG. 10 is a flowchart illustrating a process for modifying the style of an object in a virtual universe using contextual style templates in accordance with an illustrative embodiment.
  • the process in FIG. 10 may be implemented by software for identifying a contextual style template based on a context change associated with the object.
  • the process begins by selecting a first contextual style template from a plurality of contextual style templates (step 1002 ).
  • the first contextual template may be a default contextual template or a contextual template selected by a user.
  • the process renders an object in the virtual universe in accordance with the first contextual style template to display the object with a first style (step 1004 ).
  • the process makes a determination as to whether a context change associated with the object is detected (step 1006 ). If a context change is detected, the process selects a different contextual style template from the plurality of contextual style templates based on the context change (step 1008 ).
  • the process renders the object in accordance with the different contextual style template (step 1010 ). The process then returns to step 1006 and determines if another context change has been detected.
  • If another context change is detected, the process selects another contextual style template from the plurality of contextual style templates and renders the object in accordance with the newly selected contextual style template. The process iteratively implements steps 1006-1010 until a context change is no longer detected. When a context change is not detected, the process terminates. The process may terminate if a context change is not detected within a predetermined period of time.
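  • As a hedged illustration of the flow in FIG. 10 (the function names, the template dictionary, and the polling parameters are invented for this sketch), the iterative selection and rendering of contextual style templates might look like this in Python:

        import time

        def run_style_loop(obj, templates, detect_context_change, render,
                           poll_seconds=1.0, max_idle_polls=10):
            template = templates["default"]                    # step 1002
            render(obj, template)                              # step 1004
            idle_polls = 0
            while idle_polls < max_idle_polls:                 # step 1006
                change = detect_context_change(obj)
                if change is None:
                    idle_polls += 1
                    time.sleep(poll_seconds)
                    continue
                idle_polls = 0
                template = templates.get(change, template)     # step 1008
                render(obj, template)                          # step 1010
            # The process terminates once no context change is detected within
            # the predetermined period (max_idle_polls * poll_seconds).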
  • One embodiment provides a computer implemented method, apparatus, and computer program product for modifying object styles in a virtual universe.
  • An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates.
  • the first contextual style template comprises first geometric and texture data to display the object with a first style.
  • In response to detecting a set of contextual changes associated with the object, a second contextual style template is identified from the plurality of contextual style templates.
  • the set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style.
  • the object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
  • the predefined contextual style templates allow a single object to be created that may be used for a plurality of purposes and placements.
  • The object is modified in accordance with a contextual style template to automatically alter the object's properties, creating a different style and appearance that enables the object to be used for multiple purposes and placements within the virtual universe.
  • a restriction table provides restrictions on object alteration to prevent a contextual style template from altering the object in a manner that may conflict with corporate identity, trademark and trade dress, personal preferences, or other aesthetic goals.
  • the contextual style templates reduce the cost of creating virtual universe objects and lower the cost of maintaining virtual universe objects.
  • The predefined plurality of contextual style templates created for objects in a virtual universe reduces the costs of creating and maintaining virtual objects by supporting multiple object styles and by modifying the rendering characteristics of objects based on the selected style and the object context.
  • The modifications may be performed dynamically, as the context is changing, or statically.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
  • the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer readable medium can be any tangible apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
  • Current examples of optical disks include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W) and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
  • Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.

Abstract

A computer implemented method, apparatus, and computer program product for modifying object styles in a virtual universe. An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates. The first contextual style template comprises first geometric and texture data to display the object with a first style. In response to detecting a set of contextual changes associated with the object, a second contextual style template is identified from the plurality of contextual style templates. The set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style. The object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention is related generally to a data processing system and in particular to a method and apparatus for managing objects in a virtual universe. More particularly, the present invention is directed to a computer implemented method, apparatus, and computer usable program code for modifying a style of an object using predefined contextual templates.
  • 2. Description of the Related Art
  • A virtual universe (VU), also referred to as a metaverse or “3-D Internet”, is a computer-based simulated environment. Examples of virtual universes include Second Life®, Entropia Universe, The Sims Online®, There, and Red Light Center. Other examples of virtual universes include multiplayer online games, such as EverQuest®, Ultima Online®, Lineage®, and World of Warcraft® (WoW).
  • Many virtual universes are represented using three dimensional (3-D) graphics and landscapes. The properties and elements of the virtual universe often resemble the properties of the real world, such as in terms of physics, houses, and landscapes. Virtual universes may be populated by thousands of users simultaneously. In a virtual universe, users are sometimes referred to as “residents.”
  • The users in a virtual universe can interact, inhabit, and traverse the virtual universe through the use of avatars. An avatar is a graphical representation of a user that other users in the virtual universe can see and interact with. The avatar's appearance is typically selected by the user and often takes the form of a cartoon-like representation of a human. However, avatars may also have non-human appearances, such as animals, elves, trolls, orcs, fairies, and other fantasy creatures.
  • A viewable field is the field of view for a particular user. The viewable field for a particular user may include objects, as well as avatars belonging to other users. An object is an element in a virtual universe that does not represent a user. An object may be, for example, a building, a statue, a billboard, a sign, or an advertisement in the virtual universe. Objects are prevalent in virtual universes and may be used for various purposes. However, the creation and maintenance of high quality virtual universe objects is frequently expensive and time consuming.
  • SUMMARY OF THE INVENTION
  • According to one embodiment of the present invention, a computer implemented method, apparatus, and computer usable program code is provided for modifying object styles in a virtual universe. An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates. The first contextual style template comprises first geometric and texture data to display the object with a first style. In response to detecting a set of contextual changes associated with the object, a second contextual style template is identified from the plurality of contextual style templates. The set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style. The object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a pictorial representation of a network of data processing systems in which illustrative embodiments may be implemented;
  • FIG. 2 is a diagram of a data processing system in accordance with an illustrative embodiment of the present invention;
  • FIG. 3 is a block diagram illustrating a virtual universe grid server in accordance with an illustrative embodiment;
  • FIG. 4 is a block diagram illustrating a real world user identifier in accordance with an illustrative embodiment;
  • FIG. 5 is a block diagram of an object avatar rendering table in accordance with an illustrative embodiment;
  • FIG. 6 is a block diagram of a viewable area for an object in accordance with an illustrative embodiment;
  • FIG. 7 is a block diagram of a viewable area for an object having a focal point at a location other than the location of the object in accordance with an illustrative embodiment;
  • FIG. 8 is a block diagram of a restrictions table in accordance with an illustrative embodiment;
  • FIG. 9 is a flowchart illustrating a process for object based avatar tracking using object avatar rendering tables in accordance with an illustrative embodiment; and
  • FIG. 10 is a flowchart illustrating a process for modifying the style of an object in a virtual universe using contextual style templates in accordance with an illustrative embodiment.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • As will be appreciated by one skilled in the art, the present invention may be embodied as a system, method or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
  • Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CDROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc.
  • Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • The present invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • With reference now to the figures and in particular with reference to FIGS. 1-2, exemplary diagrams of data processing environments are provided in which illustrative embodiments may be implemented. It should be appreciated that FIGS. 1-2 are only exemplary and are not intended to assert or imply any limitation with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made.
  • FIG. 1 depicts a pictorial representation of a network of data processing systems in which illustrative embodiments may be implemented. Network data processing system 100 is a network of computers in which the illustrative embodiments may be implemented. Network data processing system 100 contains network 102, which is the medium used to provide communications links between various devices and computers connected together within network data processing system 100. Network 102 may include connections, such as wire, wireless communication links, or fiber optic cables.
  • In the depicted example, server 104 and server 106 connect to network 102 along with storage unit 108. Servers 104 and 106 are servers associated with a virtual universe. Users of the virtual universe have agents on servers 104 and 106. An agent is a user's account. A user uses an agent to build an avatar representing the user. The agent is tied to the inventory of assets or possessions the user owns in the virtual universe. In addition, a region in a virtual universe typically resides on a single server, such as, without limitation, server 104. A region is a virtual area of land within the virtual universe.
  • Clients 110, 112, and 114 connect to network 102. Clients 110, 112, and 114 may be, for example, personal computers or network computers. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to clients 110, 112, and 114. Clients 110, 112, and 114 are clients to server 104 in this example.
  • In a virtual universe, assets, avatars, the environment, and anything visual consist of unique identifiers (UUIDs) tied to geometric data, textures, and effects data. Geometric data is distributed to a user's client computer as textual coordinates. Textures are distributed to a user's client computer as graphics files, such as Joint Photographic Experts Group (JPEG) files. Effects data is typically rendered by the user's client according to the user's preferences and the user's client device capabilities.
  • In the depicted example, network data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages. Of course, network data processing system 100 also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN). FIG. 1 is intended as an example, and not as an architectural limitation for the different illustrative embodiments. Network data processing system 100 may include additional servers, clients, and other devices not shown.
  • Turning now to FIG. 2, a diagram of a data processing system is depicted in accordance with an illustrative embodiment of the present invention. In this illustrative example, data processing system 200 includes communications fabric 202, which provides communications between processor unit 204, memory 206, persistent storage 208, communications unit 210, input/output (I/O) unit 212, and display 214.
  • Processor unit 204 serves to execute instructions for software that may be loaded into memory 206. Processor unit 204 may be a set of one or more processors or may be a multi-processor core, depending on the particular implementation. Further, processor unit 204 may be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 204 may be a symmetric multi-processor system containing multiple processors of the same type.
  • Memory 206, in these examples, may be, for example, a random access memory or any other suitable volatile or non-volatile storage device. Persistent storage 208 may take various forms depending on the particular implementation. For example, persistent storage 208 may contain one or more components or devices. For example, persistent storage 208 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 208 also may be removable. For example, a removable hard drive may be used for persistent storage 208.
  • Communications unit 210, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 210 is a network interface card. Communications unit 210 may provide communications through the use of either or both physical and wireless communications links.
  • Input/output unit 212 allows for input and output of data with other devices that may be connected to data processing system 200. For example, input/output unit 212 may provide a connection for user input through a keyboard and mouse. Further, input/output unit 212 may send output to a printer. Display 214 provides a mechanism to display information to a user.
  • Instructions for the operating system and applications or programs are located on persistent storage 208. These instructions may be loaded into memory 206 for execution by processor unit 204. The processes of the different embodiments may be performed by processor unit 204 using computer implemented instructions, which may be located in a memory, such as memory 206. These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and executed by a processor in processor unit 204. The program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as memory 206 or persistent storage 208.
  • Program code 226 is located in a functional form on computer readable media 228 that is selectively removable and may be loaded onto or transferred to data processing system 200 for execution by processor unit 204. Program code 226 and computer readable media 228 form computer program product 220 in these examples. In one example, computer readable media 228 may be in a tangible form, such as, for example, an optical or magnetic disc that is inserted or placed into a drive or other device that is part of persistent storage 208 for transfer onto a storage device, such as a hard drive that is part of persistent storage 208. In a tangible form, computer readable media 228 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory that is connected to data processing system 200. The tangible form of computer readable media 228 is also referred to as computer recordable storage media. In some instances, computer readable media 228 may not be removable.
  • Alternatively, program code 226 may be transferred to data processing system 200 from computer readable media 228 through a communications link to communications unit 210 and/or through a connection to input/output unit 212. The communications link and/or the connection may be physical or wireless in the illustrative examples. The computer readable media also may take the form of non-tangible media, such as communications links or wireless transmissions containing the program code.
  • The different components illustrated for data processing system 200 are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system 200. Other components shown in FIG. 2 can be varied from the illustrative examples shown.
  • As one example, a storage device in data processing system 200 is any hardware apparatus that may store data. Memory 206, persistent storage 208, and computer readable media 228 are examples of storage devices in a tangible form.
  • In another example, a bus system may be used to implement communications fabric 202 and may be comprised of one or more buses, such as a system bus or an input/output bus. Of course, the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system. Additionally, a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter. Further, a memory may be, for example, memory 206 or a cache such as found in an interface and memory controller hub that may be present in communications fabric 202.
  • A virtual universe is a computer-simulated environment, such as, without limitation, Second Life®, Entropia Universe, The Sims Online®, There, Red Light Center, EverQuest®, Ultima Online®, Lineage®, and World of Warcraft®. A virtual universe is typically represented using three dimensional (3-D) graphics and landscapes.
  • The users in the virtual universe interact, inhabit, and traverse the virtual universe through avatars. Avatars represent users and are controlled or associated with users. A user can view objects and other avatars within a given proximity of the user's avatar. The virtual universe grid software determines which objects and other avatars are within the given proximity of the user's avatar according to the geometries and textures that are currently loaded in the user's virtual universe client. The virtual universe grid determines the length of time that a user views an object or other avatar in proximity of the user based on processing the data sent to each virtual universe client.
  • The illustrative embodiments recognize that objects are prevalent in virtual universes and objects may be used for various purposes, including advertising, product placement, and information dissemination. However, an object created for one particular purpose, such as advertising, may not be appropriate for a different purpose, such as product placement. In addition, a single object style may not be suitable for placement throughout the virtual universe. For example, consider a corporate logo object. For advertising purposes, the corporate logo object should be clearly readable and stand out from the background. For other purposes, such as product placement, the logo should blend seamlessly with the background. A single corporate logo style for the object may be inappropriate to cover both of these needs. The illustrative embodiments also recognize that the creation and maintenance of multiple, high quality versions of a virtual universe object may be expensive or cost prohibitive.
  • Therefore, according to one embodiment of the present invention, a computer implemented method, apparatus, and computer usable program code is provided for modifying object styles in a virtual universe. An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates. The first contextual style template comprises first geometric and texture data to display the object with a first style. In a virtual universe, assets, avatars, the environment, and anything visual consists of unique identifiers (UUIDs) tied to geometric data and texture data. Geometric data is data associated with the form or shape of avatars and objects in the virtual universe. Geometric data may be used to construct a wire frame type model of an avatar or object. In other words, geometric data is used to create the shape or skeleton of the object. Texture data refers to the surface detail and surface textures or color that is applied to wire-frame type geometric data to render the object or avatar. The texture that is generated using texture data provides an outer appearance or “skin” of the object.
  • The geometric data and texture data may be created or modified to change the look and feel of the object. For example, and without limitation, geometric data may be modified to change a shape of the object to make the object larger and the texture data may be modified to change the color of the object to darker or bolder colors to give the object increased visibility and a bolder look and a more powerful feel. In another example, the geometric data for the object may be changed to soften edges and the texture data may be modified to change the object's colors to pastels or other soft colors to give the object a soft or relaxing look and feel.
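  • The separation of geometric data and texture data described above can be pictured with the following Python sketch; it is illustrative only, and the field names (vertices, color) and the two example modifications are assumptions rather than a format defined by the virtual universe.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class VirtualObject:
            uuid: str
            vertices: List[Tuple[float, float, float]]   # geometric data: wire-frame shape
            color: Tuple[int, int, int]                   # texture data: surface "skin" color

            def enlarge(self, factor):
                # Geometric change: scale the wire frame to make the object larger.
                self.vertices = [(x * factor, y * factor, z * factor)
                                 for (x, y, z) in self.vertices]

            def darken(self, amount):
                # Texture change: shift the skin toward darker, bolder colors.
                self.color = tuple(max(0, c - amount) for c in self.color)

        logo = VirtualObject("uuid-1234", [(0, 0, 0), (1, 0, 0), (1, 1, 0)], (200, 40, 40))
        logo.enlarge(1.5)   # increased visibility
        logo.darken(60)     # bolder look and feel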
  • In response to detecting a set of contextual changes associated with the object, a second contextual style template is identified from the plurality of contextual style templates. As used herein, the term “set” refers to one or more. Thus, the set of contextual changes may include a single contextual change, as well as two or more contextual changes. A contextual change is a condition or event associated with the object. A contextual change may include, without limitation, a change in a location of an object, a change in a number of avatars within a viewable field of the object, a real world identity of a user controlling an avatar within the viewable field of the object, characteristics of the real world user controlling the avatar within the viewable field of the object, and/or receiving a user selection to change the style of the object.
  • The set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style. The object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object. The modified object is displayed with the second style. In one embodiment, the geometric data is distributed to a user's client computer as textual coordinates. Texture data may be distributed to a user's client computer as graphics files, such as JPEG files.
  • FIG. 3 is a block diagram illustrating a virtual universe grid server in accordance with an illustrative embodiment. Server 300 is a server associated with a virtual universe. Server 300 may be a single, stand-alone server, a server in a virtual universe grid computing system, or a server in a cluster of two or more servers. In this example, server 300 is a server in a grid computing system for rendering and managing a virtual universe.
  • Virtual universe grid database 302 is a database on the grid computing system for storing data used by virtual universe grid software 306 to render objects in the virtual universe and manage the virtual universe. Virtual universe grid database 302 includes object avatar rendering (OAR) table 304. Object avatar rendering table 304 stores object unique identifiers and avatar unique identifiers.
  • Object avatar rendering table 304 stores a unique identifier (UUID) for each selected object in the virtual universe. A selected object is an object in a plurality of objects in the virtual universe that is tracked, monitored, managed, or associated with object avatar rendering table 304. Thus, object avatar rendering table 304 may be used to track all the objects in the virtual universe or track one or more objects in the virtual universe that are selected. Object avatar rendering table 304 also stores unique identifiers and other data describing avatars within a viewable field of a selected object or within a selected zone or range associated with the selected object. For example, if the selected objects include object A and object B, then object avatar rendering table 304 stores object A unique identifier, object B unique identifier, avatar unique identifiers and other data for all avatars within the viewable field of object A, and avatar unique identifiers and other data describing all avatars within the viewable field of object B.
  • Virtual universe grid software 306 is software for rendering the virtual universe and the objects within the virtual universe. Virtual universe grid software 306 includes object based avatar tracking controller 308. Object based avatar tracking controller 308 is software for tracking avatars within the viewable field of each selected object. Object based avatar tracking controller 308 stores tracking data in object avatar rendering table 304. Tracking data includes the unique identifiers and other data describing avatars within the viewable field of a selected object. When object based avatar tracking controller 308 needs data from object avatar rendering table 304 to determine the number of avatars within the viewable field of an object, when an avatar enters the viewable field of an object, when an avatar leaves the viewable field of an object, or other tracking information associated with the object, object based avatar tracking controller 308 sends a query to object avatar rendering table 304. In response to the query, virtual universe grid database 302 sends the tracking data for the object to virtual universe grid software 306 for utilization by object based avatar tracking controller 308 to track avatars and identify contextual style templates.
  • Plurality of contextual templates 310 is a plurality of predefined templates for changing, modifying, or augmenting a style of an object. The object style provides a particular look and feel for an object. For example, and without limitation, contextual style template A 312 may provide a classic style for a soft drink can object. When contextual style template A 312 is implemented to render the soft drink can object, the object may be displayed with the soft drink manufacturer's classic corporate logo to provide a classic or corporate look and feel. Contextual style template B 314 may provide an athletic or health conscious style. For example, contextual style template B 314 may be implemented to render the soft drink can object with an hour glass shaped soft drink can, colors that are slightly brighter or more vibrant, and add the word “diet” or “natural” to the label to appeal to an athletic or health conscious consumer. Contextual style template C 316 may provide a youth style. For example, contextual style template C 316 may be selected and implemented to render the object to form the shape of a drink bottle with images of popular fictional characters from children's entertainment and employ bolder colors to appeal to children.
  • Plurality of contextual templates 310 may include two or more different contextual style templates. For example, and without limitation, plurality of contextual templates 310 may include a blending style to permit the object to blend inconspicuously into the background, a high visibility style to make the object noticeable or stand out from the background, a corporate style, a professional style, a youth style, a senior style, a classic style, a nostalgic style, a modern style, a holiday style, a trendy style, a movie theme style, a television program theme style, a toy style, a back-to-school style, a political style, a healthy style, a science fiction style, a comedy style, a sport style, a hobby style, a pet style, or any other type of style.
  • A contextual style template is a template comprising instructions for modifying or augmenting the object to display the object with a different style. The modifications in the set of instructions support a particular style. The set of instructions may be implemented as a table. The contextual style template may include, without limitation, geometric data, texture data, instructions to change a font type, instructions to change a font size, and/or instructions to modify a background of the object. The geometric data is used to change the wireframe or shape of the object. Texture data is used to change the color and texture of the outside or “skin” of the object. The font and/or font size instructions are used to change the letters, numbers, symbols, and/or characters that are displayed as part of the object. The contextual style template may include geometric and texture modifications to float the object, move the object along a fixed path, clone the object, change a size of the object, change one or more colors of the object, and/or change a shape of the object.
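  • Purely as an illustrative sketch, a contextual style template of the kind described above could be represented as a simple instruction record such as the following; the field names mirror the components listed in the text, but the structure itself is an assumption.

        from dataclasses import dataclass, field
        from typing import Dict, List, Optional

        @dataclass
        class ContextualStyleTemplate:
            name: str                                                       # e.g. "blend" or "high visibility"
            geometric_data: Dict[str, float] = field(default_factory=dict)  # e.g. {"scale": 1.2}
            texture_data: Dict[str, str] = field(default_factory=dict)      # e.g. {"skin": "red"}
            font_type: Optional[str] = None
            font_size: Optional[int] = None
            background: Optional[str] = None
            motions: List[str] = field(default_factory=list)                # e.g. ["float", "clone"]

        high_visibility = ContextualStyleTemplate(
            name="high visibility",
            geometric_data={"scale": 1.5},
            texture_data={"skin": "red"},
            font_type="bold sans-serif",
            font_size=18,
            background="white",
            motions=["float"],
        )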
  • A contextual style template may be selected to modify the object to a holiday themed style in response to a change of date. For example, when the date changes and it is St. Patrick's Day, a contextual style template may be implemented to display the object in green colors and add four leaf clovers to the object to create a St. Patrick's Day style. If it is the month of December, a Christmas style object may be selected to display the object in colors of red and green. In another example, and without limitation, a change in context may include a particular number of avatars entering a viewable field of the object. When the given number of avatars is present, a contextual style template may be selected and implemented to modify the object to create a “reward” style that will make the object glow and create fireworks type graphics as a reward for having the given number of avatars present. In still another example, when a user having a real world identity that indicates the user is a health conscious consumer, a contextual style template may be selected and implemented to render the object with words like “all-natural,” “diet,” or “low-calorie,” to appeal to the health conscious consumer.
  • A contextual style template may be used on a single object or multiple objects. Plurality of contextual templates 310 may include a set of contextual style templates that are assigned to each object or the contextual style templates may be assigned to multiple objects in the virtual universe. In other words, a given contextual style template may only be used to modify a single object or the contextual style template may be capable of being implemented to modify multiple different objects. The contextual style template may be used to modify the multiple different objects simultaneously or substantially at the same time or the contextual style template may be used to modify one or more objects at a given time. For example, the contextual style template may be used to modify a first object during a first time interval, modify a second and third object during a second time interval, and modify a fourth object during a third time interval.
  • Thus, objects are modified to support multiple styles. Styles include geometric data, textures, fonts, relative font sizes, restrictions, and background. Each style type created is mapped to a style supported by the virtual universe server 300. For example, a virtual universe may support a blend style and a high visibility style. If the newly defined style maps to the virtual universe's blend style, the associated textures, fonts, and relative font sizes are configured to permit the object to blend with most environments in which the object would be placed. Permitting multiple styles within a single object lowers associated costs with maintaining the object. Maintenance may include minor updates to textures, fonts, geometric data, and other components of the object. The creation of contextual style templates for an object may be performed with virtual universe specific objects creation and editing tools.
  • When a user creates a contextual style template, the user may identify particular components of the object for modification by the contextual style template. Objects are comprised of components such as fonts, textures, backgrounds, font sizes, and geometry. In one embodiment, a user identifies each component within an object that is to be modified by the contextual style template. Component identification may also be used by the rendering system to augment the component based on the selected rendering style. Identification may be completed with any virtual universe specific object creation and editing tool.
  • Contextual style template manager 318 is a software component for creating, managing, identifying, selecting, and/or implementing contextual style templates to modify the styles of the objects. When an object is being rendered for the first time, contextual template selection 320 identifies a first contextual template for implementation to display the object with an initial style. The first contextual style template may be a default template. In another embodiment, the first contextual style template may be a template selected by a user via a user input/output device.
  • For style rendering, contextual style template manager 318 discovers the context of the object placement and selects contextual style templates for implementation to modify the style of one or more objects based on the discovered context. Context discovery 322 is a software component for monitoring changes in context associated with the object. Context changes may include, without limitation, an avatar entering a viewable field of the object, an avatar leaving a viewable field of the object, a real world identity of a user controlling an avatar that is within the viewable field of the object or within a detection area of the object, a real world characteristic of one or more users controlling one or more avatars within the viewable field of the object or the detection area of the object, a time of day, a day of the week, a holiday, or any other change. A real world characteristic may include, without limitation, a profession of a user, a residence of a user, a citizenship of a user, a current location of a user, an age of the user, a sex of the user, a hobby of the user, a past purchase history of the user, a membership of the user in an organization, club, loyal customer group, or any other membership, a marital status of the user, an educational or employment status, a credit history or credit rating, or any other characteristic of the user.
  • Context discovery 322 may utilize tracking data and real world user information to determine when an avatar enters the viewable field or detection area of the object, when an avatar leaves the viewable field or detection area of the object, and/or determine real world characteristics of the users controlling the avatars that are within the viewable field or the detection area of the object.
  • In another embodiment, if the context change is associated with the environment surrounding the object, context discovery 322 samples and averages the colors in the environment immediately surrounding the object. For example, if an object is moving or changing location, the object may need to be modified in accordance with a blend style to blend in with the new environment. In such a case, context discovery 322 averages individual color values for the area surrounding the object's location.
  • For example, context discovery 322 may average the red, green, and blue color values surrounding the object. Additionally, the designation of the surrounding area to be sampled may be a predefined area or an area that is proportional to the size of the object being placed. For example, if the object is two centimeters in width and two centimeters in height, context discovery 322 samples an area that is equal to the square of the object's area; in this example, context discovery 322 samples sixteen square centimeters around the location where the object will be placed or is placed.
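  • The sampling and averaging just described might, purely as a sketch, be implemented as follows; representing the surrounding environment as a mapping from coordinates to color samples is an assumption made for illustration.

        def average_surrounding_color(environment, center, object_width, object_height):
            # Average the red, green, and blue values in a square region around
            # `center`; the side of the region is the square of the object's area,
            # matching the two-centimeter example in the text.
            side = (object_width * object_height) ** 2
            half = side / 2.0
            samples = [pixel for (x, y), pixel in environment.items()
                       if abs(x - center[0]) <= half and abs(y - center[1]) <= half]
            if not samples:
                return (0, 0, 0)
            n = len(samples)
            return tuple(sum(pixel[i] for pixel in samples) // n for i in range(3))

        # environment maps (x, y) coordinates to (r, g, b) samples
        env = {(0, 0): (10, 20, 200), (1, 1): (30, 40, 220), (5, 5): (255, 0, 0)}
        print(average_surrounding_color(env, center=(0, 0), object_width=2, object_height=2))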
  • The contextual style templates may require restrictions on all or parts of the object to restrict the automatic modification of the object in accordance with the instructions in a given contextual style template. Restrictions may be required to maintain corporate identity or personal taste. For example, some corporations only permit the corporate logo to be rendered in a specific font or colors. The restrictions placed on the object or components of the object affect the available object mutations for each renderable style.
  • Restriction controller 324 is software for restricting the changes or modifications made to a particular object in accordance with restrictions table 326. Restrictions table 326 is a table of limitations on the changes that are allowed to be made on a particular object. For example, an object that is a soft drink logo may include a restriction that limits color changes on the logo to a range of red colors and white colors. The restrictions may prohibit the logo being displayed in the color black, brown, purple, orange, or any color other than the approved range of reds and whites. When a contextual style template is implemented to modify an object, restriction controller 324 checks restrictions table 326 to determine whether the changes implemented by the contextual style template are permitted. For example, if contextual style template C 316 is a Christmas style that includes instructions to change the letter “l” to a red and white candy cane and a restriction in restrictions table 326 does not permit modification of the font style or font color, restriction controller 324 will not permit that change to the letter “l” to be made. The selection of restrictions for an object may be performed with virtual universe specific object creation and editing tools to create restrictions table 326.
  • The rendering restrictions may impact the color selection, textures, and other modifications to an object for each supported style that is available in plurality of contextual templates 310. When rendering an object, the colors, textures, font colors, font sizes, and other components are selected so as not to conflict with any of the object's rendering restrictions. It should be noted that rendering restrictions in restrictions table 326 for a particular object component may have a cascading effect on the rendering selections for one or more other components of the object. For example, if a background component is restricted to one color which is not appropriate for the selected style, another unrestricted component may be modified to maintain adherence to the selected style.
  • In one embodiment, when a contextual object is initially placed in a virtual universe, the user chooses an initial style for the object by manually selecting a contextual style template from a set of contextual templates in plurality of contextual templates 310 that are available for utilization on the object. The user may select the contextual style template using a context menu or responding to prompts presented by contextual style template manager 318 via a user input/output interface.
  • In another embodiment, if the context surrounding an object placement is modified, a contextual style template may be invoked to update the object. Context updates include, without limitation, changing the color of the wall on which an object is placed, moving the object to a new location, or any other context change.
  • Turning now to FIG. 4, a block diagram illustrating a real world user identifier is shown in accordance with an illustrative embodiment. Virtual universe server 400 is a server that includes virtual universe grid software for creating and managing a virtual universe and objects in a virtual universe. Virtual universe server 400 includes user input/output 402, which may be implemented as any type of user interface for receiving user input and providing output to the user, such as, without limitation, a graphical user interface, a command line interface, a menu driven interface, a keyboard, a mouse, a touch screen, a voice-recognition system, or any other type of input/output device.
  • A user may use user input/output 402 to select a default contextual style template for use in rendering an object and/or select a contextual style template for implementation to change a style of the object. For example, if the object is being displayed with a modern style, the user may manually select a classic or retro style for the object, rather than waiting for the contextual style template to be automatically selected when a context change is detected.
  • Real world user identifier 404 is software for determining a real world identity of a user controlling an avatar and/or determining real world characteristics of the user controlling the avatar. Real world user identifier 404 sends query 406 to set of sources of user information 408 to obtain an identity and/or other information describing the real world identity and real world characteristics of the user. Query 406 may be a single query to a single source of user information or two or more queries to one or more different sources of user information.
  • Set of sources of user information 408 is a set of one or more sources of user information. Set of sources 408 may include both online sources of information and offline sources of information. Offline sources of information include, without limitation, information manually provided by a user and/or information retrieved from one or more local data storage devices. Online sources of information may include information stored on one or more remote data storage devices that are accessed via a network connection and/or information provided by a user from a remote location. Virtual universe server 400 obtains information from one or more remote information sources in set of sources of user information 408 via a network connection provided by network interface 410.
  • Network interface 410 is any type of network access software known or available for allowing virtual universe server 400 to access a network. Network interface 410 connects to a network connection, such as network 102 in FIG. 1. The network connection permits access to any type of network, such as a local area network (LAN), a wide area network (WAN), the Internet, an Ethernet, an intranet, a virtual private network (VPN), or any other type of network.
  • Virtual universe server 400 receives real world user information 412 from set of sources of user information 408 in response to query 406. Real world user information 412 may include a real world identity of the user and/or characteristics of the user. The contextual style template manager in the virtual universe grid software may use real world user information 412 to select contextual style templates for implementation to change the style of one or more objects being displayed in the virtual universe.
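  • One way to picture how real world user information might be gathered from a set of online and offline sources and then used to pick a contextual style template is the sketch below. The source interface (a lookup method), the information keys, and the template names are invented for illustration; the patent does not prescribe these APIs.

```python
from typing import Dict, Iterable, Optional

def query_user_information(user_id: str, sources: Iterable) -> Dict[str, str]:
    """Merge real world user information from online and offline sources.

    Each source is assumed to expose lookup(user_id) -> dict and may return
    partial information; later sources do not overwrite earlier answers.
    """
    merged: Dict[str, str] = {}
    for source in sources:
        try:
            info = source.lookup(user_id) or {}
        except ConnectionError:
            continue                      # a remote source may be unreachable
        for key, value in info.items():
            merged.setdefault(key, value)
    return merged

def select_template_for_user(info: Dict[str, str]) -> Optional[str]:
    """Pick a contextual style template name from user characteristics."""
    if info.get("age_group") == "youth":
        return "youth_style"
    if info.get("affiliation") == "corporate":
        return "corporate_style"
    return None                           # fall back to the default template

class OfflineProfile:
    """Offline source: information the user supplied manually."""
    def __init__(self, records):
        self._records = records
    def lookup(self, user_id):
        return self._records.get(user_id)

if __name__ == "__main__":
    sources = [OfflineProfile({"user-1": {"age_group": "youth"}})]
    info = query_user_information("user-1", sources)
    print(select_template_for_user(info))   # youth_style
```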
  • FIG. 5 is a block diagram of an object avatar rendering table in accordance with an illustrative embodiment. Object avatar rendering table 500 is an example of data in an object avatar rendering table, such as object avatar rendering table 305 in FIG. 3.
  • RenderingUUID 502 is a primary key for object avatar rendering table 500. ObjectUUID 504 is a unique identifier for a selected object in a virtual universe. ObjectUUID 504 is a foreign key to the existing object table. AvatarUUID 506 is a foreign key to an existing avatar table. AvatarUUID 506 includes a unique identifier for each avatar in the viewable field of the object associated with ObjectUUID 504.
  • Zone1EnterTime 508 is a field of a date and/or time when an avatar enters a first zone within the viewable field of an object. In this example, the zone 1 enter time is a time when an avatar entered the first zone, assuming a model with two or more zones. Zone1LeaveTime 510 is a field for a date and/or time when the avatar leaves the first zone. Zone2EnterTime 512 is a field in object avatar rendering table 500 for storing a date and/or time when an avatar enters a second zone. The second zone may be an area that is outside the viewable field. In other words, the second zone is an area in which an avatar cannot see the selected object, but the area is in close proximity to the viewable field in which the avatar will be able to see the object. Thus, when an avatar enters the second zone, the object avatar tracking controller software may begin preparing to display the object to the avatar when the avatar does eventually enter the viewable field.
  • Zone2LeaveTime 514 is a field for storing the date and/or time when a given avatar leaves the second zone. NumberofZone1Enters 516 is a field for storing the number of times a particular avatar has entered the first zone. This information may be used to determine whether the user has ever viewed the object. If the user has never before viewed the object, it may be determined that content associated with the object should be displayed in full to the user associated with the avatar, rather than in an abbreviated or summarized form. The information in NumberofZone1Enters 516 may also be used to determine whether the user has viewed the object one or more times in the past, in which case the content associated with the object may be displayed in part, skip introductory material, or otherwise be modified or abbreviated so that the exact same content is not displayed to the user every time the user is within the viewable field of the object.
  • NumberofZone2Enters 518 is a field for storing the number of times an avatar has entered the second zone. LastCoordinates 520 is a field for storing coordinate data describing where a given avatar is within the first zone or the second zone of a selected object. The coordinate data is typically given as x, y, z coordinates.
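  • The fields described above can be summarized in a compact schema. The dataclass below is only an illustrative mirror of object avatar rendering table 500 under assumed field types; it is not the table definition used by the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Tuple
import uuid

@dataclass
class ObjectAvatarRendering:
    """One row of an object avatar rendering table (illustrative sketch)."""
    rendering_uuid: str = field(default_factory=lambda: str(uuid.uuid4()))  # primary key
    object_uuid: str = ""            # foreign key to the object table
    avatar_uuid: str = ""            # foreign key to the avatar table
    zone1_enter_time: Optional[datetime] = None
    zone1_leave_time: Optional[datetime] = None
    zone2_enter_time: Optional[datetime] = None
    zone2_leave_time: Optional[datetime] = None
    number_of_zone1_enters: int = 0
    number_of_zone2_enters: int = 0
    last_coordinates: Optional[Tuple[float, float, float]] = None  # x, y, z

    def first_time_viewer(self) -> bool:
        """True if this avatar has never entered the viewable field (zone 1)."""
        return self.number_of_zone1_enters == 0

if __name__ == "__main__":
    row = ObjectAvatarRendering(object_uuid="object-602", avatar_uuid="avatar-a-610")
    print(row.first_time_viewer())   # True: the zone 1 enter count is still zero
```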
  • The illustrative embodiments recognize that a user may wish to change the style or look and feel of an object based on circumstances, such as the number of avatars viewing an object, the real world identities or characteristics of users controlling the avatars within the viewable field of the object, whether the object is being displayed to avatars on a holiday, weekend, or workday, whether the object is being displayed during daylight hours or during the evening or night, and other information describing the context in which the object is being viewed.
  • FIG. 6 is a block diagram of a viewable area for an object in accordance with an illustrative embodiment. Range 600 comprises viewable field 604 and detection area 606 associated with object 602 in a virtual universe. An object, such as object 602, is an element in a virtual universe that is not directly controlled by a user or associated with a user's account. An object may be, for example and without limitation, a building, a statue, a billboard, a sign, a drink bottle, a logo, a box, a container, or an advertisement in the virtual universe. In this example, object 602 is an advertisement, such as a billboard or a sign.
  • Viewable field 604 has a focal point or center at a location that is the same as the location of object 602. Viewable field 604 may also be referred to as zone 1 or a first zone. An avatar in viewable field 604 is able to see or view object 602 and/or content associated with object 602. For example, object 602 may be associated with video and/or audio content. Object 602 may also optionally be capable of some movement or animation. However, in this example, object 602 is substantially limited to a single location in the virtual universe.
  • Object 602 is rendered on a user's screen when an avatar associated with the user is within viewable field 604. Object 602 is rendered using any perspective mode, including but not limited to, a first person perspective, a third person perspective, a bird's eye view perspective, or a map view perspective. A map view perspective renders objects with labels rather than with extensive details and/or texturing.
  • Detection area 606 is an area adjacent to viewable field 604 within range 600. Detection area 606 may also be referred to as a second zone or zone 2. An avatar in detection area 606 cannot see object 602 or view content associated with object 602. However, when an avatar enters detection area 606, the object avatar tracking controller software can begin preparing to display object 602 and content associated with object 602 to the avatar when the avatar enters viewable field 604.
  • In this example, avatar A 610 is within viewable field 604. Therefore, avatar A 610 is able to view or see object 602. Avatar C 614 is within detection area 606. Avatar C 614 is not able to see or view object 602. However, the presence of avatar C 614 indicates that avatar C 614 may be about to enter viewable field 604 or that avatar C 614 has just left viewable field 604. Avatar B 612 is outside range 600. Avatar B 612 is not able to see or view object 602. In addition, avatar B 612 is not close enough to viewable field 604 to indicate that avatar B 612 may be preparing to enter viewable field 604. Therefore, an object avatar tracking table for object 602 includes entries for avatar A 610 in zone 1 and avatar C 614 in zone 2. However, in this example, the record associated with object 602 in the object avatar rendering table does not include an avatar unique identifier or data for avatar B 612 because avatar B 612 is outside both viewable field 604 and detection area 606.
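  • A simple geometric sketch of how an avatar's position could be classified against viewable field 604 (zone 1) and detection area 606 (zone 2) follows. The circular zones, the radii, and the coordinates are illustrative assumptions; as FIG. 7 shows, the focal point may also differ from the object's own location.

```python
import math
from typing import Tuple

def classify_zone(avatar_pos: Tuple[float, float],
                  focal_point: Tuple[float, float],
                  viewable_radius: float,
                  detection_radius: float) -> str:
    """Return 'zone1', 'zone2', or 'out_of_range' for an avatar position.

    zone1: within the viewable field (the avatar can see the object)
    zone2: within the detection area surrounding the viewable field
    """
    distance = math.dist(avatar_pos, focal_point)
    if distance <= viewable_radius:
        return "zone1"
    if distance <= detection_radius:
        return "zone2"
    return "out_of_range"

# Avatar A inside the viewable field, avatar C in the detection area,
# avatar B outside the range entirely (mirroring the FIG. 6 example).
print(classify_zone((1.0, 1.0), (0.0, 0.0), 5.0, 10.0))    # zone1
print(classify_zone((7.0, 0.0), (0.0, 0.0), 5.0, 10.0))    # zone2
print(classify_zone((20.0, 0.0), (0.0, 0.0), 5.0, 10.0))   # out_of_range
```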
  • FIG. 7 is a block diagram of a viewable area for an object having a focal point at a location other than the location of the object in accordance with an illustrative embodiment. Viewable field 700 is a viewable field for object 702. In this example, object 702 is an advertisement in front of object 706. Viewable field 700 is a range in which an avatar, such as avatars 610-614, may view object 702. An avatar can see object 702 if the avatar is within viewable field 700.
  • Viewable field 700 has focal point 704. Focal point 704 is a point from which the range or area of viewable field 700 for object 702 is determined. In other words, viewable field 700 is an area that is identified based on a predetermined radius or distance from focal point 704. Here, focal point 704 is a location that is different from the location of object 702 because object 702 is adjacent to an obstructing object. In this example, the obstructing object is object 706.
  • In this example, when avatar C 614 comes in range of detection area 708 of object 702, object based avatar tracking controller, such as object based avatar tracking controller 308 in FIG. 3, makes a determination as to whether there is an existing session associated with the unique identifier of object 702 and the unique identifier of avatar C 614. This step may be implemented by making a query to the object avatar rendering table to determine if avatar C 614 has ever entered zone 2 or zone 1 previously. If there is not an existing session for avatar C 614, the object based avatar tracking controller creates a record in the object avatar rendering table with the unique identifier of object 702 and the unique identifier of avatar C 614.
  • The record in the object avatar rendering table may optionally include additional information, such as, without limitation, a date and time when avatar C 614 entered zone 2, a date and time when avatar C 614 leaves zone 2, a date and time when avatar C 614 enters zone 1, a number of zone 2 enters, a number of zone 1 enters, coordinates of avatar C 614, and any other data describing avatar C 614. This data is used by the virtual universe grid software for analysis, reporting, and billing purposes.
  • Object 702 may have an initiation process associated with object 702. For example, if object 702 is an advertisement with audio and/or video content associated with viewing object 702, an initiation process may include buffering the audio and/or video content, checking a cache for the audio and/or video content, caching the audio and/or video content, or any other initiation process.
  • When avatar C 614 enters detection area 708, the object-based avatar tracking controller triggers any object initiation process defined by object 702. When avatar C 614 enters viewable field (zone 1) 700, the object based avatar tracking controller displays the buffered or cached content. If a user is viewing the object for the first time and object 702 has a video or audio file associated with viewing the object, the process starts playing the video or audio from the beginning.
  • If a session already exists, the object based avatar tracking controller triggers any object re-initiation process defined by object 702. For example, if the user is not viewing an object with an associated video for the first time, the process starts playing the video at a point in the video after the beginning, such as after an introduction, in a middle part, or near the end of the video to avoid replaying introductory material.
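  • The initiation and re-initiation behavior described in the preceding paragraphs might look like the following sketch. The cache, the content identifier, the introduction length, and the resume offset are illustrative assumptions rather than the patent's implementation.

```python
from typing import Dict

class AdvertisementObject:
    """Sketch of an object with an initiation and re-initiation process."""

    def __init__(self, content_id: str, intro_seconds: float = 10.0):
        self.content_id = content_id
        self.intro_seconds = intro_seconds
        self.cache: Dict[str, bytes] = {}

    def initiate(self) -> None:
        """Triggered when an avatar enters zone 2: buffer/cache the content."""
        if self.content_id not in self.cache:
            self.cache[self.content_id] = self._fetch(self.content_id)

    def display(self, first_time_viewer: bool) -> float:
        """Triggered when the avatar enters zone 1: return the playback offset.

        First-time viewers start at the beginning; returning viewers skip
        the introductory material (re-initiation).
        """
        self.initiate()                       # ensure the content is cached
        return 0.0 if first_time_viewer else self.intro_seconds

    @staticmethod
    def _fetch(content_id: str) -> bytes:
        return b"\x00" * 1024                 # stand-in for downloading the content

# A returning viewer resumes after the 10 second introduction.
ad = AdvertisementObject("drink-spot-01")
print(ad.display(first_time_viewer=False))    # 10.0
```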
  • The object based avatar tracking controller makes a determination as to whether the position of avatar C 614 has changed. Changing position may include traveling, turning, walking, or disappearing, such as teleporting, logging off, or disconnecting. When avatar C's 614 position changes, the object based avatar tracking controller adds the user position data to the object avatar rendering table, such as at a field for LastCoordinates 520 in FIG. 5. The user position data includes angle of view coordinate data of the avatar relative to object 702 and the distance of avatar C 614 from object 702.
  • The object based avatar tracking controller performs an analysis of the position data and modifies object 702 according to one or more geometric and texture modification methods (GTMs) to improve visibility of the object.
  • When avatar C 614 is out of range of viewable field 700 and detection area 708, the object based avatar tracking controller logs a session pause for the session associated with avatar C 614. The log may include the date and time of the session pause. When the session has been paused for an amount of time that exceeds a threshold amount of time, the object based avatar tracking controller terminates the session associated with avatar C 614. The process termination may include, without limitation, removing the records and data associated with avatar C 614 from the object avatar rendering table. If the record is not deleted, when avatar C 614 comes back into range of zone 1 or zone 2 of object 702, the object based avatar tracking controller determines that an existing session associated with the unique identifier of object 702 and a unique identifier of avatar C 614 already exists. In such a case, a new record for avatar C 614 will not be created. Instead, the data in the object avatar rendering table will be updated with new data regarding avatar C 614 in the range of object 702.
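  • The session pause and termination behavior described above might be sketched as follows. The threshold handling, the in-memory record store, and the class and field names are illustrative assumptions, not the patent's implementation.

```python
from datetime import datetime, timedelta
from typing import Dict, Optional

class SessionTracker:
    """Tracks per-avatar sessions for one object and expires paused sessions."""

    def __init__(self, pause_threshold: timedelta = timedelta(minutes=10)):
        self.pause_threshold = pause_threshold
        self.sessions: Dict[str, dict] = {}       # avatar_uuid -> session record
        self.paused_at: Dict[str, datetime] = {}  # avatar_uuid -> pause timestamp

    def pause(self, avatar_uuid: str, now: Optional[datetime] = None) -> None:
        """Log a session pause when the avatar leaves both zone 1 and zone 2."""
        self.paused_at[avatar_uuid] = now or datetime.utcnow()

    def resume(self, avatar_uuid: str) -> bool:
        """Return True if an existing session was found and resumed."""
        self.paused_at.pop(avatar_uuid, None)
        return avatar_uuid in self.sessions

    def expire_paused(self, now: Optional[datetime] = None) -> None:
        """Terminate sessions whose pause exceeds the threshold."""
        now = now or datetime.utcnow()
        for avatar_uuid, paused in list(self.paused_at.items()):
            if now - paused > self.pause_threshold:
                self.sessions.pop(avatar_uuid, None)   # remove the rendering record
                del self.paused_at[avatar_uuid]

if __name__ == "__main__":
    tracker = SessionTracker(pause_threshold=timedelta(minutes=10))
    tracker.sessions["avatar-c"] = {"object": "object-702"}
    tracker.pause("avatar-c", now=datetime.utcnow() - timedelta(minutes=30))
    tracker.expire_paused()
    print("avatar-c" in tracker.sessions)   # False: the pause exceeded the threshold
```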
  • FIG. 8 is a block diagram illustrating a restrictions table in accordance with an illustrative embodiment. Restrictions table 800 is a table of restrictions that limit the changes that are made to a particular object when each contextual style template is implemented to modify the object, such as restrictions table 326 in FIG. 3.
  • Restrictions table 800 in this example limits changes to font colors white 802, blue 804, and red 806 based on the style rendered in accordance with the selected contextual style template. For example, if a first contextual style template is implemented to create a blend style, restrictions table 800 limits changes of the font color white 802 to either white or grey. If a second contextual style template is selected to create a high visibility style, restrictions table 800 limits changes of the font color white to blue or red. In this example, the style modifications and the restriction data are represented with colors, as opposed to number ranges corresponding to the colors, for illustrative purposes only. In one embodiment, restrictions table 800 comprises number ranges corresponding to the colors. In addition, while restrictions table 800 lists textual associations for styles and colors, it should be noted that mathematical functions may be derived for color ranges, permitting selection of colors with explicit declaration.
  • Rendering restrictions in restrictions table 800 for a particular object component may have a cascading effect on the rendering selections for one or more other components of the object. For example, if a background component is restricted to one color which is not appropriate for the selected style, another unrestricted component may be modified to maintain adherence to the selected style. In this example, the object may be restricted to a blue background. If the object is placed in a predominantly blue context and the high visibility style is selected, the restrictions on blue color 804 may prevent utilization of a red or yellow color as a background selection. Therefore, the background may remain blue, but the font color may be changed to red or yellow to create a high visibility style rather than changing the background color. In another example, a combination of restrictions may exist such that the selected style is not possible; in this case, the user is notified that the selected style cannot be implemented in the placed context due to the restrictions. Restrictions table 800 may also include absolute restrictions that are enforced regardless of context. For example, a restriction record may prevent any context and any style from changing an object's color to green.
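  • The cascading effect described above can be illustrated with a small sketch: when a restricted component cannot be changed to satisfy the selected style, an unrestricted component is modified instead, and an impossible combination of restrictions is reported to the user. The component names, color values, and function name are illustrative assumptions.

```python
def apply_style(components: dict, style_colors: dict, restrictions: dict) -> dict:
    """Apply a style's preferred colors, honoring per-component restrictions.

    components    : component name -> current color
    style_colors  : component name -> list of colors preferred by the style
    restrictions  : component name -> set of permitted colors (absent = unrestricted)
    """
    result = dict(components)
    unsatisfied = []
    for name, preferred in style_colors.items():
        allowed = restrictions.get(name)
        choices = [c for c in preferred if allowed is None or c in allowed]
        if choices:
            result[name] = choices[0]
        else:
            unsatisfied.append(name)            # restricted component cannot change
    if unsatisfied:
        # Cascade: try to satisfy the style by changing an unrestricted component.
        for name in unsatisfied:
            fallback = [n for n in result if n not in restrictions and n != name]
            if not fallback:
                raise ValueError("Selected style cannot be implemented in this context")
            result[fallback[0]] = style_colors[name][0]
    return result

# The background is restricted to blue, so the high visibility style
# changes the font color instead of the background color.
components = {"background": "blue", "font": "white"}
high_visibility = {"background": ["red", "yellow"], "font": ["white"]}
restrictions = {"background": {"blue"}}
print(apply_style(components, high_visibility, restrictions))
# {'background': 'blue', 'font': 'red'}
```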
  • FIG. 9 is a flowchart illustrating a process for object based avatar tracking using object avatar rendering tables in accordance with an illustrative embodiment. The process in FIG. 9 is implemented by software for tracking avatars in a range of an object, such as object based avatar tracking controller 306 in FIG. 3.
  • The process begins when an avatar comes in range of the object (step 902). A determination is made as to whether there is an existing session associated with the unique identifier of the object and the unique identifier of the avatar (step 904). This step may be implemented by making a query to the object avatar rendering table for the object. If there is not an existing session, the process creates a record in the object avatar rendering table with the unique identifier of the object and the unique identifier of the avatar (step 906). The record in the object avatar rendering table may include other information, such as, without limitation, a date and time, which can be used for analysis, reporting, and billing purposes.
  • The process triggers any object initiation process defined by the object (step 908). For example, if a user is viewing the object for the first time and the object has a video associated with viewing the object, the process starts playing the video from the beginning.
  • Returning to step 904, if a session already exists, the process triggers any object re-initiation process defined by the object (step 910). For example, if the user is not viewing an object with an associated video for the first time, the process starts playing the video at a point in the video after the beginning, such as after an introduction, in a middle part, or near the end of the video to avoid replaying introductory material.
  • The process makes a determination as to whether the user's position has changed (step 912). Changing position may include traveling, turning, or disappearing, such as teleporting, logging off, or disconnecting. If the user's position has not changed, the process returns to step 912. The process may also return to step 912 if the user's position does not change within a specified amount of time. The specified amount of time may be configured by the virtual universe grid administrator or object owner. The check may be repeated very frequently, such as, without limitation, after a specified number of seconds or after a specified number of milliseconds.
  • When the user's position changes at step 912, the process adds the user position data to the object avatar rendering table (step 914). The user position data includes angle of view coordinate data of the avatar relative to the object and distance of the avatar to the object. The process then performs an analysis of the position data and modifies the object according to one or more geometric and texture modification methods (GTMs) (step 916) to improve visibility of the object.
  • The process then makes a determination as to whether the user is out of view (step 918). The user may be out of view if the user or the user's avatar has disappeared or is no longer facing the object. If the user is not out of view, after a specified amount of time the process returns to step 912. The specified amount of time may be configured by the virtual universe grid administrator or object owner. The specified amount of time may be, without limitation, a specified number of seconds or a specified number of milliseconds.
  • If the user is out of view at step 918, the process logs a session pause (step 920). The log may include the date and time. Next, the process makes a determination as to whether the session has been paused for an amount of time that exceeds a threshold amount of time (step 922). The threshold amount of time may be configured by a virtual universe administrator or object owner. If the pause does not exceed the threshold, the process returns to step 922. When the pause exceeds the threshold, the process terminates thereafter.
  • The process termination may include, without limitation, removing the records of the avatar from the object avatar rendering table. If the record is not deleted, when the avatar comes back into range of the object at step 902, the process will make a determination at step 904 that an existing session associated with the unique identifier of the object and a unique identifier of the avatar already exists.
  • FIG. 10 is a flowchart illustrating a process for modifying the style of an object in a virtual universe using contextual style templates in accordance with an illustrative embodiment. The process in FIG. 10 may be implemented by software for identifying a contextual style template based on a context change associated with the object.
  • The process begins by selecting a first contextual style template from a plurality of contextual style templates (step 1002). The first contextual template may be a default contextual template or a contextual template selected by a user. The process renders an object in the virtual universe in accordance with the first contextual style template to display the object with a first style (step 1004). The process makes a determination as to whether a context change associated with the object is detected (step 1006). If a context change is detected, the process selects a different contextual style template from the plurality of contextual style templates based on the context change (step 1008). The process renders the object in accordance with the different contextual style template (step 1010). The process then returns to step 1006 and determines whether another context change has been detected. If another context change occurs, the process selects another different contextual style template from the plurality of contextual style templates and renders the object in accordance with the selected contextual style template. The process iteratively implements steps 1006-1010 until a context change is no longer detected. When a context change is not detected, the process terminates. The process may be terminated if a context change is not detected within a predetermined period of time.
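  • A compact sketch of the FIG. 10 flow follows: the object is rendered with an initial template and re-rendered with a different contextual style template whenever a context change is detected. The template selection rule, the render and detection callbacks, and the timeout handling are illustrative assumptions rather than the patent's implementation.

```python
import time
from typing import Callable, Dict, Optional

def run_style_loop(render: Callable[[str], None],
                   detect_context_change: Callable[[], Optional[str]],
                   templates: Dict[str, str],
                   initial_template: str,
                   timeout_seconds: float = 5.0,
                   poll_seconds: float = 0.5) -> None:
    """Render an object, then re-render it whenever the context changes.

    detect_context_change returns a context key (e.g. 'holiday', 'crowded')
    or None when no change is detected; the loop ends after timeout_seconds
    without a detected change.
    """
    current = initial_template
    render(current)                                   # steps 1002-1004
    idle = 0.0
    while idle < timeout_seconds:                     # steps 1006-1010, iterated
        context = detect_context_change()
        if context is None:
            time.sleep(poll_seconds)
            idle += poll_seconds
            continue
        idle = 0.0
        current = templates.get(context, current)     # select a different template
        render(current)                               # re-render with the new style

# Example usage with stand-in callbacks.
if __name__ == "__main__":
    changes = iter(["holiday", None, "crowded"])
    run_style_loop(render=lambda t: print("rendering with", t),
                   detect_context_change=lambda: next(changes, None),
                   templates={"holiday": "holiday_style",
                              "crowded": "high_visibility_style"},
                   initial_template="default_style",
                   timeout_seconds=1.0, poll_seconds=0.25)
```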
  • One embodiment provides a computer implemented method, apparatus, and computer program product for modifying object styles in a virtual universe. An object is rendered in accordance with a first contextual style template from a plurality of contextual style templates. The first contextual style template comprises first geometric and texture data to display the object with a first style. In response to detecting a set of contextual changes associated with the object, a second contextual style template is identified from the plurality of contextual style templates. The set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style. The object is rendered in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
  • The predefined contextual style templates allow a single object to be created that may be used for a plurality of purposes and placements. The object is modified in accordance with a contextual style template to automatically alter the object's properties to create a different style and appearance of the object to enable the object to be used for multiple purposes and placements within the virtual universe. Furthermore, a restriction table provides restrictions on object alteration to prevent a contextual style template from altering the object in a manner that may conflict with corporate identity, trademark and trade dress, personal preferences, or other aesthetic goals. The contextual style templates reduce the cost of creating virtual universe objects and lower the cost of maintaining virtual universe objects. In other words, the predefined plurality of contextual style templates that are created for objects in a virtual universe reduces the costs of creating and maintaining virtual objects by supporting multiple object styles and modifying the rendering characteristics of objects based on selected style and object context. The modifications may be performed dynamically as the context is changing or statically.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
  • The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any tangible apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W) and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (20)

1. A computer implemented method of modifying object styles in a virtual universe, the computer implemented method comprising:
rendering an object in accordance with a first contextual style template from a plurality of contextual style templates, wherein the first contextual style template comprises first geometric and texture data to display the object with a first style;
responsive to detecting a set of contextual changes associated with the object, identifying a second contextual style template from the plurality of contextual style templates, wherein the set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style; and
rendering the object in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
2. The computer implemented method of claim 1 wherein the set of contextual changes comprises a user selection of the second contextual style template.
3. The computer implemented method of claim 1 further comprising: detecting a set of avatars within a viewable field of the object to form the set of contextual changes, wherein the set of avatars within the viewable field of the object triggers implementation of the second style template to change the first style of the object to the second style.
4. The computer implemented method of claim 1 wherein the second contextual style template comprises at least one of floating the object, moving the object along a fixed path, cloning the object, changing a size of the object, changing texture of the object, changing a background of the object, changing a font size associated with the object, and changing a color of the object.
5. The computer implemented method of claim 1 further comprising:
determining a real world identity of a set of users controlling a set of avatars within a viewable field of the object, wherein the set of contextual changes comprises a real world identity of a user controlling an avatar that is within a given proximity of the object.
6. The computer implemented method of claim 1 further comprising:
determining a real world identity of a set of users controlling a set of avatars within a viewable field of the object; and
identifying characteristics of the set of users based on the real world identity of each user, wherein the set of contextual changes comprises a characteristic of a user controlling an avatar that is within a given proximity of the object.
7. The computer implemented method of claim 1 wherein the second contextual style is selected from a group consisting of a blend style, a high visibility style, a corporate style, a youth style, a modern style, a classic style, and a holiday style.
8. The computer implemented method of claim 1 further comprising:
retrieving a set of restrictions for the object; and
applying the set of restrictions to the second contextual style to limit modifications to the object.
9. The computer implemented method of claim 1 wherein the object comprises a plurality of components, and further comprising:
modifying a set of components in the plurality of components using the second geometric and texture data to form the modified object.
10. The computer implemented method of claim 1 wherein the set of contextual changes comprises at least one of a time of day, a day of the week, a day of the month, a day of the year, and a holiday.
11. The computer implemented method of claim 1 wherein the set of contextual changes comprises a number of avatars within a viewable field of the object.
12. A computer program product for modifying object styles in a virtual universe, the computer program product comprising:
a computer usable medium having computer usable program code embodied therewith, the computer usable program code comprising:
computer usable program code configured to render an object in accordance with a first contextual style template from a plurality of contextual style templates, wherein the first contextual style template comprises first geometric and texture data to display the object with a first style;
computer usable program code configured to identify a second contextual style template from the plurality of contextual style templates in response to detecting a set of contextual changes associated with the object, wherein the set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style; and
computer usable program code configured to render the object in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
13. The computer program product of claim 12 further comprising:
computer usable program code configured to detect a set of avatars within a viewable field of the object to form the set of contextual changes, wherein the set of avatars within the viewable field of the object triggers implementation of the second style template to change the first style of the object to the second style.
14. The computer program product of claim 12 further comprising:
computer usable program code configured to determine a real world identity of a set of users controlling a set of avatars within a viewable field of the object, wherein the set of contextual changes comprises a real world identity of a user controlling an avatar that is within a given proximity of the object.
15. The computer program product of claim 12 further comprising:
computer usable program code configured to determine a real world identity of a set of users controlling a set of avatars within a viewable field of the object; and
computer usable program code configured to identify characteristics of the set of users based on the real world identity of each user, wherein the set of contextual changes comprises a characteristic of a user controlling an avatar that is within a given proximity of the object.
16. The computer program product of claim 12 further comprising:
computer usable program code configured to retrieve a set of restrictions for the object; and
computer usable program code configured to apply the set of restrictions to the second contextual style to limit modifications to the object.
17. An apparatus comprising:
a bus system;
a communications system coupled to the bus system;
a memory connected to the bus system, wherein the memory includes computer usable program code; and
a processing unit coupled to the bus system, wherein the processing unit executes the computer usable program code to render an object in accordance with a first contextual style template from a plurality of contextual style templates, wherein the first contextual style template comprises first geometric and texture data to display the object with a first style; identify a second contextual style template from the plurality of contextual style templates in response to detecting a set of contextual changes associated with the object, wherein the set of contextual changes triggers implementation of the second contextual style template to change the first style of the object to a second style; and render the object in accordance with second geometric and texture data in the second contextual style template to form a modified object, wherein the modified object is displayed with the second style.
18. The apparatus of claim 17 wherein the processor unit further executes the computer usable program code to detect a set of avatars within a viewable field of the object to form the set of contextual changes, wherein the set of avatars within the viewable field of the object triggers implementation of the second style template to change the first style of the object to the second style.
19. The apparatus of claim 17 wherein the processor unit further executes the computer usable program code to determine a real world identity of a set of users controlling a set of avatars within a viewable field of the object, wherein the set of contextual changes comprises a real world identity of a user controlling an avatar that is within a given proximity of the object.
20. The apparatus of claim 17 wherein the processor unit further executes the computer usable program code to determine a real world identity of a set of users controlling a set of avatars within a viewable field of the object; and identify characteristics of the set of users based on the real world identity of each user, wherein the set of contextual changes comprises a characteristic of a user controlling an avatar that is within a given proximity of the object.
US12/353,656 2009-01-14 2009-01-14 Contextual templates for modifying objects in a virtual universe Abandoned US20100177117A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/353,656 US20100177117A1 (en) 2009-01-14 2009-01-14 Contextual templates for modifying objects in a virtual universe
US13/531,265 US8458603B2 (en) 2009-01-14 2012-06-22 Contextual templates for modifying objects in a virtual universe

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/353,656 US20100177117A1 (en) 2009-01-14 2009-01-14 Contextual templates for modifying objects in a virtual universe

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/531,265 Continuation US8458603B2 (en) 2009-01-14 2012-06-22 Contextual templates for modifying objects in a virtual universe

Publications (1)

Publication Number Publication Date
US20100177117A1 true US20100177117A1 (en) 2010-07-15

Family

ID=42318742

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/353,656 Abandoned US20100177117A1 (en) 2009-01-14 2009-01-14 Contextual templates for modifying objects in a virtual universe
US13/531,265 Expired - Fee Related US8458603B2 (en) 2009-01-14 2012-06-22 Contextual templates for modifying objects in a virtual universe

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/531,265 Expired - Fee Related US8458603B2 (en) 2009-01-14 2012-06-22 Contextual templates for modifying objects in a virtual universe

Country Status (1)

Country Link
US (2) US20100177117A1 (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090267948A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object based avatar tracking
US20090267950A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Fixed path transitions
US20090267937A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Floating transitions
US20090267960A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Color Modification of Objects in a Virtual Universe
US20090271422A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object Size Modifications Based on Avatar Distance
US20100005423A1 (en) * 2008-07-01 2010-01-07 International Business Machines Corporation Color Modifications of Objects in a Virtual Universe Based on User Display Settings
US20100001993A1 (en) * 2008-07-07 2010-01-07 International Business Machines Corporation Geometric and texture modifications of objects in a virtual universe based on real world user characteristics
US20100318898A1 (en) * 2009-06-11 2010-12-16 Hewlett-Packard Development Company, L.P. Rendering definitions
US20120127198A1 (en) * 2010-11-22 2012-05-24 Microsoft Corporation Selection of foreground characteristics based on background
US20130055116A1 (en) * 2011-08-25 2013-02-28 Microsoft Corporation Theme variation engine
US8458603B2 (en) 2009-01-14 2013-06-04 International Business Machines Corporation Contextual templates for modifying objects in a virtual universe
US20160300388A1 (en) * 2015-04-10 2016-10-13 Sony Computer Entertainment Inc. Filtering And Parental Control Methods For Restricting Visual Activity On A Head Mounted Display
US20180122147A1 (en) * 2016-10-31 2018-05-03 Dg Holdings, Inc. Transferrable between styles virtual identity systems and methods
CN109313509A (en) * 2016-04-21 2019-02-05 奇跃公司 The vision ring of light around the visual field
US10282551B2 (en) 2017-05-12 2019-05-07 Linden Research, Inc. Systems and methods to control publication of user content in a virtual world
WO2019172541A1 (en) * 2018-03-05 2019-09-12 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
US10491919B2 (en) 2009-02-19 2019-11-26 Sony Corporation Image processing apparatus and method
CN111866597A (en) * 2019-04-30 2020-10-30 百度在线网络技术(北京)有限公司 Method, system and storage medium for controlling layout of page elements in video
US10963931B2 (en) * 2017-05-12 2021-03-30 Wookey Search Technologies Corporation Systems and methods to control access to components of virtual objects
CN114185428A (en) * 2021-11-09 2022-03-15 北京百度网讯科技有限公司 Method and device for switching virtual image style, electronic equipment and storage medium
US11631229B2 (en) 2016-11-01 2023-04-18 Dg Holdings, Inc. Comparative virtual asset adjustment systems and methods
CN117170734A (en) * 2023-11-03 2023-12-05 成都数智创新精益科技有限公司 Method for limiting style scope

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9516088B2 (en) * 2012-08-29 2016-12-06 Ebay Inc. Systems and methods to consistently generate web content
US10335688B2 (en) 2016-06-03 2019-07-02 Microsoft Technology Licensing, Llc Administrative control features for hosted sessions
US11766224B2 (en) * 2018-09-27 2023-09-26 Mymeleon Ag Visualized virtual agent

Citations (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6023270A (en) * 1997-11-17 2000-02-08 International Business Machines Corporation Delivery of objects in a virtual world using a descriptive container
US6036601A (en) * 1999-02-24 2000-03-14 Adaboy, Inc. Method for advertising over a computer network utilizing virtual environments of games
US6349301B1 (en) * 1998-02-24 2002-02-19 Microsoft Corporation Virtual environment bystander updating in client server architecture
US20020024532A1 (en) * 2000-08-25 2002-02-28 Wylci Fables Dynamic personalization method of creating personalized user profiles for searching a database of information
US20020056091A1 (en) * 2000-09-13 2002-05-09 Bala Ravi Narayan Software agent for facilitating electronic commerce transactions through display of targeted promotions or coupons
US6421047B1 (en) * 1996-09-09 2002-07-16 De Groot Marc Multi-user virtual reality system for simulating a three-dimensional environment
US20020107072A1 (en) * 2001-02-07 2002-08-08 Giobbi John J. Centralized gaming system with modifiable remote display terminals
US20020169644A1 (en) * 2000-05-22 2002-11-14 Greene William S. Method and system for implementing a management operations center in a global ecosystem of interrelated services
US6532007B1 (en) * 1998-09-30 2003-03-11 Sony Corporation Method, apparatus and presentation medium for multiple auras in a virtual shared space
US20040034561A1 (en) * 2000-04-07 2004-02-19 Smith Glen David Interactive marketing system
US20040053690A1 (en) * 2000-12-26 2004-03-18 Fogel David B. Video game characters having evolving traits
US20040166935A1 (en) * 2001-10-10 2004-08-26 Gavin Andrew Scott Providing game information via characters in a game environment
US6788946B2 (en) * 2001-04-12 2004-09-07 Qualcomm Inc Systems and methods for delivering information within a group communications system
US6798407B1 (en) * 2000-11-28 2004-09-28 William J. Benman System and method for providing a functional virtual environment with real time extracted and transplanted images
US20040210634A1 (en) * 2002-08-23 2004-10-21 Miguel Ferrer Method enabling a plurality of computer users to communicate via a set of interconnected terminals
US20040220850A1 (en) * 2002-08-23 2004-11-04 Miguel Ferrer Method of viral marketing using the internet
US6868389B1 (en) * 1999-01-19 2005-03-15 Jeffrey K. Wilkins Internet-enabled lead generation
US20050071306A1 (en) * 2003-02-05 2005-03-31 Paul Kruszewski Method and system for on-screen animation of digital objects or characters
US20050086112A1 (en) * 2000-11-28 2005-04-21 Roy Shkedi Super-saturation method for information-media
US20050086605A1 (en) * 2002-08-23 2005-04-21 Miguel Ferrer Method and apparatus for online advertising
US20050114198A1 (en) * 2003-11-24 2005-05-26 Ross Koningstein Using concepts for ad targeting
US20050156928A1 (en) * 2002-03-11 2005-07-21 Microsoft Corporation Efficient scenery object rendering
US20050179685A1 (en) * 2001-12-18 2005-08-18 Sony Computer Entertainment Inc. Object display system in a virtual world
US6954728B1 (en) * 2000-05-15 2005-10-11 Avatizing, Llc System and method for consumer-selected advertising and branding in interactive media
US20050253872A1 (en) * 2003-10-09 2005-11-17 Goss Michael E Method and system for culling view dependent visual data streams for a virtual environment
US6981220B2 (en) * 2000-04-28 2005-12-27 Sony Corporation Information processing apparatus and method, and storage medium
US20050286769A1 (en) * 2001-03-06 2005-12-29 Canon Kabushiki Kaisha Specific point detecting method and device
US20060168143A1 (en) * 2005-01-24 2006-07-27 John Moetteli Automated method for executing a service order directed to a particular beneficiary, initiated after query requiring minimal response
US20060195462A1 (en) * 2005-02-28 2006-08-31 Yahoo! Inc. System and method for enhanced media distribution
US20060194632A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Computerized method and system for generating a gaming experience in a networked environment
US20060258462A1 (en) * 2005-04-12 2006-11-16 Long Cheng System and method of seamless game world based on server/client
US20070035561A1 (en) * 2005-04-11 2007-02-15 Systems Technology, Inc. System for combining virtual and real-time environments
US20070191104A1 (en) * 2006-02-14 2007-08-16 Leviathan Entertainment, Llc Online Game Environment that Facilitates Sponsorship Contracts
US20070247979A1 (en) * 2002-09-16 2007-10-25 Francois Brillon Jukebox with customizable avatar
US20070252841A1 (en) * 2004-06-23 2007-11-01 Nhn Corporation Image Resource Loading System and Method Which Carries Out Loading of Object for Renewal of Game Screen
US20070261109A1 (en) * 2006-05-04 2007-11-08 Martin Renaud Authentication system, such as an authentication system for children and teenagers
US7305691B2 (en) * 2001-05-07 2007-12-04 Actv, Inc. System and method for providing targeted programming outside of the home
US20080004119A1 (en) * 2006-06-30 2008-01-03 Leviathan Entertainment, Llc System for the Creation and Registration of Ideas and Concepts in a Virtual Environment
US7320031B2 (en) * 1999-12-28 2008-01-15 Utopy, Inc. Automatic, personalized online information and product services
US20080252716A1 (en) * 2007-04-10 2008-10-16 Ntt Docomo, Inc. Communication Control Device and Communication Terminal
US20080281622A1 (en) * 2007-05-10 2008-11-13 Mary Kay Hoal Social Networking System
US7454056B2 (en) * 2004-03-30 2008-11-18 Seiko Epson Corporation Color correction device, color correction method, and color correction program
US20090005423A1 (en) * 1999-07-16 2009-01-01 Aradigm Corporation Systems and methods for effecting cessation of tobacco use
US20090063168A1 (en) * 2007-08-29 2009-03-05 Finn Peter G Conducting marketing activity in relation to a virtual world based on monitored virtual world activity
US20090089157A1 (en) * 2007-09-27 2009-04-02 Rajesh Narayanan Method and apparatus for controlling an avatar's landing zone in a virtual environment
US20090227368A1 (en) * 2008-03-07 2009-09-10 Arenanet, Inc. Display of notational object in an interactive online environment
US20090254417A1 (en) * 2006-06-29 2009-10-08 Relevancenow Pty Limited, Acn 117411953 Hydrate-based desalination using compound permeable restraint panels and vaporization-based cooling
US20090267950A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Fixed path transitions
US20090267948A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object based avatar tracking
US20090271422A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object Size Modifications Based on Avatar Distance
US20090267937A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Floating transitions
US20090267960A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Color Modification of Objects in a Virtual Universe
US20090299960A1 (en) * 2007-12-21 2009-12-03 Lineberger William B Methods, systems, and computer program products for automatically modifying a virtual environment based on user profile information
US20090327219A1 (en) * 2008-04-24 2009-12-31 International Business Machines Corporation Cloning Objects in a Virtual Universe
US20100001993A1 (en) * 2008-07-07 2010-01-07 International Business Machines Corporation Geometric and texture modifications of objects in a virtual universe based on real world user characteristics
US7720835B2 (en) * 2006-05-05 2010-05-18 Visible Technologies Llc Systems and methods for consumer-generated media reputation management
US20100205179A1 (en) * 2006-10-26 2010-08-12 Carson Anthony R Social networking system and method
US7805680B2 (en) * 2001-01-03 2010-09-28 Nokia Corporation Statistical metering and filtering of content via pixel-based metadata

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7895076B2 (en) 1995-06-30 2011-02-22 Sony Computer Entertainment Inc. Advertisement insertion, profiling, impression, and feedback
CA2180899A1 (en) 1995-07-12 1997-01-13 Yasuaki Honda Synchronous updating of sub objects in a three dimensional virtual reality space sharing system and method therefore
AU3639699A (en) 1998-04-13 1999-11-01 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
US20030091229A1 (en) 2000-03-31 2003-05-15 Imation Corp. Color image display accuracy using comparison of complex shapes to reference background
JP2002197376A (en) 2000-12-27 2002-07-12 Fujitsu Ltd Method and device for providing virtual world customerized according to user
US20020138607A1 (en) 2001-03-22 2002-09-26 There System, method and computer program product for data mining in a three-dimensional multi-user environment
US6394301B1 (en) 2001-06-18 2002-05-28 James S. Koch Shipping and display container for chain and bulk goods
KR20050004817A (en) 2002-03-28 2005-01-12 노키아 코포레이션 Method and device for displaying images
US8965771B2 (en) 2003-12-08 2015-02-24 Kurzweil Ainetworks, Inc. Use of avatar with event processing
WO2006020846A2 (en) 2004-08-11 2006-02-23 THE GOVERNMENT OF THE UNITED STATES OF AMERICA as represented by THE SECRETARY OF THE NAVY Naval Research Laboratory Simulated locomotion method and apparatus
US8435113B2 (en) 2004-12-15 2013-05-07 Google Inc. Method and system for displaying of transparent ads
US8203503B2 (en) 2005-09-08 2012-06-19 Aechelon Technology, Inc. Sensor and display-independent quantitative per-pixel stimulation system
US20070100650A1 (en) 2005-09-14 2007-05-03 Jorey Ramer Action functionality for mobile content search results
US8990705B2 (en) 2008-07-01 2015-03-24 International Business Machines Corporation Color modifications of objects in a virtual universe based on user display settings
US20100177117A1 (en) 2009-01-14 2010-07-15 International Business Machines Corporation Contextual templates for modifying objects in a virtual universe

Patent Citations (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6421047B1 (en) * 1996-09-09 2002-07-16 De Groot Marc Multi-user virtual reality system for simulating a three-dimensional environment
US6023270A (en) * 1997-11-17 2000-02-08 International Business Machines Corporation Delivery of objects in a virtual world using a descriptive container
US6349301B1 (en) * 1998-02-24 2002-02-19 Microsoft Corporation Virtual environment bystander updating in client server architecture
US6532007B1 (en) * 1998-09-30 2003-03-11 Sony Corporation Method, apparatus and presentation medium for multiple auras in a virtual shared space
US6868389B1 (en) * 1999-01-19 2005-03-15 Jeffrey K. Wilkins Internet-enabled lead generation
US6036601A (en) * 1999-02-24 2000-03-14 Adaboy, Inc. Method for advertising over a computer network utilizing virtual environments of games
US20090005423A1 (en) * 1999-07-16 2009-01-01 Aradigm Corporation Systems and methods for effecting cessation of tobacco use
US7320031B2 (en) * 1999-12-28 2008-01-15 Utopy, Inc. Automatic, personalized online information and product services
US20040034561A1 (en) * 2000-04-07 2004-02-19 Smith Glen David Interactive marketing system
US6981220B2 (en) * 2000-04-28 2005-12-27 Sony Corporation Information processing apparatus and method, and storage medium
US6954728B1 (en) * 2000-05-15 2005-10-11 Avatizing, Llc System and method for consumer-selected advertising and branding in interactive media
US20030004774A1 (en) * 2000-05-22 2003-01-02 Greene William S. Method and system for realizing an avatar in a management operations center implemented in a global ecosystem of interrelated services
US20020169644A1 (en) * 2000-05-22 2002-11-14 Greene William S. Method and system for implementing a management operations center in a global ecosystem of interrelated services
US6895406B2 (en) * 2000-08-25 2005-05-17 Seaseer R&D, Llc Dynamic personalization method of creating personalized user profiles for searching a database of information
US20020024532A1 (en) * 2000-08-25 2002-02-28 Wylci Fables Dynamic personalization method of creating personalized user profiles for searching a database of information
US20020056091A1 (en) * 2000-09-13 2002-05-09 Bala Ravi Narayan Software agent for facilitating electronic commerce transactions through display of targeted promotions or coupons
US20050086112A1 (en) * 2000-11-28 2005-04-21 Roy Shkedi Super-saturation method for information-media
US6798407B1 (en) * 2000-11-28 2004-09-28 William J. Benman System and method for providing a functional virtual environment with real time extracted and transplanted images
US20040053690A1 (en) * 2000-12-26 2004-03-18 Fogel David B. Video game characters having evolving traits
US7025675B2 (en) * 2000-12-26 2006-04-11 Digenetics, Inc. Video game characters having evolving traits
US7805680B2 (en) * 2001-01-03 2010-09-28 Nokia Corporation Statistical metering and filtering of content via pixel-based metadata
US20020107072A1 (en) * 2001-02-07 2002-08-08 Giobbi John J. Centralized gaming system with modifiable remote display terminals
US6749510B2 (en) * 2001-02-07 2004-06-15 Wms Gaming Inc. Centralized gaming system with modifiable remote display terminals
US20050286769A1 (en) * 2001-03-06 2005-12-29 Canon Kabushiki Kaisha Specific point detecting method and device
US7454065B2 (en) * 2001-03-06 2008-11-18 Canon Kabushiki Kaisha Specific point detecting method and device
US6788946B2 (en) * 2001-04-12 2004-09-07 Qualcomm Inc Systems and methods for delivering information within a group communications system
US7305691B2 (en) * 2001-05-07 2007-12-04 Actv, Inc. System and method for providing targeted programming outside of the home
US20040166935A1 (en) * 2001-10-10 2004-08-26 Gavin Andrew Scott Providing game information via characters in a game environment
US20050179685A1 (en) * 2001-12-18 2005-08-18 Sony Computer Entertainment Inc. Object display system in a virtual world
US7158135B2 (en) * 2002-03-11 2007-01-02 Microsoft Corporation Efficient scenery object rendering
US20050156928A1 (en) * 2002-03-11 2005-07-21 Microsoft Corporation Efficient scenery object rendering
US20040220850A1 (en) * 2002-08-23 2004-11-04 Miguel Ferrer Method of viral marketing using the internet
US20050086605A1 (en) * 2002-08-23 2005-04-21 Miguel Ferrer Method and apparatus for online advertising
US20040210634A1 (en) * 2002-08-23 2004-10-21 Miguel Ferrer Method enabling a plurality of computer users to communicate via a set of interconnected terminals
US7822687B2 (en) * 2002-09-16 2010-10-26 Francois Brillon Jukebox with customizable avatar
US20070247979A1 (en) * 2002-09-16 2007-10-25 Francois Brillon Jukebox with customizable avatar
US20050071306A1 (en) * 2003-02-05 2005-03-31 Paul Kruszewski Method and system for on-screen animation of digital objects or characters
US20050253872A1 (en) * 2003-10-09 2005-11-17 Goss Michael E Method and system for culling view dependent visual data streams for a virtual environment
US20050114198A1 (en) * 2003-11-24 2005-05-26 Ross Koningstein Using concepts for ad targeting
US7454056B2 (en) * 2004-03-30 2008-11-18 Seiko Epson Corporation Color correction device, color correction method, and color correction program
US20070252841A1 (en) * 2004-06-23 2007-11-01 Nhn Corporation Image Resource Loading System and Method Which Carries Out Loading of Object for Renewal of Game Screen
US20060168143A1 (en) * 2005-01-24 2006-07-27 John Moetteli Automated method for executing a service order directed to a particular beneficiary, initiated after query requiring minimal response
US20060194632A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Computerized method and system for generating a gaming experience in a networked environment
US7685204B2 (en) * 2005-02-28 2010-03-23 Yahoo! Inc. System and method for enhanced media distribution
US20060195462A1 (en) * 2005-02-28 2006-08-31 Yahoo! Inc. System and method for enhanced media distribution
US20070035561A1 (en) * 2005-04-11 2007-02-15 Systems Technology, Inc. System for combining virtual and real-time environments
US7479967B2 (en) * 2005-04-11 2009-01-20 Systems Technology Inc. System for combining virtual and real-time environments
US20060258462A1 (en) * 2005-04-12 2006-11-16 Long Cheng System and method of seamless game world based on server/client
US20070191104A1 (en) * 2006-02-14 2007-08-16 Leviathan Entertainment, Llc Online Game Environment that Facilitates Sponsorship Contracts
US20070261109A1 (en) * 2006-05-04 2007-11-08 Martin Renaud Authentication system, such as an authentication system for children and teenagers
US7720835B2 (en) * 2006-05-05 2010-05-18 Visible Technologies Llc Systems and methods for consumer-generated media reputation management
US20090254417A1 (en) * 2006-06-29 2009-10-08 Relevancenow Pty Limited, Acn 117411953 Hydrate-based desalination using compound permeable restraint panels and vaporization-based cooling
US20080004119A1 (en) * 2006-06-30 2008-01-03 Leviathan Entertainment, Llc System for the Creation and Registration of Ideas and Concepts in a Virtual Environment
US20100205179A1 (en) * 2006-10-26 2010-08-12 Carson Anthony R Social networking system and method
US20080252716A1 (en) * 2007-04-10 2008-10-16 Ntt Docomo, Inc. Communication Control Device and Communication Terminal
US20080281622A1 (en) * 2007-05-10 2008-11-13 Mary Kay Hoal Social Networking System
US20090063168A1 (en) * 2007-08-29 2009-03-05 Finn Peter G Conducting marketing activity in relation to a virtual world based on monitored virtual world activity
US20090089157A1 (en) * 2007-09-27 2009-04-02 Rajesh Narayanan Method and apparatus for controlling an avatar's landing zone in a virtual environment
US20090299960A1 (en) * 2007-12-21 2009-12-03 Lineberger William B Methods, systems, and computer program products for automatically modifying a virtual environment based on user profile information
US20090227368A1 (en) * 2008-03-07 2009-09-10 Arenanet, Inc. Display of notational object in an interactive online environment
US20090267948A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object based avatar tracking
US20090327219A1 (en) * 2008-04-24 2009-12-31 International Business Machines Corporation Cloning Objects in a Virtual Universe
US20090267960A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Color Modification of Objects in a Virtual Universe
US20090267937A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Floating transitions
US20090271422A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object Size Modifications Based on Avatar Distance
US20090267950A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Fixed path transitions
US8001161B2 (en) * 2008-04-24 2011-08-16 International Business Machines Corporation Cloning objects in a virtual universe
US20100001993A1 (en) * 2008-07-07 2010-01-07 International Business Machines Corporation Geometric and texture modifications of objects in a virtual universe based on real world user characteristics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gladestrider; "ZAM Everquest Classes: The Ranger - Tracking-Help"; 10/26/2004; Pages 1-2; http://everquest.allakhazam.com/db/classes.html?class=10&mid=1098807428716491276 *
Riddikulus; "Dungeons and Dragons Online Eberron Unlimited Forums: Repeating quests-limit?"; 10/09/2007; Pages 1-6; http://forums.ddo.com/showthread.php?t=123676 *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8466931B2 (en) 2008-04-24 2013-06-18 International Business Machines Corporation Color modification of objects in a virtual universe
US8233005B2 (en) 2008-04-24 2012-07-31 International Business Machines Corporation Object size modifications based on avatar distance
US20090267937A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Floating transitions
US20090267960A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Color Modification of Objects in a Virtual Universe
US20090271422A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object Size Modifications Based on Avatar Distance
US8184116B2 (en) 2008-04-24 2012-05-22 International Business Machines Corporation Object based avatar tracking
US20090267948A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Object based avatar tracking
US8212809B2 (en) 2008-04-24 2012-07-03 International Business Machines Corporation Floating transitions
US8259100B2 (en) 2008-04-24 2012-09-04 International Business Machines Corporation Fixed path transitions
US20090267950A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Fixed path transitions
US8990705B2 (en) 2008-07-01 2015-03-24 International Business Machines Corporation Color modifications of objects in a virtual universe based on user display settings
US20100005423A1 (en) * 2008-07-01 2010-01-07 International Business Machines Corporation Color Modifications of Objects in a Virtual Universe Based on User Display Settings
US20100001993A1 (en) * 2008-07-07 2010-01-07 International Business Machines Corporation Geometric and texture modifications of objects in a virtual universe based on real world user characteristics
US8471843B2 (en) 2008-07-07 2013-06-25 International Business Machines Corporation Geometric and texture modifications of objects in a virtual universe based on real world user characteristics
US9235319B2 (en) 2008-07-07 2016-01-12 International Business Machines Corporation Geometric and texture modifications of objects in a virtual universe based on real world user characteristics
US8458603B2 (en) 2009-01-14 2013-06-04 International Business Machines Corporation Contextual templates for modifying objects in a virtual universe
US10491919B2 (en) 2009-02-19 2019-11-26 Sony Corporation Image processing apparatus and method
US20100318898A1 (en) * 2009-06-11 2010-12-16 Hewlett-Packard Development Company, L.P. Rendering definitions
US20120127198A1 (en) * 2010-11-22 2012-05-24 Microsoft Corporation Selection of foreground characteristics based on background
US20130055116A1 (en) * 2011-08-25 2013-02-28 Microsoft Corporation Theme variation engine
WO2016164212A1 (en) * 2015-04-10 2016-10-13 Sony Computer Entertainment Inc. Filtering and parental control methods for restricting visual activity on a head mounted display
US20180018827A1 (en) * 2015-04-10 2018-01-18 Sony Interactive Entertainment Inc. Filtering and Parental Control Methods for Restricting Visual Activity on a Head Mounted Display
US10210666B2 (en) * 2015-04-10 2019-02-19 Sony Interactive Entertainment Inc. Filtering and parental control methods for restricting visual activity on a head mounted display
US20160300388A1 (en) * 2015-04-10 2016-10-13 Sony Computer Entertainment Inc. Filtering And Parental Control Methods For Restricting Visual Activity On A Head Mounted Display
US9779554B2 (en) * 2015-04-10 2017-10-03 Sony Interactive Entertainment Inc. Filtering and parental control methods for restricting visual activity on a head mounted display
EP3888764A1 (en) * 2015-04-10 2021-10-06 Sony Computer Entertainment Inc. Filtering and parental control methods for restricting visual activity on a head mounted display
US10838484B2 (en) 2016-04-21 2020-11-17 Magic Leap, Inc. Visual aura around field of view
CN109313509A (en) * 2016-04-21 2019-02-05 Magic Leap, Inc. Visual aura around field of view
US11340694B2 (en) 2016-04-21 2022-05-24 Magic Leap, Inc. Visual aura around field of view
EP3446168A4 (en) * 2016-04-21 2019-10-23 Magic Leap, Inc. Visual aura around field of view
US20180122147A1 (en) * 2016-10-31 2018-05-03 Dg Holdings, Inc. Transferrable between styles virtual identity systems and methods
US10769860B2 (en) * 2016-10-31 2020-09-08 Dg Holdings, Inc. Transferrable between styles virtual identity systems and methods
US11631229B2 (en) 2016-11-01 2023-04-18 Dg Holdings, Inc. Comparative virtual asset adjustment systems and methods
US10776496B2 (en) 2017-05-12 2020-09-15 Wookey Search Technologies Inc. Systems and methods to control publication of user content in a virtual world
US10963931B2 (en) * 2017-05-12 2021-03-30 Wookey Search Technologies Corporation Systems and methods to control access to components of virtual objects
US11501003B2 (en) 2017-05-12 2022-11-15 Tilia, Inc. Systems and methods to control publication of user content in a virtual world
US10282551B2 (en) 2017-05-12 2019-05-07 Linden Research, Inc. Systems and methods to control publication of user content in a virtual world
US11727123B2 (en) 2017-05-12 2023-08-15 Tilia Llc Systems and methods to control access to components of virtual objects
WO2019172541A1 (en) * 2018-03-05 2019-09-12 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
US11636515B2 (en) 2018-03-05 2023-04-25 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
CN111866597A (en) * 2019-04-30 2020-10-30 Baidu Online Network Technology (Beijing) Co., Ltd. Method, system and storage medium for controlling layout of page elements in video
CN114185428A (en) * 2021-11-09 2022-03-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for switching virtual image style, electronic equipment and storage medium
CN117170734A (en) * 2023-11-03 2023-12-05 Chengdu Shuzhi Chuangxin Jingyi Technology Co., Ltd. Method for limiting style scope

Also Published As

Publication number Publication date
US8458603B2 (en) 2013-06-04
US20120266088A1 (en) 2012-10-18

Similar Documents

Publication Publication Date Title
US8458603B2 (en) Contextual templates for modifying objects in a virtual universe
US8233005B2 (en) Object size modifications based on avatar distance
US8184116B2 (en) Object based avatar tracking
US11583766B2 (en) Add-on management systems
US9235319B2 (en) Geometric and texture modifications of objects in a virtual universe based on real world user characteristics
US8001161B2 (en) Cloning objects in a virtual universe
US9727995B2 (en) Alternative representations of virtual content in a virtual universe
CN106717010B (en) User interaction analysis module
US10719192B1 (en) Client-generated content within a media universe
US8990705B2 (en) Color modifications of objects in a virtual universe based on user display settings
US8466931B2 (en) Color modification of objects in a virtual universe
Walsh et al. Core Web3D
US9256896B2 (en) Virtual universe rendering based on prioritized metadata terms
Marr Extended reality in practice: 100+ amazing ways virtual, augmented and mixed reality are changing business and society
US20100251337A1 (en) Selective distribution of objects in a virtual universe
Thorn Learn Unity for 2D Game Development
US20230059361A1 (en) Cross-franchise object substitutions for immersive media
Wodaski Web Graphics Bible
KR20020008497A (en) Method For Advertising Culture Contents On-line
KR20090052639A (en) Application method of virtual reality using the 3d background screen and direct advertisement

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FINN, PETER GEORGE;HAMILTION, RICK ALLEN;O'CONNELL, BRIAN MARSHALL;AND OTHERS;SIGNING DATES FROM 20080903 TO 20090105;REEL/FRAME:022119/0965

AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FINN, PETER GEORGE;HAMILTON, RICK ALLEN, II;O'CONNELL, BRIAN MARSHALL;AND OTHERS;SIGNING DATES FROM 20080903 TO 20090105;REEL/FRAME:022154/0953

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION