US20050268247A1 - System and method for controlling a user interface - Google Patents
- Publication number
- US20050268247A1 (application Ser. No. 10/855,812)
- Authority
- US
- United States
- Prior art keywords
- user input
- user
- focus
- objects
- regions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
Definitions
- the present invention relates generally to computing devices, and particularly to user interfaces for computing devices.
- the present invention provides an interface that permits a user to control navigation and selection of objects within a graphical user interface (GUI).
- an electronic device is connected to a hands-free user input device. The user operates the hands-free input device to generate user input signals, which are sent to the electronic device.
- a controller within the electronic device is configured to navigate between interface objects responsive to the user input signals, and select interface objects in the absence of the user input signals.
- the controller is configured to partition the GUI into a plurality of selectable objects.
- the controller is also configured to set a focus to one of the objects, and navigate between the objects whenever a user generates an input signal before a timer expires. Each time the controller navigates to a new object, it changes the focus to the new object. The controller will automatically select the object having the current focus if the user does not provide input signals before the expiration of the timer.
- the GUI may be conceptually viewed as having multiple levels, with each level including its own objects. At the lowest level, selection of an object causes an associated function to be executed. Each higher level may be viewed as containing a subset of the objects of the level immediately below it. The user can navigate between objects at the same level, or between levels, in the GUI.
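The multi-level hierarchy described above can be sketched as a small tree in which each level holds its own objects, focus cycles among siblings at one level, and selecting an object descends one level. This is a minimal illustration, not the patent's implementation; the region and object names below are hypothetical.

```python
# Sketch of the multi-level GUI hierarchy (illustrative names only).
# Each level contains its own objects; selecting an object descends one
# level, and at the lowest level selection would execute a function.
LEVELS = {
    "NW": {                                  # first-level region
        "NW-1": ["File", "Edit", "View"],    # sub-region with desktop objects
        "NW-2": ["icon_a", "icon_b"],
    },
    "NE": {}, "SE": {}, "SW": {},
}

def cycle(objects, current, clockwise=True):
    """Move the focus to the next (or previous) object at the same level."""
    i = objects.index(current)
    step = 1 if clockwise else -1
    return objects[(i + step) % len(objects)]

regions = list(LEVELS)                           # ["NW", "NE", "SE", "SW"]
focus = cycle(regions, "NW")                     # clockwise: NW -> NE
focus = cycle(regions, focus, clockwise=False)   # counter-clockwise: back to NW
sub_regions = list(LEVELS[focus])                # descend into the selected region
```

Cycling within a level and descending between levels are the only two navigation moves the hierarchy needs; everything else in the interface builds on them.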
- FIG. 1 illustrates a possible system in which one embodiment of the present invention may operate.
- FIG. 2 illustrates a front view of a hands-free input device that may be used with one embodiment of the present invention.
- FIGS. 3A-3D illustrate a method of navigating a display using one embodiment of the present invention.
- FIGS. 4A-4D further illustrate the method of navigating the display using one embodiment of the present invention.
- FIGS. 5A-5C further illustrate the method of navigating the display using one embodiment of the present invention.
- FIGS. 6A-6B illustrate some exemplary on-screen navigation aids that may be displayed to a user according to the present invention.
- FIGS. 7A-7C illustrate some exemplary on-screen keyboards that may be displayed to a user according to the present invention.
- FIGS. 8A-8E illustrate a possible function executed according to one embodiment of the present invention.
- FIG. 9 illustrates an alternate on-screen keyboard layout that may be displayed to the user according to an alternate embodiment of the present invention.
- FIG. 10 illustrates another alternate on-screen keyboard layout that may be displayed to the user according to an alternate embodiment of the present invention.
- FIG. 11 illustrates alternative user input devices that may be used with the present invention.
- FIG. 12 illustrates one embodiment of the present invention as used with a Personal Digital Assistant (PDA).
- FIG. 13 illustrates a wireless embodiment of the present invention.
- System 10 comprises a hands-free user input device 12 and a computing device 24 interconnected by a cable 22 .
- Input device 12 operates similarly to the hands-free device disclosed in the '065 patent and permits hands-free control of a computer for both persons with physical limitations, and workers needing their hands for other tasks.
- For further details regarding the hands-free input device 12 , the interested reader is directed to the '065 patent. However, a brief description is included herein for completeness and clarity.
- Input device 12 comprises a housing 14 , and a plurality of cells 18 a - 18 e arranged in tiers.
- each cell 18 includes a pressure transducer that generates output signals responsive to sensed pressure.
- a sound transducer such as musical instrument digital interface (MIDI) transducer 16 , detects sounds generated when the user inhales or exhales through a cell 18 .
- each cell 18 a - 18 e produces a unique sound depending on whether the user inhales or exhales through the cell 18 .
- the MIDI transducer 16 converts these unique sounds into digital signals and transmits them to computing device 24 via communications interface 20 .
- software running on the computing device 24 uses the digital signals to navigate about the graphical user interface, and to control various computer functions and software available in popular systems.
- the input device 12 comprises five cells 18 a - 18 e .
- the functionality of each of these cells 18 will be described later in more detail.
- Four cells 18 a - 18 d provide navigational and selection control to the user, while cell 18 e , centered generally in the middle of the face of device 12 , provides control over various software programs, such as on-screen keyboards, for example.
- Input device 12 may include more or fewer cells 18 , and may be arranged and/or spaced according to specific needs or desires. As seen in the figures, cells 18 a - 18 e are generally arranged in three rows. Cells 18 a and 18 b comprise row 1 , cell 18 e comprises row 2 , and cells 18 c and 18 d comprise row 3 . Typically, cells 18 are offset vertically and/or horizontally from each other such that no two cells are positioned directly over or under each other. For example, cell 18 a is not aligned directly above cell 18 d , and cell 18 b is not aligned directly above cell 18 c.
- Cells 18 are also proximally spaced and arranged in a general concave arc, with the lower cells 18 c and 18 d being slightly longer than the upper cells 18 a and 18 b .
- the length of cell 18 a is equivalent to that of cell 18 b , and the length of cell 18 c is equivalent to that of cell 18 d .
- cells 18 may be the same or similar lengths as needed or desired.
- the optimized spacing minimizes the need for head and neck movement by the user, and the arc-like arrangement, combined with the cell offset, makes it less likely that the user's nose will interfere with the upper rows of cells when using the lower rows of cells.
- cells 18 are symmetrically placed permitting users to easily become familiar with the cell layout.
- Computing device 24 represents desktop computers, portable computers, wearable computers, wheelchair-mounted computers, bedside computing devices, nurse-call systems, personal digital assistants (PDAs), and any other electronic device that may use the interface of the present invention.
- Device 24 comprises a display 26 , a controller 28 , memory 30 , and a port 32 .
- Display 26 displays a graphical user interface and other data to the user.
- Controller 28 may comprise one or more microprocessors known in the art, and controls the operation of computing device 24 according to program instructions stored in memory 30 . This includes instructions that permit a user to navigate and control the functionality of computing device 24 via input device 12 .
- Memory 30 represents the entire hierarchy of memory used in electronic devices, including RAM, ROM, hard disks, compact disks, flash memory, and other memory not specifically mentioned herein. Memory 30 stores the program instructions that permit controller 28 to control the operation of computing device 24 , and stores data such as user data.
- Port 32 receives user input from input device 12 , and may be a universal serial bus (USB)
- the GUI 40 is conceptually viewed as having multiple levels. At the lowest level, selection of an interface object causes an associated function to be executed. Each higher level may be viewed as containing a subset of objects in the level immediately below. The user can navigate between objects at the same level, or navigate between levels, in the GUI 40 .
- a multi-level grid is superimposed over a conventional GUI 40 .
- the grid includes pairs of vertical and horizontal intersecting lines 42 and 44 .
- the first level of the grid corresponds to the first level of the GUI 40 , and divides the area of the GUI 40 into four regions or quadrants. For clarity, these four regions correspond to the directions of a compass, and are referred to herein as the northwest region (NW), northeast region (NE), southeast region (SE), and southwest region (SW).
- the second level of the grid corresponds to the second level of the GUI 40 , and divides the area of the GUI 40 into sixteen regions.
- Each region NW, NE, SE, and SW at the first level encompasses four regions at the second level, which may be referred to as sub-regions.
- Each of the sub-regions covers an area equal to approximately one-fourth of a region at the first level.
- each sub-region encompasses objects on the desktop level of the GUI 40 .
- Objects in the desktop level correspond to a third level of the GUI 40 , and include pulldown menu items, buttons, icons, and application text.
- Desktop objects may also encompass other objects at a lower level.
- a pulldown menu item may contain a sub-menu with several lower level menu items.
- the sub-menu or other lower level object corresponds to a fourth level in the GUI 40 . It should be noted that the present invention does not require a minimum or maximum number of levels.
- the user navigates clockwise or counterclockwise between objects at a given level.
- the user can also navigate between levels by selecting or deselecting objects at the current level.
- the user is then allowed to navigate among objects contained within the selected object.
- FIGS. 3A-3D illustrate navigation around the NW, NE, SE, and SW regions at the first level of GUI 40 .
- the solid arrows on the figures show the direction of navigation as being a clockwise direction. However, as described in more detail below, navigation might also be in a counter-clockwise direction.
- controller 28 partitions GUI 40 into the NW, NE, SE, and SW regions by overlaying a pair of intersecting grid lines 42 a , 42 b on GUI 40 . Controller 28 further partitions each region into the plurality of sub-regions by overlaying additional pairs of grid lines 44 a and 44 b over GUI 40 .
- region NW in FIG. 3A is partitioned into four sub-regions 50 a - 50 d.
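The grid overlay described above amounts to recursively quartering a rectangle. A minimal sketch, assuming simple (x, y, width, height) rectangles and the compass naming from the text:

```python
# Partition a display rectangle into four equal regions (quadrants),
# as the first-level grid overlay does; applying the same function to a
# region yields its four second-level sub-regions.
def partition(x, y, w, h):
    """Return the four equal quadrants of a rectangle as name -> rect."""
    hw, hh = w // 2, h // 2
    return {
        "NW": (x, y, hw, hh),
        "NE": (x + hw, y, hw, hh),
        "SE": (x + hw, y + hh, hw, hh),
        "SW": (x, y + hh, hw, hh),
    }

regions = partition(0, 0, 1024, 768)   # first level: 4 regions
nw = regions["NW"]                     # (0, 0, 512, 384)
sub_regions = partition(*nw)           # second level: 4 sub-regions of NW
# Each sub-region covers one-fourth of a first-level region,
# i.e. one-sixteenth of the full display.
```

The 1024x768 display size is only an example; any rectangle partitions the same way, which is what lets the grid extend to deeper levels.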
- a focus is given to one of the regions, which in FIG. 3A is region NW.
- the user sets the focus to a desired region simply by inhaling or exhaling into one of the cells 18 .
- controller 28 sets the focus to a particular default region. Whatever the method of setting the initial focus, controller 28 indicates which region receives the focus by highlighting the region's border. This is shown in FIG. 3A by the solid box around region NW; however, the present invention also contemplates other methods of indicating which region receives the focus, such as various forms of shading or opacity.
- a second exhale causes controller 28 to change the focus from region NW in FIG. 3A to region NE in FIG. 3B .
- third and fourth exhales change the focus to regions SE and SW in FIGS. 3C and 3D , respectively.
- controller 28 indicates the focus change by highlighting the border around the region receiving the focus.
- the ability to navigate between regions (or sub-regions and/or GUI objects as described below) by changing focus responsive to successive user input signals is referred to herein as “cycling.”
- the arrows in FIGS. 3A-3D illustrate clockwise cycling each time the user exhales into one of the cells 18 . However, the user could also cycle in a counter-clockwise direction by inhaling on one of the cells 18 .
- the user may also exhale into any cell 18 to give the focus directly to a region without cycling.
- the present invention may be configured such that cells 18 a , 18 b , 18 c , and 18 d correspond to the NW, NE, SE, and SW regions, respectively. Exhaling into any of these cells 18 a - 18 d would change the focus directly to its corresponding region.
- the user may change the focus from the NW region in FIG. 3A directly to the SW region in FIG. 3D simply by exhaling into cell 18 d .
- controller 28 would indicate the focus change by highlighting the border surrounding SW region.
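The direct cell-to-region mapping described above can be sketched as a simple lookup table; the cell identifiers are illustrative labels for cells 18 a - 18 d:

```python
# Direct mapping of cells to regions: exhaling into a cell moves the
# focus straight to its corresponding region, without cycling.
CELL_TO_REGION = {"18a": "NW", "18b": "NE", "18c": "SE", "18d": "SW"}

def on_exhale(cell, current_focus):
    """Return the new focus after an exhale into the given cell.

    An unmapped cell (e.g. the center cell) leaves the focus unchanged
    in this sketch.
    """
    return CELL_TO_REGION.get(cell, current_focus)

focus = on_exhale("18d", "NW")   # jump directly from region NW to SW
```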
- controller 28 starts and manages a timer responsive to the user input signals.
- This timer is a user-configurable threshold that determines the amount of time a user may “dwell” (i.e., remain) on an object having the current focus before providing a subsequent input signal via input device 12 . If the timer expires before the user provides a subsequent input signal, controller 28 may take some predetermined action. In one embodiment, controller 28 selects the region, sub-region, or desktop level object having the current focus. This time delay selection of a particular object having the current focus, as opposed to direct user selection, is referred to herein as an “autoclick.”
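The dwell/autoclick behavior reduces to a restartable deadline: every input signal restarts the timer, and expiry without further input selects the focused object. A minimal sketch, assuming a 2-second threshold as in the patent's example (the class name is our own):

```python
import time

# Sketch of the "dwell"/"autoclick" timer: if no new input signal arrives
# before the user-configurable threshold expires, the object holding the
# current focus is selected automatically.
class AutoclickTimer:
    def __init__(self, dwell_seconds=2.0):
        self.dwell = dwell_seconds
        self.deadline = None

    def restart(self, now=None):
        """Called on every user input signal: the dwell window restarts."""
        now = time.monotonic() if now is None else now
        self.deadline = now + self.dwell

    def expired(self, now=None):
        """True once the user has dwelled past the threshold -> autoclick."""
        now = time.monotonic() if now is None else now
        return self.deadline is not None and now >= self.deadline

timer = AutoclickTimer(dwell_seconds=2.0)
timer.restart(now=0.0)
assert not timer.expired(now=1.5)   # still dwelling, no selection yet
assert timer.expired(now=2.0)       # timer expired -> autoclick focused object
```

Passing explicit `now` values keeps the sketch testable; a real controller would poll the monotonic clock instead.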
- controller 28 started a timer when the user initially set the focus to the NW region by exhaling into cell 18 a ( FIG. 3A ).
- the user now has a predetermined amount of time (e.g., 2 seconds) in which to dwell in region NW (i.e., remain in region NW).
- controller 28 cycles to region NE ( FIG. 3B ), indicates the focus change by highlighting region NE, and restarts the timer.
- Region NE now has the current focus, and the user has two more seconds in which to cycle to the SE region ( FIG. 3C ) by providing a successive input signal.
- controller 28 autoclicks (i.e., selects) the NE region ( FIG. 3B ) because that region has the current focus.
- Once a region is selected, subsequent user input signals will be directed to cycling between or selecting the objects within the scope of the selected region.
- the NW region in FIGS. 4A-4D has been selected, and the user may cycle through sub-regions 50 a - 50 d in the same manner as described above. That is, successive exhales into one of the cells 18 cause successive sub-regions 50 b - 50 d to receive the change in focus.
- controller 28 indicates the sub-region receiving the focus by highlighting the appropriate sub-region border.
- the present invention permits various mappings of cells 18 to objects on the GUI 40 (i.e. regions, sub-regions, and desktop level objects) depending upon which object or level (region, sub-region, or desktop level object) is currently selected.
- FIG. 5A illustrates such a selected sub-region 50 a .
- sub-regions typically contain one or more desktop level objects, such as drop down menus 52 , control buttons 54 , text fields 56 , hyperlinks 58 , and icons.
- Other controls known in the art, such as radio buttons (not shown), check boxes (not shown), list boxes (not shown), and combination boxes (not shown), may also be included.
- the present invention permits the user to cycle through each of these desktop level objects, and allows the user to select a particular desktop level object via the autoclick functionality. The response to the user selection or autoclick of a particular desktop level object will depend upon the type of control or object selected.
- FIGS. 5A-5C illustrate cycling and automatic selection according to the present invention within the selected sub-region 50 a .
- the “File” menu item 52 a has the current focus.
- the user simply exhales into one of the cells 18 before the timer expires.
- controller 28 indicates the focus change to the user by highlighting the “Edit” menu item 52 b . If the user wishes to select “Edit” 52 b , the user simply dwells on the “Edit” menu item 52 b until the predetermined timer expires. In response, controller 28 expands the “Edit” menu 60 ( FIG. 5C ).
- the present invention also permits the user to “undo” selections and “back out” of selected levels to previous selected levels.
- the present invention may be configured such that exhaling into a specific cell 18 automatically resets the interface to its initial state. For example, the user may reset the GUI 40 to the first level as shown in FIG. 3A by inhaling or exhaling into cell 18 e regardless of the currently-selected level, and regardless of what region, sub-region, and desktop level object has the current focus. This would produce a special user input signal that would cause controller 28 to re-initialize the GUI 40 to the first level, and indicate that the NW region received the change in focus. The user would then be able to cycle through and select the regions, sub-regions, and desktop level objects as previously described.
- the present invention is configured to “back out” of a selected level to a previously selected level.
- a user who selected sub-region 50 a in FIG. 4A by exhaling into one cell 18 would be able to “undo” or “back out” of the selection to the NW region by inhaling on the same cell 18 .
- inhaling into a cell 18 also permits a user to change the cycling direction.
- the present invention may be configured to distinguish between a user input signal to “back out” to a previous level, and a user input signal to change the cycling direction. More specifically, controller 28 keeps track of the number of full cycles executed by the user. One full cycle equates to one complete rotation around each region, sub-region, or set of desktop level objects.
- if the user has completed at least one full cycle, inhaling through one of the cells 18 will cause controller 28 to “back out” of the currently selected level (e.g., sub-region 50 a in FIG. 4A ) and return the GUI 40 to the previous level (e.g., region NW in FIG. 3A ). Controller 28 would then indicate the region, sub-region, or desktop level object having the current focus, and the user may continue as detailed above. If, however, the user has not completed one full cycle, inhaling through one of the cells 18 would simply change the cycling direction (e.g., from clockwise to counter-clockwise).
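The inhale disambiguation above hinges on counting steps around the current level. A sketch of that bookkeeping, with our own class and method names:

```python
# Sketch of the inhale logic: after at least one full cycle around the
# current level, an inhale "backs out" to the previous level; before
# that, it merely reverses the cycling direction.
class CycleTracker:
    def __init__(self, objects_in_level):
        self.n = objects_in_level   # objects at the current level
        self.steps = 0              # exhale steps taken at this level

    def on_exhale(self):
        self.steps += 1

    def on_inhale(self):
        """Return the action an inhale triggers at this point."""
        if self.steps >= self.n:    # one complete rotation finished
            return "back_out"
        return "reverse_direction"

t = CycleTracker(objects_in_level=4)    # e.g. the four first-level regions
t.on_exhale(); t.on_exhale()
assert t.on_inhale() == "reverse_direction"   # cycle not yet complete
t.on_exhale(); t.on_exhale()
assert t.on_inhale() == "back_out"            # full cycle completed
```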
- the hands-free input device 12 described so far has been a two-state device.
- two-state input devices generate two input signals per cell 18 depending upon whether the user inhales or exhales.
- the present invention may overlay labels 64 over GUI 40 to aid the user in determining which cell 18 to choose and whether to inhale or exhale.
- labels 64 in FIG. 6A indicate the NW, NE, SE, and SW regions. Similar labels could be placed on the input device 12 to identify which cell 18 corresponds to which region. Labels 64 may further include a directional arrow to indicate whether the user should inhale or exhale into the region's corresponding cell.
- a more complex input device 12 such as a four-state device, may also be used with the present invention.
- pressure sensors (not shown) are disposed in or near each of the cells 18 to determine the force with which the user aspirates through the cells 18 .
- as seen in FIG. 6B , this permits the user to directly select a particular sub-region without first having to select a region.
- labels 64 may be overlaid on GUI 40 to assist the user in determining which cell 18 to use, whether to inhale or exhale, and the force with which to inhale or exhale.
- directional arrows on labels 64 indicate whether to inhale or exhale, while the arrow thickness indicates the force with which the user should aspirate. Thinner arrows indicate a less forceful aspiration, while thicker arrows indicate a more forceful aspiration.
- a four-state device may be useful, for example, in a “freeform” embodiment of the present invention, wherein the user moves a conventional cursor “freely” around the display.
- a forceful exhale (i.e., a hard exhale) and a less forceful exhale (i.e., a soft exhale) may each be treated as distinct input signals. For example, a hard exhale may move the cursor vertically a distance of ten pixels, while a soft exhale moves the cursor a distance of five pixels.
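The four-state cursor control above maps aspiration force to step size. A minimal sketch using the ten- and five-pixel distances from the text; the direction-per-cell assignment is an assumption for illustration:

```python
# Sketch of "freeform" cursor movement: aspiration force selects the
# step size (10 px for a hard exhale, 5 px for a soft one, per the text).
HARD_STEP, SOFT_STEP = 10, 5

def move_cursor(x, y, direction, hard):
    """Return the new cursor position after one aspiration."""
    step = HARD_STEP if hard else SOFT_STEP
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0)}[direction]
    return x + dx * step, y + dy * step

pos = (100, 100)
pos = move_cursor(*pos, "down", hard=True)    # hard exhale: 10 px down
pos = move_cursor(*pos, "down", hard=False)   # soft exhale:  5 px down
```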
- This may be useful, for example, with software packages that provide Computer Aided Drawing (CAD) functionality, or with software that permits the user to play games.
- the present invention also allows the user to interact with web pages and other software packages, such as QPOINTER KEYBOARD from COMMIDIO.
- QPOINTER KEYBOARD permits navigation of a display-based device using only a keyboard, and provides “object-tagging” functionality, in which helpful hints regarding an object are displayed to the user when a cursor is placed over the object.
- FIGS. 7-9 illustrate another third-party application with which the present invention is compatible. Particularly, FIGS. 7-9 show an on-screen keyboard (OSK) application provided by APPLIED HUMAN FACTORS, INC.
- as seen in FIG. 7A , OSK 70 comprises a full complement of keys 72 typically available with known keyboards.
- OSK 70 also includes menu section 74 with which the user may invoke frequently used programs, and a word-prediction section 76 to allow the user to build complete sentences.
- When OSK 70 is visible on the display, it may overlay the lower two regions SW and SE.
- the user may control the transparency/opacity of OSK 70 , or hide it altogether, simply by aspirating through one of cells 18 . Varying the transparency of OSK 70 permits the user to navigate and use the OSK 70 while still allowing the user to see the underlying GUI 40 .
- the present invention does not require displaying a full-size OSK 70 such as the one shown in FIG. 7A .
- the present invention may be configured to display OSK 70 in portions as seen in FIGS. 7B-7C .
- the present invention partitions OSK 70 into groupings of selectable objects having one or more keys 72 .
- FIG. 7B for example, only the number keys, the control keys used in editing a document (e.g., PgUp, PgDn, Home, End, Ins, Backsp, Shift, etc.), and the function keys (e.g., Control, F1-F12, Escape, etc.) are displayed.
- in FIG. 7C , OSK 70 displays the alphabetical keys, punctuation keys, and control keys (e.g., Escape, Enter, Shift, Del, Caps Lock, etc.).
- Displaying OSK 70 in portions frees the user from a conventional QWERTY arrangement, and allows the user to customize the position of keys 72 as desired. It also permits the user to add special purpose controls 78 .
- Controls 78 may be preconfigured to contain a link to a designated website, launch a program, or execute specific functionality within the program, such as a spell checker, for example.
- the user may cycle through the various available OSK portions simply by aspirating through one of the cells 18 .
- exhaling through cell 18 e a first time may cause the full-size OSK to be displayed on-screen, as in FIG. 7A .
- Successive exhales through cell 18 e prior to the expiration of the predetermined timer would cause successive portions of the OSK 70 to be displayed, as in FIGS. 7B and 7C .
- subsequent exhales through cell 18 e might hide OSK 70 altogether, while successive inhales through cell 18 e may allow the user to alter the placement on display 26 and/or the transparency/opacity of OSK 70 .
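The cell 18 e behavior above is effectively a small state machine over OSK views: full keyboard first, then each partial view, then hidden. A sketch, with state names of our own invention:

```python
# Sketch of cycling OSK views with successive exhales through cell 18e:
# full keyboard (FIG. 7A), then each partial view (FIGS. 7B-7C), then
# hidden again. The state names are illustrative, not from the patent.
OSK_STATES = ["hidden", "full", "numbers_and_controls", "alpha_and_punct"]

def next_osk_state(state):
    """Advance to the next OSK view on each exhale through cell 18e."""
    i = OSK_STATES.index(state)
    return OSK_STATES[(i + 1) % len(OSK_STATES)]

state = "hidden"
state = next_osk_state(state)   # first exhale -> full-size OSK
state = next_osk_state(state)   # -> numbers/controls portion
state = next_osk_state(state)   # -> alphabetical/punctuation portion
state = next_osk_state(state)   # -> hidden again
```

Inhales through cell 18 e (placement and opacity changes) would be a second, parallel cycle over a different set of states.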
- dwelling on any one OSK until the expiration of the predetermined timer may select that particular OSK.
- FIGS. 8A-8E illustrate how a user may edit documents or letters by building words using the selected OSK 70 . More specifically, the user may build words and control the interface by cycling to the desired key 72 or control button and autoclicking. As each letter is selected, controller 28 searches a database dictionary and displays possible matching words in control boxes 84 . The user can scroll backwards and forwards through the words by selecting control buttons 80 or 82 , or refresh (i.e., start again) by selecting control button 86 . The user may add new words to the dictionary database by entering the letters that spell the word and selecting control button 88 .
- FIGS. 8A-8E illustrate how the user might build the word “harmonica.”
- the user aspirates through cells 18 to cycle to the “h” key in keyboard 70 and autoclicks.
- Controller 28 displays frequently used words beginning with the letter “h” gleaned from the dictionary database.
- Controller 28 might also hide those keys containing letters that cannot be used directly after the letter “h.”
- as seen in FIG. 8B , for example, only vowels may appear directly after the letter “h,” and thus the interface displays only those keys 72 associated with vowels. Keys 72 associated with consonants, for example, are not displayed to the user.
- Each successive key 72 selection results in a better-defined, narrower word list in word-section 76 .
- when the desired word appears in the word-section 76 , the user cycles to it and autoclicks to select it. Controller 28 would then insert the selected word, which in this case is “harmonica,” into the current document.
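The word-prediction behavior described above combines two ideas: prefix-filtering the dictionary, and hiding keys for letters that cannot follow the current prefix. A sketch with a tiny illustrative dictionary:

```python
# Sketch of word prediction: each selected letter narrows the candidate
# list by prefix, and only letters that can follow the prefix remain
# selectable. The dictionary contents are illustrative only.
DICTIONARY = ["harmonica", "harmony", "harbor", "house", "hat"]

def candidates(prefix):
    """Words in the dictionary starting with the typed prefix."""
    return [w for w in DICTIONARY if w.startswith(prefix)]

def allowed_next_letters(prefix):
    """Letters that may directly follow the prefix; other keys can be hidden."""
    return sorted({w[len(prefix)] for w in candidates(prefix)
                   if len(w) > len(prefix)})

assert allowed_next_letters("h") == ["a", "o"]   # only these keys stay shown
assert candidates("harm") == ["harmonica", "harmony"]
```

With this toy dictionary only vowels follow "h", matching the FIG. 8B example; a real dictionary database would simply make the sets larger.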
- controller 28 provides substantially simultaneous focus to both OSK 70 and an application object, such as a text field, on GUI 40 .
- This permits the user to cycle to and select a desired key 72 from OSK 70 for entry into the selected field on GUI 40 , or create and/or edit documents using third party software such as MICROSOFT WORD, for example.
- the present invention also permits users to interact with OSK 70 having a bank of controls 92 .
- Bank 92 may include, for example, control buttons 94 and 96 that act as “quick-links” to a predetermined webpage or other functionality.
- the user simply aspirates through cells 18 to cycle to the desired control, and autoclicks to select.
- FIG. 10 illustrates an alternate embodiment wherein OSK 70 integrates both keyboard and word prediction functionality.
- the position of OSK 70 as well as its key configuration and opacity, are configurable by the user.
- OSK 70 of FIG. 10 may also be displayed in portions, each having their own set of keys 72 .
- OSK 70 includes all 26 alphabetical keys and some additional keys that provide special functionality.
- the keys 72 on OSK 70 may have any configuration desired, and alternate OSKs may include numeric keys and/or punctuation keys in place of, or in addition to, the keys 72 shown herein.
- the MODE (“MO”) key provides functionality similar to that of cell 18 e in that it permits a user to cycle through the various OSKs 70 . Alternatively, the user may also directly select the particular OSK 70 by cycling to and autoclicking one of the “OSK” keys.
- the SPACE (“SP”) key may be configured to enter a space, while the spell check (“CHEK”) key may launch a spell checker.
- the Caps (“CAPS”) key may capitalize a character. It should be noted that the keys on the OSK 70 might be configured to provide automatic entry of spaces and/or capitalization of words as desired.
- the “Settings” key may permit the user to modify the configuration of the keyboard, and the “Top,” “Bot,” and “Fade” keys allow the user to set the position and control the transparency/opacity of OSK 70 .
- the “Hide” key permits the user to hide the OSK from the GUI.
- OSK 70 of FIG. 10 may include a word prediction section. As the user selects a key 72 , a list of possible or commonly used words will be displayed to the user. With each subsequent selection of a key 72 , the words may be updated. Word prediction minimizes the number of keys 72 that a user must select in order to create full sentences in a document, for example.
- the preceding embodiments have described device 24 in terms of a desktop computer.
- the present invention might also interact with other, more portable devices, such as a Personal Digital Assistant (PDA) shown in FIG. 12 .
- device 24 , embodied as a PDA, includes a user input device 12 , a GUI 40 , and an OSK 70 overlaying a bottom portion of GUI 40 .
- a grid, such as the one shown in FIGS. 3-5 , is overlaid on the GUI 40 of the PDA. Cycling and autoclicking permits the user to navigate and select various regions, sub-regions, and/or objects as previously described.
- the interface of the present invention may also receive user input from alternate user input devices, such as those shown in FIG. 11 .
- the user may use a microphone 100 to generate user input signals.
- the interface of the present invention would comprise a voice-recognition engine as is known in the art. The user would cycle through regions, sub-regions, and desktop level controls, for example, by speaking predetermined commands into microphone 100 . Silence for a predetermined amount of time without speaking a recognized command would result in an autoclick.
- the user may wish to use a mouse 102 or a joystick 104 .
- These input devices may be preferable, for example, for users who lack fine motor skills, but retain some gross motor control. However, non-disabled persons could use these input devices as well.
- the interface of the present invention may be configured to reposition an on-screen pointer a specified distance (e.g., a certain number of pixels) in response to the movement of mouse 102 or joystick 104 .
- moving the mouse 102 or joystick 104 would permit the user to cycle through the regions, sub-regions and desktop level controls. As in the previous embodiments, the user would simply not move the mouse 102 or joystick 104 to execute an autoclick.
- an “Electric Skin Resistance” (ESR) input device 106 generates input signals responsive to the user's touch instead of the user's aspiration.
- cells 18 may be constructed of a conductive metal or metal alloy, and would generate a signal whenever the user's lips made contact with cells 18 . For example, contacting the user's upper lip to a top half of cell 18 would generate a first input signal, while touching the user's bottom lip to a lower half of cell 18 would generate a second input signal. This would permit the user to cycle through the regions, sub-regions, and desktop level controls in clockwise and counter-clockwise directions. Of course, not touching any of cells 18 for a predetermined amount of time would result in an autoclick.
- a remote control device 108 may be used to generate input signals responsive to the user's actuation of one or more buttons disposed in a cluster 108 a on remote control device 108 .
- remote control device 108 includes four buttons surrounding a middle button.
- cluster 108 a is illustrated to resemble the buttons on a TV remote, for example, that controls a TV, VCR, DVD, or other device.
- the controls in cluster 108 a may be positioned or sized as desired.
- a television or other display device may be used to display GUI 40 indicating selections relating to a nurse-call system, for example.
- the regions NE, NW, SE, and SW may be marked with informative labels, such as “TEMPERATURE,” “TV,” “LIGHTS,” and “BED.”
- Using the buttons in cluster 108 a , the user could cycle to and autoclick on one or more of these four regions to access the sub-regions and/or deeper levels.
- the various levels could be configured to provide functionality specific to the particular selected region.
- a user, upon selection of the “TEMPERATURE” region, might be presented with four sub-regions, each containing labels such as “TEMP COOLER,” “TEMP WARMER,” “FAN FASTER,” and “FAN SLOWER.” Selection of one of these four sub-regions would permit the user to control the temperature. Deeper levels might provide even more granular control.
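The nurse-call layout above is naturally modeled as a tree of labeled levels, where autoclicking a label descends one level. The first two levels below use the labels quoted in the text; the "TV" sub-labels and the tree encoding itself are hypothetical placeholders added for illustration.

```python
# Sketch of the nurse-call menu hierarchy as a nested dict.
# None marks a lowest-level entry whose selection executes a function.
MENU = {
    "TEMPERATURE": {
        "TEMP COOLER": None,
        "TEMP WARMER": None,
        "FAN FASTER": None,
        "FAN SLOWER": None,
    },
    "TV": {"VOLUME": None, "CHANNEL": None},  # assumed sub-labels
    "LIGHTS": None,
    "BED": None,
}

def options_at(path, menu=MENU):
    """Return the labels selectable after autoclicking along `path`."""
    node = menu
    for label in path:
        node = node[label]
    return sorted(node) if isinstance(node, dict) else []
```

For instance, `options_at(["TEMPERATURE"])` lists the four climate controls reachable after selecting the "TEMPERATURE" region.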
- the user could also control “TV” functionality (e.g., volume, channel selection), lights, and/or the positioning of a bed simply by actuating one or more of the buttons in cluster 108 a .
- the middle button might permit direct selection of a region, subregion, or other on-screen object, or might be preconfigured to automatically dial 9-1-1, a nurse, or other medical professional as needed.
- any of the devices shown in the figures may be used to generate user input signals to the interface of the present invention for use in other applications as well.
- the interface of the present invention overlays a television screen. The user might navigate the interface to select channels, enter data (e.g., for web-based television applications), or surf the Internet.
- controller 28 may generate input signals responsive to the user's thoughts. This is known as neuro-navigation.
- a small chip comprising circuitry, such as the BRAINGATE chip manufactured by CYBERKINETICS INC., or a web implant system, such as the system manufactured by NEUROLINK, may be implanted in a user's body. Using these chips and/or systems, or others like them, the user might simply “think” commands that are then transmitted from the implant to computing device 24 . Controller 28 may translate these thought commands into movements and actions on the interface of the present invention.
- a user may cycle around GUI 40 simply by thinking, “cycle clockwise,” or “cycle counterclockwise.”
- the user may select a region, sub-region, or desktop level object by thinking the command “autoclick,” for example, or pausing on a selected region, sub-region, or desktop level command without thinking any command for a predetermined period.
- implantable neural interfaces could provide reliable and fast output signals to computing device 24 operating according to the present invention.
- the present invention is in no way limited merely to use in navigating the Internet, creating and editing documents, or other tasks typically performed using personal computing devices.
- other commands and methods of controlling the interface of the present invention will become possible.
- further developments could generate signals to allow users to control other types of devices, such as environmental controls, medical devices designed to power their own limbs, and robotic equipment such as wheelchairs.
- the present invention would allow users to control and/or operate these types of devices according to the generated input signals.
- GUI 40 may display one or more medical instruments that the surgeon would cycle to and autoclick to select. Progressively deeper levels might display predetermined actions the surgeon might wish to perform with the selected instrument.
- the surgeon could use his or her hands to control an existing device to perform the surgery from a remote location, while using the interface of the present invention (via, for example, hands-free device 12 ) to control one or more on-site cameras to provide various angles for video feedback, or to access medical information or assistance.
- the interface of the present invention permits the user to cycle and autoclick. Additionally, the interface of the present invention is useful for both physically-challenged and non-physically-challenged persons. The user is always informed of the results of the input signals via on-screen and/or auditory feedback through speaker 110 . Further, the present invention does not require that a particular user input device transmit input signals via cable 22 , but rather, also contemplates the use of a wireless interface, such as the one shown in FIG. 13 .
- input device 12 comprises a wireless transceiver 112 , such as a BLUETOOTH or infrared transceiver that communicates with corresponding wireless port 32 on computing device 24 .
Abstract
Description
- The present invention relates generally to computing devices, and particularly to user interfaces for computing devices.
- The need to control a computer without using one's hands extends to those with physical impairments, those who work in extreme environments, and those who want to increase their productivity with multi-modal input. Effective hands-free computer usage has widespread appeal to the approximately 20 million people within the United States who have some form of mobility impairment. In addition, voice recognition as a primary input mechanism is rife with inconsistency and difficulties that leave the door open to alternative technologies that could be leveraged for anything from underwater tool usage and salvage operations to extra-terrestrial repairs and construction, including futuristic vehicular, drone, and wheelchair control.
- The notion of universal accessibility in which the highest degree of access is proffered to all users has great societal validity. Bringing disabled and disenfranchised citizens into the workforce has positive implications for economic indices. The numbers are larger than expected as movement disabilities can result from severe arthritis, strokes, accidents, neuromuscular dysfunction, deformity, amputation, paralysis, spinal problems, and cumulative trauma disorders. In addition, repetitive motion disorders from prolonged keyboard use and/or mouse usage, such as carpal tunnel syndrome, can result in an inability to perform remunerative employment. In the past, these people have been largely excluded or displaced from the work force, resulting in a tremendous loss of productivity both for society and for them. Despite the sporadic acceptance of telecommuting, the exclusion of physically-challenged persons from the work force is largely a result of high accommodation costs, and is exacerbated by the perception that affected persons are unable to compete effectively in the work force.
- With adaptive devices, it is possible to integrate physically-challenged persons into the work force at a workplace or in their home, and to provide a greater degree of independence for such persons. One such adaptive device is disclosed in U.S. Pat. No. 5,603,065, which is incorporated herein by reference. These devices, coupled with the use of computers, can remove many of the barriers that physically-challenged people face. Typically, the user interacts with a graphical user interface displayed on the computer display to navigate and execute associated functionality. However, navigation using conventional interfaces can still be cumbersome for physically-challenged users. For example, some interfaces may require the direct selection of an icon or menu item to execute its associated functionality. This can be difficult for a person using an aspiration-driven input device, such as the type disclosed in the '065 patent, or a person lacking fine motor skills using a mouse or joystick-type control. Likewise, accurately invoking commands to a computer while simultaneously using one's hands for manual tasks, such as loosening and tightening bolts, is equally difficult for the non-physically-challenged. Accordingly, there is a need for an interface that responds efficiently to user input to further ease the burden on physically-challenged persons, and to expand the capabilities for non-disabled persons.
- The present invention provides an interface that permits a user to control navigation and selection of objects within a graphical user interface (GUI). In one embodiment, an electronic device is connected to a hands-free user input device. The user operates the hands-free input device to generate user input signals, which are sent to the electronic device. A controller within the electronic device is configured to navigate between interface objects responsive to the user input signals, and select interface objects in the absence of the user input signals.
- In one embodiment, the controller is configured to partition the GUI into a plurality of selectable objects. The controller is also configured to set a focus to one of the objects, and navigate between the objects whenever a user generates an input signal before a timer expires. Each time the controller navigates to a new object, it changes the focus to the new object. The controller will automatically select the object having the current focus if the user does not provide input signals before the expiration of the timer.
- The GUI may be conceptually viewed as having multiple levels with each level including its own objects. At the lowest level, selection of an object causes an associated function to be executed. Further, each level may be viewed as containing a subset of objects of the level immediately above. The user can navigate between objects at the same level, or between levels, in the GUI.
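The multi-level view described above can be sketched as a tree of selectable objects: each object owns the objects of the level beneath it, the user cycles among siblings, and selecting an object descends one level until a lowest-level object's function executes. The class shape, names, and the example hierarchy are illustrative assumptions, not the patent's implementation.

```python
# Minimal object model for the level hierarchy described in the text.
class UIObject:
    def __init__(self, name, children=(), action=None):
        self.name = name
        self.children = list(children)
        self.action = action  # callable executed at the lowest level

def descend(root, choices):
    """Follow a sequence of selections from the top level down.
    Returns the final object reached."""
    node = root
    for name in choices:
        node = next(c for c in node.children if c.name == name)
    return node

# Assumed example: a region containing a sub-region containing a menu item.
desktop = UIObject("GUI", [
    UIObject("NW", [
        UIObject("50a", [UIObject("File", action=lambda: "File menu opened")]),
    ]),
])
```

Here `descend(desktop, ["NW", "50a", "File"])` mirrors drilling from a region through a sub-region to a desktop-level object.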
- FIG. 1 illustrates a possible system in which one embodiment of the present invention may operate.
- FIG. 2 illustrates a front view of a hands-free input device that may be used with one embodiment of the present invention.
- FIGS. 3A-3D illustrate a method of navigating a display using one embodiment of the present invention.
- FIGS. 4A-4D further illustrate the method of navigating the display using one embodiment of the present invention.
- FIGS. 5A-5C further illustrate the method of navigating the display using one embodiment of the present invention.
- FIGS. 6A-6B illustrate some exemplary on-screen navigation aids that may be displayed to a user according to the present invention.
- FIGS. 7A-7C illustrate some exemplary on-screen keyboards that may be displayed to a user according to the present invention.
- FIGS. 8A-8E illustrate a possible function executed according to one embodiment of the present invention.
- FIG. 9 illustrates an alternate on-screen keyboard layout that may be displayed to the user according to an alternate embodiment of the present invention.
- FIG. 10 illustrates another alternate on-screen keyboard layout that may be displayed to the user according to an alternate embodiment of the present invention.
- FIG. 11 illustrates alternative user input devices that may be used with the present invention.
- FIG. 12 illustrates one embodiment of the present invention as used with a Personal Digital Assistant (PDA).
- FIG. 13 illustrates a wireless embodiment of the present invention.
- Referring now to
FIGS. 1 and 2 , a system that uses one embodiment of the present invention is shown therein and generally indicated by the number 10 . System 10 comprises a hands-free user input device 12 and a computing device 24 interconnected by a cable 22 . Input device 12 operates similarly to the hands-free device disclosed in the '065 patent and permits hands-free control of a computer for both persons with physical limitations and workers needing their hands for other tasks. For detailed information regarding hands-free input device 12 , the interested reader is directed to the '065 patent. However, a brief description is included herein for completeness and clarity. -
Input device 12 comprises a housing 14 , and a plurality of cells 18 a-18 e arranged in tiers. A physically-challenged person, or worker needing two hands for other tasks, may use input device 12 to enter commands into computing device 24 by selectively aspirating (i.e., exhaling or inhaling) through one or more cells 18 a-18 e. In one embodiment, each cell 18 includes a pressure transducer that generates output signals responsive to sensed pressure. In another embodiment, a sound transducer, such as musical instrument digital interface (MIDI) transducer 16 , detects sounds generated when the user inhales or exhales through a cell 18. In this case, each cell 18 a-18 e produces a unique sound depending on whether the user inhales or exhales through the cell 18. The MIDI transducer 16 converts these unique sounds into digital signals and transmits them to computing device 24 via communications interface 20 . As will be described in more detail below, software running on the computing device 24 uses the digital signals to navigate about the graphical user interface, and to control various computer functions and software available in popular systems. - In the embodiment shown in
FIG. 1 , the input device 12 comprises five cells 18 a-18 e. The functionality of each of these cells 18 will be described later in more detail. However, in one embodiment of the present invention, four cells 18 a-18 d provide navigational and selection control to the user, while cell 18 e , centered generally in the middle of the face of device 12 , provides control over various modes of the interface of the present invention. As seen later in more detail, this includes providing control over various software programs, such as on-screen keyboards for example. -
Input device 12 may include more or fewer cells 18, and may be arranged and/or spaced according to specific needs or desires. As seen in the figures, cells 18 a-18 e are generally arranged in three rows. Cells 18 a and 18 b comprise row 1, cell 18 e comprises row 2, and cells 18 c and 18 d comprise row 3. Typically, cells 18 are offset vertically and/or horizontally from each other such that no two cells are positioned directly over or under each other. For example, cell 18 a is not aligned directly above cell 18 d, and cell 18 b is not aligned directly above cell 18 c. - Cells 18 are also proximally spaced and arranged in a general concave arc, with the
lower cells 18 c and 18 d being slightly longer than the upper cells 18 a and 18 b. In one embodiment, the length of cell 18 a is equivalent to that of cell 18 b, and the length of cell 18 c is equivalent to that of 18 d. However, it should be understood that cells 18 may be the same or similar lengths as needed or desired. The optimized spacing minimizes the need for head and neck movement by the user, and the arc-like arrangement, combined with the cell offset, makes it less likely that the user's nose will interfere with the upper rows of cells when using the lower rows of cells. Further, cells 18 are symmetrically placed, permitting users to easily become familiar with the cell layout. -
Computing device 24 represents desktop computers, portable computers, wearable computers, wheelchair-mounted computers, bedside computing devices, nurse-call systems, personal digital assistants (PDAs), and any other electronic device that may use the interface of the present invention. Device 24 comprises a display 26 , a controller 28 , memory 30 , and a port 32 . Display 26 displays a graphical user interface and other data to the user. Controller 28 may comprise one or more microprocessors known in the art, and controls the operation of computing device 24 according to program instructions stored in memory 30 . This includes instructions that permit a user to navigate and control the functionality of computing device 24 via input device 12 . Memory 30 represents the entire hierarchy of memory used in electronic devices, including RAM, ROM, hard disks, compact disks, flash memory, and other memory not specifically mentioned herein. Memory 30 stores the program instructions that permit controller 28 to control the operation of computing device 24 , and stores data such as user data. Port 32 receives user input from input device 12 , and may be a universal serial bus (USB) port, for example. - As previously stated, navigation using conventional interfaces can be cumbersome for physically-challenged users and those who need their hands to perform other tasks. Particularly, some interfaces may require the direct ambulatory selection of an icon or menu item to execute its associated functionality. The present invention, while still permitting direct selection, deviates from conventional interfaces in that it does not require direct selection. Rather, the present invention provides a method that allows the user to cyclically navigate through a graphical user interface (GUI) 40 and select objects on the
GUI 40 by instituting a time delay selection technique. - The GUI 40 is conceptually viewed as having multiple levels. At the lowest level, selection of an interface object causes an associated function to be executed. Each higher level may be viewed as containing a subset of objects in the level immediately below. The user can navigate between objects at the same level, or navigate between levels, in the GUI 40 . - In a preferred embodiment of the invention, a multi-level grid is superimposed over a
conventional GUI 40 . As seen in FIGS. 3-7 , for example, the grid includes pairs of vertical and horizontal intersecting lines 42 and 44. The first level of the grid corresponds to the first level of the GUI 40 , and divides the area of the GUI 40 into four regions or quadrants. For clarity, these four regions correspond to the directions of a compass, and are referred to herein as the northwest region (NW), northeast region (NE), southeast region (SE), and southwest region (SW). The second level of the grid corresponds to the second level of the GUI 40 , and divides the area of the GUI 40 into sixteen regions. Each region NW, NE, SE, and SW at the first level encompasses four regions at the second level, which may be referred to as sub-regions. Each of the sub-regions covers an area equal to approximately one-fourth of a region at the first level. Similarly, each sub-region encompasses objects on the desktop level of the GUI 40 . Objects in the desktop level correspond to a third level of the GUI 40 , and include pulldown menu items, buttons, icons, and application text. Desktop objects may also encompass other objects at a lower level. For example, a pulldown menu item may contain a sub-menu with several lower level menu items. The sub-menu or other lower level object corresponds to a fourth level in the GUI 40 . It should be noted that the present invention does not require a minimum or maximum number of levels.
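The two grid levels described above can be computed directly: split the display into four quadrants, then split each quadrant again into its four sub-regions. The pixel coordinates and the 800x600 example dimensions are illustrative assumptions; the specification does not fix a screen size.

```python
# Sketch of the first- and second-level grid partition as pixel rectangles.
def quadrants(x, y, w, h):
    """Split the rectangle (x, y, w, h) into NW, NE, SE, SW quarters."""
    hw, hh = w // 2, h // 2
    return {
        "NW": (x, y, hw, hh),
        "NE": (x + hw, y, hw, hh),
        "SE": (x + hw, y + hh, hw, hh),
        "SW": (x, y + hh, hw, hh),
    }

def grid_levels(width, height):
    """Return (regions, sub_regions); sub-regions keyed by (region, quarter)."""
    regions = quadrants(0, 0, width, height)
    subs = {}
    for name, rect in regions.items():
        for sub, srect in quadrants(*rect).items():
            subs[(name, sub)] = srect
    return regions, subs
```

Each of the sixteen sub-regions covers one-fourth of its parent region's area, matching the description above.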
-
FIGS. 3A-3D , for example, illustrate navigation around the NW, NE, SE, and SW regions at the first level ofGUI 40. For illustrative purposes, the solid arrows on the figures show the direction of navigation as being a clockwise direction. However, as described in more detail below, navigation might also be in a counter-clockwise direction. As seen inFIG. 3A ,controller 28partitions GUI 40 into the NW, NE, SE, and SW regions by overlaying a pair ofintersecting grid lines GUI 40.Controller 28 further partitions each region into the plurality of sub-regions by overlaying additional pairs ofgrid lines GUI 40. By way of example, region NW inFIG. 3A is partitioned into four sub-regions 50 a-50 d. - Initially, a focus is given to one of the regions, which in
FIG. 3A is region NW. In one embodiment, the user sets the focus to a desired region simply by inhaling or exhaling into one of the cells 18. In another embodiment, controller 28 sets the focus to a particular default region. Whatever the method of setting the initial focus, controller 28 indicates which region receives the focus by highlighting the region's border. This is shown in FIG. 3A by the solid box around region NW; however, the present invention also contemplates other methods of indicating which region receives the focus, such as various forms of shading or opacity.
input device 12. Thus, a second exhale causescontroller 28 to change the focus from region NW inFIG. 3A to region NE inFIG. 3B . Likewise, third and fourth exhales change the focus to regions SE and SW inFIGS. 3C and 3D , respectively. With each successive exhale,controller 28 indicates the focus change by highlighting the border around the region receiving the focus. The ability to navigate between regions (or sub-regions and/or GUI objects as described below) by changing focus responsive to successive user input signals is referred to herein as “cycling.” The arrows inFIGS. 3A-3D illustrate clockwise cycling each time the user exhales into one of the cells 18. However, the user could also cycle in a counter-clockwise direction by inhaling on one of the cells 18. - Of course, the user may also exhale into any cell 18 to give the focus directly to a region without cycling. In this case, the present invention may be configured such that
cells FIG. 3A directly to the SW region inFIG. 3D simply by exhaling intocell 18 d. As above,controller 28 would indicate the focus change by highlighting the border surrounding SW region. - In addition to cycling, the present invention also provides a method that allows the user to automatically select an object (e.g., a region, sub-region, or desktop level object) having the current focus. In this method, direct selection by clicking, for example, is not required. More particularly,
controller 28 starts and manages a timer responsive to the user input signals. This timer is a user-configurable threshold that determines the amount of time a user may “dwell” (i.e., remain) on an object having the current focus before providing a subsequent input signal viainput device 12. If the timer expires before the user provides a subsequent input signal,controller 28 may take some predetermined action. In one embodiment,controller 28 selects the region, sub-region, or desktop level object having the current focus. This time delay selection of a particular object having the current focus, as opposed to direct user selection, is referred to herein as an “autoclick.” - Continuing with the example of
FIGS. 3A to 3D,controller 28 started a timer when the user initially set the focus to the NW region by exhaling intocell 18 a (FIG. 3A ). The user now has a predetermined amount of time (e.g., 2 seconds) in which to dwell in region NW (i.e., remain in region NW). If the user exhales into one of the cells 18 before the timer expires,controller 28 cycles to region NE (FIG. 3B ), indicates the focus change by highlighting region NE, and restarts the timer. Region NE now has the current focus, and the user has two more seconds in which to cycle to the SE region (FIG. 3C ) by providing a successive input signal. If the timer expires before the user provides the input signal,controller 28 autoclicks (i.e., selects) the NE region (FIG. 3B ) because that region has the current focus. Once a region is selected, subsequent user input signals will be directed to cycling between or selecting the objects within the scope of the selected region. For example, the NW region inFIGS. 4A-4D has been selected, and the user may cycle through sub-regions 50 a-50 d in the same manner as described above. That is, successive exhales into one of the cells 18 causesuccessive sub-regions 50 b-50 d to receive the change in focus. As when cycling through the regions,controller 28 indicates the sub-region receiving the focus by highlighting the appropriate sub-region border. - As above, it is also possible at this second level to configure the present invention to allow the user to set the focus directly to any of the sub-regions 50 a-50 d. In this case, cells 18 a-18 d would be re-mapped to correspond to sub-regions 50 a-50 d, respectively. Thus, the present invention permits various mappings of cells 18 to objects on the GUI 40 (i.e. regions, sub-regions, and desktop level objects) depending upon which object or level (region, sub-region, or desktop level object) is currently selected.
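The walkthrough above can be condensed into a small state machine driven by timestamped aspiration events: an input arriving before the dwell threshold cycles the focus, an input arriving after it implies the timer already expired, so the focused object is autoclicked and subsequent input is directed to the next level down. The event encoding, the two fixed levels, and the simplification that the late event then cycles at the new level are all assumptions made to keep the sketch short.

```python
# Sketch of the cycle/autoclick walkthrough as an offline state machine.
DWELL = 2.0  # seconds, following the 2-second example in the text
LEVELS = [["NW", "NE", "SE", "SW"], ["50a", "50b", "50c", "50d"]]

def run_session(events):
    """events: sorted (timestamp, kind) with kind 'exhale' or 'inhale'.
    Returns the list of autoclicked objects, deepest last."""
    depth, focus, last = 0, 0, 0.0
    autoclicked = []
    for t, kind in events:
        if t - last >= DWELL:
            # the timer expired before this input: autoclick, descend
            autoclicked.append(LEVELS[depth][focus])
            if depth + 1 < len(LEVELS):
                depth, focus = depth + 1, 0
        focus = (focus + (1 if kind == "exhale" else -1)) % 4
        last = t
    autoclicked.append(LEVELS[depth][focus])  # final dwell selects the focus
    return autoclicked
```

An exhale at 0.5 s cycles NW to NE; a second exhale at 3.0 s arrives after the 2-second dwell, so NE is autoclicked and the input cycles the sub-regions instead.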
- When a sub-region is selected, subsequent user inputs will be directed to cycling between or selecting the desktop level objects that are within the scope of the selected sub-region.
FIG. 5A illustrates such a selectedsub-region 50 a. As stated above, sub-regions typically contain one or more desktop level objects, such as drop down menus 52,control buttons 54, text fields 56,hyperlinks 58, and icons. Other controls known in the art, such as radio buttons (not shown), check boxes (not shown) list boxes (not shown), and combination boxes (not shown), may also be included. The present invention permits the user to cycle through each of these desktop level objects, and allows the user to select a particular desktop level object via the autoclick functionality. The response to the user selection or autoclick of a particular desktop level object will depend upon the type of control or object selected. -
FIGS. 5A-5C illustrate cycling and automatic selection according to the present invention within the selectedsub-region 50 a. InFIG. 5A , the “File”menu item 52 a has the current focus. To cycle to the “Edit”menu item 52 b (FIG. 5B ), the user simply exhales into one of the cells 18 before the timer expires. As above,controller 28 indicates the focus change to the user by highlighting the “Edit”menu item 52 b. If the user wishes to select “Edit” 52 b, the user simply dwells on the “Edit”menu item 52 b until the predetermined timer expires. In response,controller 28 expands the “Edit” menu 60 (FIG. 5C ), restarts the timer, and directs subsequent user input to “Edit”menu 60. The user may cycle through each of themenu items 62 as above, and autoclick on anyparticular item 62. This will result incontroller 28 executing the functionality associated with the selectedmenu item 62. As stated above, if the selectedmenu item 62 includes a sub-menu, the sub-menu is given the focus. The user may then cycle through any sub-menu items, and autoclick to select a specific sub-menu item. - Just as the user may “drill down” through each level to select sub-regions and desktop level objects, the present invention also permits the user to “undo” selections and “back out” of selected levels to previous selected levels. In one embodiment, the present invention may be configured such that exhaling into a specific cell 18 automatically resets the interface to its initial state. For example, the user may reset the
GUI 40 to the first level as shown inFIG. 3A by inhaling or exhaling intocell 18 e regardless of the currently-selected level, and regardless of what region, sub-region, and desktop level object has the current focus. This would produce a special user input signal that would causecontroller 28 to re-initialize theGUI 40 to the first level, and indicate that the NW region received the change in focus. The user would then be able to cycle through and select the regions, sub-regions, and desktop level objects as previously described. - In another embodiment, the present invention is configured to “back out” of a selected level to a previously selected level. Thus, a user who selected
sub-region 50 a inFIG. 4A by exhaling into one cell 18 would be able to “undo” or “back out” of the selection to the NW region by inhaling on the same cell 18. However, as stated above, inhaling into a cell 18 also permits a user to change the cycling direction. Thus, the present invention may be configured to distinguish between a user input signal to “back out” to a previous level, and a user input signal to change the cycling direction. More specifically,controller 28 keeps track of the number of full cycles executed by the user. One full cycle equates to one complete rotation around each region, sub-region, or set of desktop level objects. Once the user has completed at least one full cycle, inhaling on one of cells 18 will causecontroller 28 to “back out” of the currently selected level (e.g.,sub-region 50 a inFIG. 4A ) and return theGUI 40 to the previous level (e.g., region NW inFIG. 3A ).Controller 28 would then indicate the region, sub-region, or desktop level object having the current focus, and the user may continue as detailed above. If, however, the user has not completed one complete cycle, inhaling through one of the cells 18 would simply change the cycling direction and move the focus (e.g., from clockwise to counterclockwise direction). - It should be noted that the preceding description illustrated exhaling to cycle and select, and inhaling to undo or back out. However, those skilled in the art will readily appreciate that this is not required, and that the process may be reversed. Users can alternatively inhale to cycle and select, and exhale to undo or back out. In fact, because the interface of the present invention is configurable, the user may customize functionality to the type of aspiration provided by the user.
- Additionally, it should also be noted that the hands-
free input device 12 described so far has been a two-state device. Particularly, two-state input devices generate two input signals per cell 18 depending upon whether the user inhales or exhales. As seen inFIG. 6A , the present invention may overlay labels 64 overGUI 40 to aid the user in determining which cell 18 to choose and whether to inhale or exhale. For example, labels 64 inFIG. 6A indicate the NW, NE, SE, and SW regions. Similar labels could be placed on theinput device 12 to identify which cell 18 corresponds to which region.Labels 64 may further include a directional arrow to indicate whether the user should inhale or exhale into the region's corresponding cell. - A more
complex input device 12, such as a four-state device, may also be used with the present invention. In four-state devices, pressure sensors (not shown) are disposed in or near each of the cells 18 to determine the force with which the user aspirates through the cells 18. This permits theinput device 12 to generate four signals per cell—two based on whether the user inhaled or exhaled, and two based on the force with which the user inhaled or exhaled. As seen inFIG. 6B , this permits the user to directly select a particular sub-region without first having to select a region. LikeFIG. 6A , labels 64 may be overlaid onGUI 40 to assist the user in determining which cell 18 to use, whether to inhale or exhale, and the force with which to inhale or exhale. In this embodiment, directional arrows onlabels 64 indicate whether to inhale or exhale, while the arrow thickness indicates the force with which the user should aspirate. Thinner arrows indicate a less forceful aspiration, while thicker arrows indicate a more forceful aspiration. - A four-state device may be useful, for example, in a “freeform” embodiment of the present invention, wherein the user moves a conventional cursor “freely” around the display. In one embodiment, a forceful exhale (i.e., a hard exhale) into one of the cells 18 moves the cursor vertically further and faster than a less forceful exhale (i.e., a soft exhale). That is, a hard exhale may move the cursor vertically a distance of ten pixels, while a soft exhale moves the cursor a distance of five pixels. This may be useful in using in software packages that provide Computer Aided Drawing (CAD) functionality, for example, or permit the user to play games.
- In addition to permitting the user to navigate the Internet, the present invention also allows the user to interact with web pages and other software packages, such as QPOINTER KEYBOARD from COMMIDIO. QPOINTER KEYBOARD permits navigation of a display-based device using only a keyboard, and provides “object-tagging” functionality, in which helpful hints regarding an object are displayed to the user when a cursor is placed over the object.
-
FIGS. 7-9 illustrate another third-party application with which the present invention is compatible. Particularly, FIGS. 7-9 show an on-screen keyboard (OSK) application provided by APPLIED HUMAN FACTORS, INC. The present invention may interact with such conventional OSKs to allow the user to select keys or other controls. In FIG. 7A, OSK 70 comprises a full complement of keys 72 typically available with known keyboards. OSK 70 also includes menu section 74, with which the user may invoke frequently used programs, and a word-prediction section 76 to allow the user to build complete sentences. When OSK 70 is visible on the display, it may overlay the lower two regions SW and SE. However, the user may control the transparency/opacity of OSK 70, or hide it altogether, simply by aspirating through one of cells 18. Varying the transparency of OSK 70 permits the user to navigate and use the OSK 70 while still allowing the user to see the underlying GUI 40.
- However, the present invention does not require displaying a full-size on-screen OSK 70 such as the one shown in FIG. 7A. Alternatively, the present invention may be configured to display OSK 70 in portions, as seen in FIGS. 7B-7C. In this embodiment, the present invention partitions OSK 70 into groupings of selectable objects having one or more keys 72. In FIG. 7B, for example, only the number keys, the control keys used in editing a document (e.g., PgUp, PgDn, Home, End, Ins, Backsp, Shift, etc.), and the function keys (e.g., Control, F1-F12, Escape, etc.) are displayed. FIG. 7C displays the alphabetical keys, punctuation keys, and control keys (e.g., Escape, Enter, Shift, Del, Caps Lock, etc.). Displaying OSK 70 in portions frees the user from a conventional QWERTY arrangement, and allows the user to customize the position of keys 72 as desired. It also permits the user to add special purpose controls 78. Controls 78 may be preconfigured to contain a link to a designated website, launch a program, or execute specific functionality within a program, such as a spell checker, for example.
- The user may cycle through the various available OSK portions simply by aspirating through one of the cells 18. For example, exhaling through cell 18e a first time may cause the full-size OSK to be displayed on-screen, as in FIG. 7A. Successive exhales through cell 18e prior to the expiration of the predetermined timer would cause successive portions of the OSK 70 to be displayed, as in FIGS. 7B and 7C. Still subsequent exhales through cell 18e might hide OSK 70 altogether, while successive inhales through cell 18e may allow the user to alter the placement on display 26 and/or the transparency/opacity of OSK 70. As before, dwelling on any one OSK until the expiration of the predetermined timer may select that particular OSK.
FIGS. 8A-8E illustrate how a user may edit documents or letters by building words using the selected OSK 70. More specifically, the user may build words and control the interface by cycling to the desired key 72 or control button and autoclicking. As each letter is selected, controller 28 searches a dictionary database and displays possible matching words in control boxes 84. The user can scroll backwards and forwards through the words by selecting the appropriate control buttons, and may select a word using control button 86. The user may add new words to the dictionary database by entering the letters that spell the word and selecting control button 88.
- The example of FIGS. 8A-8E illustrates how the user might build the word "harmonica." The user aspirates through cells 18 to cycle to the "h" key in keyboard 70 and autoclicks. Controller 28 then displays frequently used words beginning with the letter "h" gleaned from the dictionary database. Controller 28 might also hide those keys containing letters that cannot be used directly after the letter "h." In FIG. 8B, for example, only vowels may appear directly after the letter "h," and thus the interface displays those keys 72 associated with vowels. Keys 72 associated with consonants, for example, are not displayed to the user. Each successive key 72 selection results in a better-defined, narrower word list in word-section 76. Once the desired word appears in word-section 76, the user cycles to it and autoclicks to select it. Controller 28 would then insert the selected word, which in this case is "harmonica," into the current document.
- It should be noted that when OSK 70 is displayed, controller 28 provides substantially simultaneous focus to both OSK 70 and an application object, such as a text field, on GUI 40. This permits the user to cycle to and select a desired key 72 from OSK 70 for entry into the selected field on GUI 40, or to create and/or edit documents using third-party software such as MICROSOFT WORD, for example.
- As seen in FIG. 9, the present invention also permits users to interact with OSK 70 having a bank of controls 92. Bank 92 may include, for example, a number of control buttons.
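The letter-by-letter narrowing and key-hiding described for FIGS. 8A-8E can be sketched with a small dictionary. The word list here is illustrative only; a real dictionary database would be far larger.

```python
# Sketch of dictionary-driven word prediction: as letters are entered,
# the matching-word list narrows, and keys whose letters cannot extend
# any dictionary word would be hidden. The tiny word list is illustrative.
WORDS = ["harmonica", "harp", "hat", "hello", "horn"]

def matching_words(prefix: str) -> list[str]:
    """Words that could still be built from the letters typed so far."""
    return [w for w in WORDS if w.startswith(prefix)]

def valid_next_letters(prefix: str) -> set[str]:
    """Letters worth displaying as keys after the given prefix."""
    return {w[len(prefix)] for w in WORDS
            if w.startswith(prefix) and len(w) > len(prefix)}
```

After "h" is selected, for example, only the keys for letters returned by `valid_next_letters("h")` would remain visible, and each further selection shrinks both sets.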
FIG. 10 illustrates an alternate embodiment wherein OSK 70 integrates both keyboard and word-prediction functionality. As in the previous embodiments, the position of OSK 70, as well as its key configuration and opacity, is configurable by the user. In addition, OSK 70 of FIG. 10 may also be displayed in portions, each having its own set of keys 72. In FIG. 10, OSK 70 includes all 26 alphabetical keys and some additional keys that provide special functionality. However, it should be understood that the keys 72 on OSK 70 may have any configuration desired, and that alternate OSKs may include numeric keys and/or punctuation keys in place of, or in addition to, the keys 72 shown herein.
- The MODE ("MO") key provides functionality similar to that of cell 18e in that it permits a user to cycle through the various OSKs 70. Alternatively, the user may directly select a particular OSK 70 by cycling to and autoclicking one of the "OSK" keys. The SPACE ("SP") key may be configured to enter a space, while the spell check ("CHEK") key may launch a spell checker. The Caps ("CAPS") key may capitalize a character. It should be noted that the keys on OSK 70 might be configured to provide automatic entry of spaces and/or capitalization of words as desired. The "Settings" key may permit the user to modify the configuration of the keyboard, and the "Top," "Bot," and "Fade" keys allow the user to set the position and control the transparency/opacity of OSK 70. The "Hide" key permits the user to hide the OSK from the GUI.
- As in the previous embodiment, OSK 70 of FIG. 10 may include a word-prediction section. As the user selects a key 72, a list of possible or commonly used words is displayed to the user. With each subsequent selection of a key 72, the words may be updated. Word prediction minimizes the number of keys 72 that a user must select in order to create full sentences in a document, for example.
- The description has thus far described
device 24 in terms of a desktop computer. However, it should be noted that the present invention might also interact with other, more portable devices, such as the Personal Digital Assistant (PDA) shown in FIG. 12. In FIG. 12, device 24, embodied as a PDA, includes a user input device 12, a GUI 40, and an OSK 70 overlaying a bottom portion of GUI 40. A grid, such as the one shown in FIGS. 3-5, is overlaid on the GUI 40 of the PDA. Cycling and autoclicking permits the user to navigate and select various regions, sub-regions, and/or objects as previously described.
- In addition to the user input device 12 shown in FIGS. 1 and 2, the interface of the present invention may also receive user input from alternate user input devices, such as those shown in FIG. 11. For example, the user may use a microphone 100 to generate user input signals. In this case, the interface of the present invention would comprise a voice-recognition engine as is known in the art. The user would cycle through regions, sub-regions, and desktop-level controls, for example, by speaking predetermined commands into microphone 100. Silence for a predetermined amount of time without speaking a recognized command would result in an autoclick.
- Alternatively, the user may wish to use a mouse 102 or a joystick 104. These input devices may be preferable, for example, for users who lack fine motor skills but retain some gross motor control. However, non-disabled persons could use these input devices as well. In the "freeform" embodiment, the interface of the present invention may be configured to reposition an on-screen pointer a specified distance (e.g., a certain number of pixels) in response to the movement of mouse 102 or joystick 104. In other embodiments, moving the mouse 102 or joystick 104 would permit the user to cycle through the regions, sub-regions, and desktop-level controls. As in the previous embodiments, the user would simply not move the mouse 102 or joystick 104 to execute an autoclick.
- In yet another embodiment, an "Electric Skin Resistance" (ESR) input device 106 generates input signals responsive to the user's touch instead of the user's aspiration. In this embodiment, cells 18 may be constructed of a conductive metal or metal alloy, and would generate a signal whenever the user's lips make contact with cells 18. For example, contacting the user's upper lip to a top half of cell 18 would generate a first input signal, while touching the user's bottom lip to a lower half of cell 18 would generate a second input signal. This would permit the user to cycle through the regions, sub-regions, and desktop-level controls in clockwise and counter-clockwise directions. Of course, not touching any of cells 18 for a predetermined amount of time would result in an autoclick.
- In another embodiment, a remote control device 108 may be used to generate input signals responsive to the user's actuation of one or more buttons disposed in a cluster 108a on remote control device 108. As seen in FIG. 11, remote control device 108 includes four buttons surrounding a middle button. For clarity, cluster 108a is illustrated to resemble the buttons on a TV remote, for example, that controls a TV, VCR, DVD, or other device. However, those skilled in the art will appreciate that the controls in cluster 108a may be positioned or sized as desired.
- In this embodiment, a television or other display device may be used to display GUI 40 indicating selections relating to a nurse-call system, for example. The regions NE, NW, SE, and SW may be marked with informative labels, such as "TEMPERATURE," "TV," "LIGHTS," and "BED." Using the buttons in cluster 108a, the user could cycle to and autoclick on one or more of these four regions to access the sub-regions and/or deeper levels. The various levels could be configured to provide functionality specific to the particular selected region. For example, a user, upon selection of the "TEMPERATURE" region, might be presented with four sub-regions, each containing labels such as "TEMP COOLER," "TEMP WARMER," "FAN FASTER," and "FAN SLOWER." Selection of one of these four sub-regions would permit the user to control the temperature. Deeper levels might provide even more granular control. Likewise, the user could also control "TV" functionality (e.g., volume, channel selection), lights, and/or the positioning of a bed simply by actuating one or more of the buttons in cluster 108a. The middle button might permit direct selection of a region, sub-region, or other on-screen object, or might be preconfigured to automatically dial 9-1-1, a nurse, or other medical professional as needed.
- Of course, any of the devices shown in the figures may be used to generate user input signals to the interface of the present invention for use in other applications as well. In one embodiment, for example, the interface of the present invention overlays a television screen. The user might navigate the interface to select channels, enter data (e.g., for web-based television applications), or surf the Internet.
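The nurse-call hierarchy described above can be represented as nested menus that the user cycles through region by region. The "TEMPERATURE" sub-region labels follow the text, while the entries under "TV," "LIGHTS," and "BED" are assumed for illustration.

```python
# Region-to-sub-region menu for the nurse-call example. Only the
# TEMPERATURE entries come from the description; the rest are assumed.
MENU = {
    "TEMPERATURE": ["TEMP COOLER", "TEMP WARMER", "FAN FASTER", "FAN SLOWER"],
    "TV": ["VOLUME UP", "VOLUME DOWN", "CHANNEL UP", "CHANNEL DOWN"],
    "LIGHTS": ["LIGHTS ON", "LIGHTS OFF"],
    "BED": ["HEAD UP", "HEAD DOWN"],
}

def sub_regions(region: str) -> list[str]:
    """Return the labels shown after the user autoclicks a region."""
    return MENU[region]
```

Deeper levels of granularity would simply nest further dictionaries beneath each sub-region label.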
- In another embodiment of the present invention,
controller 28 may generate input signals responsive to the user's thoughts. This is known as neuro-navigation. In this embodiment, a small chip comprising circuitry, such as the BRAINGATE chip manufactured by CYBERKINETICS INC., or a web implant system, such as the system manufactured by NEUROLINK, may be implanted in a user's body. Using these chips and/or systems, or others like them, the user might simply "think" commands that are then transmitted from the implant to computing device 24. Controller 28 may translate these thought commands into movements and actions on the interface of the present invention. For example, a user may cycle around GUI 40 simply by thinking "cycle clockwise" or "cycle counterclockwise." The user may select a region, sub-region, or desktop-level object by thinking the command "autoclick," for example, or by pausing on a selected region, sub-region, or desktop-level control without thinking any command for a predetermined period.
- These implantable neural interfaces, and others like them, could provide reliable and fast output signals to
computing device 24 operating according to the present invention. However, the present invention is in no way limited merely to use in navigating the Internet, creating and editing documents, or other tasks typically performed using personal computing devices. As research continues and these devices and systems mature, other commands and methods of controlling the interface of the present invention will become possible. For example, further developments could generate signals that allow users to control other types of devices, such as environmental controls, medical devices designed to power users' own limbs, and robotic equipment such as wheelchairs. The present invention would allow users to control and/or operate these types of devices according to the generated input signals.
- In another embodiment, surgeons or other medical care professionals could utilize the present invention to select and/or manipulate medical instruments to perform remote surgery. In this embodiment,
GUI 40 may display one or more medical instruments that the surgeon would cycle to and autoclick to select. Progressively deeper levels might display predetermined actions the surgeon might perform with the selected instrument. Alternatively, the surgeon could use his or her hands to control an existing device to perform the surgery from a remote location, while using the interface of the present invention (via, for example, hands-free device 12) to control one or more on-site cameras to provide various angles of video feedback, or to access medical information or assistance. Those skilled in the art would easily be able to imagine many other such embodiments.
- Irrespective of the type of user input device or implant, however, the interface of the present invention permits the user to cycle and autoclick. Additionally, the interface of the present invention is useful for both physically-challenged and non-physically-challenged persons. The user is always informed of the results of the input signals via on-screen and/or auditory feedback through speaker 110. Further, the present invention does not require that a particular user input device transmit input signals via
cable 22, but rather also contemplates the use of a wireless interface, such as the one shown in FIG. 13. In this embodiment, input device 12 comprises a wireless transceiver 112, such as a BLUETOOTH or infrared transceiver, that communicates with a corresponding wireless port 32 on computing device 24.
- The present invention may, of course, be carried out in other specific ways than those herein set forth without departing from the spirit and essential characteristics of the invention. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
Claims (63)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US10/855,812 (US7624355B2) | 2004-05-27 | 2004-05-27 | System and method for controlling a user interface |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| US20050268247A1 | 2005-12-01 |
| US7624355B2 | 2009-11-24 |
Family
ID=35426865
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US10/855,812 (US7624355B2; Active, 2027-12-04) | 2004-05-27 | 2004-05-27 | System and method for controlling a user interface |
Country Status (1)
| Country | Link |
| --- | --- |
| US | US7624355B2 (en) |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5151951A (en) * | 1990-03-15 | 1992-09-29 | Sharp Kabushiki Kaisha | Character recognition device which divides a single character region into subregions to obtain a character code |
US5271068A (en) * | 1990-03-15 | 1993-12-14 | Sharp Kabushiki Kaisha | Character recognition device which divides a single character region into subregions to obtain a character code |
US5603065A (en) * | 1994-02-28 | 1997-02-11 | Baneth; Robin C. | Hands-free input device for operating a computer having mouthpiece with plurality of cells and a transducer for converting sound into electrical control signals |
US5818423A (en) * | 1995-04-11 | 1998-10-06 | Dragon Systems, Inc. | Voice controlled cursor movement |
US6005549A (en) * | 1995-07-24 | 1999-12-21 | Forest; Donald K. | User interface method and apparatus |
US6016478A (en) * | 1996-08-13 | 2000-01-18 | Starfish Software, Inc. | Scheduling system with methods for peer-to-peer scheduling of remote users |
US6075534A (en) * | 1998-03-26 | 2000-06-13 | International Business Machines Corporation | Multiple function graphical user interface minibar for speech recognition |
US6078308A (en) * | 1995-12-13 | 2000-06-20 | Immersion Corporation | Graphical click surfaces for force feedback applications to provide user selection using cursor interaction with a trigger position within a boundary of a graphical object |
US6211856B1 (en) * | 1998-04-17 | 2001-04-03 | Sung M. Choi | Graphical user interface touch screen with an auto zoom feature |
US6219032B1 (en) * | 1995-12-01 | 2001-04-17 | Immersion Corporation | Method for providing force feedback to a user of an interface device based on interactions of a controlled cursor with graphical elements in a graphical user interface |
US6424357B1 (en) * | 1999-03-05 | 2002-07-23 | Touch Controls, Inc. | Voice input system and method of using same |
US6433775B1 (en) * | 1999-03-25 | 2002-08-13 | Monkeymedia, Inc. | Virtual force feedback interface |
US6434547B1 (en) * | 1999-10-28 | 2002-08-13 | Qenm.Com | Data capture and verification system |
US20020126153A1 (en) * | 2000-03-13 | 2002-09-12 | Withers James G. | Apparatus and method for navigating electronic files using an array display |
US6499015B2 (en) * | 1999-08-12 | 2002-12-24 | International Business Machines Corporation | Voice interaction method for a computer graphical user interface |
US6526381B1 (en) * | 1999-09-30 | 2003-02-25 | Intel Corporation | Remote control with speech recognition |
US6885363B2 (en) * | 2002-05-09 | 2005-04-26 | Gateway, Inc. | Pointing device dwell time |
US7125228B2 (en) * | 2000-08-14 | 2006-10-24 | Black & Decker Inc. | Pressure washer having oilless high pressure pump |
US7356775B2 (en) * | 2004-10-22 | 2008-04-08 | Nds Limited | Focus priority in window management |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6526318B1 (en) | 2000-06-16 | 2003-02-25 | Mehdi M. Ansarinia | Stimulation method for the sphenopalatine ganglia, sphenopalatine nerve, or vidian nerve for treatment of medical conditions |
- 2004-05-27 US US10/855,812 patent/US7624355B2/en active Active
Cited By (188)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110010112A1 (en) * | 1999-02-12 | 2011-01-13 | Pierre Bonnat | Method and System for Controlling a User Interface of a Device Using Human Breath |
US7739061B2 (en) | 1999-02-12 | 2010-06-15 | Pierre Bonnat | Method and system for controlling a user interface of a device using human breath |
US20090249202A1 (en) * | 2000-02-14 | 2009-10-01 | Pierre Bonnat | Method and System for Processing Signals that Control a Device Using Human Breath |
US20110178613A9 (en) * | 2000-02-14 | 2011-07-21 | Pierre Bonnat | Method And System For Processing Signals For A MEMS Detector That Enables Control Of A Device Using Human Breath |
US20110137433A1 (en) * | 2000-02-14 | 2011-06-09 | Pierre Bonnat | Method And System For Processing Signals For A MEMS Detector That Enables Control Of A Device Using Human Breath |
US10216259B2 (en) | 2000-02-14 | 2019-02-26 | Pierre Bonnat | Method and system for processing signals that control a device using human breath |
US20090082884A1 (en) * | 2000-02-14 | 2009-03-26 | Pierre Bonnat | Method And System For Processing Signals For A MEMS Detector That Enables Control Of A Device Using Human Breath |
US20070130522A1 (en) * | 2003-10-01 | 2007-06-07 | Sunrise Medical Hhg Inc. | Control system with customizable menu structure for personal mobility vehicle |
US7310776B2 (en) * | 2003-10-01 | 2007-12-18 | Sunrise Medical Hhg Inc. | Control system with customizable menu structure for personal mobility vehicle |
US10282080B2 (en) | 2005-02-18 | 2019-05-07 | Apple Inc. | Single-handed approach for navigation of application tiles using panning and zooming |
US8819569B2 (en) * | 2005-02-18 | 2014-08-26 | Zumobi, Inc. | Single-handed approach for navigation of application tiles using panning and zooming |
US9411505B2 (en) | 2005-02-18 | 2016-08-09 | Apple Inc. | Single-handed approach for navigation of application tiles using panning and zooming |
US20060190833A1 (en) * | 2005-02-18 | 2006-08-24 | Microsoft Corporation | Single-handed approach for navigation of application tiles using panning and zooming |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9128922B2 (en) * | 2006-04-05 | 2015-09-08 | Blackberry Limited | Handheld electronic device and method for performing optimized spell checking during text entry by providing a sequentially ordered series of spell-check algorithms |
US20130145261A1 (en) * | 2006-04-05 | 2013-06-06 | Research In Motion Limited | Handheld electronic device and method for performing optimized spell checking during text entry by providing a sequentially ordered series of spell-check algorithms |
US20080068382A1 (en) * | 2006-09-15 | 2008-03-20 | International Business Machines Corporation | Process data presentation based on process regions |
US10346010B2 (en) | 2006-09-15 | 2019-07-09 | International Business Machines Corporation | Process data presentation based on process regions |
US8947439B2 (en) * | 2006-09-15 | 2015-02-03 | International Business Machines Corporation | Process data presentation based on process regions |
US9575648B2 (en) | 2007-01-07 | 2017-02-21 | Apple Inc. | Application programming interfaces for gesture operations |
US9037995B2 (en) | 2007-01-07 | 2015-05-19 | Apple Inc. | Application programming interfaces for scrolling operations |
US8429557B2 (en) | 2007-01-07 | 2013-04-23 | Apple Inc. | Application programming interfaces for scrolling operations |
US10817162B2 (en) | 2007-01-07 | 2020-10-27 | Apple Inc. | Application programming interfaces for scrolling operations |
US10175876B2 (en) | 2007-01-07 | 2019-01-08 | Apple Inc. | Application programming interfaces for gesture operations |
US10963142B2 (en) | 2007-01-07 | 2021-03-30 | Apple Inc. | Application programming interfaces for scrolling |
US9760272B2 (en) | 2007-01-07 | 2017-09-12 | Apple Inc. | Application programming interfaces for scrolling operations |
US10481785B2 (en) | 2007-01-07 | 2019-11-19 | Apple Inc. | Application programming interfaces for scrolling operations |
US8661363B2 (en) | 2007-01-07 | 2014-02-25 | Apple Inc. | Application programming interfaces for scrolling operations |
US9665265B2 (en) | 2007-01-07 | 2017-05-30 | Apple Inc. | Application programming interfaces for gesture operations |
US9639260B2 (en) | 2007-01-07 | 2017-05-02 | Apple Inc. | Application programming interfaces for gesture operations |
US11449217B2 (en) | 2007-01-07 | 2022-09-20 | Apple Inc. | Application programming interfaces for gesture operations |
US9529519B2 (en) | 2007-01-07 | 2016-12-27 | Apple Inc. | Application programming interfaces for gesture operations |
US10613741B2 (en) | 2007-01-07 | 2020-04-07 | Apple Inc. | Application programming interface for gesture operations |
US9448712B2 (en) | 2007-01-07 | 2016-09-20 | Apple Inc. | Application programming interfaces for scrolling operations |
US9411903B2 (en) * | 2007-03-05 | 2016-08-09 | Oracle International Corporation | Generalized faceted browser decision support tool |
US10360504B2 (en) | 2007-03-05 | 2019-07-23 | Oracle International Corporation | Generalized faceted browser decision support tool |
US9495144B2 (en) | 2007-03-23 | 2016-11-15 | Apple Inc. | Systems and methods for controlling application updates across a wireless interface |
US8417509B2 (en) * | 2007-06-12 | 2013-04-09 | At&T Intellectual Property I, L.P. | Natural language interface customization |
US9239660B2 (en) * | 2007-06-12 | 2016-01-19 | At&T Intellectual Property I, L.P. | Natural language interface customization |
US20130263010A1 (en) * | 2007-06-12 | 2013-10-03 | At&T Intellectual Property I, L.P. | Natural language interface customization |
US20080312903A1 (en) * | 2007-06-12 | 2008-12-18 | At & T Knowledge Ventures, L.P. | Natural language interface customization |
US8504349B2 (en) * | 2007-06-18 | 2013-08-06 | Microsoft Corporation | Text prediction with partial selection in a variety of domains |
US20080310723A1 (en) * | 2007-06-18 | 2008-12-18 | Microsoft Corporation | Text prediction with partial selection in a variety of domains |
US20080320418A1 (en) * | 2007-06-21 | 2008-12-25 | Cadexterity, Inc. | Graphical User Friendly Interface Keypad System For CAD |
US20090157873A1 (en) * | 2007-10-18 | 2009-06-18 | Anthony Kilcoyne | Verifiable online usage monitoring |
US8762516B2 (en) * | 2007-10-18 | 2014-06-24 | 4Everlearning Holdings Ltd. | Verifiable online usage monitoring |
US20090132917A1 (en) * | 2007-11-19 | 2009-05-21 | Landry Robin J | Methods and systems for generating a visual user interface |
US8839123B2 (en) * | 2007-11-19 | 2014-09-16 | Red Hat, Inc. | Generating a visual user interface |
US9129244B2 (en) * | 2007-11-27 | 2015-09-08 | International Business Machines Corporation | Linked decision nodes in a business process model |
US20130179364A1 (en) * | 2007-11-27 | 2013-07-11 | International Business Machines Corporation | Linked decision nodes in a business process model |
US8412548B2 (en) * | 2007-11-27 | 2013-04-02 | International Business Machines Corporation | Linked decision nodes in a business process model |
US20090138248A1 (en) * | 2007-11-27 | 2009-05-28 | International Business Machines Corporation | Linked decision nodes in a business process model |
US9690481B2 (en) | 2008-03-04 | 2017-06-27 | Apple Inc. | Touch event model |
US8836652B2 (en) | 2008-03-04 | 2014-09-16 | Apple Inc. | Touch event model programming interface |
US9389712B2 (en) | 2008-03-04 | 2016-07-12 | Apple Inc. | Touch event model |
US9323335B2 (en) | 2008-03-04 | 2016-04-26 | Apple Inc. | Touch event model programming interface |
US8717305B2 (en) | 2008-03-04 | 2014-05-06 | Apple Inc. | Touch event model for web pages |
US8174502B2 (en) | 2008-03-04 | 2012-05-08 | Apple Inc. | Touch event processing for web pages |
US8723822B2 (en) | 2008-03-04 | 2014-05-13 | Apple Inc. | Touch event model programming interface |
US9720594B2 (en) | 2008-03-04 | 2017-08-01 | Apple Inc. | Touch event model |
US10521109B2 (en) | 2008-03-04 | 2019-12-31 | Apple Inc. | Touch event model |
US9971502B2 (en) | 2008-03-04 | 2018-05-15 | Apple Inc. | Touch event model |
US8416196B2 (en) | 2008-03-04 | 2013-04-09 | Apple Inc. | Touch event model programming interface |
US9798459B2 (en) | 2008-03-04 | 2017-10-24 | Apple Inc. | Touch event model for web pages |
US8560975B2 (en) | 2008-03-04 | 2013-10-15 | Apple Inc. | Touch event model |
US8411061B2 (en) | 2008-03-04 | 2013-04-02 | Apple Inc. | Touch event processing for documents |
US10936190B2 (en) | 2008-03-04 | 2021-03-02 | Apple Inc. | Devices, methods, and user interfaces for processing touch events |
US11740725B2 (en) | 2008-03-04 | 2023-08-29 | Apple Inc. | Devices, methods, and user interfaces for processing touch events |
US8645827B2 (en) | 2008-03-04 | 2014-02-04 | Apple Inc. | Touch event model |
US8976046B2 (en) | 2008-03-26 | 2015-03-10 | Pierre Bonnat | Method and system for a MEMS detector that enables control of a device using human breath |
US9116544B2 (en) | 2008-03-26 | 2015-08-25 | Pierre Bonnat | Method and system for interfacing with an electronic device via respiratory and/or tactual input |
CN102099922A (en) * | 2008-03-26 | 2011-06-15 | 皮埃尔·邦纳特 | Method and system for processing signals for a MEMS detector that enables control of a device using human breath |
WO2009120865A3 (en) * | 2008-03-26 | 2009-12-23 | Inputive Corporation | Method and system for processing signals for a MEMS detector that enables control of a device using human breath |
US8701015B2 (en) | 2008-03-26 | 2014-04-15 | Pierre Bonnat | Method and system for providing a user interface that enables control of a device via respiratory and/or tactual input |
US20090244003A1 (en) * | 2008-03-26 | 2009-10-01 | Pierre Bonnat | Method and system for interfacing with an electronic device via respiratory and/or tactual input |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US20110078611A1 (en) * | 2008-05-22 | 2011-03-31 | Marco Caligari | Method and apparatus for the access to communication and/or to writing using a dedicated interface and a scanning control with advanced visual feedback |
US20100199215A1 (en) * | 2009-02-05 | 2010-08-05 | Eric Taylor Seymour | Method of presenting a web page for accessibility browsing |
US9489131B2 (en) * | 2009-02-05 | 2016-11-08 | Apple Inc. | Method of presenting a web page for accessibility browsing |
US20110179386A1 (en) * | 2009-03-16 | 2011-07-21 | Shaffer Joshua L | Event Recognition |
US10719225B2 (en) | 2009-03-16 | 2020-07-21 | Apple Inc. | Event recognition |
US9965177B2 (en) | 2009-03-16 | 2018-05-08 | Apple Inc. | Event recognition |
US9483121B2 (en) | 2009-03-16 | 2016-11-01 | Apple Inc. | Event recognition |
US9311112B2 (en) | 2009-03-16 | 2016-04-12 | Apple Inc. | Event recognition |
US8682602B2 (en) | 2009-03-16 | 2014-03-25 | Apple Inc. | Event recognition |
US8285499B2 (en) | 2009-03-16 | 2012-10-09 | Apple Inc. | Event recognition |
US11755196B2 (en) | 2009-03-16 | 2023-09-12 | Apple Inc. | Event recognition |
US11163440B2 (en) | 2009-03-16 | 2021-11-02 | Apple Inc. | Event recognition |
US8428893B2 (en) | 2009-03-16 | 2013-04-23 | Apple Inc. | Event recognition |
US9285908B2 (en) | 2009-03-16 | 2016-03-15 | Apple Inc. | Event recognition |
US8566044B2 (en) | 2009-03-16 | 2013-10-22 | Apple Inc. | Event recognition |
US8566045B2 (en) | 2009-03-16 | 2013-10-22 | Apple Inc. | Event recognition |
US20100280983A1 (en) * | 2009-04-30 | 2010-11-04 | Samsung Electronics Co., Ltd. | Apparatus and method for predicting user's intention based on multimodal information |
US8606735B2 (en) * | 2009-04-30 | 2013-12-10 | Samsung Electronics Co., Ltd. | Apparatus and method for predicting user's intention based on multimodal information |
US20100302242A1 (en) * | 2009-05-29 | 2010-12-02 | Siemens Product Lifecycle Management Software Inc. | System and method for selectable display in object models |
US8421800B2 (en) | 2009-05-29 | 2013-04-16 | Siemens Product Lifecycle Management Software Inc. | System and method for selectable display in object models |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US20110063225A1 (en) * | 2009-08-28 | 2011-03-17 | Louis Michon | User Interface for Handheld Electronic Devices |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US9684521B2 (en) | 2010-01-26 | 2017-06-20 | Apple Inc. | Systems having discrete and continuous gesture recognizers |
US10732997B2 (en) | 2010-01-26 | 2020-08-04 | Apple Inc. | Gesture recognizers with delegates for controlling and modifying gesture recognition |
EP2360563A1 (en) * | 2010-02-15 | 2011-08-24 | Research In Motion Limited | Prominent selection cues for icons |
US20110201388A1 (en) * | 2010-02-15 | 2011-08-18 | Research In Motion Limited | Prominent selection cues for icons |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
WO2011159531A3 (en) * | 2010-06-14 | 2012-02-09 | Apple Inc. | Control selection approximation |
US8552999B2 (en) | 2010-06-14 | 2013-10-08 | Apple Inc. | Control selection approximation |
US10216408B2 (en) | 2010-06-14 | 2019-02-26 | Apple Inc. | Devices and methods for identifying user interface objects based on view hierarchy |
US8838453B2 (en) * | 2010-08-31 | 2014-09-16 | Red Hat, Inc. | Interactive input method |
US20120053926A1 (en) * | 2010-08-31 | 2012-03-01 | Red Hat, Inc. | Interactive input method |
US20130263048A1 (en) * | 2010-12-15 | 2013-10-03 | Samsung Electronics Co., Ltd. | Display control apparatus, program and display control method |
US20120240069A1 (en) * | 2011-03-16 | 2012-09-20 | Honeywell International Inc. | Method for enlarging characters displayed on an adaptive touch screen key pad |
US8719724B2 (en) * | 2011-03-16 | 2014-05-06 | Honeywell International Inc. | Method for enlarging characters displayed on an adaptive touch screen key pad |
US9298363B2 (en) | 2011-04-11 | 2016-03-29 | Apple Inc. | Region activation for touch sensitive surface |
WO2012144910A1 (en) * | 2011-04-21 | 2012-10-26 | Opera Software Asa | Method and device for providing easy access in a user agent to data resources related to client-side web applications |
US20130031497A1 (en) * | 2011-07-29 | 2013-01-31 | Nokia Corporation | Method and apparatus for enabling multi-parameter discovery and input |
US20130080945A1 (en) * | 2011-09-27 | 2013-03-28 | Paul Reeves | Reconfigurable user interface elements |
US10997267B2 (en) * | 2012-02-29 | 2021-05-04 | Ebay Inc. | Systems and methods for providing a user interface with grid view |
US11409833B2 (en) * | 2012-02-29 | 2022-08-09 | Ebay Inc. | Systems and methods for providing a user interface with grid view |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9733716B2 (en) | 2013-06-09 | 2017-08-15 | Apple Inc. | Proxy gesture recognizer |
US11429190B2 (en) | 2013-06-09 | 2022-08-30 | Apple Inc. | Proxy gesture recognizer |
WO2015106013A3 (en) * | 2014-01-09 | 2015-11-12 | AI Squared | Transforming a user interface icon into an enlarged view |
US20150347382A1 (en) * | 2014-05-30 | 2015-12-03 | Apple Inc. | Predictive text input |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9760559B2 (en) * | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
CN105183469B (en) * | 2015-09-01 | 2019-01-01 | 深圳Tcl数字技术有限公司 | Focus positioning method and apparatus during application switching |
CN105183469A (en) * | 2015-09-01 | 2015-12-23 | 深圳Tcl数字技术有限公司 | Method and apparatus for focus location during application switching |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10817102B2 (en) * | 2016-06-21 | 2020-10-27 | Intel Corporation | Input device for electronic devices |
US20190102029A1 (en) * | 2016-06-21 | 2019-04-04 | Intel Corporation | Input device for electronic devices |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US10956026B2 (en) | 2017-06-27 | 2021-03-23 | International Business Machines Corporation | Smart element filtering method via gestures |
US10521106B2 (en) * | 2017-06-27 | 2019-12-31 | International Business Machines Corporation | Smart element filtering method via gestures |
US20180373421A1 (en) * | 2017-06-27 | 2018-12-27 | International Business Machines Corporation | Smart element filtering method via gestures |
US11474660B2 (en) * | 2017-08-11 | 2022-10-18 | Autodesk, Inc. | Techniques for transitioning from a first navigation scheme to a second navigation scheme |
US11010014B2 (en) | 2017-08-11 | 2021-05-18 | Autodesk, Inc. | Techniques for transitioning from a first navigation scheme to a second navigation scheme |
US20190050133A1 (en) * | 2017-08-11 | 2019-02-14 | Autodesk, Inc. | Techniques for transitioning from a first navigation scheme to a second navigation scheme |
US11320985B2 (en) * | 2018-02-08 | 2022-05-03 | Rakuten Group, Inc. | Selection device, selection method, program, and non-transitory computer-readable information recording medium |
US20200218441A1 (en) * | 2018-02-08 | 2020-07-09 | Rakuten, Inc. | Selection device, selection method, program, and non-transitory computer-readable information recording medium |
CN112997142A (en) * | 2018-12-25 | 2021-06-18 | 佛吉亚歌乐电子有限公司 | Display control device and display control method |
US11620042B2 (en) | 2019-04-15 | 2023-04-04 | Apple Inc. | Accelerated scrolling and selection |
Also Published As
Publication number | Publication date |
---|---|
US7624355B2 (en) | 2009-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7624355B2 (en) | System and method for controlling a user interface | |
US8125440B2 (en) | Method and device for controlling and inputting data | |
Majaranta et al. | Twenty years of eye typing: systems and design issues | |
MacLean | Designing with haptic feedback | |
Yfantidis et al. | Adaptive blind interaction technique for touchscreens | |
Majaranta et al. | Text entry by gaze: Utilizing eye-tracking | |
US20110209087A1 (en) | Method and device for controlling an inputting data | |
US8013837B1 (en) | Process and apparatus for providing a one-dimensional computer input interface allowing movement in one or two directions to conduct pointer operations usually performed with a mouse and character input usually performed with a keyboard | |
US20060125659A1 (en) | Text input method and apparatus using bio-signals | |
Hinckley et al. | Input/Output Devices and Interaction Techniques. | |
US5603065A (en) | Hands-free input device for operating a computer having mouthpiece with plurality of cells and a transducer for converting sound into electrical control signals | |
KR20130088752A (en) | Multidirectional button, key, and keyboard | |
Caltenco et al. | TongueWise: Tongue-computer interface software for people with tetraplegia | |
Majaranta | Text entry by eye gaze | |
US20050156895A1 (en) | Portable put-on keyboard glove | |
Simpson | Computer Access for People with Disabilities: A Human Factors Approach | |
Rajanna et al. | PressTapFlick: Exploring a gaze and foot-based multimodal approach to gaze typing | |
CN106371756A (en) | Input system and input method | |
JP5205360B2 (en) | Input information determination apparatus, input information determination method, input information determination program, and recording medium recording the input information determination program | |
Azmi et al. | The Wiimote with SAPI: Creating an accessible low-cost, human computer interface for the physically disabled | |
Goodenough-Trepagnier et al. | Towards a method for computer interface design using speech recognition | |
DV et al. | Eye gaze controlled adaptive virtual keyboard for users with SSMI | |
EP2447808B1 (en) | Apparatus for operating a computer using thoughts or facial impressions | |
Pokhariya et al. | Navigo--Accessibility Solutions for Cerebral Palsy Affected | |
Istance | An investigation into gaze-based interaction techniques for people with motor impairments. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: III HOLDINGS 1, LLC, DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BANETH, ROBIN C.;REEL/FRAME:032801/0916 Effective date: 20140111 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |