US20030023348A1 - Robot apparatus and motion control method


Info

Publication number
US20030023348A1
Authority
US
United States
Prior art keywords
motion
information
model
robot apparatus
feeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/196,683
Inventor
Makoto Inoue
Taku Yokoyama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Priority to US10/196,683
Publication of US20030023348A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/008 Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour

Definitions

  • the present invention relates to a robot apparatus and a motion control method and is preferably applied to a robot apparatus which behaves like a quadruped.
  • a so-called pet robot, i.e., a robot apparatus having a shape similar to a quadruped such as a dog, takes a position of “Down” when it receives a command of “Down”, or always gives a hand whenever a user puts a hand before its mouth.
  • robot apparatuses reach an aimed position or motion through a predetermined position or motion.
  • motion expression of robot apparatuses becomes richer if a plurality of positions or motions are prepared to be carried out during transition to an aimed position or motion.
  • a position or motion through which the robot apparatus passes is desirably selected optimally for the transition to the aimed position or motion.
  • the present invention has been made in view of the above situation, and has an object of proposing a robot apparatus which autonomously makes natural motions and a motion control method thereof.
  • the present invention has been made in view of the above situation, and has an object of providing a robot apparatus whose positions and motions are optimized during transition, and a motion control method thereof.
  • a robot apparatus makes a motion corresponding to supplied input information, and comprises model change means including a model for causing the motion, for determining the motion by changing the model, based on the input information.
  • the robot apparatus having this structure has a model which causes a motion, and changes the model based on input information thereby to determine a motion. Therefore, if the model is a feeling model or an instinct model, the robot apparatus autonomously acts based on states of the feeling and the instinct of the robot apparatus.
  • a motion control method is to make a motion in accordance with supplied input information, and the motion is determined by changing a model which causes the motion, based on the input information.
  • a motion is determined by changing the model, based on input information. If the model is a feeling model or an instinct model, the robot apparatus can act autonomously based on its own feeling or instinct.
  • Another robot apparatus makes a motion in accordance with supplied input information, and comprises motion determination means for determining a next operation subsequent to a current motion, based on the current motion and the input information supplied next, said current motion corresponding to a history of input information supplied sequentially.
  • the robot apparatus having this structure determines a next motion subsequent to a current motion, based on the current motion corresponding to a history of input information supplied sequentially, and input information to be supplied next. Therefore, the robot apparatus autonomously acts based on the states of its own feeling or instinct.
  • Another motion control method is to make a motion in accordance with supplied input information, and a next motion subsequent to a current motion is determined, based on the current motion and the input information to be supplied next, the current motion corresponding to a history of input information supplied sequentially.
  • a next motion subsequent to a current motion is determined, based on the current motion corresponding to the history of input information sequentially supplied, and input information to be supplied next. Therefore, the robot apparatus can act autonomously, based on the state of its own feeling or instinct.
  • another robot apparatus comprises: graph storage means for storing a graph which registers the positions and the motion and which is constructed by connecting the positions with the motion for letting the positions transit; and control means for searching a route from a current position to an aimed position or motion, on the graph, based on the action command information, and for letting the robot apparatus move, based on a search result, thereby to let the robot apparatus transit from the current position to the aimed position or motion.
  • the robot apparatus having this structure transits to an aimed position or motion instructed by the action command information, based on a graph which registers positions and motions stored in graph storage means and which is constructed by connecting the positions with motions for letting the positions transit. Specifically, the robot apparatus searches a route from the current position to an aimed position or motion, on the graph, based on action command information. Based on the search result, a motion is made to transit to an aimed position or motion from the current position.
  • a route from a current position to an aimed position or motion is searched on a graph which registers positions and motions and which is constructed by connecting the positions with motions for letting the positions transit, and a motion is made, based on a search result, thereby to make transit from the current position to the aimed position or motion.
  • FIG. 1 is a perspective view showing an embodiment of a robot apparatus according to the present invention.
  • FIG. 2 is a block diagram showing a circuit diagram of the robot apparatus.
  • FIG. 3 is a schematic diagram showing data processing in a controller.
  • FIG. 4 is a schematic diagram showing data processing by a feeling/instinct model part.
  • FIG. 5 is a schematic diagram showing data processing by the feeling/instinct model part.
  • FIG. 6 is a schematic diagram showing data processing by the feeling/instinct model part.
  • FIG. 7 is a view showing transit of states according to finite automaton in an action determination mechanism part.
  • FIG. 8 is a block diagram showing a structure of an action determination mechanism part and the like which are used for explaining generation of action command information.
  • FIG. 9 is a view used for explaining a case where a state is determined by probability.
  • FIG. 10 shows a table describing the relationships between transit probabilities and the states to which positions transit.
  • FIG. 11 shows a graph of position transit in the position transit mechanism part.
  • FIG. 12 shows a specific example of a graph of position transit.
  • FIG. 13 is a view showing transit of states, used for explaining that a neutral position is provided to which the current position is let transit when the current position cannot be grasped, and that recovery from a stumble is enabled.
  • FIG. 14 is a view used for explaining a route search with the distance used as an index.
  • FIG. 15 is a view used for explaining a case of making a route search by classification.
  • FIG. 16 is a view used for explaining a case where a route search is made by classification.
  • FIG. 17 is a top view of a robot apparatus, used for explaining a case where a walking direction is set as a parameter.
  • FIG. 18 shows a table showing parameters and contents of motions.
  • FIG. 19 is a view used for explaining a case where another motion is synchronized with a motion during transit between positions.
  • FIG. 20 is a view used for explaining a case where similar motions are executed at different positions.
  • FIG. 21 is a perspective view showing a robot apparatus.
  • FIG. 22A and FIG. 22B are views used for explaining a case where positions are let transit between the entire apparatus and its parts, with a basic position inserted therebetween.
  • FIG. 23 is a view used for explaining a case of executing an aimed motion after the current position is once let transit to a basic position when the current position relates to the entire apparatus and the aimed motion relates to a part.
  • FIG. 24 is a view used for explaining a case of executing an aimed motion after the current position is once let transit to a basic position when the aimed motion relates to the entire apparatus.
  • FIGS. 25A and 25B are views used for explaining processing of inserting a command.
  • FIG. 26 is a view showing a command storage part which can store commands corresponding to the entire apparatus and componential parts.
  • FIG. 27 is a view used for explaining an example of processing form by a command storage part which can store commands corresponding to the entire apparatus and componential parts.
  • FIG. 28 is a block diagram showing a motion route search part which makes a route search.
  • FIG. 29 is a flowchart showing a series of processing until a motion is executed in accordance with a command.
  • FIG. 30 is a view used for explaining that the current position transits through a plurality of oriented arcs to an aimed motion, on the graph of a head part.
  • FIG. 31 is a view showing another example to which the present invention is applied, wherein a character which moves as a computer graphic is controlled.
  • FIG. 32 is a perspective view showing another embodiment of a robot apparatus according to the present invention.
  • the entire robot apparatus 1 is constructed as shown in FIG. 1 and comprises a head part 2 corresponding to a head, a body part 3 corresponding to the trunk, leg parts 4 A, 4 B, 4 C, and 4 D corresponding to legs, and a tail part 5 corresponding to a tail.
  • the robot apparatus 1 moves the head part 2 , leg parts 4 A to 4 D, and tail part 5 in relation to the body part 3 , thereby to move like an actual quadruped.
  • An image recognition part 10 equivalent to eyes and constructed, for example, by a CCD (Charge Coupled Device) camera for picking up images, a microphone 11 equivalent to ears for collecting voices, and a speaker 12 equivalent to a mouth for generating a voice are respectively equipped at predetermined positions of the head part 2 .
  • the head part 2 is equipped with a remote controller receiving part 13 for receiving commands transmitted through a remote controller (not shown) from a user, a touch sensor 14 for detecting contact of a hand of the user or the like, and a LED (Light Emitting Diode) 15 constructed by a light emitting means.
  • the body part 3 is equipped with a battery 21 at a position corresponding to its abdomen, and an electronic circuit (not shown) or the like is contained inside the body part 3 .
  • Joint parts of the leg parts 4 A to 4 D, connecting parts between the leg parts 4 A to 4 D and the body part 3 , a connecting part between the body part 3 and the head part 2 , and a connecting part between the body part 3 and the tail part 5 are connected by their own actuators 23 A to 23 N. These joint and connecting parts are driven on the basis of control by the electronic circuit contained in the body part 3 .
  • the actuators 23 A to 23 N are driven to shake and nod the head part 2 , wag the tail part 5 , and move the leg parts 4 A to 4 D to walk.
  • the robot apparatus 1 thus behaves like an actual quadruped.
  • the robot apparatus 1 constructed as described above has the following features which will be explained in details later.
  • when the robot apparatus 1 is instructed to change its position from one position (first position) to another position (second position), it does not change directly from the first position to the second position but transits through a prepared natural position.
  • when the robot apparatus reaches a given position during the position transition, a notification can be received.
  • the robot apparatus 1 is constructed by parts of a head, legs, and a tail, and can control independently the positions of these parts. For example, the positions of the head and legs can be controlled independently from each other. Also, the position of the entire apparatus including the head, legs, and tail can be managed separately from the respective parts.
  • the robot apparatus 1 has the above-described features. Much more features will now be explained below including those described above.
  • the circuit configuration of the robot apparatus 1 is as shown in FIG. 2, for example.
  • the head part 2 comprises a command receiving part 30 constructed by a microphone 11 and a remote controller receiving part 13 , an external sensor 31 constructed by an image recognition part 10 and a touch sensor 14 , a speaker 12 , and a LED 15 .
  • the body part 3 has a battery 21 and internally contains a controller 32 for controlling the operation of the entire robot apparatus 1 , as well as an internal sensor 35 constructed by a battery sensor 33 for detecting the residue of the battery 21 and a heat sensor 34 for detecting heat generated inside the robot apparatus 1 . Further, actuators 23 A to 23 N are provided at predetermined positions of the robot apparatus 1 , respectively.
  • the command receiving part 30 serves to receive commands such as “Walk”, “Down”, “Chase Ball”, and the like that are supplied to the robot apparatus 1 from the user.
  • This part 30 is constructed by a remote controller receiving part 13 and a microphone 11 .
  • the remote controller receiving part 13 receives a desired command inputted by a remote controller (not shown) operated by a user. For example, transmission of a command from the remote controller is achieved by infrared light.
  • the remote controller receiving part 13 receives the infrared light, generates a receive signal S 1 A, and sends it to the controller 32 .
  • the remote controller is not limited to the type of using infrared light but may be a type of supplying a command to the robot apparatus 1 by a tone scale.
  • the robot apparatus 1 is arranged to perform processing in correspondence with the tone scale from the remote controller inputted through the microphone 11 .
  • the microphone 11 collects the voice given by the user, generates a voice signal S 1 B, and sends it to the controller 32 .
  • the command receiving part 30 generates a command signal S 1 containing the receive signal S 1 A and the voice signal S 1 B in accordance with the command given to the robot apparatus 1 by the user.
  • the part 30 supplies this command signal to the controller 32 .
  • the touch sensor 14 of the external sensor 31 serves to detect action on the robot apparatus 1 from the user, such as “Rub”, “Hit”, and the like. For example, when a user touches the touch sensor 14 to make a desired action, a contact detection signal S 2 A corresponding to the action is generated and sent to the controller 32 .
  • the image recognition part 10 of the external sensor 31 recognizes the environment around the robot apparatus 1 .
  • the part 10 detects environmental information around the robot apparatus 1 , such as “Dark”, “Presence of a favorite toy”, or the like, or detects an action of another robot apparatus, such as “Another robot is running”, or the like.
  • This image recognition part 10 sends an image signal S 2 B obtained as a result of picking up an environmental image, to the controller 32 .
  • the external sensor 31 generates an external information signal S 2 containing a contact detection signal S 2 A and an image signal S 2 B, in accordance with external information thus supplied from outside of the robot apparatus 1 , and sends it to the controller 32 .
  • the internal sensor 35 serves to detect an inner state of the robot apparatus 1 itself, such as “Hungry” which means a low battery, “Fever”, or the like.
  • This sensor 35 is constructed by a battery sensor 33 and a heat sensor 34 .
  • the battery sensor 33 serves to detect the residue of the battery 21 which supplies the power to respective circuits of the robot apparatus 1 .
  • This battery sensor 33 sends a battery capacitance detection signal S 3 A as a result of detection, to the controller 32 .
  • the heat sensor 34 serves to detect a heat inside the robot apparatus 1 .
  • This heat sensor 34 sends a heat detection signal S 3 B as a result of detection to the controller 32 .
  • the internal sensor 35 thus generates an internal information signal S 3 containing a battery capacitance detection signal S 3 A and a heat detection signal S 3 B in accordance with internal information of the robot apparatus 1 , and sends it to the controller 32 .
  • based on the command signal S 1 supplied from the command receiving part 30 , the external information signal S 2 supplied from the external sensor 31 , and the internal information signal S 3 supplied from the internal sensor 35 , the controller 32 generates control signals S 5 A to S 5 N for driving the actuators 23 A to 23 N, respectively. The controller 32 sends these signals to the actuators, respectively, to drive them so that the robot apparatus 1 operates.
  • the controller 32 generates a voice signal S 10 and a light emitting signal S 11 to be outputted to the outside if necessary.
  • the voice signal S 10 is outputted to the outside through the speaker 12 and the light emitting signal S 11 is sent to the LED 15 to obtain a desired light emitting output (e.g., flicker, color change, or the like).
  • the robot apparatus 1 notifies the user of its own feeling. It is possible to provide an image display part for displaying an image, in place of the LED 15 . Information necessary for a user, such as feeling or the like, can then be notified by displaying a desired image.
  • the controller 32 Based on programs previously stored in a predetermined storage area, the controller 32 performs software processing on the command signal S 1 supplied from the command receiving part 30 , the external information signal S 2 supplied from the external sensor 31 , and the internal information signal S 3 supplied from the internal sensor 35 .
  • the controller 32 supplies a control signal S 5 obtained as a result of the software processing to the actuators 23 .
  • the operations of the actuators 23 at this time are expressed as the motion of the robot apparatus 1 .
  • the present invention has as its object to enrich such expressions.
  • the contents of data processing performed by the controller 32 can be functionally sectioned into a feeling/instinct model part 40 as a feeling/instinct model change means, an action determination mechanism part 41 as an action determination means, a position transit mechanism part 42 as a position transit means, and a control mechanism part 43 .
  • the command signal S 1 , the external information signal S 2 , and internal information signal S 3 supplied from the outside are inputted to the feeling/instinct model part 40 and the action determination mechanism part 41 .
  • the controller 32 schematically functions as follows.
  • the feeling/instinct model part 40 determines the states of the feeling and instinct, based on the command signal S 1 , the external information signal S 2 , and the internal information signal S 3 . Further, the action determination mechanism part 41 determines a next motion (action), based on feeling/instinct state information S 10 obtained by the feeling/instinct model part 40 in addition to the command signal S 1 , the external information signal S 2 , and the internal information signal S 3 .
  • the position transit mechanism part 42 in a rear stage prepares a position transit plan to transit to a next motion (action) determined by the action determination mechanism part 41 .
  • the feeling/instinct model part 40 determines the states of the feeling and instinct, referring to the motion (action) determined. That is, the feeling/instinct model part 40 determines the instinct and feeling, referring also to a motion (action) result.
  • the control mechanism part 43 controls respective operation parts, based on position transit information S 18 supplied from the position transit mechanism part 42 on the basis of the position transit plan. After the position is actually transited, a next motion (action) determined by the action determination mechanism part 41 is actually carried out.
  • the robot apparatus 1 determines a next motion (action) based on the feeling/instinct by means of the above-described mechanisms of the controller 32 , prepares a transit plan until the motion (action) is executed, lets the position transit based on the transit plan, and actually executes the motion (action) based on the feeling/instinct.
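  • this overall flow can be summarized in a short sketch. The following Python fragment is a minimal, assumed rendering of the controller loop; the class and method names (Controller, update, decide, plan, execute) are illustrative and do not appear in the patent.

      # Minimal sketch of the controller data flow (parts 40 to 43).
      # All identifiers are illustrative assumptions.
      class Controller:
          def __init__(self, model, determiner, transit, control):
              self.model = model            # feeling/instinct model part 40
              self.determiner = determiner  # action determination mechanism part 41
              self.transit = transit        # position transit mechanism part 42
              self.control = control        # control mechanism part 43

          def step(self, s1, s2, s3):
              # Determine the feeling/instinct state from the three inputs.
              s10 = self.model.update(s1, s2, s3)
              # Determine the next action from the inputs plus that state.
              s16, s12 = self.determiner.decide(s1, s2, s3, s10)
              # The model also refers back to the determined action (S12).
              self.model.observe_action(s12)
              # Plan the chain of position transitions to reach the action,
              # then drive the operating parts along the plan.
              for s18 in self.transit.plan(s16):
                  self.control.execute(s18)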
  • the above-described respective structural parts of the controller 32 will be explained.
  • the feeling/instinct model part 40 can be roughly sectioned into an emotion group 50 which constructs a feeling model, and a desire group 51 which constructs an instinct model prepared as a model having a property different from the feeling model.
  • the feeling model is constructed by a feeling parameter of a certain value and is a model for expressing a feeling defined for the robot apparatus through a motion corresponding to the value of the feeling parameter.
  • the value of the feeling parameter increases or decreases mainly based on an external input signal (external factor) which expresses “being hit” or “being scolded” and which is detected by a pressure sensor, a visual sensor, or the like.
  • the feeling parameter changes based on an internal input signal such as a battery residue, a body temperature, or the like, in some cases.
  • the instinct model is constructed by an instinct parameter having a certain value and is a model for expressing an instinct (desire) defined for the robot apparatus, through a motion corresponding to the value of the instinct parameter.
  • the value of the instinct parameter increases and decreases based on an internal input signal which expresses “a desire for exercise” based on its action history or “a desire for electric charge (hungry)” based on the battery residue.
  • the instinct parameter may change based on an external input signal (external factor), like the feeling parameter.
  • Each of the feeling model and instinct model is constructed by plural types of models of an equal property. That is, the emotion group 50 includes emotion units 50 A to 50 F as independent feeling models having an equal property.
  • the desire group 51 includes desire units 51 A to 51 D as independent instinct models having an equal property.
  • the emotion group 50 includes, for example, an emotion unit 50 A indicating a feeling of “delight”, an emotion unit 50 B indicating a feeling of “sorrow”, an emotion unit 50 C indicating a feeling of “angry”, an emotion unit 50 D expressing “surprise”, an emotion unit 50 E indicating a feeling of “fear”, and an emotion unit 50 F indicating a feeling of “hate”.
  • the desire group 51 includes, for example, a desire unit 51 A indicating a desire of “movement instinct”, a desire unit 51 B indicating a desire of “love instinct”, a desire unit 51 C indicating a desire of “recharge instinct”, and a desire unit 51 D indicating a desire of “search instinct”.
  • the level of each emotion is indicated by an intensity (emotion parameter) ranging from 0 to 100.
  • the intensity of each emotion changes moment by moment, based on the command signal S 1 , the external information signal S 2 , and the internal information signal S 3 supplied.
  • the feeling/instinct model part 40 combines the intensities of the emotion units 50 A to 50 F with each other to express the feeling state of the robot apparatus 1 , thereby modeling the time-based change of the emotion.
  • emotion units influence each other to change the intensities.
  • the emotion units are combined with each other such that the units restrain or stimulate each other thereby to change the intensities while influencing each other.
  • the emotion unit 50 A for “delight” and the emotion unit 50 B for “sorrow” can be combined so as to mutually restrain each other, as shown in FIG. 5.
  • in this case, when the intensity of the emotion unit 50 A for “delight” is increased, the intensity of the emotion unit 50 B for “sorrow” is decreased in accordance with the increase of the intensity of the emotion unit 50 A for “delight”, even if input information S 1 to S 3 which would change the intensity of the emotion unit 50 B for “sorrow” is not supplied.
  • similarly, when the intensity of the emotion unit 50 B for “sorrow” increases, the intensity of the emotion unit 50 A for “delight” is decreased.
  • the emotion unit 50 B for “sorrow” and the emotion unit 50 C for “angry” may be combined so as to stimulate each other.
  • in this case, when the intensity of the emotion unit 50 C for “angry” is increased, the intensity of the emotion unit 50 B for “sorrow” is increased in accordance with that increase, even if input information which would increase the intensity of the emotion unit 50 B for “sorrow” is not supplied.
  • similarly, when the intensity of the emotion unit 50 B for “sorrow” is increased, the intensity of the emotion unit 50 C for “angry” is increased in accordance with the increase of the intensity of the emotion unit 50 B for “sorrow”.
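  • as a concrete illustration of this mutual restraint and stimulation, the short Python sketch below couples the “delight”, “sorrow”, and “angry” intensities. The coupling weights, the update rule, and the clamping to the 0-to-100 range are assumptions made for illustration; the patent specifies no particular formula.

      # Sketch of mutually coupled emotion units.
      # Negative weight = restraint, positive weight = stimulation.
      COUPLING = {
          ("delight", "sorrow"): -0.5,  # delight restrains sorrow, and vice versa
          ("sorrow", "delight"): -0.5,
          ("sorrow", "angry"):   +0.5,  # sorrow stimulates angry, and vice versa
          ("angry", "sorrow"):   +0.5,
      }

      def clamp(v):
          return max(0.0, min(100.0, v))

      def apply_delta(intensities, unit, delta):
          # Change one unit's intensity and propagate to coupled units.
          intensities[unit] = clamp(intensities[unit] + delta)
          for (src, dst), w in COUPLING.items():
              if src == unit:
                  intensities[dst] = clamp(intensities[dst] + w * delta)

      units = {"delight": 50.0, "sorrow": 50.0, "angry": 50.0}
      apply_delta(units, "delight", 20.0)
      print(units)  # delight rises; sorrow falls with no direct input to it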
  • the levels of desires are expressed by intensities (instinct parameters) of 0 to 100 levels, like the emotion units 50 A to 50 F.
  • the intensities of the desires are changed moment by moment.
  • the feeling/instinct model part 40 expresses the state of the instinct of the robot apparatus 1 , and the time-based change of the instinct is modeled.
  • the desire units can influence each other thereby to change their intensities.
  • specifically, the desire units can be combined so as to mutually restrain or mutually stimulate each other, so that they influence each other and change their intensities. In this manner, as the intensity of one of the combined desire units is changed, the intensity of another one changes accordingly. It is thus possible to realize a robot apparatus 1 which has a natural instinct.
  • the units can influence each other between the emotion group 50 and the desire group 51 , so that their intensities can be changed.
  • changes of the intensities of the desire unit 51 B for “love instinct” and the desire unit 51 C for “recharge instinct” in the desire group 51 influence changes of the intensities of the emotion unit 50 B for “sorrow” and the emotion unit 50 C for “angry” in the emotion group 50 .
  • a change of the intensity of the desire unit 51 C for “recharge instinct” influences changes of the intensities of the emotion unit 50 B for “sorrow” and the emotion unit 50 C for “angry”.
  • the feeling/instinct model part 40 changes each of the intensities of the emotion units 50 A to 50 F and the desire units 51 A to 51 D by the input information S 1 to S 3 containing the command signal S 1 , the external information signal S 2 , and the internal information signal S 3 , or by a mutual action between the units in the emotion group 50 and/or between the units in the desire group 51 , and/or by a mutual action between the units in the emotion group 50 and the units in the desire group 51 .
  • the feeling/instinct model part 40 determines the state of feeling by combining the intensities of the changed emotion units 50 A to 50 F, and also determines the state of instinct by combining the changed intensities of the desire units 51 A to 51 D.
  • the part 40 further sends the determined states of the feeling and instinct, as feeling/instinct state information S 10 to the action determination mechanism part 41 .
  • the feeling/instinct model part 40 is supplied with action information S 12 which indicates the contents of current and past actions of the robot apparatus 1 from the action determination mechanism part 41 .
  • for example, action information S 12 is supplied as information indicating that “it has walked for a long time”.
  • specifically, the feeling/instinct model part 40 comprises intensity increase/decrease means 55 A to 55 C for generating intensity information S 14 A to S 14 C for increasing/decreasing the intensities of the emotion units 50 A to 50 C, respectively, based on action information S 12 indicating the action of the robot apparatus 1 and input information S 1 to S 3 .
  • the intensities of the emotion units 50 A to 50 C are respectively increased/decreased in accordance with the intensity information S 14 A to S 14 C outputted from the intensity increase/decrease means 55 A to 55 C.
  • for example, the feeling/instinct model part 40 increases the intensity of the emotion unit 50 A for “delight” when the intensity increase/decrease means 55 A is supplied with action information S 12 indicating that the robot apparatus greeted the user and input information S 1 to S 3 indicating that it is rubbed at its head by the user. Meanwhile, the feeling/instinct model part 40 does not change the intensity of the emotion unit 50 A for “delight” when the robot apparatus 1 is rubbed at its head during execution of some work, i.e., when the intensity increase/decrease means 55 A is supplied with action information S 12 indicating that it is executing the work and input information S 1 to S 3 indicating that it is rubbed at its head.
  • the intensity increase/decrease means 55 A is constructed in the form of a function or table which generates intensity information S 14 A based on the action information S 12 and the input information S 1 to S 3 . The same applies to the other intensity increase/decrease means 55 B and 55 C.
  • since the feeling/instinct model part 40 comprises the intensity increase/decrease means 55 A to 55 C and determines the intensities of the emotion units 50 A to 50 C with reference to not only the input information S 1 to S 3 but also the action information S 12 indicating the current or past action of the robot apparatus 1 , it is possible to avoid generating an unnatural feeling, e.g., increasing the intensity of the emotion unit 50 A for “delight” when the user rubs the robot apparatus 1 as a mischief.
  • the feeling/instinct model part 40 is arranged such that each of the intensities of the desire units 51 A to 51 C is increased/decreased, based on the input information S 1 to S 3 and the action information S 12 , like the case of the emotion units 50 A to 50 C.
  • the present embodiment has been explained with reference to an example in which the emotion units 50 A to 50 C for “delight”, “sorrow”, and “angry” comprise the intensity increase/decrease means 55 A to 55 C.
  • the present invention is not limited thereto.
  • the other emotion units 50 D to 50 F for “surprise”, “fear”, and “hate” may comprise intensity increase/decrease means.
  • the intensity increase/decrease means 55 A to 55 C generate and output intensity information S 14 A to S 14 C in accordance with predetermined parameters, upon input of the input information S 1 to S 3 and the action information S 12 . Therefore, if different parameter values are respectively set in different robot apparatuses 1 , the robot apparatuses can have different personalities, e.g., a touchy robot apparatus, a cheerful robot apparatus, and the like can be provided.
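  • the following Python sketch illustrates one possible form of such an increase/decrease means as a function of the action information S 12 and the input information S 1 to S 3 , with a per-apparatus parameter shaping the personality. The rule table, the parameter, and the values are assumptions for illustration only.

      # Sketch of an intensity increase/decrease means (cf. 55A for "delight").
      # The 'sensitivity' parameter differs per apparatus: e.g. a cheerful
      # robot could use a high value, a touchy one a low value.
      def delight_delta(action_info, input_info, sensitivity=1.0):
          if input_info == "rubbed_on_head" and action_info == "greeted_user":
              return 10.0 * sensitivity  # rubbed while greeting: delight rises
          if input_info == "rubbed_on_head" and action_info == "executing_work":
              return 0.0                 # rubbed mid-task (possible mischief): no change
          return 0.0

      print(delight_delta("greeted_user", "rubbed_on_head", sensitivity=1.5))  # 15.0
      print(delight_delta("executing_work", "rubbed_on_head"))                 # 0.0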
  • the action determination mechanism part 41 is a section which determines a next motion (action) based on various information. Specifically, the action determination mechanism part 41 determines a next motion (action), based on the input information S 14 containing the feeling/instinct state information S 10 and the action information S 12 , and sends the contents of the determined motion (action), as the action command information S 16 , to the position transit mechanism part 42 .
  • the action determination mechanism part 41 uses an algorithm called a probability finite automaton 57 having a finite number of states, which determines a next motion by expressing the history of the input information S 14 supplied in the past as operation states (hereinafter called states), and by letting the current state transit to another state based on the currently supplied input information S 14 and the state at that time.
  • the action determination mechanism part 41 lets the state transit every time input information S 14 is supplied, and determines an action in correspondence with the state to which it has transited from the previous one. In this manner, the motion can be determined by referring to not only the current input information S 14 but also the past input information S 14 .
  • the action determination mechanism part 41 lets the current state transit to a next state, when existence of a predetermined trigger is detected.
  • specific examples of the trigger are that the time for which the action of the current state has kept being executed reaches a constant value, that specific input information S 14 is inputted, and that, among the intensities of the emotion units 50 A to 50 F and desire units 51 A to 51 D indicated by the feeling/instinct state information S 10 , the intensity of a desired unit exceeds a predetermined threshold value.
  • the action determination mechanism part 41 selects the state of a transit destination, based on whether or not the intensity of a desired unit exceeds a predetermined threshold value, among the intensities of the emotion units 50 A to 50 F and desire units 51 A to 51 D indicated by the feeling/instinct state information S 10 supplied from the feeling/instinct model part 40 . In this manner, even if the same command signal S 1 is inputted, the state can transit to different states in correspondence with the intensities of the emotion units 50 A to 50 F and desire units 51 A to 51 D.
  • for example, if the action determination mechanism part 41 detects, based on a supplied external information signal S 2 , that the palm of a user's hand is set in front of the eyes of the robot apparatus, detects, based on the feeling/instinct state information S 10 , that the intensity of the emotion unit 50 C for “angry” is equal to or lower than a predetermined threshold value, and detects that the robot apparatus is “not hungry”, i.e., the battery voltage is equal to or higher than a predetermined threshold value, the action determination mechanism part 41 generates action command information S 16 for letting the robot apparatus “shake” in response to the palm of the hand set in front of the eyes, and sends this information to the position transit mechanism part 42 .
  • similarly, the action determination mechanism part 41 generates action command information S 16 for letting the robot apparatus 1 “lick the palm of the hand”, if it detects that the palm of a user's hand is set in front of the eyes, that the intensity of the emotion unit 50 C for “angry” is equal to or lower than the predetermined threshold value, and that the robot apparatus is “hungry (recharge is needed)”, i.e., the battery voltage is lower than the predetermined threshold value.
  • if the action determination mechanism part 41 detects that the palm is set in front of the eyes and that the intensity of the emotion unit 50 C for “angry” is equal to or higher than the predetermined threshold value, it generates action command information S 16 for letting the robot apparatus 1 act as if it “looks away” and sends it to the position transit mechanism part 42 , regardless of whether or not the robot apparatus is “hungry”, i.e., regardless of whether or not the battery voltage is equal to or higher than the predetermined threshold value.
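  • the three cases above amount to threshold tests over the sensed situation, the “angry” intensity, and the battery voltage. The sketch below renders them in Python; the threshold values are assumptions, as the patent only calls them “predetermined”.

      # Sketch of the palm-in-front-of-eyes decision described above.
      ANGRY_MAX = 50.0    # at or below this "angry" intensity the robot is calm
      BATTERY_MIN = 6.5   # at or above this voltage the robot is "not hungry"

      def decide(palm_in_front_of_eyes, angry_intensity, battery_voltage):
          if not palm_in_front_of_eyes:
              return None
          if angry_intensity > ANGRY_MAX:
              return "look away"      # angry: ignore hunger entirely
          if battery_voltage >= BATTERY_MIN:
              return "shake"          # calm and not hungry
          return "lick the palm"      # calm but hungry (recharge needed)

      print(decide(True, 20.0, 7.0))  # shake
      print(decide(True, 20.0, 5.0))  # lick the palm
      print(decide(True, 80.0, 7.0))  # look away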
  • the action determination mechanism part 41 determines a next action, based on the input information S 14 .
  • this part 41 holds a plurality of contents of actions to be determined, e.g., “action 1”, “action 2”, “action 3”, “action 4”, . . . “action n”.
  • the “action 1” contains a motion content for an action of kicking a ball.
  • the “action 2” contains a motion content for an action of expressing a feeling.
  • the “action 3” contains a motion content for an action of autonomous search.
  • the “action 4” contains a motion content for an action of avoiding an obstacle.
  • the “action n” contains a motion content for an action of notifying that the battery residue is small.
  • the selection (determination) of an action is practically carried out by a selection module 44 as shown in FIG. 8.
  • the selection module 44 outputs the selection result as action command information S 16 to the position transit mechanism part 42 , and as action information S 12 to the feeling/instinct model part 40 and the action determination mechanism part 41 .
  • the selection module 44 sets a flag on the determined action, and outputs the information thereof as action information S 12 and action command information S 16 to the action determination mechanism part 41 and the position transit mechanism part 42 .
  • the action determination mechanism part 41 determines an action, based on the action information S 12 in addition to the external information S 21 (the command signal S 1 and external signal S 2 ) or the like and the internal information S 22 (the internal information signal S 3 and the feeling/instinct state information S 10 ) or the like. In this manner, a next action can be determined in consideration of a previous action.
  • the feeling/instinct model part 40 changes the states of the feeling and instinct based on equal input information S 1 to S 3 , as described above. As a result, the feeling/instinct model part 40 can generate different feeling/instinct state information S 10 even if equal input information S 1 to S 3 is supplied.
  • a group ID can be appended to the information indicating the motion contents of the “action 1”, “action 2”, “action 3”, “action 4”, . . . “action n”.
  • the group ID expresses equal information common to one same category. For example, in case where a plurality of patterns of actions are included in the action of “kick a ball”, one same group ID is added to each of the plurality of actions. By thus appending one same group ID to each of actions in one same category, the actions in the one same category can be processed as a group.
  • when an action is selected by the selection module 44 , the same group ID is issued for any action selected from the same category. The group ID appended to the selected action is then sent to the feeling/instinct model part 40 , so that the feeling/instinct model part 40 can determine the states of the feeling and instinct.
  • the action determination mechanism part 41 determines parameters of the action to be executed in the state of the transit destination, such as the speed of walking, the magnitudes of motions when hands and legs are moved, and the pitch of a tone when a tone is generated, based on the intensity of a desired unit among the intensities of the emotion units 50 A to 50 F and desire units 51 A to 51 D indicated by the feeling/instinct state information S 10 supplied from the feeling/instinct model part 40 . The part 41 then generates action command information S 16 and sends it to the position transit mechanism part 42 .
  • the input information S 1 to S 3 is comprised of the command signal S 1 , the external information signal S 2 , and the internal information signal S 3 , and is inputted not only to the feeling/instinct model part 40 but also to the action determination mechanism part 41 .
  • for example, when the controller 32 is supplied with an external information signal S 2 indicating that the “head is rubbed”, it generates feeling/instinct state information S 10 indicating “delight” by means of the feeling/instinct model part 40 , and supplies this information to the action determination mechanism part 41 .
  • when an external information signal S 2 indicating that “a hand exists in front of the eyes” is supplied in this state, the action determination mechanism part 41 generates action command information S 16 of “willing to shake”, based on the feeling/instinct state information S 10 indicating “delight” and the external information signal S 2 indicating that “a hand exists in front of the eyes”.
  • the part 41 sends it to the position transit mechanism part 42 .
  • the transit destination of the motion state can be determined by a certain probability by the probability finite automaton 57 .
  • the motion state transits to a certain motion state (action state) at a probability of 20% (transit probability).
  • the transit probability of transition to the state ST 11 of “running” is set to P1 and the transit probability of transition to the state ST 12 of “sleeping” is set to P2.
  • the transit destination is determined based on those probabilities. Note that a technique of determining a transit destination by probability is disclosed in the Japanese Patent Application KOKAI Publication No. 9-114514.
  • the robot apparatus 1 holds information concerning the probability of transition to a state, in form of a table.
  • An example of the table is shown in FIG. 10.
  • the table shown in FIG. 10 is constructed by node names, inputted event names, data names of inputted events, ranges of data of inputted events, and information concerning transit probabilities of transition to states.
  • the transit probability of transition to a certain state is determined in correspondence with an inputted event, and the state of the transit destination is determined on the basis of the transit probability.
  • the node name indicates the current action state (or simply state) of the robot apparatus 1 , i.e., it indicates what action is being executed now.
  • the inputted event is information inputted to the robot apparatus 1 , and the table is classified with use of these inputted events.
  • the “BALL” of the inputted event name means that the inputted event indicates detection of a ball.
  • PAT means being patted.
  • HIT means being hit.
  • MOTION means detection of a moving ball, and “OBSTACLE” means detection of an obstacle.
  • the table is set up with use of a large number of inputted events.
  • the present embodiment will be explained with reference to the case where “BALL”, “PAT”, “HIT”, “MOTION”, and “OBSTACLE” are used as inputted events.
  • the data range of an inputted event means the range of data in case where the inputted event requires a parameter, and the data name of the inputted event means such a parameter name. That is, if an inputted event is “BALL”, the data name means “SIZE” which is the size of the ball, and the range of data means that the range of such a size is 0 to 1000. Similarly, if an inputted event is “OBSTACLE”, the data name means the distance “DISTANCE” thereof, and the data range means that the range of such a distance is 0 to 100.
  • a transit probability of transition to a state is assigned to each of a plurality of states which can be selected in accordance with the characteristic of an inputted event. That is, transit probabilities are assigned respectively to arcs such that the total of the transit probabilities assigned to the states which are selectable with respect to an inputted event becomes 100%. More specifically, in the case of the inputted event “BALL”, the total of the transit probabilities 30%, 20%, . . . assigned to “ACTION 1”, “ACTION 3”, . . . that can be selected in accordance with the characteristic of this inputted event is arranged to become 100%.
  • “node” and “arc” are generally defined by a so-called probability finite automaton.
  • a node is defined as a state (called an action state in the present embodiment).
  • an arc is defined as an oriented line (called a transit motion in the present embodiment) which connects one node to another with a certain probability.
  • a node 3 as a current state transits to a node 120 at a probability of 30% when the size of the ball is 0 to 1000.
  • the arc assigned to the “ACTION 1” is selected, and a motion or expression corresponding to “ACTION 1” is executed.
  • the node 3 as a current state transits to the node 500 at a probability of 20%.
  • the arc to which “ACTION 3” is assigned is selected, and a motion or expression corresponding to “ACTION 3” is carried out.
  • “ACTION 1” and “ACTION 3” may be, for example, “bark” and “kick” or so.
  • “node 120 ” and “node 500 ” are thus selected according to those probabilities.
  • in the case of the inputted event “OBSTACLE”, the node “node 1000 ” for a moving-back action is selected at a probability of 100% when the distance to the obstacle is 0 to 100. That is, the arc appended with “MOVE_BACK” is selected at a probability of 100%, and “MOVE_BACK” is executed.
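  • a table of this kind can be driven directly. The Python sketch below selects an arc for the “node 3” examples above; the 30% and 20% probabilities come from the text, while the remaining 50% is assigned to staying at the current node purely so that the total becomes 100%, since the text leaves the other arcs as an ellipsis.

      import random

      # Sketch of a FIG. 10-style row for "node 3":
      # inputted event -> (data name, data range, [(action, next node, probability %)]).
      TABLE = {
          "BALL":     ("SIZE",     (0, 1000), [("ACTION 1",  "node 120",  30),
                                               ("ACTION 3",  "node 500",  20),
                                               (None,        "node 3",    50)]),
          "OBSTACLE": ("DISTANCE", (0, 100),  [("MOVE_BACK", "node 1000", 100)]),
      }

      def transit(event, value):
          data_name, (lo, hi), arcs = TABLE[event]
          if not lo <= value <= hi:
              return None, None            # event data out of range: no transition
          r, cum = random.uniform(0, 100), 0
          for action, node, prob in arcs:  # probabilities total 100%
              cum += prob
              if r <= cum:
                  return action, node
          return None, None

      print(transit("BALL", 300))    # ("ACTION 1", "node 120") with 30% probability
      print(transit("OBSTACLE", 40)) # always ("MOVE_BACK", "node 1000")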
  • in this manner, a state (node) or arc can be selected, and an action model can be determined, with use of a table or the like.
  • selection of a state described above and determination of an action model can also be made on the basis of the state of an emotion model.
  • for example, an action model can be determined by changing the transit probability of transition between the states described above on the basis of the state of the emotion model (e.g., the level thereof); that is, the transit probability is changed in accordance with the state of the emotion model.
  • in this manner, determination of an action model is made on the basis of the emotion model, i.e., the action model is influenced by the state of the emotion model. This will be explained with use of the table shown in FIG. 10.
  • “JOY”, “SURPRISE”, and “SADNESS” are prepared as data which determines the transit probability.
  • the “JOY”, “SURPRISE”, and “SADNESS” correspond to emotion units 50 A, 50 D, and 50 B of the emotion model. Further, the range of data is set to 0 to 50, for example. This data range corresponds to the level of the emotion unit described above.
  • in accordance with the actual level of the emotion, a predetermined transit probability of transition to a predetermined state is determined. For example, if “JOY”, whose data range is 0 to 50, has an actual level of 30, the arcs assigned with “ACTION 1”, “ACTION 2”, “MOVE_BACK”, and “ACTION 4” are selected respectively at probabilities of 10%, 10%, 10%, and 70%, and the state transits to a predetermined state.
  • the transit probability can be determined even in a state where no input is supplied from the outside by referring to “JOY”, “SURPRISE”, and “SADNESS” regardless of inputted events, i.e., by referring to them in a so-called empty event state.
  • the emotion model can be referred to for determination of the transition probability when no inputted event is detected for a predetermined time.
  • in this case, the actual levels are referred to in the order of “JOY”, “SURPRISE”, and “SADNESS”, each with a data range of 0 to 50. If the level of the emotion referred to falls within its data range, the arc to which “ACTION 2” is assigned is selected at a probability of 30% and the arc to which “MOVE_BACK” is assigned is selected at a probability of 60%, so that the state transits to a predetermined state.
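  • the sketch below illustrates this empty-event behavior: the emotions are consulted in the order “JOY”, “SURPRISE”, “SADNESS”, and the first whose level falls inside its data range supplies the probability row. The 10%/10%/10%/70% row and the 30%/60% pair come from the text; the remaining entries and the “SADNESS” row are assumptions added so that each row totals 100%.

      import random

      # Sketch of emotion-based arc selection in the "empty event" state.
      EMOTION_ROWS = [
          # (emotion, data range, [(action, probability %)])
          ("JOY",      (0, 50), [("ACTION 1", 10), ("ACTION 2", 10),
                                 ("MOVE_BACK", 10), ("ACTION 4", 70)]),
          ("SURPRISE", (0, 50), [("ACTION 2", 30), ("MOVE_BACK", 60),
                                 ("ACTION 1", 10)]),
          ("SADNESS",  (0, 50), [("ACTION 1", 100)]),
      ]

      def empty_event_action(levels):
          # Consult emotions in order; the first in-range level picks the row.
          for emotion, (lo, hi), arcs in EMOTION_ROWS:
              if lo <= levels.get(emotion, 0) <= hi:
                  r, cum = random.uniform(0, 100), 0
                  for action, prob in arcs:
                      cum += prob
                      if r <= cum:
                          return action
          return None

      print(empty_event_action({"JOY": 30, "SURPRISE": 0, "SADNESS": 0}))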
  • the present embodiment is arranged such that the action model can be determined on the basis of the state of the feeling model.
  • the action model is influenced by the state of the feeling model thereby enriching expressions of the robot apparatus 1 .
  • the action command information S 16 is determined by the action determination mechanism part 41 .
  • the position transit mechanism part 42 is a part which generates information for transiting to an aimed position or aimed motion. Specifically, the position transit mechanism part 42 generates position transit information S 18 for letting the current position or motion transit to a next position or motion (an aimed position or motion), based on the action command information S 16 supplied from the action determination mechanism part 41 , as shown in FIG. 3. The part 42 then sends the information to the control mechanism part 43 .
  • the position to which the current position can transit is determined by the physical shape of the robot apparatus 1 , such as the shapes of the trunk, hands, and legs, the size, and the connecting states of respective parts, and the mechanisms of the actuators 23 A to 23 N, such as directions and angles in and at which joints bend.
  • the position transit information S 18 is prepared as information for executing transition, taking into consideration the above.
  • the control mechanism part 43 actually moves the robot apparatus 1 based on the position transit information S 18 thus sent from the position transit mechanism part 42 .
  • the position transit mechanism part 42 previously registers positions to which the robot apparatus 1 can transit and motions to be taken when it transits. For example, the positions and motions are maintained in form of a graph, and the part 42 sends action command information S 16 supplied from the action determination mechanism part 41 , as position transit information S 18 , to the control mechanism part 43 .
  • the control mechanism part 43 operates in accordance with the position transit information S 18 and lets the robot apparatus transit to an aimed position or an aimed motion. In the following, processing to be executed by the position transit mechanism part 42 will be explained specifically.
  • the robot apparatus 1 cannot directly transit to a position according to the contents of a command (action command information S 16 ) in some cases. This is because the positions of the robot apparatus 1 are classified into the type of positions to which the robot apparatus 1 can transit directly from the current position, and the type of positions to which it cannot transit directly from the current position but can transit through a certain motion or position.
  • for example, the quadruped robot apparatus 1 can directly transit from a position in which it lies down stretching out its hands and legs widely to a position in which it keeps itself down, but cannot directly transit to a standing position. Therefore, the robot apparatus 1 needs motions in two stages, i.e., it must once contract its hands and legs close to the trunk and then stand up. In addition, there are positions which cannot be executed safely. For example, when the quadruped robot apparatus 1 , in a standing position, is going to raise both front legs to give a hail, it stumbles.
  • the robot apparatus 1 loses its balance and stumbles over.
  • if the action command information S 16 indicates a position to which the robot apparatus 1 can directly transit, the position transit mechanism part 42 directly sends the action command information S 16 as the position transit information S 18 to the control mechanism part 43 . Otherwise, if the action command information S 16 indicates a position to which the robot apparatus 1 cannot directly transit, the position transit mechanism part 42 generates position transit information S 18 which lets the robot apparatus 1 transit to the aimed position (the position indicated by the action command information S 16 ) through another position or motion to which the robot apparatus can transit, and sends it to the control mechanism part 43 . In this manner, the robot apparatus 1 can avoid a situation in which a position to which it cannot transit is forcedly attempted, or in which it stumbles over. Further, preparing a plurality of actions to be executed until an aimed position or motion is achieved leads to enrichment of expressions.
  • the position transit mechanism part 42 holds a graph that registers positions and motions, which the robot apparatus 1 can take, and that is constructed by connecting a position and a motion for letting this position transit.
  • the position transit mechanism part 42 searches a route from the current position to an aimed position or an aimed motion, on the graph, based on the action command information S 16 as command information, and lets the robot apparatus 1 move on the basis of the search result, thereby to let the robot apparatus transit from the current position to an aimed position or motion. That is, the position transit mechanism part 42 previously registers a position which the robot apparatus 1 can take and records routes through which the robot apparatus can transit between two positions. Based on this graph and the action command information S 16 outputted from the action determination mechanism part 41 , the robot apparatus 1 is let transit to an aimed position or motion.
  • the position transit mechanism part 42 uses an oriented graph 60 as shown in FIG. 11 as the graph described above.
  • the oriented graph 60 is constructed by nodes indicating positions which the robot apparatus 1 can take, oriented arcs (motion arc) each connecting two positions (nodes) between which the robot apparatus can transit, and, in some cases, arcs of motions each expressing return from a node to itself, i.e., a self-motion arc expressing a motion completed within one node.
  • the nodes and arcs are connected to each other.
  • the position transit mechanism part 42 maintains an oriented graph 60 which is constructed by nodes as information indicating positions of the robot apparatus 1 , and oriented arcs and self-motion arcs as information indicating motions of the robot apparatus 1 .
  • the part 42 thus grasps positions as information of points, and motions as information of oriented lines.
  • the oriented arcs and self-motion arcs may be plural. That is, a plurality of oriented arcs may be connected between nodes (positions) between which the robot apparatus can transit, and one node may be connected to a plurality of self-motion arcs.
  • the position transit mechanism part 42 searches a route from the node corresponding to the current position to the node corresponding to the position to be taken next, which is indicated by the action command information S 16 , following the direction of an oriented arc a, and records the nodes on the searched route in order, thereby preparing a plan of position transition.
  • search for a route to an aimed node (the node indicated by a command) from the current position is called a route search.
  • the aimed arc described herein may be an oriented arc or a self-motion arc.
  • the case where the self-motion arc is an aimed arc is a case where the self-motion is aimed (commanded), e.g., a case where a predetermined trick (motion) is instructed, or the like.
  • the position transit mechanism part 42 outputs a control command (position transit information S 18 ) to the control mechanism part 43 in the rear stage, based on a position transit plan until an aimed position (node) or motion (oriented arc or self-motion arc) is attained.
  • when action command information S 16 of “Walk!” is supplied while the current position is at a node ND 2 indicating the position of “being down”, direct transition from “being down” to “walking” cannot be achieved, so that the position transit mechanism part 42 searches a route from the node ND 2 indicating the position of “being down” to a node ND 4 indicating the position of “walking”, thereby to prepare a position transit plan.
  • a position transit plan is prepared such that a node ND 3 indicating the position of “standing” is reached through the oriented arc a 2 from the node ND 2 indicating the position of “being down”, and the node ND 4 is then reached through an oriented arc a 3 from the node ND 3 indicating the position of “standing”.
  • the position transit mechanism part 42 issues position transit information S 18 having a content of “Stand up!”, and thereafter outputs position transit information S 18 having a content of “Walk!” to the control mechanism part 43 .
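  • Continuing the sketch above, a simple breadth-first search over the oriented arcs yields a position transit plan of exactly this kind; the node and motion names are again only illustrative:

      from collections import deque

      def search_route(graph, start, goal):
          # Breadth-first route search: returns the sequence of motions (arc
          # names) leading from the start posture to the goal posture, or None.
          queue = deque([(start, [])])
          visited = {start}
          while queue:
              node, plan = queue.popleft()
              if node == goal:
                  return plan
              for arc in graph.outgoing(node):
                  if arc.dst not in visited:
                      visited.add(arc.dst)
                      queue.append((arc.dst, plan + [arc.name]))
          return None  # no transit route exists

      g = PostureGraph()
      g.add_arc(Arc("stand up", "being down", "standing"))
      g.add_arc(Arc("walk", "standing", "walking"))
      print(search_route(g, "being down", "walking"))  # -> ['stand up', 'walk']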
  • a self-motion arc indicating a motion of “dance” is pasted on the node ND 3 indicating the position of “standing”.
  • a self-motion arc indicating a motion of “hail” is pasted on a node ND 5 indicating the position of “sitting”, or a self-motion arc indicating a motion of “snore” is pasted on a node ND 1 indicating the position of “stretch out”.
  • the robot apparatus 1 is constructed so as to normally grasp what position it is in. However, the robot apparatus 1 loses its current position in some cases. For example, the robot apparatus 1 cannot grasp its current position when it is lifted up by a user, when it stumbles over, or when the power is turned on. The current position which thus cannot be grasped is called an indefinite position. If the current position cannot be grasped and is determined to be an indefinite position, a so-called start position cannot be determined, so that a position transit plan up to an aimed position or motion cannot be prepared.
  • a node indicating a neutral position is provided. If the current position is indefinite, the robot apparatus is let transit to the neutral position first, and a position transit plan is then prepared. For example, when the current position is indefinite, the position is let transit to the node ND nt indicating the neutral position, as shown in FIG. 13, and is then let transit to a node indicating a basic position, such as the node ND 3 indicating the position of “standing”, the node ND 5 indicating the position of “sitting”, or the node ND 1 indicating the position of “lying down”. After the transition to this basic position, a position transit plan for the original target is prepared.
  • when transiting to the neutral position, each operating part (e.g., an actuator) is driven gently so that the load on the servo is reduced. Therefore, the operating part can be prevented from being driven as in normal operation and from being damaged thereby, for example.
  • the tail part 5 normally moves as if swinging. However, if this motion of the tail part 5 were carried out during the transit to the neutral position (node), the tail part 5 might be damaged, for example when the robot apparatus 1 is lying down in an indefinite position.
  • the robot apparatus 1 can grasp a stumble of itself and can transit from a stumble position to a node indicating a basic position as described above.
  • the robot apparatus 1 is provided with an acceleration sensor and detects stumbling of itself.
  • when the robot apparatus 1 detects that it has stumbled over by means of the acceleration sensor, the robot apparatus 1 makes a predetermined motion for recovery from the stumble and thereafter transits to a node indicating a basic position as described above.
  • the robot apparatus 1 is also arranged so that it grasps the stumbling direction. More specifically, the robot apparatus 1 can grasp its stumbling direction among the frontward, backward, leftward, and rightward directions. In this manner, the robot apparatus 1 can make a motion for recovery from a stumble in correspondence with the stumbling direction. Accordingly, the robot apparatus can rapidly transit to a basic position.
  • a predetermined expression may be outputted.
  • the robot apparatus thrashes its legs according to a self-motion arc a 11 , as a predetermined expression. In this manner, it is possible to express a situation in which the robot apparatus 1 has stumbled over and struggles.
  • a position transit plan of transiting to the node ND 3 of “standing” can be prepared by selecting an optimal oriented arc.
  • a position transit plan is prepared by taking the distance between the current node and the aimed node as an index to be shortened, i.e., by means of a so-called shortest distance search for the route which provides the shortest distance.
  • the shortest distance search is carried out using the concept of distance for an oriented arc (arrow mark) connecting nodes (circle mark).
  • a route search method of this kind is the path search logic of Dijkstra's algorithm. The distance can be substituted by a concept of weighting, time, or the like, as described later.
  • FIG. 14 shows a result of connecting nodes to which the current position can transit through an oriented arc having a distance of “1”.
  • the method of searching the shortest distance is not limited to the method of this kind.
  • a plurality of routes are searched, and the route having the shortest distance to an aimed node is selected from the search result.
  • routes through which the current node can transit to an aimed node are searched as many as possible to attain a plurality of routes.
  • the shortest route is specified with the distance used as an index.
  • search for a route having the shortest distance is not limited to this method; alternatively, the processing of route search can be terminated at the time point when the shortest route to an aimed node is detected.
  • the distance as a search index is gradually extended from the current position to search nodes one after another. Every time a node is searched, the node at the shortest distance (which is not the node as a target) is determined. Finally, at the time point when the node as a target is detected, the processing for searching a route is terminated. That is, for example, the concept of “equal distance”, which can be recognized by the concept of “contour lines”, is used, and the distance from the current node is extended to detect nodes on the “equal distance” one after another. At the time point when the node as a target is finally detected, the processing of searching a route is terminated. The path search of Dijkstra's algorithm can be cited as a method of searching a route in this manner.
  • the route providing the shortest distance can be searched without searching all the routes that can exist with respect to a node as a target. Therefore, the route of the shortest distance can be detected in the shortest time. As a result, it is possible to reduce the load on the CPU and the like, which is required to perform this kind of search processing.
  • the load caused by search from the entire network can be eliminated. For example, in case where the network which constructs this graph is of a large scale, the route can be searched with the load reduced.
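  • A minimal sketch of such a search, in the style of Dijkstra's algorithm with early termination (assuming the PostureGraph sketched earlier; arc costs stand for the distance index):

      import heapq

      def shortest_route(graph, start, goal):
          # Nodes are settled in order of increasing distance, so the search
          # may stop as soon as the goal is popped, without visiting the
          # remainder of the network.
          best = {start: 0.0}
          heap = [(0.0, start, [])]
          while heap:
              dist, node, plan = heapq.heappop(heap)
              if node == goal:
                  return dist, plan
              if dist > best.get(node, float("inf")):
                  continue  # stale queue entry
              for arc in graph.outgoing(node):
                  nd = dist + arc.cost
                  if nd < best.get(arc.dst, float("inf")):
                      best[arc.dst] = nd
                      heapq.heappush(heap, (nd, arc.dst, plan + [arc.name]))
          return float("inf"), None  # goal unreachable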
  • the method of searching a route may be arranged as follows. Nodes are previously classified (clustered) roughly on the basis of actions or positions. A coarse search is carried out at first by clustering. Thereafter, a detailed search may be carried out. For example, when the robot apparatus is let take a position of “right front leg kick”, an area of the class of “kick a ball” is selected at first as a route search range, and a path is then searched only in the area.
  • coarse classes and elements thereof are related with each other, i.e., “kick a ball” and “right front leg kick” are related with each other by adding ID information and the like when designing this kind of system.
  • between two nodes, oriented arcs in both directions may exist. Therefore, there may be an oriented arc which returns to a node which the robot apparatus has already passed.
  • a returning oriented arc may be selected if no limitation is placed on the route search, and the transition may return to an original node in some cases. To prevent this, it is possible to select only oriented arcs which do not lead to a node which the robot apparatus has once passed.
  • the target is a position (node).
  • the target may be a motion, i.e., an oriented arc or a self-motion arc.
  • a case where the arc as a target is a self-motion arc is, for example, a situation in which the leg parts 4 are let thrash.
  • weights added to oriented arcs and the motion times of oriented arcs can also serve as indexes for route search. If motion times are used as an index, the following manner is taken: for example, an oriented arc is added with information that its transit (motion) requires three seconds.
  • when the current node ND 1 transits to an aimed node ND 3 in the graph shown in FIG. 16, there are a case where the transition goes through a node ND 2 and a case where it goes only through an oriented arc a 3 .
  • the robot apparatus 1 can reach the aimed node ND 3 in a shorter time in the former case, which passes through the oriented arc a 1 requiring a motion time of one second and the oriented arc a 2 requiring a motion time of two seconds, than in the latter case, which uses only the oriented arc a 3 directly connecting the current node ND 1 and the aimed node ND 3 and requiring a motion time of five seconds. Accordingly, in case where the motion time is taken as an index, a position transit plan can be made such that the aimed node is reached in the shortest time by the transition through the two oriented arcs.
  • the weights added to oriented arcs or the distances may be used as the difficulty levels of motions.
  • a low difficulty level is expressed as a short distance.
  • one of oriented arcs can be set as a default. By setting one of the oriented arcs as a default, it is possible that the default oriented arc is normally selected and another oriented arc which is not the default is selected only when an instruction is given.
  • a probability of selection can be assigned to every oriented arc. That is, different probabilities are respectively assigned to arcs.
  • various motions can be selected between same nodes, so that a variety can be given to a series of motions. For example, when the sitting position transits to the standing position, a motion in which folded legs are once extended backwards and the robot apparatus then stands up on four legs or a motion in which front legs are expanded forwards and the robot apparatus then stands up is selected depending on the probabilities. In this manner, it is possible to bring about an effect that which motion the robot apparatus takes to stand up cannot be predicted before a selected motion is reproduced (or carried out actually).
  • a probability Pi assigned to an oriented arc added with a weight or distance of mi can be expressed by the expression (1) where distances or weights of oriented arcs are m 1 , m 2 , m 3 , . . . and the total sum thereof (m 1 +m 2 +m 3 + . . . ) is M.
  • P_i = \frac{M - m_i}{(M - m_1) + (M - m_2) + (M - m_3) + \cdots}    (1)
  • an oriented arc having a large weight is selected as a transit route at a low probability, while an oriented arc having a small weight is selected as a transit route at a high probability.
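  • A small sketch of this selection rule (a hypothetical helper; it assumes at least two candidate arcs, since with a single arc both the numerator and the denominator of expression (1) would be zero):

      import random

      def choose_arc(arcs):
          # Expression (1): an arc with weight m_i is selected with probability
          # P_i = (M - m_i) / sum_j (M - m_j), where M = m_1 + m_2 + ...
          # Heavier arcs are therefore selected less often.
          if len(arcs) == 1:
              return arcs[0]
          M = sum(a.cost for a in arcs)
          return random.choices(arcs, weights=[M - a.cost for a in arcs], k=1)[0]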
  • route search may be limited to routes within a predetermined range. In this manner, an optimal route can be searched in a much shorter time.
  • arcs (oriented arcs or self-motion arcs) can be registered in the graph as described below.
  • a plurality of execution forms can be considered with respect to a motion of “walk”.
  • motions of walk in a direction of angle 0°, walk in a direction of angle 30°, and walk in a direction of angle 60° can be a plurality of execution forms.
  • Providing a plurality of execution forms for one motion, i.e., increasing the number of parameters concerning one motion, leads to enrichment of expressions of the robot apparatus 1 .
  • These motions can be achieved by providing arcs having those different parameters.
  • this provision of arcs having different parameters corresponding to a plurality of execution forms is not preferred for the aspect of effective use of resources of a network. This is because ninety one arcs are required, in case of executing motions of walking in different directions at a regular interval of 1° from 0° to 90°.
  • “walk” is set as a path, and the walking directions are set as parameters. For example, in a self-motion arc, “walk!” is instructed and a walking direction is optionally supplied as a parameter at this time. “Walk” is carried out in the direction specified by the parameter when the motion is reproduced. As a result of this, the instruction of “walk” can be accomplished with the walking direction finely set and without necessitating a plurality of self-motion arcs, if only one self-motion arc of “walk” and a parameter of the walking direction are provided. In this manner, the graph can be simplified even if the scenario varies. Network resources can be used effectively.
  • the above-described embodiment has explained a case of adding a parameter to a self-motion arc.
  • the present embodiment is not limited hitherto but a parameter may be added to an oriented arc. Accordingly, it is unnecessary to prepare a plurality of oriented arcs which have different “walking directions” as parameters although the motion of “walk” is common to the oriented arcs.
  • the robot apparatus is let repeat a motion.
  • a stop command for stopping a repetitive motion is given later to the control mechanism part 43 . In this manner, the robot apparatus can be let execute one and the same motion repeatedly.
  • a predetermined arc can be executed with an expression pasted thereon. That is, in case where a predetermined motion (arc) is executed, it is possible to let the robot apparatus execute a motion as another expression in synchronization with the predetermined motion. For example, as shown in FIG. 19, in case where an oriented arc a for transiting from the node ND 1 of the sitting position to the node ND 2 of the standing position is set, a predetermined voice or motion can be let correspond to the predetermined oriented arc a. In this manner, an expression of smiling eyes or a voice of “Mmm” can be reproduced in synchronization with a motion of transiting from the sitting position to the standing position. Thus, a rich expression is enabled by a combination of different parts, with the effect that the scenario is enriched.
  • the name “angry” is given to each of the motions expressing anger, such as a self-motion arc from the sleeping position to the sleeping position, a self-motion arc from the sitting position to the sitting position, and a self-motion arc from the standing position to the standing position. Then, by merely supplying an instruction of “anger”, the closest motion (self-motion arc) of “angry” can be found by the shortest route search. That is, the shortest executable route among the aimed motions (self-motion arcs) is selected as the position transit plan.
  • a motion can be executed in an optimal position (the position at the shortest distance) by the shortest distance search.
  • the upper control means can execute an instructed motion without necessitating constant grasp of states or motions of respective parts. That is, for example, the upper control means (position transit mechanism part 42 ) needs only to grasp the current node if the command of “angry” is given.
  • a position transit plan up to a motion of getting angry in the sleeping position at the shortest distance can be prepared by only searching the self-motion arc of “angry” without searching an actual node of “getting angry in a sleeping position”.
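  • A sketch of this multi-target search (again Dijkstra-style, reusing the graph sketched earlier; it stops at the first settled node that owns a self-motion arc bearing the requested name):

      import heapq

      def nearest_named_motion(graph, start, motion_name):
          # Returns the shortest plan ending in a self-motion arc named
          # motion_name (e.g. "angry"), whichever posture it hangs on.
          best = {start: 0.0}
          heap = [(0.0, start, [])]
          while heap:
              dist, node, plan = heapq.heappop(heap)
              if dist > best.get(node, float("inf")):
                  continue  # stale queue entry
              for arc in graph.outgoing(node):
                  if arc.src == arc.dst and arc.name == motion_name:
                      return plan + [arc.name]  # closest executable motion
              for arc in graph.outgoing(node):
                  if arc.src != arc.dst:
                      nd = dist + arc.cost
                      if nd < best.get(arc.dst, float("inf")):
                          best[arc.dst] = nd
                          heapq.heappush(heap, (nd, arc.dst, plan + [arc.name]))
          return None  # no such motion reachable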
  • the robot apparatus 1 can operate its respective componential parts separately. That is, commands concerning respective componential parts can be executed.
  • Such componential parts of the (entire) robot apparatus 1 will be the head part 2 , leg parts 4 , and tail part 5 , as shown in FIG. 21.
  • the tail part 5 and the head part 2 can be moved individually. That is, since resources of these parts do not compete with each other, these parts can be operated individually. Meanwhile, the entire robot apparatus 1 and the head part 2 cannot be moved individually. That is, since resources of the entire apparatus and this part compete with each other, the entire robot apparatus 1 and the head part 2 cannot be moved individually. For example, contents of a command concerning the head part 2 cannot be executed while a motion of the entire apparatus, a command of which contains a motion of the head part 2 , is being executed. For example, it is possible to swing the tail part 5 while swinging the head part 2 . On the other hand, it is impossible to swing the head part 2 while a trick is carried out by the entire apparatus.
  • Table 1 shows, for action command information S 16 supplied from the action determination mechanism part 41 , the combinations of parts whose resources compete with each other and the combinations whose resources do not.

      TABLE 1
      Combination of parts    Competition of resources
      Head, Tail              No
      Head, Entire            Yes
      Legs, Entire            Yes
      Head, Legs, Tail        No
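  • One way to encode this competition check (the part names and the set representation are illustrative, not taken from this description):

      RESOURCES = {
          "head":   {"head"},
          "legs":   {"legs"},
          "tail":   {"tail"},
          "entire": {"head", "legs", "tail"},  # a whole-body motion claims every part
      }

      def competes(a, b):
          # Two commands can run simultaneously only if the resource sets
          # of their target parts are disjoint (cf. Table 1).
          return bool(RESOURCES[a] & RESOURCES[b])

      assert not competes("head", "tail")   # head and tail: no competition
      assert competes("head", "entire")     # head and entire: competition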
  • the head part 2 makes a sharp motion, which results in an unnatural behavior, if the motion of the head part 2 is started in a state where the last position after a motion of the entire apparatus 1 is not suitable for the head part 2 to start a motion, i.e., in case where the positions before and after transitions in response to different commands are not continuous with each other.
  • This is a problem caused in case where the current position (or motion) and the aimed position (or motion) are associated with the entire robot apparatus 1 and the componential parts thereof, and the network (graph) composed of nodes and arcs constructed for controlling the entire robot apparatus 1 and the network (graph) composed of nodes and arcs constructed for controlling the respective componential parts of the robot apparatus 1 are constructed separately without any association provided therebetween.
  • information concerning a network used for the position transit plan of the robot apparatus 1 is constructed into a hierarchical structure, as a whole, from information (graph) concerning the entire network and information (graph) concerning the network of respective componential parts.
  • information used for the position transit plan which is composed of the information (graph) concerning the entire network and the information (graph) concerning the componential parts, is constructed in the position transit mechanism part 42 as shown in FIG. 8 described above.
  • the basic position is a position to which the robot apparatus temporarily transits in order to shift between a motion of the entire apparatus and a motion of componential parts.
  • An example of the basic position is a sitting position as shown in FIG. 22B. A procedure of smoothly connecting transit motions will be explained with respect to a case where the sitting position is set as a basic position.
  • an oriented arc a 0 which lets the position of the entire robot apparatus 1 transit from the current position ND a0 to the basic position ND ab is selected. If the entire apparatus takes the basic position, the entire apparatus is grasped as being in the state (node) of the basic position on the graphs of the head part, leg parts, and tail part.
  • an optimal oriented arc a 1 is selected from the state of the basic position ND hb , and a route to an aimed motion (self-motion arc) a 2 of the head part 2 is determined.
  • a search for a route from the current position ND a0 to the basic position ND ab on the graph of the entire apparatus and a search for a route from the basic position ND hb to an aimed motion a 2 on the graph of the head part are carried out by means of the shortest distance search as described above.
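  • Reusing the search helpers sketched earlier, the two-stage plan through the basic position might be composed as follows (a sketch; it assumes the basic position appears as a node on both the whole-body graph and the part graph, and that the aimed motion of the part is a named self-motion arc):

      def plan_via_basic_position(whole_graph, part_graph, current, basic, part_motion):
          # Stage 1: bring the entire apparatus to the basic position.
          _, to_basic = shortest_route(whole_graph, current, basic)
          if to_basic is None:
              return None
          # Stage 2: from the basic position, reach the aimed motion of the part.
          part_plan = nearest_named_motion(part_graph, basic, part_motion)
          if part_plan is None:
              return None
          return to_basic + part_plan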
  • the above example is a specific example in which a motion of the entire apparatus is continued smoothly to a motion of a componential part.
  • a motion of a componential part is continued smoothly to a motion of the entire apparatus. More specifically, as shown in FIG. 24, explanation will be made of the case where the head part 2 is grasped as being in a position ND h0 on the graph of the head part, the leg parts 4 are grasped as being in a position ND f0 on the graph of the leg parts, and a motion a 4 of the entire apparatus is executed as a target.
  • an oriented arc a 0 which lets the position of the head part 2 transit from the current position ND h0 to the basic position ND ab is selected.
  • oriented arcs a 1 and a 2 which let the position of the leg parts 4 transit from the current position ND f0 to the basic position ND fb through the position ND f1 are selected.
  • the tail part 5 has originally been in the basic position. If respective componential parts come to their basic positions, the position is grasped as a basic position also on the graph of the entire apparatus.
  • an optimal oriented arc a 3 is selected from the state of the basic position ND hb , and a route to the motion (self-motion arc) a 4 of the entire apparatus is determined.
  • the motion of each componential part may be executed at the same time when a motion of another componential part is executed to transit to a basic position, or the motion of each componential part may be executed with limitations added.
  • the motion of each componential part may be executed at a certain timing. Specifically, in case where a command is issued with respect to a motion of the entire apparatus 1 while the head part 2 is being used to make a trick, the head part 2 cannot transit to the basic position ND hb because it is just executing the trick. Therefore, the leg parts 4 are firstly brought into the state of the basic position ND fb , and the head part 2 is then let transit to the state of the basic position ND hb after it finishes the trick.
  • each componential part can be arranged to move in consideration of the balance of the position of the entire apparatus 1 . For example, if the head part 2 and the leg parts 4 are moved simultaneously, or if the head part is firstly set in the basic position ND hb , the balance is lost and the robot apparatus stumbles over. In this case, the leg parts 4 are firstly brought into the state of the basic position ND fb , and the head part 2 is then let transit to the state of the basic position ND hb .
  • the resources can be used for another purpose. For example, if the resources of the head part 2 are released while the robot apparatus is walking, the head part 2 can be let track (follow) a moving ball.
  • the basic position is not limited to one basic position.
  • a plurality of positions such as the sitting position, sleeping position, and the like can be set as basic positions.
  • a shift from a motion of the entire apparatus to any of the motions of componential parts, or a shift from any of the motions of componential parts to a motion of the entire apparatus, can be achieved by the shortest motion through the shortest distance (or in the shortest time).
  • this setting of a plurality of basic positions leads to enrichment of expressions of the robot apparatus 1 .
  • a determination or the like of a position or motion in the position transit mechanism part 42 as described above is made on the basis of action command information S 16 from the action determination mechanism part 41 .
  • the action determination mechanism part 41 normally sends the action command information S 16 to the position transit mechanism part 42 without limitation. That is, even while a motion is being executed, a command concerning another motion may be issued and sent to the position transit mechanism part 42 .
  • therefore, the position transit mechanism part 42 comprises a command storage part for storing action command information S 16 sent from the action determination mechanism part 41 .
  • This command storage part stores action command information S 16 generated by the action determination mechanism part 41 and can further perform a so-called list operation.
  • the command storage part is, for example, a buffer.
  • when a command sent from the action determination mechanism part 41 cannot be executed at present, e.g., when a trick (predetermined motion) is being carried out, the command thus sent is stored into the buffer.
  • a newly sent command D is newly added as a list to the buffer, as shown in FIG. 25A.
  • the oldest command is picked up from the buffer, and a route search is carried out. For example, if commands are stored as shown in FIG. 25A, the oldest command A is executed at first.
  • Commands are thus stored into the buffer and are executed one after another in the order from the oldest command. However, it is also possible to perform a list operation to insert or cancel a command.
  • a command D is inserted into a command group which has been stored. In this manner, a route search for the command D can be executed, prior to commands A, B, and C waiting for execution.
  • the buffer can include a plurality of command storage areas corresponding to the entire robot apparatus 1 and the respective componential parts.
  • commands for motions of each of the entire apparatus and the componential parts are stored, as shown in FIG. 26.
  • commands may be added with synchronization information for reproducing motions of different componential parts, such as the head part and the leg parts, in synchronization with each other. For example, as shown in FIG. 27, commands respectively stored in the command storage areas of the head part 2 and the leg parts 4 are added with numbers indicating the order in which reproduction of the commands is to be started. Information indicating one and the same number, e.g., that reproduction should be started fifth, serves as the synchronization information.
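  • A compact sketch of such a command storage part (buffer semantics, the insertion list operation, and per-part storage areas; all names are hypothetical):

      from collections import deque

      class CommandStorage:
          # One storage area per controllable unit; a command is stored with
          # an optional start-order number used as synchronization information.
          def __init__(self):
              self.areas = {k: deque() for k in ("entire", "head", "legs", "tail")}

          def append(self, part, command, order=None):
              self.areas[part].append((command, order))

          def insert_top(self, part, command, order=None):
              # list operation: execute this command before those already waiting
              self.areas[part].appendleft((command, order))

          def next_for(self, part):
              area = self.areas[part]
              return area.popleft() if area else None

      store = CommandStorage()
      store.append("head", "nod", order=5)    # same order number: the head and
      store.append("legs", "step", order=5)   # leg motions start in synchronization
      store.insert_top("head", "look up")     # runs before the queued "nod"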
  • a search for an optimal route to an aimed position or motion based on action command information S 16 sent from the action determination mechanism part 41 , and a determination of a position transit plan, are realized because the position transit mechanism part 42 comprises a motion route search part 60 as shown in FIG. 28.
  • the motion route search part 60 comprises a command storage part 61 , a route search part 62 , and a graph storage part 63 .
  • a command concerning the entire apparatus and the componential parts is supplied to the motion route search part 60 from the action determination mechanism part 41 .
  • an aimed position, an aimed motion of a part (e.g., the head part), a command for a list operation with respect to the current command and a series of commands issued in the past, information concerning the characteristic of the command itself, or the like is appended to the action command information S 16 .
  • the information thus appended will be called appended information.
  • the command for a list operation with respect to a current command and a series of commands issued in the past is, for example, a command for inserting a newly issued command, explained with reference to FIG. 25B, at the top of a group of commands which are not yet executed.
  • information concerning the characteristic of the command itself is, for example, a parameter concerning the walking direction, which has been explained with FIG. 17, a parameter concerning a command, which has been explained with FIG. 18, e.g., the parameter of “three steps” where the motion is “walk forward”, or information for synchronizing another motion, which has been explained with FIG. 19.
  • the command storage part 61 serves to store action command information S 16 sent from the action determination mechanism part 41 , as described above, and is a buffer, for example.
  • the command storage part 61 performs processing based on the contents of the appended information, if the action command information S 16 has appended information. For example, if a command for inserting a command is included in the appended information as information concerning a list operation, an operation for inserting the action command information S 16 , which has just arrived, at the top of the row of commands in a standby state in the command storage part 61 is carried out in accordance with the contents of the command.
  • if appended information is attached, the command storage part 61 stores it together with the command.
  • This command storage part 61 grasps what command is being executed and what command is waiting at what rank in the order. To realize this, for example, commands are stored with order ranks added.
  • the command storage part 61 has four storage areas corresponding to the entire apparatus 1 , the head part 2 , the leg parts 4 , and the tail part 5 , as has been described previously. Further, the command storage part 61 can make a determination that a command for moving the entire apparatus cannot be executed while a command for moving the head part 2 is being executed or a determination that a command which is now moving the head part and a command for moving the leg parts 4 can be issued independently from each other. That is, it is possible to perform processing which prevents resources of the entire apparatus and the componential parts from competing with each other, so that a so-called solution for competition between resources can be achieved.
  • the command storage part 61 remembers the ranks of the order in which commands have been issued across different parts. For example, if commands are supplied in the order of the entire apparatus 1 , the leg parts 4 , the head part 2 , and the entire apparatus 1 , the commands concerning the entire apparatus 1 are stored in the command storage part 61 as commands to be executed first and fourth, the command concerning the leg parts 4 is stored as a command to be executed second, and the command concerning the head part 2 is stored as a command to be executed third. In this manner, the command storage part 61 remembers the order of the commands. At first, the command storage part 61 sends the command concerning the entire apparatus 1 , which is to be executed first, to the route search part 62 .
  • next, the command storage part 61 sends the command concerning the leg parts 4 , which is to be executed second, and the command concerning the head part 2 , which is to be executed third. After the componential parts finish reproduction of the contents of these commands, the command storage part 61 sends the command concerning the entire apparatus 1 , which is to be executed fourth, to the route search part 62 .
  • the route search part 62 starts a so-called route search by the command supplied from the command storage part 61 .
  • the graph storage part 63 stores graphs corresponding to the parts sectioned in the command storage part 61 . That is, the graph storage part 63 stores graphs corresponding to the entire apparatus 1 , the head part 2 , the leg parts 4 , and the tail part 5 . Based on the graphs stored in the graph storage part 63 , the route search part 62 searches an optimal route to an aimed position or motion as the contents of a command, using the distance or weight described previously as an index.
  • the route search part 62 sends position transit information S 18 to the control mechanism part 43 until an aimed position or motion is executed.
  • the position transit mechanism part 42 searches an optimal route to an aimed position or motion in accordance with a command and prepares a position transit plan, based on action command information S 16 sent from the action determination mechanism part 41 . In accordance with the position transit plan, the position transit mechanism part 42 outputs the position transit information S 18 to the control mechanism part 43 .
  • the control mechanism part 43 generates a control signal S 5 for driving the actuators 23 , based on the position transit information S 18 .
  • the part 43 then sends this signal to the actuators 23 , to drive the actuators 23 .
  • the robot apparatus 1 is let make a desired motion.
  • the control mechanism part 43 returns an end notification (reproduction result), made on the basis of the position transit information S 18 , to the route search part 62 , each time, until an aimed position or motion is reached.
  • the route search part 62 which has received an end notification notifies the graph storage part 63 of the end of the motion.
  • the graph storage part 63 updates information on the graph. That is, for example, if the position transit information S 18 is to let the leg parts 4 make a motion of transiting from the state of the sleeping position to the state of the sitting position, the graph storage part 63 which has received the end notification moves the current position on the graph of the leg parts 4 from the node corresponding to the sleeping position to the node of the sitting position.
  • the route search part 62 further sends “thrash legs” as position transit information S 18 to the control mechanism 43 .
  • the control mechanism part 43 generates control information for thrashing the leg parts 4 , based on the position transit information S 18 , and sends this information to the actuators 23 to execute a motion of thrashing leg parts 4 .
  • the control mechanism part 43 sends an end notification thereof again to the route search part 62 .
  • the route search part 62 notifies the graph storage part 63 of the completion of the motion of thrashing the leg parts 4 .
  • the route search part 62 notifies the command storage part 61 that reproduction of an aimed position or motion has ended. Since the aimed position or motion has been completed without problems, i.e., since the contents of the command have been accomplished without problems, the command storage part 61 erases the command. For example, if thrashing of the legs is supplied as an aimed motion as described above, this command is erased from the command storage area of the leg parts 4 .
  • the action determination mechanism part 41 , the motion route search part 60 , and the control mechanism part 43 transmit information to one another, whereby the basic functions of these parts are realized.
  • FIG. 29 shows a series of processing procedures of a route search in the motion route search part 60 , which is carried out on the basis of the command generated in the action determination mechanism part 41 , and of the motions which are carried out on the basis of a route search result.
  • in the step SP 1 , the action determination mechanism part 41 supplies a command to the motion route search part 60 .
  • in the step SP 2 , the command storage part 61 of the motion route search part 60 adds the command to a command group.
  • for example, a command concerning the tail part is added. If command information for making an insertion is added to the command as a list operation, the command is inserted into the row of commands in accordance with that command information.
  • in the step SP 3 , the command storage part 61 determines whether or not there is a command which can be started.
  • if there is no command that can be started, the processing goes to the step SP 4 ; otherwise, if there is a command that can be started, the processing goes to the step SP 5 . For example, at the time point when the processing comes to the step SP 3 , a command concerning a componential part cannot be executed if the entire robot apparatus 1 is making a motion, so the processing goes to the step SP 4 . Meanwhile, if the motion has ended at that time point, the processing goes to the step SP 5 .
  • in the step SP 4 , the command storage part 61 waits for the elapse of a predetermined time. For example, after waiting for 0.1 second, the command storage part 61 determines again, in the step SP 3 , whether or not there is a command that can be started.
  • the second command concerning the head part, the fourth command concerning the leg parts, and the fifth command concerning the tail part are grasped as commands that can be started, in the step SP 3 .
  • the third command concerning the head part is grasped as a command that can be started, as soon as execution of the second command concerning the head part is finished.
  • the sixth command concerning the tail part is grasped as a command that can be started, as soon as execution of the fifth command concerning the tail part is finished.
  • the processing goes to the step SP 4 and waits for elapse of a predetermined time. In this manner, the processing waits for completion of a predetermined motion in the entire apparatus or componential parts. A next command can be executed at a suitable timing.
  • in the step SP 5 , the route search part 62 determines whether or not a motion route from the current position to an aimed position can be found on the graph, corresponding to the command, stored in the graph storage part 63 .
  • the route search part 62 determines whether or not there is a route that can reach an aimed state or motion from the current position (state) on the graph of the head part, as shown in FIG. 30.
  • the case where a route can be found is a case where there is a path (arc) that can reach an aimed state or motion from the current position (state). As shown in FIG. 30, this is a case where there are a plurality of oriented arcs a 0 and a 1 that can reach an aimed position from the current position.
  • the case where no route can be found is a case where there is no path to an instructed state or motion.
  • if no transit route from the current position to an aimed position is found, the processing goes to the step SP 6 . Otherwise, if a transit route is found, the processing goes to the step SP 7 . If a transit route to an aimed position (or motion) is found, the routes (arcs) a k (where k is an integer from 0 to n) are stored, and this becomes the information of the position transit plan.
  • in the step SP 6 , the command storage part 61 erases the command from the list, considering that no motion route is found. By thus erasing the command if no motion route is found, a subsequent command can be picked up. After thus erasing a command, the command storage part 61 determines again, in the step SP 3 , whether or not there is a command that can be started.
  • in the step SP 8 , whether or not i is n or less is determined.
  • the control mechanism part 43 performs reproduction of an arc a i , based on the position transit information S 18 supplied from the route search part 62 . For example, if the robot apparatus is at the current position (first position), as shown in FIG. 30, the motion of the first oriented arc a 0 is carried out.
  • the route search part 62 receives a notification of the end of the motion.
  • the graph storage part 63 updates the position on the graph.
  • in the step SP 9 , whose processing is carried out if i is greater than n, i.e., if the motion to the aimed position has been executed or if the arc (oriented arc or self-motion arc) as an aimed motion has been executed, the command storage part 61 erases the command which has just finished from the stored commands. Then, the processing returns to the step SP 3 , and the command storage part 61 determines again whether or not there is a command that can be started.
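  • Condensed into code, the loop of the steps SP 5 to SP 9 might read as follows (a sketch reusing the helpers above; control() stands for the control mechanism part and is assumed to return only after the end notification comes back):

      def run_command(graph, control, storage, part, current, goal):
          _, plan = shortest_route(graph, current, goal)   # SP 5: route search
          if plan is None:
              storage.next_for(part)                       # SP 6: discard the command
              return False
          for arc_name in plan:                            # SP 7/SP 8: arcs a_0 .. a_n
              control(arc_name)   # reproduce one arc; the graph position would be
                                  # updated when the end notification is received
          storage.next_for(part)                           # SP 9: erase the finished command
          return True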
  • the commands are stored into the command storage part 61 as has been described above. Since resources do not compete with each other, the respective componential parts of the robot apparatus 1 can perform simultaneous motions, based on such a command. Hence, with respect to the processing procedure (flow) as described above, flows of respective componential parts can exist in parallel with each other, so that a plurality of flows can be executed simultaneously. Accordingly, in case where a flow of a command concerning the entire apparatus is being processed, processing of flows of commands concerning respective componential parts is not carried out but is put in a standby state.
  • the feeling/instinct model part 40 of the controller 32 changes the states of the feeling and instinct of the robot apparatus 1 , based on supplied input information S 1 to S 3 .
  • the changes of the feeling and instinct of the robot apparatus 1 are reflected on actions of the robot apparatus 1 , thereby to let the robot apparatus 1 move autonomously, based on its own feeling and instinct.
  • a command from a user is inputted through a command receiving part 30 comprised of a remote controller receiving part 13 and a microphone 11 .
  • the present invention is not limited hitherto.
  • a computer may be connected to a robot apparatus 1 , and a command from a user can be inputted through the computer thus connected.
  • the above embodiment has been explained with respect to the case where the states of the feeling and instinct are determined with use of emotion units 50 A to 50 F indicating emotions such as “delight”, “sorrow”, “anger”, and the like, as well as desire units 51 A to 51 D indicating “desire for motion”, “desire for love”, and the like.
  • the present invention is not limited hitherto.
  • an emotion unit indicating “loneliness” may be added to the emotion units 50 .
  • a desire unit indicating a “desire for sleep” may be added to the desire units 51 .
  • the states of the feeling and the instinct may be determined by using emotion units and desire units constructed by various kinds of units or a various number of units.
  • the present invention has been explained with respect to a structure in which the robot apparatus 1 has a feeling model and an instinct model.
  • the present invention is not limited hitherto but the structure may include only the feeling model or instinct model.
  • the structure may include another model that will decide the action of an animal.
  • a next action is determined by the action determination mechanism part 41 , based on a command signal S 1 , an external information signal S 2 , an internal information signal S 3 , feeling/instinct state information S 10 , and action information S 12 .
  • the present invention is not limited hitherto but a next action may be determined on the basis of a part of the information of the command signal S 1 , external information signal S 2 , internal information signal S 3 , feeling/instinct state information S 10 , and action information S 12 .
  • a next action is determined with use of an algorithm called a finite automaton 57 .
  • the present invention is not limited hitherto, but an action may be determined with use of an algorithm called a state machine which has an infinite number of states.
  • a state may be newly generated every time when input information S 14 is supplied, and an action may be determined in accordance with the state thus generated.
  • finite automaton 57 is used to determine a next action.
  • the present invention is not limited hitherto but an action may be determined with use of an algorithm called probability finite automaton in which a plurality of states are selected as candidates of a transit destination, based on the input information S 14 currently supplied and the state at this time, and the state of the transit destination is determined at random by means of random numbers, among the plurality of states thus selected.
  • action command information S 16 indicates a position to which the current position can directly transit
  • the action command information S 16 is sent directly as position transit information S 18 to the control mechanism part 43 , without changes.
  • the action command information S 16 indicates a position to which the current position cannot directly transit
  • such position transit information S 18 that lets the position once transit to another position to which the current position can transit, and thereafter lets the position transit to the aimed position, is generated and sent to the control mechanism part 43 .
  • the present invention is not limited hitherto; action command information S 16 may be received and sent to the control mechanism part 43 only in case where the action command information S 16 indicates a position to which the current position can directly transit, while action command information S 16 indicating a position to which the current position cannot directly transit may be denied.
  • the above embodiment has been explained with respect to the case where the present invention is applied to a robot apparatus 1 .
  • the present invention is not limited hitherto but may be applied to other various robot apparatuses, such as robot apparatuses used for games or in the field of entertainment.
  • the present invention can be applied to a character which moves as computer graphics, e.g., an animation or the like using an articulated character.
  • the outer appearance of the robot apparatus 1 to which the present invention is applied is not limited to the structure as shown in FIG. 1 but may be arranged to be more similar to an actual dog, as shown in FIG. 32, or may be arranged to be a humanoid robot having a human shape.
  • a robot apparatus makes a motion corresponding to supplied input information, and comprises model change means including a model, which causes the motion, for determining the motion by changing the model, based on the input information. Therefore, the robot apparatus can autonomously act based on states of the feeling and instinct of the robot apparatus, by changing the model, based on input information, thereby to determine a motion.
  • a motion control method is to make a motion in accordance with supplied input information, and the motion is determined by changing a model which causes the motion, based on the input information.
  • the robot apparatus can act autonomously based on the states of its own feeling and instinct.
  • Another robot apparatus makes a motion in accordance with supplied input information, and comprises motion determination means for determining a next operation subsequent to a current motion, based on the current motion and the input information supplied next, said current motion corresponding to a history of input information supplied sequentially. Therefore, the robot apparatus autonomously can act based on the states of its own feeling and instinct, by determining a next motion subsequent to a current motion, based on the current motion corresponding to a history of input information supplied sequentially, and the input information supplied next.
  • Another motion control method is to make a motion in accordance with supplied input information, and a next motion subsequent to a current motion is determined, based on the current motion and the input information to be supplied next, the current motion corresponding to a history of input information supplied sequentially.
  • the robot apparatus can act autonomously, based on the state of its own feeling or instinct, for example.
  • another robot apparatus comprises: graph storage means for storing a graph which registers the positions and the motions and which is constructed by connecting the positions with the motions for letting the positions transit; and control means for searching a route from a current position to an aimed position or motion, on the graph, based on the action command information, and for letting the robot apparatus move, based on a search result, thereby to let the robot apparatus transit from the current position to the aimed position or motion. Therefore, a route from the current position to the aimed position or motion is searched on the graph by the control means, and a motion is made on the basis of the search result. The current position can thus be let transit to the aimed position or motion. In this manner, the robot apparatus can enrich its own expressions.
  • a route from a current position to an aimed position or motion is searched on a graph which registers positions and motions and which is constructed by connecting the positions with motions for letting the positions transit, and a motion is made, based on a search result, thereby to make transit from the current position to the aimed position or motion.

Abstract

The robot apparatus of the present invention autonomously makes natural motions. The robot apparatus is provided with a control means 32, which has a feeling/instinct model that causes a motion and changes the feeling/instinct model based on input information S1 to S3 thereby to determine a motion. As a result of this, the robot apparatus 1 can autonomously act based on the state of its own feeling/instinct. A robot apparatus which can autonomously make natural motions can thus be realized.

Description

    TECHNICAL FIELD
  • The present invention relates to a robot apparatus and a motion control method and is preferably applied to a robot apparatus which behaves like a quadruped. [0001]
  • BACKGROUND ART
  • Proposals and developments have been made in quadruped robot apparatuses, articulated robots, and animations using characters which move via computer graphics, which operate in accordance with commands from a user or with environments. These robot apparatuses and animations (which are collectively referred to as robot apparatuses) carry out a series of motions based on a command from a user. [0002]
  • For example, a so-called pet robot as a robot apparatus having a shape similar to a quadruped like a dog takes a position of “Down” when it receives a command of “Down”, or it always gives a hand whenever a user puts a hand before the mouth of the dog. [0003]
  • Meanwhile, it is desirable that the motion of the robot apparatus which simulates an animal should be similar to that of an actual animal as much as possible. [0004]
  • However, conventional robot apparatuses do only predetermined motions based on commands from a user or the environment. Thus, since conventional robot apparatuses do not move autonomously unlike actual animals, it has not been possible to satisfy users' requests for a robot apparatus that can autonomously determine its behaviors. [0005]
  • Further, robot apparatuses reach an aimed position or motion through a predetermined position or motion. However, the motion expression of robot apparatuses becomes richer if a plurality of positions or motions to be carried out during transition to an aimed position or motion are prepared. [0006]
  • If there are prepared a plurality of ways of positions or motions which a robot apparatus takes halfway before reaching an aimed position or motion, it is preferred that a position or motion which the robot apparatus passes is selected optimally to transit to the aimed position or motion. [0007]
  • DISCLOSURE OF THE INVENTION
  • The present invention has been made in view of the above situation, and has an object of proposing a robot apparatus which autonomously makes natural motions and a motion control method thereof. [0008]
  • Also, the present invention has been made in view of the above situation, and has an object of providing a robot apparatus whose positions and motions are optimized during transition, and a motion control method thereof. [0009]
  • More specifically, a robot apparatus according to the present invention makes a motion corresponding to supplied input information, and comprises model change means including a model for causing the motion, for determining the motion by changing the model, based on the input information. [0010]
  • The robot apparatus having this structure has a model which causes a motion, and changes the model based on input information thereby to determine a motion. Therefore, if the model is a feeling model or an instinct model, the robot apparatus autonomously acts based on states of the feeling and the instinct of the robot apparatus. [0011]
  • A motion control method according to the present invention is to make a motion in accordance with supplied input information, and the motion is determined by changing a model which causes the motion, based on the input information. [0012]
  • In this motion control method, a motion is determined by changing the model, based on input information. If the model is a feeling model or an instinct model, the robot apparatus can act autonomously based on its own feeling or instinct. [0013]
  • Another robot apparatus according to the present invention makes a motion in accordance with supplied input information, and comprises motion determination means for determining a next operation subsequent to a current motion, based on the current motion and the input information supplied next, said current motion corresponding to a history of input information supplied sequentially. [0014]
  • The robot apparatus having this structure determines a next motion subsequent to a current motion, based on the current motion corresponding to a history of input information supplied sequentially, and input information to be supplied next. Therefore, the robot apparatus autonomously acts based on the states of its own feeling or instinct. [0015]
  • Another motion control method according to the present invention is to make a motion in accordance with supplied input information, and a next motion subsequent to a current motion is determined, based on the current motion and the input information to be supplied next, the current motion corresponding to a history of input information supplied sequentially. [0016]
  • In this motion control method, a next motion subsequent to a current motion is determined, based on the current motion corresponding to the history of input information sequentially supplied, and input information to be supplied next. Therefore, the robot apparatus can act autonomously, based on the state of its own feeling or instinct. [0017]
  • Further, another robot apparatus according to the present invention comprises: graph storage means for storing a graph which registers the positions and the motion and which is constructed by connecting the positions with the motion for letting the positions transit; and control means for searching a route from a current position to an aimed position or motion, on the graph, based on the action command information, and for letting the robot apparatus move, based on a search result, thereby to let the robot apparatus transit from the current position to the aimed position or motion. [0018]
  • The robot apparatus having this structure transits to an aimed position or motion instructed by the action command information, based on a graph which registers positions and motions stored in graph storage means and which is constructed by connecting the positions with motions for letting the positions transit. Specifically, the robot apparatus searches a route from the current position to an aimed position or motion, on the graph, based on action command information. Based on the search result, a motion is made to transit to an aimed position or motion from the current position. [0019]
  • Further, in another motion control method according to the present invention, based on action command information, a route from a current position to an aimed position or motion is searched on a graph which registers positions and motions and which is constructed by connecting the positions with motions for letting the positions transit, and a motion is made, based on a search result, thereby to make transit from the current position to the aimed position or motion. [0020]
  • That is, in this motion control method, transit to an aimed position or motion instructed by the action command information is made, based on a graph which registers positions and motions and which is constructed by connecting the positions with motions for letting the positions transit.[0021]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a perspective view showing an embodiment of a robot apparatus according to the present invention. [0022]
  • FIG. 2 is a block diagram showing a circuit diagram of the robot apparatus. [0023]
  • FIG. 3 is a schematic diagram showing data processing in a controller. [0024]
  • FIG. 4 is a schematic diagram showing data processing by a feeling/instinct model part. [0025]
  • FIG. 5 is a schematic diagram showing data processing by the feeling/instinct model part. [0026]
  • FIG. 6 is a schematic diagram showing data processing by the feeling/instinct model part. [0027]
  • FIG. 7 is a view showing transit of states according to finite automaton in an action determination mechanism part. [0028]
  • FIG. 8 is a block diagram showing a structure of an action determination mechanism part and the like which are used for explaining generation of action command information. [0029]
  • FIG. 9 is a view used for explaining a case where a state is determined by probability. [0030]
  • FIG. 10 shows a table in which relationships between transit probabilities and states to which the positions transit are registered. [0031]
  • FIG. 11 shows a graph of position transit in the position transit mechanism part. [0032]
  • FIG. 12 shows a specific example of a graph of position transit. [0033]
  • FIG. 13 is a view showing transit of states, used for explaining that a neutral position, to which the current position is let transit if the current position cannot be grasped, is provided and that recovery from a stumble-over is enabled. [0034]
  • FIG. 14 is a view used for explaining a route search with the distance used as an index. [0035]
  • FIG. 15 is a view used for explaining a case of making a route search by classification. [0036]
  • FIG. 16 is a view used for explaining a case where a route search is made by classification. [0037]
  • FIG. 17 is a top view of a robot apparatus, used for explaining a case where a walking direction is set as a parameter. [0038]
  • FIG. 18 shows a table showing parameters and contents of motions. [0039]
  • FIG. 19 is a view used for explaining a case where another motion is synchronized with a motion during transit between positions. [0040]
  • FIG. 20 is a view used for explaining a case where similar motions are executed at different positions. [0041]
  • FIG. 21 is a perspective view showing a robot apparatus. [0042]
  • FIG. 22A and FIG. 22B are views used for explaining a case where positions are let transit between the entire apparatus and parts, with a basic position inserted therebetween. [0043]
  • FIG. 23 is a view used for explaining a case of executing an aimed motion after the current position is once let transit to a basic position when the current position relates to the entire apparatus and the aimed motion relates to a part. [0044]
  • FIG. 24 is a view used for explaining a case of executing an aimed motion after the current position is once let transit to a basic position when the aimed motion relates to the entire apparatus. [0045]
  • FIGS. 25A and 25B are views used for explaining processing of inserting a command. [0046]
  • FIG. 26 is a view showing a command storage part which can store commands corresponding to the entire apparatus and componential parts. [0047]
  • FIG. 27 is a view used for explaining an example of processing form by a command storage part which can store commands corresponding to the entire apparatus and componential parts. [0048]
  • FIG. 28 is a block diagram showing a motion route search part which makes a route search. [0049]
  • FIG. 29 is a flowchart showing a series of processing until a motion is executed in accordance with a command. [0050]
  • FIG. 30 is a view used for explaining that the current position transits through a plurality of oriented arcs to an aimed motion, on the graph of a head part. [0051]
  • FIG. 31 is a view showing another example to which the present invention is applied, wherein a character moves as a computer graphic. [0052]
  • FIG. 32 is a perspective view showing another embodiment of a robot apparatus according to the present invention.[0053]
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • In the following, embodiments of the present invention will be explained in more detail. [0054]
  • (1) Structure of Robot Apparatus [0055]
  • The entire robot apparatus 1 is constructed as shown in FIG. 1 and comprises a head part 2 corresponding to a head, a body part 3 corresponding to the trunk, leg parts 4A, 4B, 4C, and 4D corresponding to legs, and a tail part 5 corresponding to a tail. The robot apparatus 1 moves the head part 2, leg parts 4A to 4D, and tail part 5 in relation to the body part 3, thereby to move like an actual quadruped. [0056]
  • An image recognition part 10 equivalent to eyes and constructed, for example, by a CCD (Charge Coupled Device) camera for picking up images, a microphone 11 equivalent to ears for collecting voices, and a speaker 12 equivalent to a mouth for generating a voice are respectively equipped at predetermined positions of the head part 2. Also, the head part 2 is equipped with a remote controller receiving part 13 for receiving commands transmitted through a remote controller (not shown) from a user, a touch sensor 14 for detecting contact of a hand of the user or the like, and an LED (Light Emitting Diode) 15 constructed by a light emitting means. [0057]
  • The body part 3 is equipped with a battery 21 at a position corresponding to its abdomen, and an electronic circuit (not shown) or the like is contained inside the body part 3. [0058]
  • Joint parts of the leg parts 4A to 4D, connecting parts between the leg parts 4A to 4D and the body part 3, a connecting part between the body part 3 and the head part 2, and a connecting part between the body part 3 and the tail part 5 are connected by their own actuators 23A to 23N. These joint and connecting parts are driven on the basis of control by the electronic circuit contained in the body part 3. In the robot apparatus 1, the actuators 23A to 23N are driven to shake and nod the head part 2, wag the tail part 5, and move the leg parts 4A to 4D to walk. The robot apparatus 1 thus behaves like an actual quadruped. [0059]
  • For example, the robot apparatus 1 constructed as described above has the following features, which will be explained in detail later. [0060]
  • When the robot apparatus 1 is instructed to change its position from a position (first position) to another position (second position), it does not directly change to the second position from the first position but transits through a prepared natural position. [0061]
  • Also, when the robot apparatus reaches a given position during the position transition, it can receive a notification. [0062]
  • The robot apparatus 1 is constructed by parts of a head, legs, and a tail, and can control the positions of these parts independently. For example, the positions of the head and legs can be controlled independently from each other. Also, the position of the entire apparatus including the head, legs, and tail can be managed separately from the respective parts. [0063]
  • It is also possible to transfer parameters which specify the details of motions to motion commands for the robot apparatus 1. [0064]
  • The robot apparatus 1 has the above-described features. More features will now be explained below, including those described above. [0065]
  • (2) Circuit Configuration of Robot Apparatus [0066]
  • The circuit configuration of the robot apparatus 1 is as shown in FIG. 2, for example. The head part 2 comprises a command receiving part 30 constructed by a microphone 11 and a remote controller receiving part 13, an external sensor 31 constructed by an image recognition part 10 and a touch sensor 14, a speaker 12, and an LED 15. The body part 3 has a battery 21 and internally has a controller 32 for controlling the operation of the entire robot apparatus 1, and an inner sensor 35 constructed by a battery sensor 33 for detecting the residue of the battery 21 and a heat sensor 34 for detecting heat generated inside the robot apparatus 1. Further, actuators 23A to 23N are provided at predetermined positions of the robot apparatus 1, respectively. [0067]
  • The command receiving part 30 serves to receive commands such as “Walk”, “Down”, “Chase Ball”, and the like that are supplied to the robot apparatus 1 from the user. This part 30 is constructed by the remote controller receiving part 13 and the microphone 11. [0068]
  • The remote controller receiving part 13 receives a desired command inputted by a remote controller (not shown) operated by a user. For example, transmission of a command from the remote controller is achieved by infrared light. The remote controller receiving part 13 receives the infrared light, generates a receive signal S1A, and sends it to the controller 32. [0069]
  • The remote controller is not limited to the type using infrared light but may be a type which supplies a command to the robot apparatus 1 by a tone scale. In this case, the robot apparatus 1 is arranged to perform processing in correspondence with the tone scale from the remote controller inputted through the microphone 11. [0070]
  • When a user gives a voice corresponding to a desired command, the microphone 11 collects the voice given by the user, generates a voice signal S1B, and sends it to the controller 32. [0071]
  • The command receiving part 30 generates a command signal S1 containing the receive signal S1A and the voice signal S1B in accordance with the command given to the robot apparatus 1 by the user. The part 30 supplies this command signal to the controller 32. [0072]
  • The touch sensor 14 of the external sensor 31 serves to detect action on the robot apparatus 1 from the user, such as “Rub”, “Hit”, and the like. For example, when a user touches the touch sensor 14 to make a desired action, a contact detection signal S2A corresponding to the action is generated and sent to the controller 32. [0073]
  • The image recognition part 10 of the external sensor 31 recognizes the environment around the robot apparatus 1. As a result, the part 10 detects environmental information around the robot apparatus 1, such as “Dark”, “Presence of a favorite toy”, or the like, or detects an action of another robot apparatus, such as “Another robot is running”, or the like. This image recognition part 10 sends an image signal S2B obtained as a result of picking up an environmental image, to the controller 32. [0074]
  • The external sensor 31 generates an external information signal S2 containing the contact detection signal S2A and the image signal S2B, in accordance with external information thus supplied from outside of the robot apparatus 1, and sends it to the controller 32. [0075]
  • The internal sensor 35 serves to detect an inner state of the robot apparatus 1 itself, such as “Hungry”, which means a low battery, “Fever”, or the like. This sensor 35 is constructed by the battery sensor 33 and the heat sensor 34. [0076]
  • The battery sensor 33 serves to detect the residue of the battery 21 which supplies power to respective circuits of the robot apparatus 1. This battery sensor 33 sends a battery capacitance detection signal S3A as a result of detection, to the controller 32. The heat sensor 34 serves to detect heat inside the robot apparatus 1. This heat sensor 34 sends a heat detection signal S3B as a result of detection to the controller 32. [0077]
  • The internal sensor 35 thus generates an internal information signal S3 containing the battery capacitance detection signal S3A and the heat detection signal S3B in accordance with internal information of the robot apparatus 1, and sends it to the controller 32. [0078]
  • Based on the command signal S1 supplied from the command receiving part 30, the external information signal S2 supplied from the external sensor 31, and the internal information signal S3 supplied from the internal sensor 35, the controller 32 generates control signals S5A to S5N for driving the actuators 23A to 23N, respectively. The controller 32 sends these signals to the actuators, respectively, to drive them so that the robot apparatus 1 operates. [0079]
  • At this time, the controller 32 generates a voice signal S10 and a light emitting signal S11 to be outputted to the outside if necessary. Of these signals, the voice signal S10 is outputted to the outside through the speaker 12, and the light emitting signal S11 is sent to the LED 15 to obtain a desired light emitting output (e.g., flicker, color change, or the like). The user is thus notified of necessary information. For example, the robot apparatus 1 notifies the user of its own feeling. It is possible to provide an image display part for displaying an image, in place of the LED 15. Information necessary for a user, such as feeling or the like, can then be notified by displaying a desired image. [0080]
  • (3) Processing in Controller [0081]
  • Based on programs previously stored in a predetermined storage area, the controller 32 performs software processing on the command signal S1 supplied from the command receiving part 30, the external information signal S2 supplied from the external sensor 31, and the internal information signal S3 supplied from the internal sensor 35. The controller 32 supplies a control signal S5 obtained as a result of the software processing to the actuators 23. [0082]
  • The operations of the actuators 23 at this time are expressed as the motion of the robot apparatus 1. The present invention has as its object to enrich such expressions. [0083]
  • As shown in FIG. 3, the contents of data processing performed by the controller 32 can be functionally sectioned into a feeling/instinct model part 40 as a feeling/instinct model change means, an action determination mechanism part 41 as an action determination means, a position transit mechanism part 42 as a position transit means, and a control mechanism part 43. The command signal S1, the external information signal S2, and the internal information signal S3 supplied from the outside are inputted to the feeling/instinct model part 40 and the action determination mechanism part 41. The controller 32 schematically functions as follows. [0084]
  • The feeling/instinct model part 40 determines states of the feeling and instinct, based on the command signal S1, the external information signal S2, and the internal information signal S3. Further, the action determination mechanism part 41 determines a next motion (action), based on feeling/instinct state information S10 obtained by the feeling/instinct model part 40 in addition to the command signal S1, the external information signal S2, and the internal information signal S3. The position transit mechanism part 42 in a rear stage prepares a position transit plan for transiting to the next motion (action) determined by the action determination mechanism part 41. Note that the information concerning the motion (action) determined by the action determination mechanism part 41 is fed back to the feeling/instinct model part 40, and the feeling/instinct model part 40 determines the states of the feeling and instinct, referring to the motion (action) determined. That is, the feeling/instinct model part 40 determines the instinct and feeling, referring also to a motion (action) result. [0085]
  • The control mechanism part 43 controls respective operation parts, based on position transit information S18 supplied from the position transit mechanism part 42 on the basis of the position transit plan. After the position is actually transited, the next motion (action) determined by the action determination mechanism part 41 is actually carried out. [0086]
  • That is, the robot apparatus 1 determines a next motion (action) based on the feeling/instinct by means of the above-described mechanisms of the controller 32, prepares a transit plan until the motion (action) is executed, lets the position transit based on the transit plan, and actually executes the motion (action) based on the feeling/instinct. In the following, the above-described respective structural parts of the controller 32 will be explained. [0087]
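  • To make the data flow just described concrete, the following is a minimal sketch of the controller pipeline in Python. The class and method names (Controller, update, decide, plan, execute) are illustrative assumptions; the patent does not specify an implementation language or interface.

```python
# Hypothetical sketch of the controller's data flow; names are illustrative.
class Controller:
    def __init__(self, feeling_instinct, action_determiner, position_transiter, control):
        self.feeling_instinct = feeling_instinct      # feeling/instinct model part 40
        self.action_determiner = action_determiner    # action determination mechanism part 41
        self.position_transiter = position_transiter  # position transit mechanism part 42
        self.control = control                        # control mechanism part 43

    def step(self, s1, s2, s3):
        # Both front-stage parts receive the command (S1), external (S2),
        # and internal (S3) signals.
        s10 = self.feeling_instinct.update(s1, s2, s3)             # feeling/instinct state (S10)
        s16, s12 = self.action_determiner.decide(s1, s2, s3, s10)  # next action (S16) + action info (S12)
        self.feeling_instinct.feed_back(s12)                       # action result fed back
        s18 = self.position_transiter.plan(s16)                    # position transit plan (S18)
        self.control.execute(s18)                                  # drive the actuators
```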
  • (3-1) Processing in Feeling/Instinct Model Part [0088]
  • The feeling/instinct model part 40 can be roughly sectioned into an emotion group 50, which constructs a feeling model, and a desire group 51, which constructs an instinct model prepared as a model having a property different from the emotion model. [0089]
  • The feeling model is constructed by a feeling parameter of a certain value and is a model for expressing a feeling defined for the robot apparatus through a motion corresponding to the value of the feeling parameter. The value of the feeling parameter increases or decreases mainly based on an external input signal (external factor) which expresses “being hit” or “being scolded” and which is detected by a pressure sensor, a visual sensor, or the like. Of course, the feeling parameter changes based on an internal input signal such as a battery residue, a body temperature, or the like, in some cases. [0090]
  • The instinct model is constructed by an instinct parameter having a certain value and is a model for expressing an instinct (desire) defined for the robot apparatus, through a motion corresponding to the value of the instinct parameter. The value of the instinct parameter increases and decreases based on an internal input signal which expresses “a desire for exercise” based on its action history or “a desire for electric charge (hungry)” based on the battery residue. Of course, the instinct parameter may change based on an external input signal (external factor), like the feeling parameter. [0091]
  • Each of the feeling model and the instinct model is constructed by plural types of models of an equal property. That is, the emotion group 50 includes emotion units 50A to 50F as independent feeling models having an equal property, and the desire group 51 includes desire units 51A to 51D as independent instinct models having an equal property. [0092]
  • The emotion group 50 includes, for example, an emotion unit 50A indicating a feeling of “delight”, an emotion unit 50B indicating a feeling of “sorrow”, an emotion unit 50C indicating a feeling of “angry”, an emotion unit 50D expressing “surprise”, an emotion unit 50E indicating a feeling of “fear”, and an emotion unit 50F indicating a feeling of “hate”. The desire group 51 includes, for example, a desire unit 51A indicating a desire of “movement instinct”, a desire unit 51B indicating a desire of “love instinct”, a desire unit 51C indicating a desire of “recharge instinct”, and a desire unit 51D indicating a desire of “search instinct”. [0093]
  • In each of the emotion units 50A to 50F, the level of the emotion is indicated by an intensity (emotion parameter) of 0 to 100 levels. The intensity of each emotion changes moment by moment, based on the command signal S1, the external information signal S2, and the internal information signal S3 supplied. Thus, the feeling/instinct model part 40 combines the intensities of the emotion units 50A to 50F, which change moment by moment, to express the state of the feeling of the robot apparatus 1, thereby modeling the time-based change of the emotion. [0094]
  • It is also arranged such that desired emotion units influence each other to change the intensities. For example, the emotion units are combined with each other such that the units restrain or stimulate each other thereby to change the intensities while influencing each other. [0095]
  • Specifically, the emotion unit 50A for “delight” and the emotion unit 50B for “sorrow” can be combined so as to mutually restrain each other, as shown in FIG. 5. In this case, when the user praises the robot apparatus, the intensity of the emotion unit 50A for “delight” is increased, and the intensity of the emotion unit 50B for “sorrow” is decreased in accordance with the increase of the intensity of the emotion unit 50A for “delight”, even if such input information S1 to S3 as will change the intensity of the emotion unit 50B for “sorrow” is not supplied. Similarly, when the intensity of the emotion unit 50B for “sorrow” increases, the intensity of the emotion unit 50A for “delight” is decreased. [0096]
  • In addition, the emotion unit 50B for “sorrow” and the emotion unit 50C for “angry” may be combined so as to stimulate each other. In this case, when the user hits the robot apparatus, the intensity of the emotion unit 50C for “angry” is increased, and the intensity of the emotion unit 50B for “sorrow” is increased in accordance with the increase of the intensity of the emotion unit 50C for “angry”, even if such input information as will increase the intensity of the emotion unit 50B for “sorrow” is not supplied. Similarly, when the intensity of the emotion unit 50B for “sorrow” is increased, the intensity of the emotion unit 50C for “angry” is increased in accordance with the increase of the intensity of the emotion unit 50B for “sorrow”. [0097]
  • Since desired emotion units thus influence each other to change the intensities, the intensity of one of the combined emotion units changes as the intensity of the other emotion unit is changed. A robot apparatus 1 which has a natural feeling can thereby be realized. [0098]
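  • As one way to picture this mutual restraint, here is a minimal sketch in Python of two coupled emotion units, assuming a simple linear coupling; the coupling coefficient and the increment are illustrative assumptions, not values from the patent.

```python
def clamp(x):
    """Keep an intensity within the 0 to 100 range used by the units."""
    return max(0.0, min(100.0, x))

class EmotionUnit:
    def __init__(self, name, intensity=0.0):
        self.name = name
        self.intensity = intensity

def praise(delight, sorrow, amount=10.0, restraint=0.5):
    """Being praised raises "delight"; the mutually restraining "sorrow"
    unit is lowered in proportion, with no direct input of its own."""
    delight.intensity = clamp(delight.intensity + amount)
    sorrow.intensity = clamp(sorrow.intensity - restraint * amount)

delight = EmotionUnit("delight", 40.0)
sorrow = EmotionUnit("sorrow", 30.0)
praise(delight, sorrow)  # delight: 40 -> 50, sorrow: 30 -> 25
```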
  • In the desire units 51A to 51D, the levels of desires are expressed by intensities (instinct parameters) of 0 to 100 levels, like the emotion units 50A to 50F. Based on the supplied command signal S1 and external information signal S2, the intensities of the desires are changed moment by moment. Thus, by combining the intensities of the desire units 51A to 51D, which change moment by moment, the feeling/instinct model part 40 expresses the state of the instinct of the robot apparatus 1, and the time-based change of the instinct is modeled. [0099]
  • Further, like the case of combining emotion units with each other, desired desire units can influence each other thereby to change their intensities. For example, desire units can be combined so as to mutually restrain or mutually stimulate each other, so that the desired desire units influence each other thereby changing their intensities. In this manner, as the intensity of one of the desired desire units is changed, the intensity of another one of the combined desire units changes accordingly. It is thus possible to realize a robot apparatus 1 which has a natural instinct. [0100]
  • Further, the units can influence each other between the emotion group 50 and the desire group 51, so that their intensities can be changed. For example, changes of the intensities of the desire unit 51B for “love instinct” and the desire unit 51C for “recharge instinct” in the desire group 51 influence changes of the intensities of the emotion unit 50B for “sorrow” and the emotion unit 50C for “angry” in the emotion group 50. Or, a change of the intensity of the desire unit 51C for “recharge instinct” influences changes of the intensities of the emotion unit 50B for “sorrow” and the emotion unit 50C for “angry”. In this manner, it is possible to express states in which the feelings of “angry” and “sorrow” are restrained when the “love instinct” is satisfied, and the feelings of “angry” and “sorrow” are intensified when the “recharge instinct” is not satisfied. Thus, mutual actions between the feelings and desires can express a state in which feelings and instincts complicatedly influence each other. [0101]
  • As has been described above, the feeling/instinct model part 40 changes each of the intensities of the emotion units 50A to 50F and the desire units 51A to 51D by the input information S1 to S3 containing the command signal S1, the external information signal S2, and the internal information signal S3, or by a mutual action between the units in the emotion group 50 and/or between the units in the desire group 51, and/or by a mutual action between the units in the emotion group 50 and the units in the desire group 51. [0102]
  • Further, the feeling/instinct model part 40 determines the state of feeling by combining the intensities of the changed emotion units 50A to 50F, and also determines the state of instinct by combining the changed intensities of the desire units 51A to 51D. The part 40 further sends the determined states of the feeling and instinct, as feeling/instinct state information S10, to the action determination mechanism part 41. [0103]
  • Also, the feeling/instinct model part 40 is supplied with action information S12, which indicates the contents of current and past actions of the robot apparatus 1, from the action determination mechanism part 41. For example, if the action of walking is determined by the action determination mechanism part 41, action information S12 is supplied as information which indicates that “it has walked for a long time”. [0104]
  • By thus feeding back the action information S12, different feeling/instinct state information S10 can be generated in correspondence with actions of the robot apparatus 1 even if one same input information S1 to S3 is supplied. More specifically, the action information S12 to be fed back is referred to when determining the states of feeling and instinct, by the structure as follows. [0105]
  • As shown in FIG. 6, in the feeling/instinct model part 40, there are provided intensity increase/decrease means 55A to 55C for generating intensity information S14A to S14C for increasing/decreasing the intensities of the emotion units 50A to 50C, respectively, based on action information S12 indicating the action of the robot apparatus 1 and the input information S1 to S3. The intensities of the emotion units 50A to 50C are respectively increased/decreased in accordance with the intensity information S14A to S14C outputted from the intensity increase/decrease means 55A to 55C. [0106]
  • For example, if the robot apparatus 1 is rubbed when it greets a user, the feeling/instinct model part 40 supplies the intensity increase/decrease means 55A with action information S12 indicating that it greeted the user and input information S1 to S3 indicating that it is rubbed by the user, so that the intensity of the emotion unit 50A for “delight” is increased. Meanwhile, the feeling/instinct model part 40 does not change the intensity of the emotion unit 50A for “delight” when the robot apparatus 1 is rubbed at its head during execution of some work, i.e., when the intensity increase/decrease means 55A is supplied with action information S12 indicating that it is executing the work and input information S1 to S3 indicating that it is rubbed at its head. For example, the intensity increase/decrease means 55A is constructed in the form of a function or table which generates the intensity information S14A based on the action information S12 and the input information S1 to S3. The same applies to the other intensity increase/decrease means 55B and 55C. [0107]
  • Thus, the feeling/instinct model part 40 comprises the intensity increase/decrease means 55A to 55C. By determining the intensities of the emotion units 50A to 50C with reference to not only the input information S1 to S3 but also the action information S12 indicating the current or past action of the robot apparatus 1, it is possible to avoid generation of an unnatural feeling such as would increase the intensity of the emotion unit 50A for “delight” whenever the user rubs the robot apparatus 1 as a mischief. The feeling/instinct model part 40 is arranged such that each of the intensities of the desire units 51A to 51C is also increased/decreased, based on the input information S1 to S3 and the action information S12, like the case of the emotion units 50A to 50C. [0108]
  • The present embodiment has been explained with reference to an example in which the emotion units 50A to 50C for “delight”, “sorrow”, and “angry” comprise the intensity increase/decrease means 55A to 55C. However, the present invention is not limited thereto. Needless to say, the other emotion units 50D to 50F for “surprise”, “fear”, and “hate” may comprise intensity increase/decrease means. [0109]
  • As has been described above, the intensity increase/decrease means 55A to 55C generate and output the intensity information S14A to S14C in accordance with predetermined parameters, upon input of the input information S1 to S3 and the action information S12. Therefore, if different parameter values are respectively set in different robot apparatuses 1, the robot apparatuses can have different personalities; e.g., a touchy robot apparatus, a cheerful robot apparatus, and the like can be provided. [0110]
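  • The function-or-table form of an intensity increase/decrease means could look like the following sketch, where the delta applied to the “delight” unit depends on both the action information S12 and the input event; the keys and values are illustrative assumptions, and giving each apparatus its own table would yield the different personalities mentioned above.

```python
# Hypothetical table form of intensity increase/decrease means 55A: the
# change applied to "delight" depends on the current action (S12) and the
# input event (from S1 to S3). Entries are illustrative only.
DELIGHT_DELTAS = {
    ("greeting", "rubbed"): +10.0,  # rubbed while greeting: delight rises
    ("working",  "rubbed"):   0.0,  # rubbed during a task: no change
}

def delight_delta(action_info, input_event):
    # Combinations not listed leave the intensity unchanged.
    return DELIGHT_DELTAS.get((action_info, input_event), 0.0)
```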
  • (3-2) Processing in Action Determination Mechanism Part [0111]
  • The action determination mechanism part 41 is a section which determines a next motion (action) based on various information. Specifically, the action determination mechanism part 41 determines a next motion (action), based on the input information S14 containing the feeling/instinct state information S10 and the action information S12, and sends the contents of the determined motion (action), as action command information S16, to the position transit mechanism part 42. [0112]
  • More specifically, as shown in FIG. 7, the action determination mechanism part 41 uses an algorithm called a probability finite automaton 57 having a finite number of states, which determines a next motion by expressing the history of the input information S14 supplied in the past as operation states (hereinafter called states), and by letting a corresponding state transit to another state, based on the currently supplied input information S14 and the state at this time. [0113]
  • Thus, the action determination mechanism part 41 lets the state transit every time input information S14 is supplied, and determines an action in correspondence with the state to which the state has transited from a previous one. In this manner, the motion can be determined by referring to not only the current input information S14 but also the past input information S14. [0114]
  • As a result, for example, if input information S14 indicating that “the ball has been lost” is supplied in a state ST1 in which “the robot apparatus 1 is chasing a ball”, the state transits to a state ST5 indicating “standing”. On the other hand, if input information S14 indicating “Stand up!” is supplied in a state ST2 in which the robot is “sleeping”, the state transits to a state ST4 indicating “standing”. Accordingly, the states ST4 and ST5 are different from each other even though one same action is taken, because the history of the past input information S14 differs between these states. [0115]
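  • A minimal sketch of such history-dependent transitions, using the ST1 to ST5 example above; the state labels and transition keys are written out as strings purely for illustration.

```python
# History-dependent state transitions: "standing" is reached as two
# distinct states (ST4 vs. ST5) because the prior state differs.
TRANSITIONS = {
    ("ST1_chasing_ball", "ball lost"): "ST5_standing",
    ("ST2_sleeping",     "Stand up!"): "ST4_standing",
}

def next_state(state, input_info):
    # With no registered transition, the current state is kept.
    return TRANSITIONS.get((state, input_info), state)

assert next_state("ST1_chasing_ball", "ball lost") == "ST5_standing"
assert next_state("ST2_sleeping", "Stand up!") == "ST4_standing"
```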
  • In practice, the action determination mechanism part 41 lets the current state transit to a next state when existence of a predetermined trigger is detected. A specific example of the trigger is the fact that the time for which the action of the current state has been kept executed reaches a constant value, the fact that specific input information S14 is inputted, or the fact that, among the intensities of the emotion units 50A to 50F and desire units 51A to 51D indicated by the feeling/instinct state information S10, the intensity of a desired unit exceeds a predetermined threshold value. [0116]
  • For example, the action determination mechanism part 41 selects the state of a transit destination, based on whether or not the intensity of a desired unit exceeds a predetermined threshold value, among the intensities of the emotion units 50A to 50F and desire units 51A to 51D indicated by the feeling/instinct state information S10 supplied from the feeling/instinct model part 40. In this manner, even if one same command signal S1 is inputted, the state can transit to different states in correspondence with the intensities of the emotion units 50A to 50F and desire units 51A to 51D. [0117]
  • Accordingly, the action determination mechanism part 41 generates action command information S16 for letting the robot apparatus itself “shake” in response to the palm of a hand set in front of its eyes, and sends this information to the position transit mechanism part 42, if the action determination mechanism part 41 detects, based on a supplied external information signal S2, that the palm of a user's hand is set in front of the eyes of the robot apparatus, detects, based on the feeling/instinct state information S10, that the intensity of the emotion unit 50C for “angry” is equal to or lower than a predetermined threshold value, and detects that the robot apparatus is “not hungry”, i.e., the battery voltage is equal to or higher than a predetermined threshold value. [0118]
  • Likewise, the action determination mechanism part 41 generates action command information S16 for letting the robot apparatus 1 “lick the palm of the hand”, if the action determination mechanism part 41 detects that the palm of a user's hand is set in front of the eyes, that the intensity of the emotion unit 50C for “angry” is equal to or lower than a predetermined threshold value, and that the robot apparatus is “hungry (recharge is needed)”, i.e., the battery voltage is lower than a predetermined threshold value. [0119]
  • If the action determination mechanism part 41 detects that the palm is set in front of the eyes and that the intensity of the emotion unit 50C for “angry” is equal to or higher than a predetermined threshold value, the action determination mechanism part 41 generates action command information S16 for letting the robot apparatus 1 act as if it “looks away”, and sends it to the position transit mechanism part 42, regardless of whether or not the robot apparatus is “hungry”, i.e., regardless of whether or not the battery voltage is equal to or higher than a predetermined threshold value. [0120]
  • As described above, the action determination mechanism part 41 determines a next action based on the input information S14. For example, this part 41 holds a plurality of contents of actions to be determined, e.g., “action 1”, “action 2”, “action 3”, “action 4”, . . . “action n”. For example, the “action 1” contains a motion content for an action of kicking a ball, the “action 2” contains a motion content for an action of expressing a feeling, the “action 3” contains a motion content for an action of autonomous search, the “action 4” contains a motion content for an action of avoiding an obstacle, and the “action n” contains a motion content for an action of notifying a small residue of battery. [0121]
  • From the plurality of information items thus prepared, a selection is made in accordance with the probability finite automaton 57, based on the input information S14. A next action is thereby determined. [0122]
  • The selection (determination) of an action is practically carried out by a selection module 44 as shown in FIG. 8. The selection module 44 outputs the selection result as action command information S16 to the position transit mechanism part 42, and as action information S12 to the feeling/instinct model part 40 and the action determination mechanism part 41. For example, the selection module 44 stands a flag on a determined action, and outputs the information thereof as action information S12 and action command information S16 to the action determination mechanism part 41 and the position transit mechanism part 42. [0123]
  • The action determination mechanism part 41 determines an action based on the action information S12 in addition to the external information S21 (the command signal S1 and the external information signal S2) or the like and the internal information S22 (the internal information signal S3 and the feeling/instinct state information S10) or the like. In this manner, a next action can be determined in consideration of a previous action. [0124]
  • In addition, the feeling/instinct model part 40 changes the states of the feeling and instinct with reference to the action information S12 thus fed back, as described above. As a result of this, the feeling/instinct model part 40 can generate different feeling/instinct state information S10 even if equal input information S1 to S3 is supplied, as described above. [0125]
  • A group ID can be appended to the information indicating the motion contents of the “action 1”, “action 2”, “action 3”, “action 4”, . . . “action n”. The group ID is information common to actions in one same category. For example, in a case where a plurality of patterns of actions are included in the action of “kick a ball”, one same group ID is added to each of the plurality of actions. By thus appending one same group ID to each of the actions in one same category, the actions in the one same category can be processed as a group. [0126]
  • For example, when an action is selected by the selection module 44, one same group ID is issued for any action selected from one same category. Then, the group ID appended to the selected action is sent to the feeling/instinct model part 40, so that the feeling/instinct model part 40 can determine the states of the feeling and instinct. [0127]
  • In addition, the action determination mechanism part 41 determines parameters of the action to be executed in the state as a transit destination, such as the speed of walking, the magnitudes of motions when hands and legs are moved, the pitch of a tone when a tone is generated, and the like, based on the intensity of a desired unit among the intensities of the emotion units 50A to 50F and desire units 51A to 51D indicated by the feeling/instinct state information S10 supplied from the feeling/instinct model part 40. The part 41 then generates action command information S16 containing those parameters and sends it to the position transit mechanism part 42. [0128]
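  • For instance, motion parameters could be scaled from a unit intensity as in the sketch below; the mapping from a 0 to 100 intensity to a walking speed and step height is an illustrative assumption.

```python
def walk_parameters(delight_intensity):
    """Derive motion parameters from a unit intensity (0 to 100);
    the scaling factors here are illustrative assumptions."""
    return {
        "speed":       0.2 + 0.8 * delight_intensity / 100.0,  # brisker when pleased
        "step_height": 1.0 + delight_intensity / 100.0,        # livelier gait
    }
```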
  • As for the input information S1 to S3 comprised of the command signal S1, the external information signal S2, and the internal information signal S3, the contents of information differ in correspondence with the timing at which the information is inputted to the feeling/instinct model part 40 and the action determination mechanism part 41. Therefore, the input information S1 to S3 is inputted to the action determination mechanism part 41 as well as to the feeling/instinct model part 40. [0129]
  • For example, when the controller 32 is supplied with an external information signal S2 indicating that the “head is rubbed”, the controller 32 generates feeling/instinct state information S10 indicating “delight” by means of the feeling/instinct model part 40, and supplies this feeling/instinct state information S10 to the action determination mechanism part 41. When an external information signal S2 indicating that “a hand exists in front of the eyes” is supplied in this state, the action determination mechanism part 41 generates action command information S16 of “willing to shake”, based on the feeling/instinct state information S10 indicating “delight” and the external information signal S2 indicating that “a hand exists in front of the eyes”. The part 41 sends it to the position transit mechanism part 42. [0130]
  • In addition, the transit destination of the motion state (or node) can be determined with a certain probability by the probability finite automaton 57. For example, when there is an input from the outside, the motion state transits to a certain motion state (action state) at a probability of 20% (transit probability). In practice, in a case where transition is possible from the state ST10 of “walking” to the state ST11 of “running” or the state ST12 of “sleeping” as shown in FIG. 9, the transit probability of transition to the state ST11 of “running” is set to P1 and the transit probability of transition to the state ST12 of “sleeping” is set to P2. The transit destination is determined based on those probabilities. Note that a technique of determining a transit destination by probability is disclosed in Japanese Patent Application KOKAI Publication No. 9-114514. [0131]
  • Also, the robot apparatus 1 holds information concerning the probability of transition to a state, in the form of a table. An example of the table is shown in FIG. 10. The table shown in FIG. 10 is constructed by node names, inputted event names, data names of inputted events, ranges of data of inputted events, and information concerning transit probabilities of transitions to states. [0132]
  • From this kind of table, the transit probability of transition to a certain state is determined in correspondence with an inputted event, and the state of the transit destination is determined on the basis of the transit probability. [0133]
  • The node name, which indicates a current action state, indicates an action state (or simply a state) of the robot apparatus 1, i.e., it indicates what action is being executed now. [0134]
  • Also, the inputted event is information inputted to the robot apparatus 1, and the table is classified with use of these inputted events. For example, “BALL” as an inputted event name means that the inputted event indicates detection of a ball. “PAT” means being patted, “HIT” means being hit, “MOTION” means detection of a moving ball, and “OBSTACLE” means detection of an obstacle. [0135]
  • In addition, the table is set up with use of a large number of inputted events. The present embodiment will be explained with reference to the case where “BALL”, “PAT”, “HIT”, “MOTION”, and “OBSTACLE” are used. [0136]
  • The data range of an inputted event means the range of data in a case where such an inputted event requires a parameter, and the data name of the inputted event means such a parameter name. That is, if an inputted event is “BALL”, the data name is “SIZE”, which is the size of the ball, and the range of data means that the range of such a size is 0 to 1000. Similarly, if an inputted event is “OBSTACLE”, the data name is the distance “DISTANCE” thereof, and the data range means that the range of such a distance is 0 to 100. [0137]
  • Further, a transit probability of transition to a state is assigned to each of a plurality of states which can be selected in accordance with the characteristic of an inputted event. That is, transit probabilities are assigned respectively to arcs such that the total of the transit probabilities assigned to the states which are selectable with respect to an inputted event becomes 100%. More specifically, in the case of the inputted event of “BALL”, the total of the transit probabilities 30%, 20%, . . . assigned to “ACTION 1”, “ACTION 3”, . . . that can be selected in accordance with the characteristic of this inputted event is arranged to become 100%. [0138]
  • Note that the “node” and “arc” described herein are generally defined by a so-called probability finite automaton. A “node” is defined as a state (called an action state in the present embodiment), and an “arc” is defined as an oriented line (called a transit motion in the present embodiment) which connects one “node” with another “node” with a certain probability. [0139]
  • From the table constructed by information as described above, an inputted event, the range of data obtained by the inputted event, and the transit probability are referred to, and the state as a transition destination is selected in the manner described below. [0140]
  • For example, in a case where the robot apparatus 1 detects a ball (i.e., in a case where the inputted event is “BALL”), the node 3 as a current state transits to a node 120 at a probability of 30% when the size of the ball is 0 to 1000. In this transition, the arc assigned to “ACTION 1” is selected, and a motion or expression corresponding to “ACTION 1” is executed. Or, the node 3 as a current state transits to a node 500 at a probability of 20%. In this transition, the arc to which “ACTION 3” is assigned is selected, and a motion or expression corresponding to “ACTION 3” is carried out. “ACTION 1” and “ACTION 3” may be, for example, “bark” and “kick” or so. Meanwhile, when the size of the ball is 1000 or more, neither “node 120” nor “node 500” is selected (the transit probability is zero). [0141]
  • Also, in a case where the robot apparatus 1 finds an obstacle (in a case where the inputted event is “OBSTACLE”), the node of “node 1000” as a moving-back action is selected at a probability of 100% when the distance to the obstacle is 0 to 100. That is, the arc appended with “MOVE_BACK” is selected at a probability of 100%, and the “MOVE_BACK” is executed. [0142]
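  • The table-driven selection of FIG. 10 can be sketched as follows in Python, using the “BALL” and “OBSTACLE” examples above; the current node for the obstacle case and the handling of the unlisted probability remainder are assumptions.

```python
import random

# (node, event) -> rows of (data_min, data_max, destination, action, probability)
TABLE = {
    ("node 3", "BALL"): [
        (0, 1000, "node 120", "ACTION 1", 0.30),
        (0, 1000, "node 500", "ACTION 3", 0.20),
        # the remaining 50% would go to other destinations so that the
        # probabilities for one inputted event total 100%
    ],
    ("some node", "OBSTACLE"): [           # the current node here is an assumption
        (0, 100, "node 1000", "MOVE_BACK", 1.00),
    ],
}

def select_transition(node, event, data):
    # Keep only the rows whose data range covers the observed value.
    rows = [(dest, act, p) for lo, hi, dest, act, p
            in TABLE.get((node, event), []) if lo <= data <= hi]
    r, cum = random.random(), 0.0
    for dest, act, p in rows:
        cum += p
        if r < cum:
            return dest, act               # transit and execute the action
    return None                            # out of range (e.g. ball size > 1000)

select_transition("node 3", "BALL", 400)   # "node 120"/"ACTION 1" at 30%, etc.
```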
  • As described above, a state (node) or arc can be selected, or an action model can be determined, with use of a table or the like. By determining the state of a transit destination in consideration of a probability, it is possible to prevent one same transit determination from being always selected. That is, expression of the action of the robot apparatus 1 can be enriched. [0143]
  • In addition, selection of a state described above and determination of an action model can also be made on the basis of the state of the emotion model. For example, an action model can be determined by changing the transit probability of transition between states described above on the basis of the state of the emotion model. [0144]
  • For example, determination of transition to a state as described above is utilized. That is, the state of the emotion model (e.g., the level thereof) is referred to, and the transit probability is changed in accordance with the state of the emotion model. In this manner, determination of an action model is made on the basis of the emotion model. As a result, the action model is influenced by the state of the emotion model. This will be explained with use of the table shown in FIG. 10. [0145]
  • For example, “JOY”, “SURPRISE”, and “SADNESS” are prepared as data which determine the transit probability. “JOY”, “SURPRISE”, and “SADNESS” correspond to the emotion units 50A, 50D, and 50B of the emotion model. Further, the range of data is set to 0 to 50, for example. This data range corresponds to the level of the emotion unit described above. [0146]
  • In this manner, when “JOY”, “SURPRISE”, and “SADNESS” are at predetermined levels, e.g., 0 to 50 in the case of the present embodiment, a predetermined transit probability of transition to a predetermined state is determined. For example, if “JOY”, whose data range is 0 to 50, has an actual level of 30, arcs assigned with “ACTION 1”, “ACTION 2”, “MOVE_BACK”, and “ACTION 4” are selected respectively at probabilities of 10%, 10%, 10%, and 70%, and the state transits to a predetermined state. [0147]
  • As shown in FIG. 10, the transit probability can be determined even in a state where no input is supplied from the outside by referring to “JOY”, “SURPRISE”, and “SADNESS” regardless of inputted events, i.e., by referring to them in a so-called empty event state. In this manner, for example, the emotion model can be referred to for determination of the transition probability when no inputted event is detected for a predetermined time. [0148]
  • Further, in this case, the actual levels of “JOY”, “SURPRISE”, and “SADNESS” can be referred to by referring to “JOY”, “SURPRISE”, and “SADNESS” in this order. In this manner, for example, when the actual level of “SADNESS” is 60, it falls outside the data range of 0 to 50, so that the actual level of the next “JOY” is referred to instead. Further, in a case where “JOY”, whose data range is 0 to 50, has an actual level of 20, the arc to which “ACTION 2” is assigned is selected at a probability of 30% and the arc to which “MOVE_BACK” is assigned is selected at a probability of 60%, so that the state transits to a predetermined state. [0149]
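  • The empty-event reference to the emotion model could be sketched as below: each emotion is consulted in the stated order, skipped when its level falls outside the 0 to 50 data range, and the first in-range emotion supplies the arc probabilities. The probability rows follow the “JOY” example above; everything else is an assumption.

```python
import random

EMOTION_ORDER = ["JOY", "SURPRISE", "SADNESS"]
DATA_RANGE = (0, 50)
ARCS = {  # arc probabilities per emotion; only "JOY" is given in the text
    "JOY": [("ACTION 1", 0.10), ("ACTION 2", 0.10),
            ("MOVE_BACK", 0.10), ("ACTION 4", 0.70)],
}

def empty_event_arc(levels):
    for name in EMOTION_ORDER:
        level = levels.get(name)
        if level is None or not DATA_RANGE[0] <= level <= DATA_RANGE[1]:
            continue                      # out-of-range emotion is skipped
        r, cum = random.random(), 0.0
        for arc, p in ARCS.get(name, []):
            cum += p
            if r < cum:
                return arc
    return None

# SADNESS at 60 is out of range; JOY at 30 is in range and supplies the arcs.
empty_event_arc({"SADNESS": 60, "JOY": 30})
```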
  • As described above, the present embodiment is arranged such that the action model can be determined on the basis of the state of the feeling model. Thus, the action model is influenced by the state of the feeling model, thereby enriching expressions of the robot apparatus 1. [0150]
  • By the various means described above, the action command information S16 is determined by the action determination mechanism part 41. [0151]
  • (3-3) Processing in Position Transit Mechanism Part [0152]
  • The position transit mechanism part 42 is a part which generates information for transiting to an aimed position or aimed motion. Specifically, the position transit mechanism part 42 generates position transit information S18 for letting the current position or motion transit to a next position or motion (an aimed position or motion), based on the action command information S16 supplied from the action determination mechanism part 41, as shown in FIG. 3. The part 42 then sends the information to the control mechanism part 43. The positions to which the current position can transit are determined by the physical shape of the robot apparatus 1, such as the shapes of the trunk, hands, and legs, the size, and the connecting states of respective parts, and by the mechanisms of the actuators 23A to 23N, such as the directions and angles in and at which the joints bend. The position transit information S18 is prepared as information for executing transition, taking the above into consideration. [0153]
  • The control mechanism part 43 actually moves the robot apparatus 1 based on the position transit information S18 thus sent from the position transit mechanism part 42. [0154]
  • The position transit mechanism part 42 previously registers positions to which the robot apparatus 1 can transit and motions to be taken when it transits. For example, the positions and motions are maintained in the form of a graph, and the part 42 sends the action command information S16 supplied from the action determination mechanism part 41, as position transit information S18, to the control mechanism part 43. The control mechanism part 43 operates in accordance with the position transit information S18 and lets the robot apparatus transit to an aimed position or an aimed motion. In the following, the processing to be executed by the position transit mechanism part 42 will be explained specifically. [0155]
  • For example, the robot apparatus 1 cannot directly transit to a position according to the contents of a command (action command information S16) in some cases. This is because the positions of the robot apparatus 1 are classified into positions to which the robot apparatus 1 can transit directly from the current position, and positions to which it cannot transit directly from the current position but can transit through a certain motion or position. [0156]
  • For example, the quadruped robot apparatus 1 can directly transit from a position in which it lies down stretching out its hands and legs widely to a position in which it keeps itself down, but it cannot directly transit to a standing position. The robot apparatus 1 therefore needs motions in two stages: it must once contract its hands and legs close to the trunk and then stand up. In addition, there are positions which cannot be executed safely. For example, when the quadruped robot apparatus 1 in a standing position tries to raise both front legs to give a hail, it stumbles. Or, suppose that “thrashing its legs”, which can be carried out only in the sitting position, is sent as the content of a command while the current position is a lie-down position (sleeping position). If the command is issued directly, the transition from the sleeping position to the sitting position and the motion of thrashing the legs are executed at the same time, so that the robot apparatus 1 loses its balance and stumbles over. [0157]
  • Therefore, if the action command information S16 supplied from the action determination mechanism part 41 indicates a position to which the robot apparatus 1 can directly transit, the position transit mechanism part 42 directly sends the action command information S16 as the position transit information S18 to the control mechanism part 43. Otherwise, if the action command information S16 indicates a position to which the robot apparatus 1 cannot directly transit, the position transit mechanism part 42 generates position transit information S18 which lets the robot apparatus 1 transit to the aimed position (the position indicated by the action command information S16) through another position or motion to which the robot apparatus can transit. The part 42 then sends it to the control mechanism part 43. In this manner, the robot apparatus 1 can avoid a situation in which it forcedly attempts a position to which it cannot transit, or stumbles. In addition, the preparation of a plurality of positions or motions to be passed through until an aimed position or motion is achieved can be connected to enrichment of expressions. [0158]
  • Specifically, the position transit mechanism part 42 holds a graph which registers the positions and motions that the robot apparatus 1 can take and which is constructed by connecting each position with the motions for letting that position transit. The position transit mechanism part 42 searches a route from the current position to an aimed position or an aimed motion, on the graph, based on the action command information S16 as command information, and lets the robot apparatus 1 move on the basis of the search result, thereby to let the robot apparatus transit from the current position to the aimed position or motion. That is, the position transit mechanism part 42 previously registers the positions which the robot apparatus 1 can take and records the routes through which the robot apparatus can transit between two positions. Based on this graph and the action command information S16 outputted from the action determination mechanism part 41, the robot apparatus 1 is let transit to an aimed position or motion. [0159]
  • Specifically, the position transit mechanism part 42 uses a graph called an oriented graph 60 as shown in FIG. 11, as the graph described above. The oriented graph 60 is constructed by nodes indicating positions which the robot apparatus 1 can take, oriented arcs (motion arcs) each connecting two positions (nodes) between which the robot apparatus can transit, and, in some cases, arcs of motions each expressing a return from a node to itself, i.e., self-motion arcs each expressing a motion completed within one node. The nodes and arcs are connected to each other. That is, the position transit mechanism part 42 maintains an oriented graph 60 which is constructed by nodes as information indicating the positions of the robot apparatus 1, and by oriented arcs and self-motion arcs as information indicating the motions of the robot apparatus 1. The part 42 grasps positions as information of points, and motions as information of oriented lines. [0160]
  • The oriented arcs and self-motion arcs may be plural. That is, a plurality of oriented arcs may be connected between nodes (positions) between which the robot apparatus can transit, and one node may be connected to a plurality of self-motion arcs. [0161]
  • When action command information S16 is supplied from the action determination mechanism part 41, the position transit mechanism part 42 searches a route from the node corresponding to the current position to the node corresponding to the position to be taken next, which is indicated by the action command information S16, following the directions of the oriented arcs, and records the nodes on the searched route in order, thereby to prepare a plan of position transition. In the following, the search for a route from the current position to an aimed node (the node indicated by a command) is called a route search. The aimed arc described herein may be an oriented arc or a self-motion arc. The case where a self-motion arc is the aimed arc is a case where the self-motion itself is aimed (commanded), e.g., a case where a predetermined trick (motion) is instructed, or the like. [0162]
  • The position transit mechanism part 42 outputs a control command (position transit information S18) to the control mechanism part 43 in the rear stage, based on the position transit plan until the aimed position (node) or motion (oriented arc or self-motion arc) is attained. [0163]
  • For example, as shown in FIG. 12, when action command information S16 of “Sit down!” is supplied in a case where the current position is at a node ND2 indicating the position of “being down”, an oriented arc a9 exists from the node ND2 indicating the position of “being down” to a node ND5 indicating the position of “sitting”, so direct transition can be achieved. As a result, the position transit mechanism part 42 supplies position transit information S18 indicating a content of “Sit down!” to the control mechanism part 43. [0164]
  • When action command information S16 of “Walk!” is supplied in a case where the current position is at the node ND2 indicating the position of “being down”, direct transition from “being down” to “walking” cannot be achieved, so the position transit mechanism part 42 searches a route from the node ND2 indicating the position of “being down” to a node ND4 indicating the position of “walking”, thereby to prepare a position transit plan. That is, a position transit plan is prepared such that the node ND3 indicating the position of “standing” is selected through the oriented arc a2 from the node ND2 indicating the position of “being down”, and the position further reaches the node ND4 through an oriented arc a3 from the node ND3 indicating the position of “standing”. As a result of this position transit plan, the position transit mechanism part 42 issues position transit information S18 having a content of “Stand up!”, and thereafter outputs position transit information S18 having a content of “Walk!” to the control mechanism part 43. [0165]
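  • A route search of this kind can be sketched as a breadth-first search over the oriented arcs; the patent does not name a particular search algorithm, so BFS is used here as one simple choice, with node and arc labels following the FIG. 12 example.

```python
from collections import deque

ARCS = {  # node -> list of (oriented arc, destination node); after FIG. 12
    "ND2_down":     [("a9", "ND5_sitting"), ("a2", "ND3_standing")],
    "ND3_standing": [("a3", "ND4_walking")],
}

def route(current, goal):
    """Breadth-first route search; returns the ordered arcs to follow."""
    queue, seen = deque([(current, [])]), {current}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for arc, dest in ARCS.get(node, []):
            if dest not in seen:
                seen.add(dest)
                queue.append((dest, path + [arc]))
    return None  # no transit route exists on the graph

# "Walk!" while "being down": transit via "standing" first (arcs a2, a3).
assert route("ND2_down", "ND4_walking") == ["a2", "a3"]
```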
  • As for the pasting of self-motion arcs to nodes in a case where a graph is constructed by nodes as shown in FIG. 12, a self-motion arc indicating a motion of “dance” is pasted on the node ND3 indicating the position of “standing”, a self-motion arc indicating a motion of “hail” is pasted on the node ND5 indicating the position of “sitting”, and a self-motion arc indicating a motion of “snore” is pasted on the node ND1 indicating the position of “stretch out”. [0166]
  • The robot apparatus 1 is constructed so as to normally grasp what position it is in. However, the robot apparatus 1 loses its current position in some cases: for example, it cannot grasp its current position when it is lifted up by a user, when it stumbles over, or when the power is turned on. A current position which thus cannot be grasped is called an indefinite position. If the current position cannot be grasped and is determined to be an indefinite position, a so-called start position cannot be determined, so that a position transit plan up to an aimed position or motion cannot be prepared. [0167]
  • Hence, a node indicating a neutral position is provided. If the current position is indefinite, the robot apparatus is first let transit to the neutral position, and a position transit plan is then prepared. For example, when the current position is indefinite, the position is let transit to the neutral node NDnt, as shown in FIG. 13, and from there to a node indicating a basic position, such as the node ND3 indicating the position of "standing", the node ND5 indicating the position of "sitting", or the node ND1 indicating the position of "lying down". After the transition to this basic position, a position transit plan for the original target is prepared. [0168]
  • The present embodiment has been explained with the "standing", "sitting", and "lying down" positions cited as basic positions to which the neutral position transits. Needless to say, the present embodiment is not limited thereto, and another position may be taken as a basic position. [0169]
  • With respect to the transition from an indefinite position to the neutral position (node), an operating part (e.g., an actuator) is driven at a low torque or a low speed, for example. In this manner, the load on the servo is reduced, and the operating part is prevented from being driven as in normal operation and thereby being damaged. For example, the tail part 5 normally moves as if it swung; if this motion were carried out during the transition to the neutral position (node) while the robot apparatus 1 were lying down in an indefinite position, the tail part 5 might be damaged. [0170]
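  • The fallback from an indefinite position can be sketched as follows; this is a hypothetical illustration only (the posture names and the low-torque flag are assumptions, not the patent's API). The plan prepends a low-torque transition to the neutral node and then a transition to a basic posture, after which ordinary route planning can start.

    def recovery_steps(current_posture, basic_posture="sitting"):
        """Extra transitions to prepend when the posture is indefinite
        (e.g. after being lifted up, a stumble, or power-on). None
        stands for an indefinite (ungraspable) posture."""
        if current_posture is not None:
            return []  # posture known: no recovery needed
        # Go to the neutral node first, driven at low torque and low
        # speed so the servos are not damaged from an unknown pose,
        # then to a basic posture from which planning can start.
        return [("neutral", {"torque": "low", "speed": "low"}),
                (basic_posture, {})]

    print(recovery_steps(None))        # indefinite: go via neutral
    print(recovery_steps("standing"))  # known: nothing to prepend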
  • Also, the robot apparatus 1 can detect a stumble of itself and can transit from a stumble position to a node indicating a basic position as described above. For example, the robot apparatus 1 is provided with an acceleration sensor and detects stumbling of itself. [0171]
  • Specifically, when the robot apparatus 1 detects by means of the acceleration sensor that it has stumbled, the robot apparatus 1 makes a predetermined motion for recovery from the stumble and thereafter transits to a node indicating a basic position as described above. [0172]
  • The robot apparatus 1 is also arranged so that it grasps the stumbling direction. More specifically, the robot apparatus 1 can distinguish stumbling in the frontward, backward, leftward, and rightward directions. In this manner, the robot apparatus 1 can make a motion for recovery from a stumble in correspondence with the stumbling direction, and accordingly can rapidly transit to a basic position. [0173]
  • When a stumble is detected, a predetermined expression may be outputted. For example, the robot apparatus thrashes its legs according to a self-motion arc a11, as a predetermined expression. In this manner, it is possible to express a situation in which the robot apparatus 1 has stumbled over and struggles. [0174]
  • Further, if a plurality of oriented arcs not shown exist between the node ND2 of "lying down" and the node ND3 of "standing", a position transit plan of transiting to the node ND3 of "standing" can be prepared by selecting an optimal oriented arc. [0175]
  • As described above, there are cases where a plurality of oriented arcs exist between nodes and where a plurality of nodes are connected to one node through oriented arcs. Therefore, there can be a plurality of routes connecting a current node to an aimed node or arc (oriented arc or self-motion arc). [0176]
  • From the above, a position transit plan is prepared taking the distance between the current node and the aimed node as an index, i.e., by means of a so-called shortest distance search for the route which provides the shortest distance. [0177]
  • As shown in FIG. 14, the shortest distance search is carried out using the concept of distance for the oriented arcs (arrow marks) connecting the nodes (circle marks). A representative route search method of this kind is Dijkstra's path search algorithm. The distance can be substituted by a concept of weighting, time, or the like, as described later. FIG. 14 shows a result of connecting the nodes to which the current position can transit through oriented arcs each having a distance of "1". [0178]
  • Therefore, it is possible to select the shortest route from the current position (node) to an aimed node by using this kind of distance as an index of a route search. [0179]
  • More specifically, suppose a case where four routes exist from the current node to an aimed node and the first to fourth routes respectively have distances of "12", "10", "15", and "18". In this case, the transition is made through the second route having the distance of "10", and a position transit plan up to the aimed position is prepared. [0180]
  • The method of searching the shortest distance is not limited to the method of this kind. [0181]
  • In the above example, a plurality of routes are searched, and the route having the shortest distance to the aimed node is selected from the search result. In other words, as many routes as possible through which the current node can transit to the aimed node are searched to obtain a plurality of routes, and from these, the shortest route is specified with the distance used as an index. However, the search for a route having the shortest distance is not limited to this method; the processing of the route search can also be terminated at the time point when the shortest route to the aimed node is detected. [0182]
  • For example, the distance taken as the search index is gradually extended from the current position to search nodes one after another. Every time a node is searched, the node at the shortest distance (which is not the target node) is determined, and finally, at the time point when the target node is detected, the processing of searching a route is terminated. That is, for example, using the concept of "equal distance", which can be understood through the concept of "contour lines", the distance from the current node is extended to detect the nodes at "equal distance" one after another, and at the time point when the target node is finally detected, the processing of searching a route is terminated. Dijkstra's path search can be cited as a method of searching a route in this manner. [0183]
  • According to this kind of shortest-distance route search, the route providing the shortest distance can be found without searching all the routes that can exist with respect to a target node, so the shortest route can be detected in a short time. As a result, the load on the CPU and the like required for this search processing can be reduced, and the load caused by searching the entire network can be eliminated. For example, even in case where the network constructing this graph is of a large scale, the route can be searched with a reduced load. [0184]
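  • A minimal sketch of such a shortest-distance search is given below, assuming Dijkstra's algorithm with early termination as described; the data layout and names are illustrative, not taken from the patent. The same code covers distance, weight, or motion-time indexes, since they all enter as the arc cost; the example reproduces the motion-time case of FIG. 16 discussed later (a two-arc route of 1 s + 2 s beating a direct 5 s arc).

    import heapq

    def shortest_route(arcs, start, goal):
        """arcs: node -> list of (cost, arc_name, destination).
        Expands nodes in order of increasing total cost ('equal
        distance' contours) and stops as soon as the goal is settled,
        so the whole graph need not be searched."""
        frontier = [(0, start, [])]
        settled = set()
        while frontier:
            cost, node, route = heapq.heappop(frontier)
            if node in settled:
                continue
            settled.add(node)
            if node == goal:
                return cost, route  # early termination at the aimed node
            for step_cost, arc, dst in arcs.get(node, []):
                if dst not in settled:
                    heapq.heappush(frontier,
                                   (cost + step_cost, dst, route + [arc]))
        return None  # aimed node unreachable

    arcs = {"ND1": [(1, "a1", "ND2"), (5, "a3", "ND3")],
            "ND2": [(2, "a2", "ND3")]}
    print(shortest_route(arcs, "ND1", "ND3"))  # (3, ['a1', 'a2'])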
  • Also, as shown in FIG. 15, the method of searching a route may be arranged as follows. Nodes are previously classified (clustered) roughly on the basis of actions or positions. A coarse search is carried out at first by clustering. Thereafter, a detailed search may be carried out. For example, when the robot apparatus is let take a position of “right front leg kick”, an area of the class of “kick a ball” is selected at first as a route search range, and a path is then searched only in the area. [0185]
  • The coarse classes and their elements, e.g., "kick a ball" and "right front leg kick", are related with each other by adding ID information and the like when designing this kind of system. [0186]
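  • As a sketch, the coarse-to-fine search might look like the following; the cluster table and function names are hypothetical. The coarse stage merely picks the class registered for the target, and only that area is handed to the detailed path search.

    def coarse_to_fine_search(clusters, detailed_search, target):
        """First select the class (e.g. "kick a ball") whose elements
        include the target, then search in detail only in that area."""
        for class_name, members in clusters.items():
            if target in members:
                return detailed_search(class_name, target)
        return None  # target registered in no class

    clusters = {"kick a ball": {"right front leg kick",
                                "left front leg kick"}}
    print(coarse_to_fine_search(
        clusters,
        lambda area, t: f"detailed search for '{t}' in '{area}'",
        "right front leg kick"))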
  • Between two nodes, oriented arcs may exist in both directions. Therefore, there may be an oriented arc which returns to a node which the robot apparatus has already passed, and, if no limitation is imposed on the route search, such a returning oriented arc may be selected so that the transition returns to an earlier node. To prevent this, it is possible to select only oriented arcs which do not lead to nodes which the robot apparatus has already passed. [0187]
  • In case where the node as a target cannot be reached, a result indicating that the final position cannot be taken is outputted to an upper control means, a lower control means, or the like. [0188]
  • The above example has explained that the target is a position (node). However, the target may also be a motion, i.e., an oriented arc or a self-motion arc. A case where the arc as a target is a self-motion arc will be, for example, a situation in which the leg parts 4 are made to thrash. [0189]
  • In addition, it is possible to refer to weights added to oriented arcs or to motion times of oriented arcs, as indexes for the route search. If weights are added to oriented arcs, the route is selected in the following manner. [0190]
  • The total sum of weights of oriented arcs existing on each candidate for the route from the current node to a node as a target is calculated, and comparison of the total sum is made for every candidate for the route. An optimal route (e.g., a route requiring the minimum cost) is selected. [0191]
  • The following shows a case of considering the motion times (execution times) of oriented arcs. [0192]
  • For example, if the current node transits to an aimed node through two oriented arcs which require motion times of one second and two seconds respectively, the transition requires three seconds. Hence, if the current node ND1 transits to the aimed node ND3 in the graph shown in FIG. 16, there are a case where the current node transits to the aimed node ND3 through the node ND2 and a case where it transits only through the oriented arc a3. The former case passes through the oriented arc a1, which requires a motion time of one second, and the oriented arc a2, which requires a motion time of two seconds, while the latter case uses only the oriented arc a3, which directly connects the current node ND1 and the aimed node ND3 but requires a motion time of five seconds. Accordingly, in case where the motion time is taken as an index, a position transit plan can be made such that the aimed node is reached in the shortest time by the transition through the two oriented arcs. [0193]
  • Also, the weights or distances added to oriented arcs may be used to express the difficulty levels of motions; for example, a low difficulty level can be expressed as a short distance. Also, one of the oriented arcs can be set as a default. By setting one of the oriented arcs as a default, the default oriented arc is normally selected, and another oriented arc is selected only when an instruction to do so is given. [0194]
  • As described above, by adding indexes for route search to oriented arcs, e.g., motion times, or difficulty levels for motions whose balance is difficult to keep, it is possible to make a position transit plan which selects an optimal route while avoiding a large action. In addition, by adding energy consumption rates to oriented arcs, it is possible to make a position transit plan which selects a route of optimal efficiency. [0195]
  • Since a plurality of oriented arcs may exist between nodes, a probability of selection can be assigned to every oriented arc; that is, different probabilities are respectively assigned to the arcs. In this manner, various motions can be selected between the same nodes, so that a variety can be given to a series of motions. For example, when the sitting position transits to the standing position, either a motion in which the folded legs are once extended backwards and the robot apparatus then stands up on four legs, or a motion in which the front legs are extended forwards and the robot apparatus then stands up, is selected depending on the probabilities. In this manner, which motion the robot apparatus takes to stand up cannot be predicted before the selected motion is reproduced (or actually carried out). [0196]
  • More specifically, where the distances or weights of the oriented arcs are m1, m2, m3, . . . and their total sum (m1+m2+m3+ . . . ) is M, the probability Pi assigned to an oriented arc added with a weight or distance of mi can be expressed by expression (1). [0197]

    P_i = \frac{M - m_i}{(M - m_1) + (M - m_2) + (M - m_3) + \cdots}    (1)
  • In this manner, an oriented arc having a large weight is selected as a transit route at a low probability, while an oriented arc having a small weight is selected at a high probability. [0198]
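  • Expression (1) can be checked with a short sketch (illustrative only; arcs are reduced to their weights, and at least two arcs are assumed so that the selection scores are not all zero):

    import random

    def pick_arc(weights):
        """Choose an arc index with probability
        P_i = (M - m_i) / sum_j (M - m_j), per expression (1)."""
        M = sum(weights)
        scores = [M - m for m in weights]
        return random.choices(range(len(weights)), weights=scores, k=1)[0]

    # Three arcs with weights 1, 2, 3: M = 6, so the probabilities
    # are 5/12, 4/12, 3/12; the lightest arc is picked most often.
    counts = [0, 0, 0]
    for _ in range(12000):
        counts[pick_arc([1, 2, 3])] += 1
    print(counts)  # roughly [5000, 4000, 3000]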
  • Also, it is possible to limit the area where route search is carried out. For example, route search may be limited to routes within a predetermined range. In this manner, an optimal route can be searched in a much shorter time. [0199]
  • In addition, if an optimal route is selected by additionally executing weighting, monotonousness in the motions for transition between distant positions can be avoided, and the probability of transiting through a route of high risk can be reduced. [0200]
  • Also, arcs (oriented arcs or self-motion arcs) can be registered in the graph as described below. [0201]
  • For example, a plurality of execution forms can be considered with respect to a motion of "walk". For example, as shown in FIG. 17, walking in a direction of angle 0°, walking in a direction of angle 30°, and walking in a direction of angle 60° can be a plurality of execution forms. Providing a plurality of execution forms for one motion, i.e., increasing the number of parameters concerning one motion, leads to enrichment of the expressions of the robot apparatus 1. These motions can be achieved by providing arcs having those different parameters. [0202]
  • Meanwhile, providing arcs having different parameters corresponding to a plurality of execution forms is not preferable from the standpoint of effective use of network resources. This is because ninety-one arcs would be required to execute motions of walking in different directions at a regular interval of 1° from 0° to 90°. [0203]
  • Therefore, "walk" is set as a path, and the walking direction is set as a parameter. For example, in a self-motion arc, "walk!" is instructed, and a walking direction is optionally supplied as a parameter at this time. "Walk" is then carried out in the direction specified by the parameter when the motion is reproduced. As a result, if only one self-motion arc of "walk" and a parameter for the walking direction are provided, the instruction of "walk" can be accomplished with the walking direction finely set and without necessitating a plurality of self-motion arcs. In this manner, the graph can be kept simple even if the scenario varies, and network resources can be used effectively. [0204]
  • Specifically, information concerning a parameter of a motion, such as a walking direction, is added as additional information to action command information S16. The position transit mechanism part 42 sends position transit information S18 having a content of "walk" and added with such a parameter, to the control mechanism part 43. [0205]
  • The above-described embodiment has explained a case of adding a parameter to a self-motion arc. The present embodiment is not limited thereto; a parameter may also be added to an oriented arc. Accordingly, it is unnecessary to prepare a plurality of oriented arcs which differ only in the "walking direction" parameter although the motion of "walk" is common to them. [0206]
  • In case of repeating one same motion, information such as "three times" is optionally supplied as the number of repetitions. As a result, the command forms can be simplified. For example, when the robot apparatus is to walk three steps, this can be executed by an instruction form of "walk!" and "three times", in place of issuing the command of "walk!" three times or issuing a command of "walk three steps!". In this manner, the arcs and nodes constructed by such instruction forms can be maintained with a reduced amount of information. In addition, since the number of repetitions can be specified, the number of times instructions are issued and the time period until an instruction is transmitted can be shortened. [0207]
  • For example, as shown in FIG. 18, information of "one step walk" as an instruction for a motion of walking step by step is outputted to the control mechanism part 43, and "3" is instructed as a parameter thereof. In this manner, a motion of walking only three steps is achieved. Also, by instructing "7" as a parameter, the robot apparatus is made to walk seven steps. Further, by instructing "−1" as a parameter, the robot apparatus can be made to keep walking until another instruction concerning the legs is given. [0208]
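  • A parameterized command of this kind might be represented as sketched below (the dictionary layout is an assumption for illustration; the patent specifies no wire format):

    def make_command(motion, **params):
        """One arc plus parameters replaces many near-duplicate arcs."""
        return {"motion": motion, "params": params}

    # Walking direction as a parameter instead of 91 arcs for 0-90 deg:
    print(make_command("walk", direction_deg=30))
    # Repetition count as a parameter instead of repeating the command;
    # -1 means "keep walking until another instruction is given":
    print(make_command("one step walk", times=3))
    print(make_command("one step walk", times=-1))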
  • Also, the robot apparatus can be made to repeat a motion until a stop command for stopping the repetitive motion is later given to the control mechanism part 43. In this manner, the robot apparatus can be made to execute one same motion repeatedly. [0209]
  • Also, a predetermined arc can be executed with an expression pasted thereon. That is, in case where a predetermined motion (arc) is executed, the robot apparatus can be made to execute a motion as another expression in synchronization with the predetermined motion. For example, as shown in FIG. 19, in case where an oriented arc a for transiting from the node ND1 of the sitting position to the node ND2 of the standing position is set, a predetermined voice or motion can be made to correspond to this oriented arc a. In this manner, an expression of smiling eyes or a voice of "Mmm" can be reproduced in synchronization with the motion of transiting from the sitting position to the standing position. Thus, a rich expression is enabled by a combination of different parts, with the effect that the scenario is enriched. [0210]
  • In some cases, self-motion arcs connected to nodes overlap each other. An example thereof will be the case where a motion of getting angry is made at each of the nodes of the sleeping position, sitting position, and standing position, as shown in FIG. 20. [0211]
  • In this case, the name "angry" is given to each of the motions expressing anger, i.e., a self-motion arc from the sleeping position to the sleeping position, a self-motion arc from the sitting position to the sitting position, and a self-motion arc from the standing position to the standing position. Then, by merely supplying an instruction of "angry", the closest motion (self-motion arc) named "angry" can be found by the shortest route search. That is, the shortest executable route among the aimed motions (self-motion arcs) is selected as the position transit plan. In this manner, for example, when action command information S16 of "angry" is sent while the current position is closest to the sleeping position, a motion of scraping the ground in the sleeping position, which is connected as a self-motion arc of the node of the sleeping position, is executed as the motion of getting angry. [0212]
  • If a self-motion arc executing one same motion is thus provided at each of a plurality of nodes and an instruction for such a predetermined motion is given, the motion can be executed in an optimal position (the position at the shortest distance) by the shortest distance search. In this manner, the upper control means can execute an instructed motion without having to constantly grasp the states or motions of the respective parts. That is, for example, if the command of "angry" is given, the upper control means (position transit mechanism part 42) needs only to grasp the current node. A position transit plan up to a motion of getting angry in the sleeping position at the shortest distance can be prepared merely by searching for the self-motion arc of "angry", without searching for an actual node of "getting angry in a sleeping position". [0213]
  • Also, as described above, if motions of one same system are instructed by one same name, an optimal motion is searched for and selected on a previously registered graph, based on the current position. Therefore, a motion instruction such as "angry", which is abstract and has rich patterns, can be given in a simple manner. [0214]
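  • The selection of the nearest identically named self-motion arc can be sketched as follows (the node names and the precomputed distance table are illustrative assumptions; the distances would come from the shortest distance search described above):

    def nearest_named_motion(self_arcs, distance_from_current, name):
        """Among all nodes carrying a self-motion arc of the given
        name (e.g. 'angry'), pick the node closest to the current
        position; the arc is then executed at that node."""
        hosts = [node for node, names in self_arcs.items() if name in names]
        return min(hosts, key=lambda node: distance_from_current[node])

    self_arcs = {"sleeping": {"angry", "snore"},
                 "sitting": {"angry", "hail"},
                 "standing": {"angry", "dance"}}
    distance_from_current = {"sleeping": 0, "sitting": 2, "standing": 3}
    print(nearest_named_motion(self_arcs, distance_from_current, "angry"))
    # -> 'sleeping': scrape the ground in the sleeping position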
  • The robot apparatus 1 can operate its respective componential parts separately. That is, commands concerning the respective componential parts can be executed. Such componential parts of the (entire) robot apparatus 1 are the head part 2, the leg parts 4, and the tail part 5, as shown in FIG. 21. [0215]
  • In the robot apparatus 1 thus constructed, the tail part 5 and the head part 2 can be moved individually: since the resources of these parts do not compete with each other, these parts can be operated independently. Meanwhile, the entire robot apparatus 1 and the head part 2 cannot be moved individually, since the resources of the entire apparatus and this part compete with each other. For example, the contents of a command concerning the head part 2 cannot be executed while a motion of the entire apparatus whose command contains a motion of the head part 2 is being executed. Thus, it is possible to swing the tail part 5 while swinging the head part 2, but it is impossible to swing the head part 2 while a trick is being carried out by the entire apparatus. [0216]
  • Table 1 shows combinations of parts for which resources do or do not compete with respect to action command information S16 supplied from the action determination mechanism part 41. [0217]
    TABLE 1
    Combination of parts      Competition of resources
    Head, tail                No
    Head, entire              Yes
    Legs, entire              Yes
    Head, legs, tail          No
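  • The competition rules of Table 1 amount to an overlap test on the actuator resources each command claims; the mapping below is an illustrative sketch (the part names follow the text, but the set representation is an assumption):

    # Resources claimed by a command target; "entire" claims all parts.
    RESOURCES = {"head": {"head"},
                 "legs": {"legs"},
                 "tail": {"tail"},
                 "entire": {"head", "legs", "tail"}}

    def competes(part_a, part_b):
        """True when two commands cannot run concurrently because
        their resource sets overlap (cf. Table 1)."""
        return bool(RESOURCES[part_a] & RESOURCES[part_b])

    print(competes("head", "tail"))    # False: may run in parallel
    print(competes("head", "entire"))  # True: must be serialized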
  • Thus, if commands whose resources compete with each other are supplied, either the command concerning a motion of the entire apparatus 1 or the command concerning a motion of the head part 2 must be executed first. The following will explain processing in case where such commands are executed. [0218]
  • In case where one command is executed first because resources compete with each other, e.g., in case where the motion of the entire apparatus 1 is finished and a command concerning the head part 2 is then executed, the motion of the head part 2 is started from the last position attained through the motion of the entire apparatus 1. However, the last position after the motion of the entire apparatus 1 is not always a position suitable for the head part 2 to start a motion. If the motion of the head part 2 is started in a state where the last position after the motion of the entire apparatus 1 is not suitable for it, i.e., in case where the positions before and after a transition made in response to a different command are not continuous with each other, the head part 2 makes a sharp motion which results in an unnatural behavior. This problem is caused in case where the current position (or motion) and the aimed position (or motion) are associated with the entire robot apparatus 1 and its componential parts, and the network (graph) composed of nodes and arcs constructed for controlling the entire robot apparatus 1 and the networks (graphs) composed of nodes and arcs constructed for controlling the respective componential parts are constructed separately without any association provided between them. [0219]
  • An unnatural motion which the robot apparatus 1 makes due to discontinuous positions before and after a transition as described above can be eliminated by preparing a position transit plan such that the transit motions continue smoothly. Specifically, this problem is eliminated by preparing a position transit plan adopting a basic position common to the graphs of the entire apparatus and the componential parts. [0220]
  • Explained below will be a case where the information concerning the network used for the position transit plan of the robot apparatus 1 is constructed into a hierarchical structure, as a whole, from information (a graph) concerning the entire apparatus and information (graphs) concerning the respective componential parts. For example, the information used for the position transit plan, composed of the graph concerning the entire apparatus and the graphs concerning the componential parts, is constructed in the position transit mechanism part 42 shown in FIG. 8 described above. [0221]
  • The basic position is a position to which the robot apparatus temporarily transits in order to shift the state between a motion of the entire apparatus and a motion of the componential parts. An example of the basic position is the sitting position as shown in FIG. 22B. A procedure for smoothly connecting transit motions will be explained for the case where the sitting position is set as the basic position. [0222]
  • Specifically, explanation will be made of a case where the current position is grasped as a position NDa0 on the graph of the entire apparatus, as shown in FIG. 23, and a motion a2 of the head part is executed as a target. [0223]
  • On the graph of the entire apparatus, an oriented arc a0 which lets the position of the entire robot apparatus 1 transit from the current position NDa0 to the basic position NDab is selected. If the entire apparatus takes the basic position, the entire apparatus is grasped as being in the state (node) of the basic position on the graphs of the head part, leg parts, and tail part. [0224]
  • On the graph of the head part, an optimal oriented arc a1 is selected from the state of the basic position NDhb, and a route to an aimed motion (self-motion arc) a2 of the head part 2 is determined. [0225]
  • At this time, the search for a route from the current position NDa0 to the basic position NDab on the graph of the entire apparatus and the search for a route from the basic position NDhb to the aimed motion a2 on the graph of the head part are carried out by means of the shortest distance search as described above. [0226]
  • According to this procedure, the selection of a transit route (position transit plan) which smoothly connects the motions of the entire apparatus and the componential part is made on the graphs of the entire apparatus and the head part. Further, the position transit mechanism part 42 outputs position transit information S18 to the control mechanism part 43, based on the position transit plan. [0227]
  • The above example is a specific example in which a motion of the entire apparatus is continued smoothly to a motion of a componential part. Explained next will be a specific example in which a motion of a componential part is continued smoothly to a motion of the entire apparatus. More specifically, as shown in FIG. 24, explanation will be made of the case where the head part 2 is grasped as being in a position NDh0 on the graph of the head part, the leg parts 4 are grasped as being in a position NDf0 on the graph of the leg parts, and a motion a4 of the entire apparatus is executed as a target. [0228]
  • On the graph of the head part, an oriented arc a0 which lets the position of the head part 2 transit from the current position NDh0 to the basic position NDhb is selected. On the graph of the leg parts, oriented arcs a1 and a2 which let the position of the leg parts 4 transit from the current position NDf0 to the basic position NDfb through the position NDf1 are selected. Suppose that the tail part 5 has originally been in the basic position. If the respective componential parts come to their basic positions, the position is grasped as the basic position also on the graph of the entire apparatus. [0229]
  • Further, on the graph of the entire apparatus, an optimal oriented arc a3 is selected from the state of the basic position NDab, and a route to the motion (self-motion arc) a4 of the entire apparatus is determined. [0230]
  • The motion of each componential part may be executed at the same time as the motion by which another componential part transits to its basic position, or the motions of the componential parts may be executed with limitations added, e.g., each at a certain timing. Specifically, in case where a command is issued with respect to a motion of the entire apparatus 1 while the head part 2 is being used to perform a trick, the head part 2 cannot transit to the basic position NDhb because it is just executing the trick. Therefore, the leg parts 4 are first brought into the state of the basic position NDfb, and the head part 2 is then let transit to the state of the basic position NDhb after it finishes the trick. [0231]
  • Also, each componential part can be arranged to move in consideration of the balance of the position of the entire apparatus 1. For example, if the head part 2 and the leg parts 4 are moved simultaneously, or if the head part is first set in the basic position NDhb, the balance is lost and the robot apparatus stumbles over. In this case, the leg parts 4 are first brought into the state of the basic position NDfb, and the head part 2 is then let transit to the state of the basic position NDhb. [0232]
  • By preparing a position transit plan in which the state is first let transit to the basic position, motions can be continued smoothly. [0233]
  • In addition, if the resources of the parts not being used are released, the resources can be used for another purpose. For example, if the resources of the head part 2 are released while the robot apparatus is walking, the head part 2 can be made to track (follow) a moving ball. [0234]
  • Note that the basic position is not limited to one position. For example, a plurality of positions such as the sitting position, the sleeping position, and the like can be set as basic positions. In this manner, a shift from a motion of the entire apparatus to a motion of a componential part, or vice versa, can be achieved by the shortest motion through the shortest distance (or in the shortest time). Also, setting a plurality of basic positions leads to enrichment of the expressions of the robot apparatus 1. [0235]
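  • A sketch of this bridging through a basic position is given below; the two planner callables stand in for the shortest-distance searches on the whole-body graph and on a componential-part graph, and every name is illustrative:

    def plan_via_basic(whole_search, part_search, current, basic, goal):
        """Plan from the current whole-body node to the shared basic
        posture, then continue on the part graph from its basic node
        to the aimed motion, so the two motions join smoothly."""
        leg1 = whole_search(current, basic)  # e.g. a0: sit down
        leg2 = part_search(basic, goal)      # e.g. a1 then a2 (head)
        return leg1 + leg2

    plan = plan_via_basic(lambda s, g: [f"entire: {s} -> {g}"],
                          lambda s, g: [f"head: {s} -> {g}"],
                          "NDa0", "basic", "a2")
    print(plan)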
  • Also, a determination of a position or motion in the position transit mechanism part 42 as described above is made on the basis of action command information S16 from the action determination mechanism part 41. Further, the action determination mechanism part 41 normally sends action command information S16 to the position transit mechanism part 42 without limitations; that is, while one motion is being executed, a command concerning another motion may be issued and sent to the position transit mechanism part 42. There is therefore provided a command storage part for storing action command information S16 sent to the position transit mechanism part 42. This command storage part stores action command information S16 generated by the action determination mechanism part 41 and can further perform a so-called list operation. The command storage part is, for example, a buffer. [0236]
  • In this manner, if a command sent from the action determination mechanism part 41 cannot be executed at present, e.g., when a trick (predetermined motion) is being carried out, the command thus sent is stored into the buffer. For example, a newly sent command D is added to the list in the buffer, as shown in FIG. 25A. After the trick is finished, normally, the oldest command is picked up from the buffer, and a route search is carried out. For example, if commands are stored as shown in FIG. 25A, the oldest command A is executed first. [0237]
  • Commands are thus stored into the buffer and are executed one after another in the order from the oldest command. However, it is also possible to perform a list operation to insert or cancel a command. [0238]
  • As for insertion of a command, a command D is inserted into a command group which has been stored. In this manner, a route search for the command D can be executed, prior to commands A, B, and C waiting for execution. [0239]
  • As for cancellation of a command, a command which has already been stored in the buffer is cancelled. In this manner, a command that has become unnecessary for any reason is prevented from being executed. [0240]
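  • These list operations can be pictured with a minimal buffer sketch (the class and method names are hypothetical):

    from collections import deque

    class CommandBuffer:
        """FIFO of pending commands supporting the list operations
        described above: append, insert at the head, and cancel."""
        def __init__(self):
            self.queue = deque()

        def append(self, cmd):        # normal case: newest goes last
            self.queue.append(cmd)

        def insert_front(self, cmd):  # priority command jumps the queue
            self.queue.appendleft(cmd)

        def cancel(self, cmd):        # drop a command no longer needed
            self.queue.remove(cmd)

        def pop_oldest(self):         # oldest waiting command runs first
            return self.queue.popleft() if self.queue else None

    buf = CommandBuffer()
    for c in ("A", "B", "C"):
        buf.append(c)
    buf.insert_front("D")  # D now runs before A, B, C
    buf.cancel("B")
    print([buf.pop_oldest() for _ in range(3)])  # ['D', 'A', 'C']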
  • The buffer can include a plurality of command storage areas corresponding to the entire robot apparatus 1 and the respective componential parts. In this case, commands for motions of the entire apparatus and of each componential part are stored as shown in FIG. 26. By thus providing areas for storing commands corresponding to the entire apparatus and the componential parts, the following operations are enabled. [0241]
  • It is possible to add synchronization information for reproducing motions of different componential parts, such as the head part and the leg parts, in synchronization with each other. For example, as shown in FIG. 27, the commands stored in the command storage areas of the head part 2 and the leg parts 4 are respectively added with information concerning the rank in the order in which reproduction of the commands is to be started. Information indicating one same rank, e.g., information indicating that reproduction should be started fifth, is assigned as synchronization information. [0242]
  • In this manner, if the third motion of the head part 2, whose execution was started two ranks earlier, ends before the fourth motion of the leg parts 4, whose execution was started one rank earlier, the head part 2 alone is prevented from starting execution of its fifth-ranked command. After waiting until reproduction of the fourth motion of the leg parts 4 is finished, the reproduction requests for the motions of the head part 2 and the leg parts 4 to which the fifth-rank information is added can be issued simultaneously. In this manner, for example, it is possible to realize expressions of the robot apparatus 1 with a greater impact by simultaneously starting the motion of swinging the head part 2 laterally and the motion of inclining the leg parts 4 leftward and rightward. [0243]
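  • The synchronization gate can be sketched as a rank check (the data layout is an assumption): a command tagged with a synchronization rank may start only when every lower-ranked command in any part's queue has finished, so commands sharing one rank begin together.

    def may_start(rank, finished_ranks, all_ranks):
        """True when no command of a lower rank is still pending."""
        return all(r in finished_ranks for r in all_ranks if r < rank)

    all_ranks = {1, 2, 3, 4, 5}  # rank 5 is shared by head and legs
    print(may_start(5, {1, 2, 3}, all_ranks))     # False: rank 4 (legs) running
    print(may_start(5, {1, 2, 3, 4}, all_ranks))  # True: issue both rank-5
                                                  # requests simultaneously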
  • Also, by the list operation, a plan of a series of motions before and under execution can be stopped halfway or a command with a higher priority can be inserted later into a higher rank. Therefore, it is possible to construct a scenario with enough flexibility. [0244]
  • As described above, the search for an optimal route to an aimed position or motion based on action command information S16 sent from the action determination mechanism part 41, and the determination of a position transit plan, are realized because the position transit mechanism part 42 comprises a motion route search part 60 as shown in FIG. 28. [0245]
  • The motion route search part 60 comprises a command storage part 61, a route search part 62, and a graph storage part 63. [0246]
  • A command concerning the entire apparatus and the componential parts is supplied to the motion route search part 60 from the action determination mechanism part 41. For example, an aimed position or an aimed part (e.g., the head part) of a motion, a command for a list operation with respect to the current command and a series of commands issued in the past, information concerning the characteristic of the command itself, or the like is appended to action command information S16. In the following, the information thus appended will be called appended information. [0247]
  • The command for a list operation with respect to the current operation and a series of commands issued in the past is, for example, a command for inserting a newly issued command, explained with reference to FIG. 25B, at the top of a group of commands which are not yet executed. Also, information concerning the characteristic of the command itself (hereinafter called command characteristic information) is, for example, the parameter concerning the walking direction explained in FIG. 17, a parameter concerning a command explained in FIG. 18, e.g., the parameter of "three steps" where the motion is "walk forward", or information for synchronizing another motion, explained in FIG. 19. [0248]
  • The command storage part 61 serves to store action command information S16 sent from the action determination mechanism part 41, as described above, and is, for example, a buffer. If action command information S16 has appended information, the command storage part 61 performs processing based on the contents of the appended information. For example, if a command for inserting a command is included in the appended information as information concerning a list operation, an operation of inserting the action command information S16 which has just arrived at the top of the row of commands standing by in the command storage part 61 is carried out in accordance with the contents of the command. [0249]
  • Also, if there is command characteristic information as appended information, the command storage part 61 stores it together with the command. [0250]
  • This command storage part 61 grasps what command is being executed and what command is waiting at what rank in the order. To realize this, for example, commands are stored with order ranks added. [0251]
  • In addition, the command storage part 61 has four storage areas corresponding to the entire apparatus 1, the head part 2, the leg parts 4, and the tail part 5, as has been described previously. Further, the command storage part 61 can determine that a command for moving the entire apparatus cannot be executed while a command for moving the head part 2 is being executed, or that a command now moving the head part and a command for moving the leg parts 4 can be issued independently of each other. That is, it is possible to perform processing which prevents the resources of the entire apparatus and the componential parts from competing with each other, so that a so-called resolution of competition between resources can be achieved. [0252]
  • Also, the command storage part 61 remembers the ranks of the order in which commands have been issued across different parts. For example, if commands are supplied in the order of the entire apparatus 1, the leg parts 4, the head part 2, and the entire apparatus 1, the commands concerning the entire apparatus 1 are stored as commands to be executed first and fourth, the command concerning the leg parts 4 is stored as a command to be executed second, and the command concerning the head part 2 is stored as a command to be executed third. In this manner, the command storage part 61 first sends the command concerning the entire apparatus 1, to be executed first, to the route search part 62. As soon as reproduction of the contents of that command is finished, the command storage part 61 sends the command concerning the leg parts 4, to be executed second, and the command concerning the head part 2, to be executed third. After the componential parts finish reproduction of the contents of these commands, the command storage part 61 sends the command concerning the entire apparatus 1, to be executed fourth, to the route search part 62. [0253]
  • The route search part 62 starts a so-called route search upon receiving a command supplied from the command storage part 61. The graph storage part 63 stores graphs corresponding to the parts sectioned in the command storage part 61, i.e., graphs corresponding to the entire apparatus 1, the head part 2, the leg parts 4, and the tail part 5. Based on the graphs stored in the graph storage part 63, the route search part 62 searches an optimal route to the aimed position or motion given as the contents of a command, using the distance or weight described previously as an index. [0254]
  • Further, based on the position transit plan obtained by the route search, the route search part 62 sends position transit information S18 to the control mechanism part 43 until the aimed position or motion is executed. [0255]
  • As described above, the position transit mechanism part 42 searches an optimal route to an aimed position or motion in accordance with a command and prepares a position transit plan, based on action command information S16 sent from the action determination mechanism part 41. In accordance with the position transit plan, the position transit mechanism part 42 outputs the position transit information S18 to the control mechanism part 43. [0256]
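  • One cycle of this pipeline might be sketched as follows; the three callables stand in for the command storage part, the route search part, and the control mechanism part, and all names are illustrative:

    def run_cycle(next_command, search_route, send_to_control):
        """Pop the next startable command, search a route on the graph
        of the addressed part, and hand the plan to the control
        mechanism arc by arc as position transit information S18."""
        cmd = next_command()
        if cmd is None:
            return False
        route = search_route(cmd["part"], cmd["current"], cmd["goal"])
        if route is None:
            return False  # no route: the command is erased
        for arc in route:
            send_to_control(arc)  # actuators driven, graph then updated
        return True

    ok = run_cycle(lambda: {"part": "head", "current": "basic",
                            "goal": "nod"},
                   lambda part, cur, goal: [f"{part}: {cur} -> {goal}"],
                   print)
    print(ok)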
  • (3-5) Processing in Control Mechanism Part [0257]
  • Returning to FIG. 3, the control mechanism part 43 generates a control signal S5 for driving the actuators 23, based on position transit information S18, and sends this signal to the actuators 23 to drive them. Thus, the robot apparatus 1 is made to perform the desired motion. [0258]
  • Also, the control mechanism part 43 returns an end notification (reproduction result) made on the basis of the position transit information S18 to the route search part 62, until the aimed position or motion is reached. The route search part 62 which has received an end notification notifies the graph storage part 63 of the end of the motion, and the graph storage part 63 updates the information on the graph. That is, for example, if the position transit information S18 is to let the leg parts 4 make a motion of transiting from the state of the sleeping position to the state of the sitting position, the graph storage part 63 which has received an end notification moves the graph position of the node of the leg parts 4 from the sleeping position to the sitting position. [0259]
  • For example, if the aimed motion is to "thrash legs" (self-motion arc) in the sitting position, the route search part 62 further sends "thrash legs" as position transit information S18 to the control mechanism part 43. The control mechanism part 43 generates control information for thrashing the leg parts 4, based on the position transit information S18, and sends this information to the actuators 23 to execute the motion of thrashing the leg parts 4. When the control mechanism part 43 completes reproduction of the predetermined motion, it again sends an end notification to the route search part 62, and the route search part 62 notifies the graph storage part 63 of the completion of the motion of thrashing the leg parts 4. In the case of a self-motion arc, however, it is unnecessary to update the information on the graph: the position at the end of thrashing the leg parts 4 is the sitting position of the leg parts, which is equal to the position before the motion was started. Therefore, the graph storage part 63 does not change the current position of the leg parts. [0260]
  • Meanwhile, the route search part 62 notifies the command storage part 61 that reproduction of the aimed position or motion has ended. Since the aimed position or motion has been completed without problems, i.e., since the contents of the command have been accomplished without problems, the command storage part 61 erases the command. For example, if thrashing of the legs is supplied as the aimed motion as described above, this command is erased from the command storage area of the leg parts 4. [0261]
  • As described above, the action determination mechanism part 41, the motion route search part 60, and the control mechanism part 43 transmit information among one another, whereby the basic functions of the action determination mechanism part 41, the motion route search part 60, and the control mechanism part 43 are realized. [0262]
  • FIG. 29 shows a series of processing procedures of a route search in the motion route search part 60, which is carried out on the basis of a command generated in the action determination mechanism part 41, and of the motions which are carried out on the basis of the route search result. [0263]
  • At first, in a step SP1, the action determination mechanism part 41 supplies a command to the motion route search part 60. In a subsequent step SP2, the command storage part 61 of the motion route search part 60 adds the command to a command group. [0264]
  • For example, as shown in FIG. 26, a command concerning the tail part is added. If command information for making an insertion is added as a list operation to the command, the command is inserted into a row of commands in accordance with the command information. [0265]
  • In a subsequent step SP3, the command storage part 61 determines whether or not there is a command which can be started. [0266]
  • For example, suppose that the command storage part 61 grasps a command concerning the entire apparatus as a first command (currently being reproduced), commands concerning the head part as second and third commands, a command concerning the leg parts as a fourth command, and commands concerning the tail part 5 as fifth and sixth commands. While the first command concerning the entire apparatus is being executed, it is determined that there is no command that can be started. [0267]
  • If there is no command that can be started, the processing goes to the step SP4. Otherwise, if there is a command that can be started, the processing goes to the step SP5. If, at the time point when the processing reaches the step SP3, the entire robot apparatus 1 is making a motion, a command concerning a componential part cannot be executed, and the processing therefore goes to the step SP4. Meanwhile, if the motion has ended by the time point when the processing reaches the step SP3, the processing goes to the step SP5. [0268]
  • In the step SP4, the command storage part 61 waits for the elapse of a predetermined time. For example, after waiting for the elapse of 0.1 second, the command storage part 61 determines again whether or not there is a command that can be started, in the step SP3. [0269]
  • For example, if the command concerning the entire apparatus, which was being executed, has finished after waiting for the elapse of 0.1 second in the step SP4, the second command concerning the head part, the fourth command concerning the leg parts, and the fifth command concerning the tail part are grasped as commands that can be started, in the step SP3. [0270]
  • The third command concerning the head part is grasped as a command that can be started as soon as execution of the second command concerning the head part is finished. The sixth command concerning the tail part is grasped as a command that can be started as soon as execution of the fifth command concerning the tail part is finished. [0271]
  • Thus, in case where there is no command that can be started immediately in the step SP3, the processing goes to the step SP4 and waits for the elapse of a predetermined time. In this manner, the processing waits for completion of a predetermined motion of the entire apparatus or the componential parts, and the next command can be executed at a suitable timing. [0272]
  • In the step SP5, based on the contents of the next command sent from the command storage part 61, the route search part 62 determines whether or not a motion route from the current position to the aimed position can be found on the graph, stored in the graph storage part 63, corresponding to the command. [0273]
  • For example, in case of a command concerning a motion of the head part 2, the route search part 62 determines whether or not there is a route that can reach the aimed state or motion from the current position (state) on the graph of the head part, as shown in FIG. 30. The case where a route can be found is a case where there is a path (arc) that can reach the aimed state or motion from the current position (state); as shown in FIG. 30, this is a case where there are a plurality of oriented arcs a0 and a1 that can reach the aimed position from the current position. The case where no route can be found is a case where there is no path to the instructed state or motion. [0274]
  • If no transit route to the aimed position is found from the current position, the processing goes to the step SP6. Otherwise, if a transit route is found, the processing goes to the step SP7. If a transit route to the aimed position (or motion) is found, the route of arcs ak (where k is an integer from 0 to n) is stored, and this becomes the information of the position transit plan. [0275]
  • In the step SP6, the command storage part 61 erases the command from the list, considering that no motion route has been found. By thus erasing the command if no motion route is found, a subsequent command can be picked up. After thus erasing the command, the command storage part 61 determines again whether or not there is a command that can be started, in the step SP3. [0276]
  • In the step SP7, i=0 is set. In the subsequent step SP8, whether or not i is n or less is determined. The case where i is n means that the aimed position or motion is completed; that is, the n-th arc is an oriented arc an which directly transits to the aimed node, or an aimed self-motion arc an. If i=0 is satisfied, i is confirmed as being n or less at least in this step SP8. [0277]
  • If i is n or less, the processing goes to the step SP10. Otherwise, if i is greater than n, the processing goes to the step SP9. [0278]
  • In the step SP10, the control mechanism part 43 performs reproduction of the arc ai, based on position transit information S18 supplied from the route search part 62. For example, if the robot apparatus is at the current position (first position), as shown in FIG. 30, the motion of the first oriented arc a0 is carried out. [0279]
  • In the step SP11, the route search part 62 receives an end notification of the motion. In the step SP12, the graph storage part 63 updates the position on the graph. [0280]
  • In the subsequent step SP13, i is incremented by setting i=i+1 so as to execute the next arc. Subsequently, the processing returns to the step SP8, and whether or not i is n or less is determined again. [0281]
  • In the step SP9, whose processing is carried out if i is greater than n, i.e., if the motion to the aimed position has been executed or if the arc (oriented arc or self-motion arc) as the aimed motion has been executed, the command storage part 61 erases the command which has just been finished from the stored commands. Further, the processing returns to the step SP3, and the command storage part 61 determines again whether or not there is a command that can be started. [0282]
  • If an aimed position or motion is associated with the entire apparatus and the componential parts, an optimal route is searched on the graphs corresponding to them in the steps SP5 and SP6, as has been explained with reference to FIGS. 23 and 24. In the processing from the step SP7, the corresponding processing is carried out: that is, the position is let transit to a basic position, and the contents of the oriented arcs to the aimed position or motion are executed. [0283]
  • Also, the commands are stored in the command storage part 61 as has been described above. Since their resources do not compete with each other, the respective componential parts of the robot apparatus 1 can perform simultaneous motions based on such commands. Hence, with respect to the processing procedure (flow) described above, the flows of the respective componential parts can exist in parallel with each other, so that a plurality of flows can be executed simultaneously. Meanwhile, in case where a flow of a command concerning the entire apparatus is being processed, the processing of the flows of commands concerning the respective componential parts is not carried out but is put in a standby state. [0284]
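  • The loop of FIG. 29 can be condensed into the following sketch (steps SP3 to SP13); the callables and the command layout are illustrative assumptions, and in the real apparatus one such flow would run per resource group:

    import time

    def process_commands(buffer, find_route, reproduce_arc, update_graph):
        """SP3: pick a startable command; SP4: otherwise wait and retry;
        SP5/SP6: search a route, erasing the command if none exists;
        SP7-SP13: reproduce the arcs one by one, updating the graph;
        SP9: erase the finished command."""
        while buffer:
            cmd = next((c for c in buffer if c["startable"]), None)
            if cmd is None:
                time.sleep(0.1)  # SP4: wait, then test SP3 again
                continue
            route = find_route(cmd)  # SP5
            buffer.remove(cmd)       # SP6 or SP9: erased either way
            if route is None:
                continue             # SP6: no motion route found
            for arc in route:        # SP8/SP13: i = 0 .. n
                reproduce_arc(arc)   # SP10: reproduce arc a_i
                update_graph(arc)    # SP11/SP12: end notice + update

    buffer = [{"name": "walk", "startable": True}]
    process_commands(buffer,
                     lambda c: ["a0: stand up", "a1: step forward"],
                     lambda a: print("reproduce", a),
                     lambda a: print("graph updated after", a))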
  • (4) Motion and Effect [0285]
  • In the structure as described above, the feeling/instinct model part 40 of the controller 32 changes the states of the feeling and instinct of the robot apparatus 1, based on supplied input information S1 to S3. The changes of the feeling and instinct of the robot apparatus 1 are reflected on actions of the robot apparatus 1, thereby to let the robot apparatus 1 move autonomously, based on its own feeling and instinct. [0286]
  • Further, various positions and motions are let transit on the basis of graphs which register nodes and arcs, thereby to enable various expressions by the robot apparatus 1. [0287]
  • That is, it is possible to have a large number of expression patterns depending on a combination of motions by using previously registered graphs. Also, natural and smooth transit of position from the current position to an aimed position or motion can be realized, so that balance is not unintentionally lost during a motion. [0288]
  • (5) Other Embodiments [0289]
  • The above embodiment has been explained with respect to the case where a command from a user, transmitted by infrared light from a remote controller, is received. However, the present invention is not limited thereto; a command from a user transmitted by a radio wave or a sound wave can also be received. [0290]
  • Also, the above embodiment has been explained with respect to the case where a command from a user is inputted through a command receiving part 30 comprised of a remote controller receiving part 13 and a microphone 11. However, the present invention is not limited thereto. For example, a computer may be connected to the robot apparatus 1, and a command from a user may be inputted through the connected computer. [0291]
  • Also, the above embodiment has been explained with respect to the case where the states of the feeling and instinct are determined with use of emotion units 50A to 50F indicating emotions such as "delight", "sorrow", "anger", and the like, as well as desire units 51A to 51D indicating "desire for motion", "desire for love", and the like. The present invention, however, is not limited thereto. For example, an emotion unit indicating "loneliness" may be added to the emotion units, and a desire unit indicating a "desire for sleep" may be added to the desire units 51. Furthermore, the states of the feeling and the instinct may be determined by using emotion units and desire units constructed from various kinds or various numbers of units. [0292]
  • The above embodiment has been explained with respect to a structure in which the robot apparatus 1 has both a feeling model and an instinct model. However, the present invention is not limited thereto; the structure may include only the feeling model or only the instinct model. Furthermore, the structure may include another model that decides the action of an animal. [0293]
  • Also, the above embodiment has been explained with respect to the case where a next action is determined by the action determination mechanism part 41, based on a command signal S1, an external information signal S2, an internal information signal S3, feeling/instinct state information S10, and action information S12. However, the present invention is not limited thereto; a next action may be determined on the basis of only a part of the command signal S1, external information signal S2, internal information signal S3, feeling/instinct state information S10, and action information S12. [0294]
  • Also, the above embodiment has been explained with respect to the case where a next action is determined with use of an algorithm called a finite automaton 57. The present invention, however, is not limited thereto; an action may be determined with use of an algorithm called a state machine, which has an infinite number of states. In this case, a state may be newly generated every time input information S14 is supplied, and an action may be determined in accordance with the state thus generated. [0295]
  • Also, the above embodiment has been explained with respect to the case where an algorithm called a finite automaton 57 is used to determine a next action. The present invention, however, is not limited thereto; an action may be determined with use of an algorithm called a probability finite automaton, in which a plurality of states are selected as candidates for the transit destination, based on the input information S14 currently supplied and the state at that time, and the state of the transit destination is determined at random, by means of random numbers, from among the plurality of states thus selected, as in the sketch below.
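As an illustration of such a probability finite automaton, the following sketch lists candidate transit destinations for the current state and input, and draws one at random according to assumed probabilities. The states, inputs, and transition table are hypothetical, not taken from the embodiment:

    # Minimal sketch of a probability finite automaton: candidate next
    # states for (current state, input) carry assumed probabilities, and
    # the transit destination is drawn at random among them.
    import random

    # (current state, input) -> [(candidate next state, probability), ...]
    TRANSITIONS = {
        ("sitting", "ball_seen"): [("walking", 0.6), ("standing", 0.3), ("sitting", 0.1)],
        ("walking", "obstacle"):  [("standing", 0.7), ("sitting", 0.3)],
    }

    def next_state(state, event):
        candidates = TRANSITIONS.get((state, event))
        if candidates is None:
            return state  # no registered transition: keep the current state
        states, weights = zip(*candidates)
        return random.choices(states, weights=weights, k=1)[0]

    state = next_state("sitting", "ball_seen")
    print(state)  # e.g. "walking", determined at random by random numbers
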
  • Also, the above embodiment has been explained with respect to the following case. That is, if action command information S16 indicates a position to which the current position can directly transit, the action command information S16 is sent directly, without changes, as position transit information S18 to the control mechanism part 43. On the other hand, if the action command information S16 indicates a position to which the current position cannot directly transit, position transit information S18 is generated which lets the position once transit to another position to which the current position can transit, and thereafter transit to the aimed position, and is sent to the control mechanism part 43. However, the present invention is not limited thereto; the action command information S16 may be accepted and sent to the control mechanism part 43 only in the case where it indicates a position to which the current position can directly transit, while action command information S16 indicating a position to which the current position cannot directly transit may be rejected. Both policies are sketched below.
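The two policies can be contrasted in a short sketch, under the assumption of a hypothetical table listing, for each position, the positions directly reachable from it in one step. The position names are illustrative only:

    # Minimal sketch of the two policies: DIRECT is an assumed table of
    # positions directly reachable in one step from each position.
    DIRECT = {
        "lying":    {"sitting"},
        "sitting":  {"lying", "standing"},
        "standing": {"sitting", "walking"},
        "walking":  {"standing"},
    }

    def transit_or_route(current, target):
        """Route through one intermediate position when needed."""
        if target in DIRECT[current]:
            return [target]               # send the command without changes
        for via in DIRECT[current]:
            if target in DIRECT[via]:
                return [via, target]      # transit once to another position
        raise ValueError("no route")      # a deeper search would go here

    def transit_or_reject(current, target):
        """Stricter policy: accept only directly reachable targets."""
        return [target] if target in DIRECT[current] else None  # None = rejected

    print(transit_or_route("lying", "standing"))   # ['sitting', 'standing']
    print(transit_or_reject("lying", "standing"))  # None (command rejected)
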
  • Also, the above embodiment has been explained with respect to the case where the present invention is applied to the robot apparatus 1. However, the present invention is not limited thereto and may be applied to various other robot apparatuses, such as robot apparatuses used for games or in the field of entertainment. As shown in FIG. 31, the present invention can also be applied to a character which moves as computer graphics, e.g., an animation using an articulated character.
  • The outer appearance of the robot apparatus 1 to which the present invention is applied is not limited to the structure shown in FIG. 1; the apparatus may be arranged to be more similar to an actual dog, as shown in FIG. 32, or may be arranged as a humanoid robot having a human shape.
  • INDUSTRIAL APPLICABILITY
  • A robot apparatus according to the present invention makes a motion corresponding to supplied input information, and comprises model change means including a model which causes the motion, for determining the motion by changing the model based on the input information. Therefore, the robot apparatus can act autonomously based on the states of its own feeling and instinct, by changing the model based on input information and thereby determining a motion.
  • A motion control method according to the present invention makes a motion in accordance with supplied input information, and the motion is determined by changing a model which causes the motion, based on the input information. As a result, the robot apparatus can act autonomously based on the states of its own feeling and instinct.
  • Another robot apparatus according to the present invention makes a motion in accordance with supplied input information, and comprises motion determination means for determining a next motion subsequent to a current motion, based on the current motion and the input information supplied next, said current motion corresponding to a history of input information supplied sequentially. Therefore, the robot apparatus can act autonomously based on the states of its own feeling and instinct, by determining a next motion subsequent to a current motion, based on the current motion corresponding to a history of input information supplied sequentially and on the input information supplied next.
  • Another motion control method according to the present invention makes a motion in accordance with supplied input information, and a next motion subsequent to a current motion is determined, based on the current motion and the input information to be supplied next, the current motion corresponding to a history of input information supplied sequentially. As a result, the robot apparatus can act autonomously, based on the state of its own feeling or instinct, for example.
  • Further, another robot apparatus according to the present invention comprises: graph storage means for storing a graph which registers positions and motions and which is constructed by connecting the positions with the motions that let the positions transit; and control means for searching a route from a current position to an aimed position or motion on the graph, based on the action command information, and for letting the robot apparatus move based on a search result, thereby letting the robot apparatus transit from the current position to the aimed position or motion. Therefore, a route from the current position to the aimed position or motion is searched on the graph by the control means, and a motion is made on the basis of the search result, so that the current position can be made to transit to the aimed position or motion. In this manner, the robot apparatus can enrich its own expressions.
  • Further, in another motion control method according to the present invention, based on action command information, a route from a current position to an aimed position or motion is searched on a graph which registers positions and motions and which is constructed by connecting the positions with the motions that let the positions transit, and a motion is made based on the search result, thereby making a transition from the current position to the aimed position or motion. As a result, it is possible to enrich the expressions of a robot apparatus or of a character which moves as computer graphics.
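As one possible reading of such a route search, the following sketch treats positions as nodes and motions as weighted directed arcs, and returns the motion sequence whose weight sum is minimal (cf. claims 38 and 39 below). The graph contents and weights are illustrative assumptions, not the embodiment's actual graph:

    # Minimal sketch of the route search: positions are nodes, motions are
    # weighted directed arcs, and the search (here Dijkstra's algorithm)
    # returns the motion sequence whose weight sum is minimal.
    import heapq

    # position -> [(motion name, next position, weight), ...]  (assumed)
    GRAPH = {
        "lying":    [("get_up", "sitting", 3.0)],
        "sitting":  [("stand", "standing", 2.0), ("lie_down", "lying", 1.0)],
        "standing": [("walk", "walking", 1.0), ("sit", "sitting", 1.0)],
        "walking":  [("stop", "standing", 1.0)],
    }

    def search_route(start, goal):
        frontier = [(0.0, start, [])]  # (accumulated weight, position, motions so far)
        visited = set()
        while frontier:
            cost, pos, motions = heapq.heappop(frontier)
            if pos == goal:
                return motions
            if pos in visited:
                continue
            visited.add(pos)
            for motion, nxt, w in GRAPH.get(pos, []):
                if nxt not in visited:
                    heapq.heappush(frontier, (cost + w, nxt, motions + [motion]))
        return None  # aimed position unreachable from the current position

    print(search_route("lying", "walking"))  # ['get_up', 'stand', 'walk']
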

Claims (53)

1. A robot apparatus which makes a motion corresponding to supplied input information, comprising
model change means including a model for causing the motion, for determining the motion by changing the model, based on the input information.
2. The robot apparatus according to claim 1, wherein the model change means includes a feeling model for expressing a feeling through a motion, as the model, or an instinct model for expressing an instinct through a motion, as a model having a property different from the feeling model, and wherein
the model change means changes the feeling model, based on the input information, thereby to change a feeling state for causing the motion, or changes the instinct model, based on the input information, thereby to change an instinct state for causing the motion.
3. The robot apparatus according to claim 1, wherein the model change means uses a plurality of models having different properties, as the model.
4. The robot apparatus according to claim 3, wherein the model change means uses a feeling model for expressing a feeling through a motion, as a first property model, and an instinct model for expressing an instinct through a motion, as a second property model, and wherein
the model change means changes the feeling model, based on the input information, thereby to change a feeling state for causing the motion, and changes the instinct model, based on the input information, thereby to change an instinct state for causing the motion.
5. The robot apparatus according to claim 1, wherein the model change means has a plurality of kinds of models, as the model, with respect to one and the same property.
6. The robot apparatus according to claim 5, wherein the model change means individually changes levels of the plurality of kinds of models, based on the input information, thereby to change the state which causes the motion.
7. The robot apparatus according to claim 3, wherein the model change means has a plurality of kinds of models with respect to each of the models having different properties.
8. The robot apparatus according to claim 7, wherein the model change means individually changes levels of the plurality of kinds of models of the plurality of models having different properties, based on the input information, thereby to change the state which causes the motion.
9. The robot apparatus according to claim 3, wherein the model change means changes the models having different properties, with the models influencing each other.
10. The robot apparatus according to claim 7, wherein the model change means changes desired models among the models having different properties, with the desired models influencing each other.
11. The robot apparatus according to claim 1, wherein the model change means changes the model in accordance with an action of the robot apparatus.
12. The robot apparatus according to claim 1, wherein the model change means changes the model in a way inherent to the robot apparatus itself.
13. The robot apparatus according to claim 1, wherein the input information is constructed by user command information supplied to the robot apparatus from a user.
14. The robot apparatus according to claim 1, wherein the input information is constructed by an action on the robot apparatus from a user.
15. The robot apparatus according to claim 1, wherein the input information is constructed by environmental information around the robot apparatus.
16. The robot apparatus according to claim 15, wherein the input information is constructed by information concerning a motion of another robot apparatus existing around the robot apparatus.
17. The robot apparatus according to claim 1, wherein the input information is constructed by information concerning a condition inside the robot apparatus.
18. The robot apparatus according to claim 1, wherein the input information is constructed by information concerning a current or past action of the robot apparatus.
19. A motion control method for making a motion in accordance with supplied input information, wherein
a model which causes the motion is changed, based on the input information, thereby to determine the motion.
20. The method according to claim 19, wherein the model is a feeling model for expressing a feeling through a motion, or an instinct model having a property different from that of the feeling model, for expressing an instinct through a motion, and wherein
the feeling model is changed, based on the input information, thereby to change a feeling state which causes the motion, or the instinct model is changed, based on the input information, thereby to change an instinct state which causes the motion.
21. The method according to claim 19, wherein the model includes a feeling model for expressing a feeling through a motion, as a model of a first property, and an instinct model for expressing an instinct through a motion, as a model of a second property, and wherein
the feeling model is changed, based on the input information, thereby to change a feeling state which causes the motion, and the instinct model is changed, based on the input information, thereby to change an instinct state which causes the motion.
22. A robot apparatus which makes a motion in accordance with supplied input information, comprising
motion determination means for determining a next motion subsequent to a current motion, based on the current motion and the input information supplied next, said current motion corresponding to a history of input information supplied sequentially.
23. The apparatus according to claim 22, wherein the input information is constituted by combining all or part of user command information supplied from a user, information concerning an action from the user, information concerning an environment, information concerning a condition inside the robot apparatus, information concerning a current or past action, information concerning a state of a feeling and/or an instinct.
24. The apparatus according to claim 22, wherein the motion determination means determines the next motion with use of a finite automaton having a finite number of motions.
25. The apparatus according to claim 22, wherein the motion determination means uses a probability finite automaton by which a plurality of motions are selected as candidates for a transit destination, based on the current motion and the input information, and a desired motion among the selected plurality of motions is determined at random by random numbers.
26. The apparatus according to claim 22, wherein the motion determination means uses, as a condition for letting the current motion transit to the next motion, a fact that a time for which the current motion has been executed reaches a predetermined value, a fact that specific input information is inputted, or a fact that a level of a desired feeling model or instinct model, among a plurality of kinds of feeling models and a plurality of kinds of instinct models that determine states of the feeling and instinct of the robot apparatus, exceeds a predetermined threshold value.
27. The apparatus according to claim 22, wherein the motion determination means determines a motion as a transit destination, based on whether or not a level of a desired feeling model or instinct model among a plurality of kinds of feeling models and a plurality of kinds of instinct models that determine states of a feeling and an instinct exceeds a predetermined threshold value.
28. The apparatus according to claim 22, wherein the motion determination means changes a parameter which characterizes an action to be carried out as a motion at a transit destination, in correspondence with whether a level of a desired feeling model or instinct model among a plurality of kinds of feeling models and a plurality of kinds of instinct models that determine states of a feeling and an instinct exceeds a predetermined threshold value.
29. A motion control method for making a motion in accordance with supplied input information, wherein
a next motion subsequent to a current motion is determined, based on the current motion and the input information to be supplied next, the current motion corresponding to a history of input information supplied sequentially.
30. A robot apparatus which is made to make a motion, based on action command information, to transit between a plurality of positions, comprising:
graph storage means storing a graph which registers the positions and the motion and which is constructed by connecting the positions with the motion for letting the positions transit; and
control means for searching a route from a current position to an aimed position or motion, on the graph, based on the action command information, and for letting the robot apparatus move, based on a search result, thereby to let the robot apparatus transit from the current position to the aimed position or motion.
31. The apparatus according to claim 30, further comprising action command information storage means for temporarily storing the action command information, wherein the control means searches a route from the current position to the aimed position or motion, on the graph, based on the action command information sequentially supplied from the action command information storage means.
32. The apparatus according to claim 31, wherein the action command information is added with additional information concerning the action command information.
33. The apparatus according to claim 32, wherein the additional information is command operation information, and wherein
the action command information storage means performs an operation on newly supplied action command information and previously supplied action command information, based on the command operation information.
34. The apparatus according to claim 32, wherein the additional information is detailed input information which instructs contents of a motion in detail, and wherein the control means finely controls the aimed motion, based on the detailed input information.
35. The apparatus according to claim 34, wherein the detailed input information is information concerning a parameter of a motion, and wherein
the control means executes the aimed motion in accordance with the parameter.
36. The apparatus according to claim 34, wherein the detailed input information is information concerning a number of repetitions of a motion, and wherein
the control means repeatedly executes the aimed motion, based on information concerning the number of repetitions.
37. The apparatus according to claim 34, wherein the detailed input information is information concerning a repetitive motion, and wherein
the control means executes the aimed motion until information concerning an end of repetition is inputted.
38. The apparatus according to claim 30, wherein motions on the graph are added with weights, and wherein
the control means searches a route with reference to the weights.
39. The apparatus according to claim 38, wherein the control means searches a route which minimizes a sum of the weights.
40. The apparatus according to claim 30, wherein the control means searches a route in correspondence with a physical shape and a mechanism.
41. The apparatus according to claim 30, wherein an entirety of the robot apparatus is constructed by a plurality of componential parts, wherein
the graph storage means stores a plurality of graphs respectively corresponding to the entirety and the componential parts, and wherein
the control means searches routes with respect to the entirety and the componential parts on the respective graphs, and lets the entirety and the componential parts move, based on each search result.
42. The apparatus according to claim 41, wherein a basic position common to the entirety and the componential parts is registered on the plurality of graphs, and wherein
the control means searches a route from the current position to the aimed position or motion, across the graphs through the basic position, when the current position and the aimed position or motion bridge over the entirety and the componential parts.
43. The apparatus according to claim 41, wherein the action command information is added with synchronization information for synchronizing a motion with a different componential part, and wherein
the control means lets the different componential part move synchronously, based on the synchronization information.
44. The apparatus according to claim 30, wherein the control means executes a motion of transiting from the current position to the aimed position, with an expression added.
45. The apparatus according to claim 30, wherein a plurality of equal motions are registered on the graph, and wherein
the control means searches a route to one of the plurality of equal motions, referring at least to the current position, when the plurality of equal motions are instructed by the action command information.
46. The apparatus according to claim 45, wherein the control means searches a route to one of the plurality of equal motions that is the shortest and is executable.
47. The apparatus according to claim 30, wherein a neutral position is registered on the graph, and wherein
the control means once sets the neutral position and then lets the neutral position transit to the aimed position or motion, when an actual position is unclear.
48. The apparatus according to claim 47, wherein the control means sets the neutral position through a motion slower than a normal motion in a normal transit.
49. A motion control method for making a motion, based on action command information, thereby to control transit between a plurality of positions, comprising the steps of:
searching, based on the action command information, a route from a current position to an aimed position or motion on a graph which registers the positions and the motion and which is constructed by connecting the positions with a motion for letting the positions transit, and
executing a motion based on a search result, thereby to make transit from the current position to the aimed position or motion.
50. The method according to claim 49, wherein the action command information is temporarily stored in action command information storage means, and wherein
a route from the current position to the aimed position or motion is searched on the graph, based on the action command information sequentially supplied from the action command information storage means.
51. The method according to claim 49, wherein motions on the graph are added with weights, and a route is searched with reference to the weights.
52. The method according to claim 49, wherein the method controls a motion of a robot apparatus which can transit to a plurality of positions.
53. The method according to claim 52, wherein an entirety of the robot is constructed by a plurality of componential parts, and wherein
the route is searched with respect to the entirety and the componential parts, on a plurality of graphs respectively corresponding to the entirety and the componential parts, and the entirety and the componential parts are moved, based on the respective search results.
US10/196,683 1999-01-20 2002-07-15 Robot apparatus and motion control method Abandoned US20030023348A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/196,683 US20030023348A1 (en) 1999-01-20 2002-07-15 Robot apparatus and motion control method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP1229299 1999-01-20
JP11-012292 1999-01-20
JP11-341374 1999-11-30
JP34137499 1999-11-30
US09/646,506 US6442450B1 (en) 1999-01-20 2000-01-20 Robot device and motion control method
US10/196,683 US20030023348A1 (en) 1999-01-20 2002-07-15 Robot apparatus and motion control method

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US09/646,506 Continuation US6442450B1 (en) 1999-01-20 2000-01-20 Robot device and motion control method
PCT/JP2000/000263 Continuation WO2000043167A1 (en) 1999-01-20 2000-01-20 Robot device and motion control method

Publications (1)

Publication Number Publication Date
US20030023348A1 true US20030023348A1 (en) 2003-01-30

Family

ID=26347871

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/457,318 Expired - Lifetime US6337552B1 (en) 1999-01-20 1999-12-08 Robot apparatus
US09/646,506 Expired - Lifetime US6442450B1 (en) 1999-01-20 2000-01-20 Robot device and motion control method
US10/017,532 Expired - Fee Related US6667593B2 (en) 1999-01-20 2001-12-14 Robot apparatus
US10/196,683 Abandoned US20030023348A1 (en) 1999-01-20 2002-07-15 Robot apparatus and motion control method

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/457,318 Expired - Lifetime US6337552B1 (en) 1999-01-20 1999-12-08 Robot apparatus
US09/646,506 Expired - Lifetime US6442450B1 (en) 1999-01-20 2000-01-20 Robot device and motion control method
US10/017,532 Expired - Fee Related US6667593B2 (en) 1999-01-20 2001-12-14 Robot apparatus

Country Status (6)

Country Link
US (4) US6337552B1 (en)
EP (1) EP1088629A4 (en)
JP (3) JP4696361B2 (en)
KR (1) KR100721694B1 (en)
CN (1) CN1246126C (en)
WO (1) WO2000043167A1 (en)


Families Citing this family (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6337552B1 (en) * 1999-01-20 2002-01-08 Sony Corporation Robot apparatus
US6560511B1 (en) * 1999-04-30 2003-05-06 Sony Corporation Electronic pet system, network system, robot, and storage medium
CN1304516A (en) 1999-05-10 2001-07-18 索尼公司 Robot device, its control method and recorded medium
GB2350696A (en) * 1999-05-28 2000-12-06 Notetry Ltd Visual status indicator for a robotic machine, eg a vacuum cleaner
US6663393B1 (en) * 1999-07-10 2003-12-16 Nabil N. Ghaly Interactive play device and method
US7442107B1 (en) * 1999-11-02 2008-10-28 Sega Toys Ltd. Electronic toy, control method thereof, and storage medium
US20020059386A1 (en) * 2000-08-18 2002-05-16 Lg Electronics Inc. Apparatus and method for operating toys through computer communication
WO2002030630A1 (en) * 2000-10-11 2002-04-18 Sony Corporation Robot apparatus and robot apparatus motion control method
JP4524524B2 (en) * 2000-10-11 2010-08-18 ソニー株式会社 Robot apparatus and control method thereof
JP3645848B2 (en) * 2000-11-17 2005-05-11 株式会社ソニー・コンピュータエンタテインメント Information processing program, recording medium on which information processing program is recorded, information processing apparatus and method
TWI236610B (en) * 2000-12-06 2005-07-21 Sony Corp Robotic creature device
DE60029976T2 (en) * 2000-12-12 2007-04-05 Sony France S.A. Automatic system with heterogeneous bodies
JP2002304188A (en) * 2001-04-05 2002-10-18 Sony Corp Word string output device and word string output method, and program and recording medium
US6697707B2 (en) * 2001-04-06 2004-02-24 Vanderbilt University Architecture for robot intelligence
JP4689107B2 (en) * 2001-08-22 2011-05-25 本田技研工業株式会社 Autonomous robot
US20030045947A1 (en) * 2001-08-30 2003-03-06 The Boeing Company System, method and computer program product for controlling the operation of motion devices by directly implementing electronic simulation information
JP3837479B2 (en) * 2001-09-17 2006-10-25 独立行政法人産業技術総合研究所 Motion signal generation method, apparatus and motion signal generation program for operating body
GB2380847A (en) * 2001-10-10 2003-04-16 Ncr Int Inc Self-service terminal having a personality controller
CN100509308C (en) * 2002-03-15 2009-07-08 索尼公司 Robot behavior control system, behavior control method, and robot device
US7118443B2 (en) 2002-09-27 2006-10-10 Mattel, Inc. Animated multi-persona toy
GB2393803A * 2002-10-01 2004-04-07 Hewlett Packard Co Two mode creature simulation
US7401057B2 (en) 2002-12-10 2008-07-15 Asset Trust, Inc. Entity centric computer system
US7238079B2 (en) * 2003-01-14 2007-07-03 Disney Enterprise, Inc. Animatronic supported walking system
US7248170B2 (en) * 2003-01-22 2007-07-24 Deome Dennis E Interactive personal security system
DE10302800A1 (en) * 2003-01-24 2004-08-12 Epcos Ag Method of manufacturing a component
JP2004237392A (en) * 2003-02-05 2004-08-26 Sony Corp Robotic device and expression method of robotic device
US20050105769A1 (en) * 2003-11-19 2005-05-19 Sloan Alan D. Toy having image comprehension
WO2005069890A2 (en) * 2004-01-15 2005-08-04 Mega Robot, Inc. System and method for reconfiguring an autonomous robot
JP4456537B2 (en) * 2004-09-14 2010-04-28 本田技研工業株式会社 Information transmission device
KR100595821B1 (en) * 2004-09-20 2006-07-03 한국과학기술원 Emotion synthesis and management for personal robot
JP2006198017A (en) * 2005-01-18 2006-08-03 Sega Toys:Kk Robot toy
US8713025B2 (en) 2005-03-31 2014-04-29 Square Halt Solutions, Limited Liability Company Complete context search system
US20070016029A1 (en) * 2005-07-15 2007-01-18 General Electric Company Physiology workstation with real-time fluoroscopy and ultrasound imaging
JP4237737B2 (en) * 2005-08-04 2009-03-11 株式会社日本自動車部品総合研究所 Automatic control device for on-vehicle equipment and vehicle equipped with the device
KR100745720B1 (en) * 2005-11-30 2007-08-03 한국전자통신연구원 Apparatus and method for processing emotion using emotional model
KR100746300B1 (en) * 2005-12-28 2007-08-03 엘지전자 주식회사 Method for determining moving direction of robot
JP4316630B2 (en) 2007-03-29 2009-08-19 本田技研工業株式会社 Robot, robot control method, and robot control program
US20080274812A1 (en) * 2007-05-02 2008-11-06 Inventec Corporation System of electronic pet capable of reflecting habits of user and method therefor and recording medium
EP2014425B1 (en) * 2007-07-13 2013-02-20 Honda Research Institute Europe GmbH Method and device for controlling a robot
CN101127152B (en) * 2007-09-30 2012-02-01 山东科技大学 Coding signal generator and radio remote control device for robot and animal control
CN101406756A (en) * 2007-10-12 2009-04-15 鹏智科技(深圳)有限公司 Electronic toy for expressing emotion and method for expressing emotion, and luminous unit control device
KR100893758B1 (en) * 2007-10-16 2009-04-20 한국전자통신연구원 System for expressing emotion of robots and method thereof
CN101411946B (en) * 2007-10-19 2012-03-28 鸿富锦精密工业(深圳)有限公司 Toy dinosaur
CN101596368A (en) * 2008-06-04 2009-12-09 鸿富锦精密工业(深圳)有限公司 Interactive toy system and method thereof
US8414350B2 (en) * 2008-08-18 2013-04-09 Rehco, Llc Figure with controlled motorized movements
CN101653662A (en) * 2008-08-21 2010-02-24 鸿富锦精密工业(深圳)有限公司 Robot
CN101653660A (en) * 2008-08-22 2010-02-24 鸿富锦精密工业(深圳)有限公司 Type biological device for automatically doing actions in storytelling and method thereof
CN101727074B (en) * 2008-10-24 2011-12-21 鸿富锦精密工业(深圳)有限公司 Biology-like device with biological clock and behavior control method thereof
US20100181943A1 (en) * 2009-01-22 2010-07-22 Phan Charlie D Sensor-model synchronized action system
US8539359B2 (en) 2009-02-11 2013-09-17 Jeffrey A. Rapaport Social network driven indexing system for instantly clustering people with concurrent focus on same topic into on-topic chat rooms and/or for generating on-topic search results tailored to user preferences regarding topic
JP4947073B2 (en) 2009-03-11 2012-06-06 トヨタ自動車株式会社 Robot apparatus and control method thereof
EP2420045A1 (en) * 2009-04-17 2012-02-22 Koninklijke Philips Electronics N.V. An ambient telephone communication system, a movement member, method, and computer readable medium therefor
US8939840B2 (en) 2009-07-29 2015-01-27 Disney Enterprises, Inc. System and method for playsets using tracked objects and corresponding virtual worlds
JP5447811B2 (en) * 2009-09-10 2014-03-19 国立大学法人 奈良先端科学技術大学院大学 Path plan generation apparatus and method, robot control apparatus and robot system
KR100968944B1 (en) * 2009-12-14 2010-07-14 (주) 아이알로봇 Apparatus and method for synchronizing robot
US8483873B2 (en) * 2010-07-20 2013-07-09 Innvo Labs Limited Autonomous robotic life form
US20120042263A1 (en) 2010-08-10 2012-02-16 Seymour Rapaport Social-topical adaptive networking (stan) system allowing for cooperative inter-coupling with external social networking systems and other content sources
FR2969026B1 (en) * 2010-12-17 2013-02-01 Aldebaran Robotics HUMANOID ROBOT HAVING A MANAGER OF ITS PHYSICAL AND VIRTUAL RESOURCES, METHODS OF USE AND PROGRAMMING
EP2665585B1 (en) * 2011-01-21 2014-10-22 Abb Ag System for commanding a robot
US8676937B2 (en) * 2011-05-12 2014-03-18 Jeffrey Alan Rapaport Social-topical adaptive networking (STAN) system allowing for group based contextual transaction offers and acceptances and hot topic watchdogging
US9434072B2 (en) 2012-06-21 2016-09-06 Rethink Robotics, Inc. Vision-guided robots and methods of training them
JP6162736B2 (en) * 2015-03-19 2017-07-12 ファナック株式会社 Robot control system with a function to change the communication quality standard according to the distance between the machine and the portable wireless operation panel
JP6467710B2 (en) * 2015-04-06 2019-02-13 信太郎 本多 A versatile artificial intelligence system that adapts to the environment
US10500716B2 (en) * 2015-04-08 2019-12-10 Beijing Evolver Robotics Co., Ltd. Multi-functional home service robot
KR20170052976A (en) * 2015-11-05 2017-05-15 삼성전자주식회사 Electronic device for performing motion and method for controlling thereof
CN105573490A (en) * 2015-11-12 2016-05-11 于明 Human-computer interaction system, wearing device and method
US10421186B2 (en) * 2016-01-04 2019-09-24 Hangzhou Yameilijia Technology Co., Ltd. Method and apparatus for working-place backflow of robots
CN107346107A (en) * 2016-05-04 2017-11-14 深圳光启合众科技有限公司 Diversified motion control method and system and the robot with the system
WO2017199566A1 (en) * 2016-05-20 2017-11-23 シャープ株式会社 Information processing device, robot, and control program
WO2018008323A1 (en) * 2016-07-08 2018-01-11 Groove X株式会社 Autonomous robot that wears clothes
JP6571618B2 (en) * 2016-09-08 2019-09-04 ファナック株式会社 Human cooperation robot
CN107813306B (en) * 2016-09-12 2021-10-26 徐州网递智能科技有限公司 Robot and motion control method and device thereof
CN108229640B (en) * 2016-12-22 2021-08-20 山西翼天下智能科技有限公司 Emotion expression method and device and robot
JP6729424B2 (en) 2017-01-30 2020-07-22 富士通株式会社 Equipment, output device, output method, and output program
JP6886334B2 (en) * 2017-04-19 2021-06-16 パナソニック株式会社 Interaction devices, interaction methods, interaction programs and robots
US10250532B2 (en) * 2017-04-28 2019-04-02 Microsoft Technology Licensing, Llc Systems and methods for a personality consistent chat bot
US20190111565A1 (en) * 2017-10-17 2019-04-18 True Systems, LLC Robot trainer
WO2019138618A1 (en) * 2018-01-10 2019-07-18 ソニー株式会社 Animal-shaped autonomous moving body, method of operating animal-shaped autonomous moving body, and program
JP7139643B2 (en) * 2018-03-23 2022-09-21 カシオ計算機株式会社 Robot, robot control method and program
CN108382488A (en) * 2018-04-27 2018-08-10 梧州学院 A kind of Doraemon
JP7298860B2 (en) * 2018-06-25 2023-06-27 Groove X株式会社 Autonomous action type robot assuming a virtual character
US11226408B2 (en) * 2018-07-03 2022-01-18 Panasonic Intellectual Property Management Co., Ltd. Sensor, estimating device, estimating method, and recording medium
US11230017B2 (en) 2018-10-17 2022-01-25 Petoi Llc Robotic animal puzzle
JP7247560B2 (en) * 2018-12-04 2023-03-29 カシオ計算機株式会社 Robot, robot control method and program
US11670156B2 (en) 2020-10-30 2023-06-06 Honda Research Institute Europe Gmbh Interactive reminder companion

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6337552B1 (en) * 1999-01-20 2002-01-08 Sony Corporation Robot apparatus
US3888023A (en) * 1974-08-21 1975-06-10 Jardine Ind Inc Physical training robot
JPS6224988A (en) * 1985-07-23 1987-02-02 志井田 孝 Robot having feeling
US5182557A (en) * 1989-09-20 1993-01-26 Semborg Recrob, Corp. Motorized joystick
JPH0573143A (en) * 1991-05-27 1993-03-26 Shinko Electric Co Ltd Mobile robot system
JPH0612401A (en) * 1992-06-26 1994-01-21 Fuji Xerox Co Ltd Emotion simulating device
JP3018865B2 (en) 1993-10-07 2000-03-13 富士ゼロックス株式会社 Emotion expression device
JPH0876810A (en) 1994-09-06 1996-03-22 Nikon Corp Intensified learning method/device
JPH0934698A (en) 1995-07-20 1997-02-07 Hitachi Ltd Software generating method and software developing and supporting method
AU7662396A (en) * 1995-10-13 1997-04-30 Na Software, Inc. Creature animation and simulation technique
JP3413694B2 (en) * 1995-10-17 2003-06-03 ソニー株式会社 Robot control method and robot
JP4153989B2 (en) 1996-07-11 2008-09-24 株式会社日立製作所 Document retrieval and delivery method and apparatus
US5832189A (en) * 1996-09-26 1998-11-03 Interval Research Corporation Affect-based robot communication methods and systems
US5929585A (en) * 1996-11-19 1999-07-27 Sony Corporation Robot system and its control method
DE69840655D1 (en) * 1997-01-31 2009-04-23 Honda Motor Co Ltd Control system of a mobile robot with legs
JPH10235019A (en) * 1997-02-27 1998-09-08 Sony Corp Portable life game device and its data management device
JPH10289006A (en) * 1997-04-11 1998-10-27 Yamaha Motor Co Ltd Method for controlling object to be controlled using artificial emotion
JP3273550B2 (en) 1997-05-29 2002-04-08 オムロン株式会社 Automatic answering toy
JP3655054B2 (en) * 1997-06-13 2005-06-02 ヤンマー農機株式会社 Rice transplanter seedling stand configuration
JPH11126017A (en) 1997-08-22 1999-05-11 Sony Corp Storage medium, robot, information processing device and electronic pet system
DE69943312D1 (en) * 1998-06-09 2011-05-12 Sony Corp MANIPULATOR AND METHOD FOR CONTROLLING ITS LOCATION
IT1304014B1 (en) 1998-06-29 2001-03-02 Reale S R L MOLDS FOR THE MANUFACTURE OF ICE GLASSES AND SUPPORTS TO SUPPORT THESE GLASSES DURING USE.
US6151571A (en) * 1999-08-31 2000-11-21 Andersen Consulting System, method and article of manufacture for detecting emotion in voice signals through analysis of a plurality of voice signal parameters
JP3555107B2 (en) * 1999-11-24 2004-08-18 ソニー株式会社 Legged mobile robot and operation control method for legged mobile robot

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9915934B2 (en) 1999-05-04 2018-03-13 Automation Middleware Solutions, Inc. Systems and methods for communicating with motion control systems and devices
US20030078682A1 (en) * 2001-10-19 2003-04-24 Nobuhiko Tezuka Simulation apparatus and simulation method
US8135830B2 (en) 2002-01-15 2012-03-13 Mcafee, Inc. System and method for network vulnerability detection and reporting
US20070283441A1 (en) * 2002-01-15 2007-12-06 Cole David M System And Method For Network Vulnerability Detection And Reporting
US20090259748A1 (en) * 2002-01-15 2009-10-15 Mcclure Stuart C System and method for network vulnerability detection and reporting
US7440819B2 (en) * 2002-04-30 2008-10-21 Koninklijke Philips Electronics N.V. Animation system for a robot comprising a set of movable parts
US20050177276A1 (en) * 2002-04-30 2005-08-11 Morel Cyrille C. Animation system for a robot comprising a set of movable parts
US20050008001A1 (en) * 2003-02-14 2005-01-13 John Leslie Williams System and method for interfacing with heterogeneous network data gathering tools
US20150112471A1 (en) * 2003-09-25 2015-04-23 Roy-G-Biv Corporation Database Event Driven Motion Systems
US9588510B2 (en) * 2003-09-25 2017-03-07 Automation Middleware Solutions, Inc. Database event driven motion systems
US20120065747A1 (en) * 2003-09-25 2012-03-15 Roy-G-Biv Corporation Database Event Driven Motion Systems
US20070255546A1 (en) * 2003-11-10 2007-11-01 Karsten Strehl Simulation System and Computer-Implemented Method for Simulation and Verifying a Control System
US20070191986A1 (en) * 2004-03-12 2007-08-16 Koninklijke Philips Electronics, N.V. Electronic device and method of enabling to animate an object
US8583282B2 (en) * 2005-09-30 2013-11-12 Irobot Corporation Companion robot for personal interaction
US10661433B2 (en) 2005-09-30 2020-05-26 Irobot Corporation Companion robot for personal interaction
US9878445B2 (en) 2005-09-30 2018-01-30 Irobot Corporation Displaying images from a robot
US20070199108A1 (en) * 2005-09-30 2007-08-23 Colin Angle Companion robot for personal interaction
US8874300B2 (en) * 2005-10-21 2014-10-28 Deere & Company Systems and methods for obstacle avoidance
US9043016B2 (en) 2005-10-21 2015-05-26 Deere & Company Versatile robotic control module
US9098080B2 (en) 2005-10-21 2015-08-04 Deere & Company Systems and methods for switching between autonomous and manual operation of a vehicle
US9429944B2 (en) 2005-10-21 2016-08-30 Deere & Company Versatile robotic control module
US20120046820A1 (en) * 2005-10-21 2012-02-23 James Allard Systems and Methods for Obstacle Avoidance
US20110071718A1 (en) * 2005-10-21 2011-03-24 William Robert Norris Systems and Methods for Switching Between Autonomous and Manual Operation of a Vehicle
US20080077277A1 (en) * 2006-09-26 2008-03-27 Park Cheon Shu Apparatus and method for expressing emotions in intelligent robot by using state information
US8452448B2 (en) * 2008-04-02 2013-05-28 Irobot Corporation Robotics systems
US20090254217A1 (en) * 2008-04-02 2009-10-08 Irobot Corporation Robotics Systems
US10438589B2 (en) * 2008-06-03 2019-10-08 Samsung Electronics Co., Ltd. Robot apparatus and method for registering shortcut command thereof based on a predetermined time interval
US11037564B2 (en) 2008-06-03 2021-06-15 Samsung Electronics Co., Ltd. Robot apparatus and method for registering shortcut command thereof based on a predetermined time interval
US20090299751A1 (en) * 2008-06-03 2009-12-03 Samsung Electronics Co., Ltd. Robot apparatus and method for registering shortcut command thereof
US9953642B2 (en) * 2008-06-03 2018-04-24 Samsung Electronics Co., Ltd. Robot apparatus and method for registering shortcut command consisting of maximum of two words thereof
US9431027B2 (en) * 2011-01-26 2016-08-30 Honda Motor Co., Ltd. Synchronized gesture and speech production for humanoid robots using random numbers
US20120191460A1 (en) * 2011-01-26 2012-07-26 Honda Motor Co,, Ltd. Synchronized gesture and speech production for humanoid robots
US10471611B2 (en) 2016-01-15 2019-11-12 Irobot Corporation Autonomous monitoring robot systems
US11662722B2 (en) 2016-01-15 2023-05-30 Irobot Corporation Autonomous monitoring robot systems
US10458593B2 (en) 2017-06-12 2019-10-29 Irobot Corporation Mast systems for autonomous mobile robots
US10100968B1 (en) 2017-06-12 2018-10-16 Irobot Corporation Mast systems for autonomous mobile robots
CN110297697A (en) * 2018-03-21 2019-10-01 北京猎户星空科技有限公司 Robot motion sequence generating method and device
US11110595B2 (en) 2018-12-11 2021-09-07 Irobot Corporation Mast systems for autonomous mobile robots
US20210122044A1 (en) * 2019-10-29 2021-04-29 Kabushiki Kaisha Toshiba Control system, control method, robot system, and storage medium
US11660753B2 (en) * 2019-10-29 2023-05-30 Kabushiki Kaisha Toshiba Control system, control method, robot system, and storage medium

Also Published As

Publication number Publication date
US20020050802A1 (en) 2002-05-02
JP4985805B2 (en) 2012-07-25
US6442450B1 (en) 2002-08-27
CN1246126C (en) 2006-03-22
KR100721694B1 (en) 2007-05-28
EP1088629A1 (en) 2001-04-04
JP2010149277A (en) 2010-07-08
JP2010149276A (en) 2010-07-08
CN1297393A (en) 2001-05-30
JP4696361B2 (en) 2011-06-08
US6667593B2 (en) 2003-12-23
KR20010092244A (en) 2001-10-24
WO2000043167A1 (en) 2000-07-27
EP1088629A4 (en) 2009-03-18
US6337552B1 (en) 2002-01-08

Similar Documents

Publication Publication Date Title
US6442450B1 (en) Robot device and motion control method
US7117190B2 (en) Robot apparatus, control method thereof, and method for judging character of robot apparatus
US6587751B2 (en) Robot device and method for controlling the robot's emotions
US6362589B1 (en) Robot apparatus
US7076334B2 (en) Robot apparatus and method and system for controlling the action of the robot apparatus
US7063591B2 (en) Edit device, edit method, and recorded medium
JP2003039363A (en) Robot device, action learning method therefor, action learning program thereof, and program recording medium
US6711467B2 (en) Robot apparatus and its control method
KR20020067694A (en) Robot apparatus and robot apparatus motion control method
JP2001157982A (en) Robot device and control method thereof
JP2001157981A (en) Robot device and control method thereof
JP2005169567A (en) Content reproducing system, content reproducing method and content reproducing device
JP2002163631A (en) Dummy creature system, action forming method for dummy creature for the same system and computer readable storage medium describing program for making the same system action
JP2001191279A (en) Behavior control system, behavior controlling method, and robot device
JP2001157980A (en) Robot device, and control method thereof
JP2001154707A (en) Robot device and its controlling method
JP2002120179A (en) Robot device and control method for it
JP2002120180A (en) Robot device and control method for it
JP2001157984A (en) Robot device and action control method for robot device
JP2003136451A (en) Robot device and control method thereof
JP2002120182A (en) Robot device and control method for it
JP2004298976A (en) Robot device and recognition control method for robot device
JP2003340761A (en) Robot device and robot device control method
JP2001283019A (en) System/method for transmitting information, robot, information recording medium, system/method for online sales and sales server
JP2001157978A (en) Robot device, and control method thereof

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION