Cognitive Development Based on Predictive Learning
Cognitive development based on sensorimotor predictive learning
We propose a theory of cognitive development based on predictive learning of sensorimotor information. Our robot experiments demonstrate that various cognitive functions, which traditional studies have modeled independently, can all be acquired through sensorimotor predictive learning.
Modeling of the emergence of the mirror neuron system
The mirror neuron system is a group of neurons that discharge both when people execute an action and when they observe the same action performed by another individual. This "mirroring" function enables us to better understand and anticipate the goals of others' actions and to imitate them. We develop neural models by which the mirror neuron system emerges through sensorimotor learning accompanied by perceptual development.
Modeling of the emergence of prosocial behavior
Infants show prosocial behaviors even without any expectation of reward. We propose a computational model for the emergence of such behaviors. The robot executes a predicted action in order to minimize the prediction error caused by a failure in another's action. This results in prosocial behavior even though the robot has no such intention.
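The mechanism can be illustrated with a toy sketch: an agent has learned to expect a certain outcome, and acting to cancel the prediction error left by another's failed action looks like helping. Everything here (the one-dimensional world, the goal position, the helper functions) is illustrative, not the model from our experiments.

```python
# Toy world: an object is expected to end up at position GOAL.
# Another agent has tried to move it there and failed partway.
GOAL = 5

def predict(obj_pos):
    # The robot's learned forward model: "the object ends at the goal."
    return GOAL

def prediction_error(obj_pos):
    # Discrepancy between the predicted and the observed object position.
    return abs(predict(obj_pos) - obj_pos)

def robot_step(obj_pos):
    # Execute the action that reduces the prediction error by one step.
    return obj_pos + (1 if obj_pos < GOAL else -1)

obj = 2            # where the other's failed action left the object
trace = [obj]
while prediction_error(obj) > 0:
    obj = robot_step(obj)
    trace.append(obj)

print(trace)       # the robot completes the other's action: [2, 3, 4, 5]
```

No prosocial goal is coded anywhere; the "helping" trajectory falls out of error minimization alone, which is the point of the model.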
Modeling of emotional development through infant-caregiver tactile interaction
A pleasant/unpleasant state gradually differentiates into emotions such as happiness, anger, and sadness over the first few years of life. We propose a probabilistic neural network model that hierarchically integrates multimodal information to structure the emotional state. Our key idea is that tactile interaction between an infant and a caregiver drives this process: an infant's tactile sense, which has high acuity from birth and an innate ability to discriminate pleasant from unpleasant stimuli, leads to the normal development of emotion.
Infant-Directed Action (Motionese)
Microscopic analysis of infant-caregiver interaction using transfer entropy
Interaction between an infant and a caregiver is dynamic: both mutually shape the interaction by sending various signals such as gaze, body movement, emotional expressions, and speech. Open questions are when and which signals influence the partner's reaction more strongly, and how the dynamics of the interaction change as infants develop. We investigate the microscopic structure of interaction by measuring the information flow between an infant and a caregiver using transfer entropy.
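Transfer entropy T(X→Y) measures how much the past of signal X reduces uncertainty about the next value of signal Y beyond what Y's own past already explains, making it a directed measure of information flow. A minimal plug-in estimator for discretized signals is sketched below; the binary coupling demo and all variable names are illustrative, not our experimental data.

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate of T(X -> Y) in bits for integer series in [0, bins).

    T(X->Y) = sum p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ]
    """
    x, y = np.asarray(x), np.asarray(y)
    # Count joint occurrences of (y_{t+1}, y_t, x_t).
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(y[1:], y[:-1], x[:-1]):
        joint[a, b, c] += 1
    p = joint / joint.sum()
    p_yy = p.sum(axis=2)   # p(y_{t+1}, y_t)
    p_yx = p.sum(axis=0)   # p(y_t, x_t)
    p_y = p_yy.sum(axis=0) # p(y_t)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                if p[a, b, c] > 0:
                    te += p[a, b, c] * np.log2(
                        p[a, b, c] * p_y[b] / (p_yy[a, b] * p_yx[b, c])
                    )
    return te

# Demo: y imitates x with a one-step delay, so information flows X -> Y only.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=2000)
y = np.roll(x, 1)
print(transfer_entropy(x, y))  # close to 1 bit: strong flow X -> Y
print(transfer_entropy(y, x))  # close to 0 bits: no flow back
```

The asymmetry between the two directions is what lets us ask who is driving whom at each moment of an interaction, which correlation alone cannot answer.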
Designing robots that elicit and learn from motionese
Developmental studies have suggested that human caregivers significantly exaggerate their actions when interacting with infants, a phenomenon called motionese. Open questions are which signals from infants elicit motionese and what advantages infants gain from it in action learning. Our hypothesis is that bottom-up visual attention plays an important role in both. We demonstrate how a robot equipped with bottom-up attention elicits motionese and how it can extract action segments from such exaggerated actions.
Analysis of motionese using infant-like bottom-up visual attention
Developmental studies have suggested that human caregivers significantly modify their actions when interacting with an infant compared with an adult. This modification, called motionese, is characterized by rounder movements, longer and more frequent pauses between actions, and a closer distance to the infant. Our analysis of motionese using a saliency-based attention model reveals that motionese highlights important aspects of actions (e.g., the goal of an action) and thus properly guides infants' visual attention.
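A saliency-based attention model scores each image location by how strongly it differs from its surroundings, typically via center-surround contrast. The toy single-channel version below conveys the idea; the window radii, the intensity-only feature, and the test image are illustrative simplifications, not the full multi-feature model used in our analysis.

```python
import numpy as np

def box_blur(img, r):
    """Local mean over a (2r+1) x (2r+1) window, with edge padding."""
    pad = np.pad(img, r, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w), dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            acc += pad[dy : dy + h, dx : dx + w]
    return acc / (2 * r + 1) ** 2

def saliency_map(img, center_r=1, surround_r=4):
    """Center-surround contrast: |fine-scale mean - coarse-scale mean|."""
    sal = np.abs(box_blur(img, center_r) - box_blur(img, surround_r))
    m = sal.max()
    return sal / m if m > 0 else sal

# A single bright spot on a dark background is the most salient region,
# so a bottom-up observer's gaze is drawn straight to it.
img = np.zeros((40, 40))
img[20, 20] = 1.0
sal = saliency_map(img)
```

Run over video frames, such a map predicts where an infant-like observer would look, which is how we quantify whether motionese steers attention toward the goal of an action.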
Modeling of the developmental mechanism for joint attention
Joint attention is the process of looking at the object that another person is looking at. Previous studies in developmental psychology have suggested that the ability to achieve joint attention is acquired between 6 and 18 months of age and that it plays an important role in infants' further cognitive development. We investigate what inherent mechanisms enable infants to acquire this ability, how their perceptual and motor development influences the learning, and how caregivers support their development.
Role of visual attention in human-robot interaction
People interact with other individuals through various modalities such as gaze, speech, and gesture. Human-robot interaction, like human-human interaction, involves such multimodal information. We investigate the role of visual attention by observing how human partners respond to a robot whose attention is suddenly distracted.