Researchers Warn AI Toys for Young Children May Misread Emotions and Respond Inappropriately
Researchers are calling for stricter oversight of artificial intelligence-powered toys designed for young children after a new study found that some devices struggle to understand emotions and may respond in confusing ways.
The research, conducted by a team at the University of Cambridge, examined how a small group of children aged three to five interacted with the Gabbo AI Toy, an AI-enabled plush toy featuring a voice-activated chatbot powered by OpenAI technology. The toy is designed to encourage conversation and imaginative play among preschool-aged children.
AI toys targeted at children as young as three are already on the market, but researchers say there is still limited evidence about how such technologies affect early childhood development. During their review, the Cambridge team found only seven relevant studies worldwide, none of which directly examined toddlers’ interactions with AI toys.
In the study, parents expressed interest in the toy’s potential to support language development and communication skills. However, the children frequently struggled to hold conversations with it. Researchers observed that the device often talked over the children, failed to recognise interruptions and could not distinguish between the voices of adults and young children.
Some responses from the toy also appeared awkward or inappropriate in emotional situations. In one instance, when a five-year-old told the toy “I love you,” the toy responded with a formal message reminding the child to follow interaction guidelines. In another case, when a three-year-old said they were sad, the toy responded with a cheerful prompt encouraging the conversation to continue rather than acknowledging the child’s feelings.
Study co-author Emily Goodacre warned that such interactions could be problematic for young children who are still learning social cues and emotional communication. She said AI toys might misinterpret children’s feelings or respond in ways that fail to provide emotional reassurance.
Another researcher involved in the study, Jenny Gibson, professor of neurodiversity and developmental psychology at the University of Cambridge, said the findings highlight the need to consider psychological safety in children’s products. While toy safety has traditionally focused on preventing physical hazards, she argued that the emotional impact of technology should also be taken seriously.
After observing children's interactions with the toy over a year-long period, the researchers recommended that regulators establish clearer standards to ensure that products aimed at children under five provide adequate psychological safeguards.
The toy examined in the study was developed by Curio, a company that has previously collaborated with musician Grimes, the former partner of Elon Musk. Curio said it recognises the heightened responsibility involved in developing AI products for children and emphasised that its devices are designed with parental permissions, transparency and user controls.
The findings have also drawn attention from child welfare advocates. Rachel de Souza, the UK’s Children’s Commissioner, said that while artificial intelligence has potential benefits, many AI tools used in educational settings currently lack the safeguarding standards normally required for materials used with young children.
Researchers also advised parents to supervise interactions with AI toys and to keep the devices in shared household spaces rather than letting children play with them alone. They further recommended that parents review privacy policies carefully before introducing such devices at home.
Opinions among early childhood educators remain divided. June O’Sullivan, who runs a network of early childhood centres in London, said she has yet to see convincing evidence that AI tools improve learning outcomes for very young children. She argued that social and emotional development at that age is better supported through interaction with people rather than machines.
Children’s rights campaigner and actor Sophie Winkleman also voiced concerns about introducing artificial intelligence into early childhood education, warning that the potential risks may outweigh the benefits.
Researchers say the debate is likely to grow as AI-powered products become more common in homes and classrooms, raising new questions about how technology should be used during the earliest stages of childhood development.
