A safety expert has warned that parents need to be aware of the risks artificial intelligence (AI) poses to their children.

This comes after a lawsuit was filed claiming a chatbot "encouraged a teenager to take his own life".

Florida mum Megan Garcia is taking legal action against Character.AI, claiming her 14-year-old son killed himself as a result of interactions with a chatbot that took on the identity of the Game of Thrones character Daenerys Targaryen.

Dale Allen, founder of The Safety-Verse, an initiative that aims to make safety information and resources more accessible, said it was important to recognise that AI is a rapidly evolving technology which has not yet been fully tried and tested.

He said: "Artificial intelligence is still an emerging technology that hasn’t yet undergone the extensive learning and safety refinements we’ve achieved in health and safety over the years by learning from past accidents.

"Because AI is somewhat ‘childlike’ in its development, it needs to be human-led with parental controls in place—especially for systems accessed or used in the home—to protect our children and ensure safety as the technology matures over the coming years.

"We need to remain the guardians of these technologies, ensuring that human judgment and oversight guide their use—especially when it comes to protecting children, the elderly, and ourselves in our homes.

"In relation to Character.AI specifically, we need to ensure we give AI the same attention we would to platforms like YouTube, Google, Netflix, and any other systems that require child settings for users who are not yet adults."