Meta has decided to temporarily restrict children's and teenagers' access to its artificial intelligence characters across all of its applications worldwide. This measure will remain in effect while the company works on an updated version of these characters, which will include parental controls and new safeguards for minors.
According to Meta’s official communication, in the coming weeks teenagers will lose the ability to interact with AI characters until the renewed experience is available.
The company specified that the new version for minors will incorporate parental controls, allowing responsible adults to manage and limit their children’s interactions with these digital tools.
In a blog post about child protection, Meta noted that it will offer parents the option to disable teens’ private chats with AI characters, a feature that has not yet been released.
This measure joins other initiatives to reinforce safety on Meta's platforms, after its chatbots were criticized for inappropriate behavior in conversations with minors.
Meta reported that new AI experiences aimed at teens will conform to a rating system similar to that of PG-13 movies. The objective is to prevent access to inappropriate content and increase protection for younger users.
US regulators’ scrutiny of artificial intelligence companies has grown in recent months, due to concerns about the potential negative effects of chatbots. In August, Reuters reported that Meta’s AI policies allowed minors to engage in provocative conversations with these automated systems.
Meta’s decision to temporarily suspend minors’ access to AI characters appears to respond to both public criticism and regulatory pressure, pending new features and parental controls intended to ensure a safer experience for teenagers on its social networks.
Meta is facing a lawsuit in the US Virgin Islands, where it is accused of having generated billions of dollars by facilitating the spread of fraudulent ads on Facebook and Instagram.
The legal action, brought by the territorial prosecutor’s office before the Superior Court of St. Croix, accuses the company of allowing the circulation of these illicit advertisements because they represented a significant source of revenue for the company.
The suit seeks civil penalties against Meta under local consumer protection legislation. According to the complaint, internal company documents show that Meta anticipated substantial revenue from advertising linked to fraud, illegal gambling and the sale of prohibited products.
Projections cited in the complaint estimate that these ads contributed billions of dollars each year to the company’s coffers, funds Meta allegedly accepted as part of its business model.
The territory’s authorities maintain that Meta deliberately delayed suspending suspicious accounts and acted only when there was a very high level of certainty of fraud, thus allowing misleading ads to remain active for long periods.
This situation would have caused economic losses to numerous users, while the company continued to receive income from these ads.
Meta, for its part, emphatically denies the accusations. A spokesperson said the company actively combats scams across all its platforms and recognizes the damage that misleading advertising causes to both individual users and legitimate businesses.
The company claims that fraud reports have decreased in recent months as a result of improving its detection tools and tightening its internal policies.
Furthermore, the company rejects any claim of negligence in protecting minors and points to updates to its policies designed to reinforce the safety of its youngest users.

