AI toys look for bright side after troubled start
Toy makers at the Consumer Electronics Show were adamant that they are taking care to ensure their fun creations infused with generative artificial intelligence don't turn naughty.
That need was made clear by a recent Public Interest Research Group (PIRG) report with alarming findings, including an AI-powered teddy bear giving advice about sex and how to find a knife.
After being prompted, a Kumma bear suggested that a sex partner could add a "fun twist" to a relationship by pretending to be an animal, according to the "Trouble in Toyland" report published in November.
The outcry prompted Singaporean startup FoloToy to temporarily suspend sales of the bears.
FoloToy chief executive Wang Le told AFP that the company switched to a more advanced version of the OpenAI model used.
When PIRG tested the toy for the report, "they used some words children would not use," Wang Le said.
He expressed confidence that the updated bear would either evade or not answer inappropriate questions.
Toy giant Mattel, meanwhile, made no mention of the report in mid-December when it postponed the release of its first toy developed in partnership with ChatGPT-maker OpenAI.
- Caution advised -
The rapid advancement of generative AI since ChatGPT's arrival has paved the way for a new generation of smart toys.
Among the four devices tested by PIRG was Curio's Grok -- not to be confused with xAI's chatbot of the same name -- a four-legged stuffed toy inspired by a rocket that has been on the market since 2024.
The top performer in its class, Grok refused to answer questions unsuitable for a five-year-old.
It also allowed parents to override the algorithm's recommendations with their own and to review the content of interactions with young users.
Curio has received the independent KidSAFE label, which certifies that child protection standards are being applied.
However, the plush rocket is also designed to continuously listen for questions, raising privacy concerns about what it does with what is said around it.
Curio told AFP it was working to address concerns raised in the PIRG report about user data being shared with partners such as OpenAI and Perplexity.
"At the very least, parents should be cautious," Rory Erlich of PIRG said about having chatbot-enabled toys in the house.
"Toys that retain information about a child over time and try to form an ongoing relationship should especially be of concern."
Chatbots do create opportunities for toys to serve as tutors of sorts.
Turkish company Elaves says its round, yellow toy Sunny will be equipped with a chatbot to help children learn languages.
"Conversations are time-limited, naturally guided to end, and reset regularly to prevent drifting, confusion, or overuse," said Elaves managing partner Gokhan Celebi.
That design responds to the tendency of AI chatbots to get into trouble -- spouting errors or going off the rails -- when conversations drag on.
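The session limits Celebi describes could be sketched, in broad strokes, as follows. This is an illustrative assumption, not Elaves' actual implementation; all class, parameter, and function names here are hypothetical:

```python
import time

class TimedChatSession:
    """Hypothetical sketch of a time-limited toy chat session that resets."""

    def __init__(self, max_seconds=300, max_turns=20):
        self.max_seconds = max_seconds  # hard time limit per session
        self.max_turns = max_turns      # cap on back-and-forth exchanges
        self.reset()

    def reset(self):
        # Clear history so long conversations cannot drift off-topic
        self.started = time.monotonic()
        self.turns = 0
        self.history = []

    def ask(self, question, reply_fn):
        # Reset instead of answering once either limit is reached
        if (time.monotonic() - self.started > self.max_seconds
                or self.turns >= self.max_turns):
            self.reset()
            return "Let's take a break and start fresh!"
        self.turns += 1
        answer = reply_fn(question, self.history)
        self.history.append((question, answer))
        return answer

session = TimedChatSession(max_seconds=300, max_turns=2)
echo = lambda q, h: f"You said: {q}"
print(session.ask("hi", echo))     # answered normally
print(session.ask("hello", echo))  # answered normally
print(session.ask("again", echo))  # turn limit hit: session resets
```

The key design choice is that hitting a limit wipes the conversation history rather than letting it accumulate, which is what "reset regularly to prevent drifting" implies.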
Olli, which specializes in integrating AI into toys, has programmed its software to alert parents when inappropriate words or phrases are spoken during exchanges with built-in bots.
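Olli's detection is presumably far more sophisticated, but the basic idea of flagging blocklisted words and notifying a parent can be sketched like this (the word list, names, and callback are illustrative assumptions, not Olli's software):

```python
# Hypothetical blocklist; a real system would use richer moderation.
BLOCKLIST = {"knife", "weapon"}

def check_exchange(text, notify):
    """Return True if the text is clean; otherwise alert the parent."""
    flagged = {w for w in BLOCKLIST if w in text.lower()}
    if flagged:
        notify(f"Flagged words in toy chat: {sorted(flagged)}")
    return not flagged

alerts = []
check_exchange("Where can I find a knife?", alerts.append)  # triggers an alert
check_exchange("Tell me a bedtime story", alerts.append)    # passes silently
```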
For critics, letting toy makers police themselves on the AI front is insufficient.
"Why aren't we regulating these toys?" asks Temple University psychology professor Kathy Hirsh-Pasek.
"I'm not anti-tech, but they rushed ahead without guardrails, and that's unfair to kids and unfair to parents."
M.Davis--CPN