OpenAI insiders blast lack of AI transparency
A group of current and former OpenAI employees issued an open letter on Tuesday warning that the world's leading artificial intelligence companies lacked the transparency and accountability needed to address the potential risks posed by the technology.
The letter raised serious concerns about AI safety risks "ranging from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."
The 16 signatories, who also included a staff member from Google DeepMind, warned that AI companies "have strong financial incentives to avoid effective oversight" and that self-regulation would not effectively change this.
"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the letter said.
"However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."
That reality, the letter added, meant that employees inside the companies were the only ones who could notify the public, and the signatories called for broader whistleblower laws to protect them.
"Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," the letter said.
The four current employees of OpenAI signed the letter anonymously because they feared retaliation from the company, The New York Times reported.
It was also signed by Yoshua Bengio, Geoffrey Hinton and Stuart Russell, who are often described as AI "godfathers" and have criticized the lack of preparation for AI's dangers.
OpenAI pushed back against the criticism in a statement.
"We're proud of our track record of providing the most capable and safest AI systems, and believe in our scientific approach to addressing risk," the statement said.
"We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world."
OpenAI also said it had "avenues for employees to express their concerns including an anonymous integrity hotline" and a newly formed Safety and Security Committee led by members of the board and executives, including CEO Sam Altman.
The criticism of OpenAI, which was first shared with the Times, comes amid growing questions about Altman's leadership of the company.
OpenAI has unveiled a wave of new products, though the company insists they will only get released to the public after thorough testing.
The unveiling of a human-like chatbot caused controversy when Hollywood star Scarlett Johansson complained that its voice closely resembled her own.
She had previously turned down an offer from Altman to work with the company.
St.Ch.Baker--CPN