OpenAI forms AI safety committee after key departures
OpenAI, the company behind ChatGPT, announced the formation of a new safety committee on Tuesday, weeks after the departures of key executives raised questions about the firm's commitment to mitigating the dangers of artificial intelligence.
The company said the committee, which will include CEO Sam Altman, is being established as OpenAI begins training its next AI model, expected to surpass the capabilities of the GPT-4 system powering ChatGPT.
"While we are proud to build and release industry-leading models on both capabilities and safety, we welcome a robust debate at this important juncture," OpenAI stated.
Composed of board members and executives, the committee will spend the next 90 days comprehensively evaluating and strengthening OpenAI's processes and safeguards around advanced AI development.
OpenAI stated it will also consult outside experts during this review period, including former US cybersecurity officials Rob Joyce, who previously led efforts at the National Security Agency, and John Carlin, a former senior Justice Department official.
Over the three-month span, the group will scrutinize OpenAI's current AI safety protocols and develop recommendations for potential enhancements or additions.
After this 90-day review, the committee's findings will be presented to the full OpenAI board before being publicly released.
The committee's formation comes on the heels of recent executive departures that stoked concerns about OpenAI's AI safety priorities.
Earlier this month, the company dissolved its "superalignment" team dedicated to mitigating long-term AI risks.
In announcing his exit, team co-lead Jan Leike criticized OpenAI for prioritizing "shiny new products" over vital safety work in a series of posts on X, the platform previously known as Twitter.
"Over the past few months, my team has been sailing against the wind," Leike said.
OpenAI has also faced controversy over an AI voice some claimed closely mimicked actress Scarlett Johansson, though the company denied attempting to impersonate the Hollywood star.
Y.Ibrahim--CPN