OpenAI releases reasoning AI with eye on safety, accuracy
ChatGPT creator OpenAI on Thursday released a new series of artificial intelligence models designed to spend more time thinking -- in the hope that generative AI chatbots will provide more accurate and beneficial responses.
The new models, known as OpenAI o1-preview, are designed to tackle complex tasks and solve harder problems in science, coding and mathematics -- areas where earlier models have been criticized for inconsistent performance.
Unlike their predecessors, these models have been trained to refine their thinking processes, try different methods and recognize mistakes before they deploy a final answer.
The new release comes as OpenAI is raising funds that could see it valued around $150 billion, which would make it one of the world's most valuable private companies, according to US media.
Investors include Microsoft and Nvidia, and could also include a $7 billion investment from MGX, a United Arab Emirates-backed investment fund, The Information reported.
OpenAI CEO Sam Altman hailed the models as "a new paradigm: AI that can do general-purpose complex reasoning."
However, he cautioned that the technology "is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it."
OpenAI's push to improve "thinking" in its model is a response to the persistent problem of "hallucinations" in AI chatbots.
This refers to their tendency to generate persuasive but incorrect content, which has somewhat cooled the excitement over ChatGPT-style AI features among business customers.
"We have noticed that this model hallucinates less," OpenAI researcher Jerry Tworek told The Verge.
But "we can't say we solved hallucinations," he added.
The Microsoft-backed company said that in tests, the models performed comparably to PhD students on difficult tasks in physics, chemistry and biology.
They also excelled in mathematics and coding, achieving an 83 percent success rate on a qualifying exam for the International Mathematics Olympiad, compared to 13 percent for GPT-4o, its most advanced general use model.
OpenAI said that the new reasoning capabilities could be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complex formulas, or by software developers to build and execute multistep designs.
The company also said that the models withstood rigorous jailbreaking tests and were better able to resist attempts to circumvent their guardrails.
OpenAI said its strengthened safety measures also included recent agreements with the US and UK AI Safety Institutes, which were granted early access to the models for evaluation and testing.
X.Cheung--CPN