California enacts AI safety law targeting tech giants
California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world's largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday.
Senate Bill 53 marks California's most significant move yet to regulate Silicon Valley's rapidly advancing AI industry while also maintaining its position as a global tech hub.
"With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails," State Senator Scott Wiener, the bill's sponsor, said in a statement.
The new law is Wiener's second attempt to establish AI safety regulations: Newsom vetoed his previous bill, SB 1047, following furious pushback from the tech industry.
It also comes after a failed attempt by the Trump administration to bar states from enacting AI regulations, on the grounds that a patchwork of state rules would create regulatory chaos and slow US-made innovation in a race with China.
Under the new law, major AI companies must publicly disclose their safety and security protocols, in redacted form to protect intellectual property.
They must also report critical safety incidents -- including model-enabled weapons threats, major cyber-attacks, or loss of model control -- within 15 days to state officials.
The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations.
According to Wiener, California's approach differs from the European Union's landmark AI Act, which requires private disclosures to government agencies.
SB 53, meanwhile, mandates public disclosure to ensure greater accountability.
In what advocates describe as a world-first provision, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing.
For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases catastrophic harm risks.
The working group behind the law was led by prominent experts including Stanford University's Fei-Fei Li, known as the "godmother of AI."
S.F.Lacroix--CPN