Mark Zuckerberg, AI's 'open source' evangelist
Mark Zuckerberg, the founder of Facebook and CEO of Meta, has become an unexpected evangelist for open source technology when it comes to developing artificial intelligence, pitting him against OpenAI and Google.
The 40-year-old tech tycoon laid out his vision in an open letter titled "Open Source AI is the Path Forward" this week. Here is what you need to know about the open versus closed model AI debate.
- What is 'open source'? -
The history of computer technology has long pitted open source aficionados against companies clinging to their intellectual property.
"Open source" refers to software development where the program code is made freely available to the public, allowing developers to tinker and build on it as they wish.
Many of the internet's foundational technologies, such as the Linux operating system and the Apache web server, are products of open source development.
However, open source is not without challenges. Maintaining large projects, ensuring consistent quality, and managing a wide range of contributors can be complex.
Finally, almost by definition, keeping open source projects financially sustainable is a challenge.
- Why is Meta AI 'open source'? -
Zuckerberg is probably the last person you would expect to embrace open source.
Meta maintains tight control over its Instagram and Facebook platforms, leaving little to no leeway for outside developers or researchers to tinker.
The Cambridge Analytica scandal, in which an outside vendor was revealed in 2018 to have used the platform to harvest user data for nefarious purposes, only made the company more protective.
Meta's sudden embrace of the open source ethos is driven by its bitterness towards Apple, whose iPhone rules keep tight control over what Meta and all other outside apps can do on its devices.
"One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms," Zuckerberg said.
"Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up if...competitors were not able to constrain what we could build," he wrote.
That concern has now spread to generative AI, but this time it is Microsoft-backed OpenAI and Google that are the closed-fence culprits that charge developers and keep a tight lid on their AI technology.
Doubters argue that Meta is embracing open source because it came late to the AI party, and is seeking to blow open the field with free access to a powerful model.
- What is Llama? -
Meta's open source Llama 3.1 (originally an acronym for Large Language Model Meta AI) is the latest version of the company's generative AI technology, which can churn out human-standard content in seconds.
Performance-wise, it is comparable to OpenAI's GPT-4 or Google's Gemini, and like those models it is "trained" before deployment by ingesting data from the internet.
But unlike those models, developers can access the technology for free, and make adaptations as they see fit for their specific use cases.
Meta says that Llama 3.1 is as good as the best models out there but, unlike its main rivals, handles only text; the company says it will later match the others with images, audio and video.
- Security threat -
In the rivalry over generative AI, defenders of the closed model argue that the Meta way is dangerous, as it allows bad actors to weaponize the powerful technology.
In Washington, lobbyists argue over the distinction, with opponents of open source insisting that models like Llama can be weaponized by countries like China.
Meta argues that transparency ensures a more level playing field, and that a world of closed models would leave control to only a few big companies, along with a powerhouse nation like China.
Startups, universities, and small businesses will "miss out on opportunities," Zuckerberg said.
Ng.A.Adebayo--CPN