Category: Top News

  • OpenAI’s and Anthropic’s CEOs Compete Over Contracts In a Deeply Personal Fight

    IBL News | New York

    OpenAI and Anthropic, headquartered two miles apart in San Francisco, are ferociously competing over Pentagon contracts in a deeply personal fight, according to an analysis by The New York Times.

    OpenAI has created the fastest-growing consumer app in tech history, with more than 900 million people using ChatGPT and nine million businesses paying for it. Its revenue is expected to top $25 billion this year, and it holds more than $100 billion in the bank.

    But in just a few months, Anthropic, OpenAI’s smaller rival, has added thousands of big businesses as customers, more than doubling its expected revenue this year to $19 billion, up from $9 billion last year.

    Anthropic’s smartphone app soared to the No. 1 spot in Apple’s App Store downloads after OpenAI jumped in with its own Pentagon deal. Days later, the War Secretary labeled Anthropic a “supply chain risk,” a declaration that prevents its technology from being used in any defense contract work.

    Now, Anthropic is facing new adversaries in President Trump and officials in his administration.
    “Well, I fired Anthropic,” Donald J. Trump said in an interview last week. “Anthropic is in trouble,” he added.

    However, Anthropic aims to I.P.O. before OpenAI does this year, seeking an early advantage with investors.

    Other companies, like Google, Microsoft, Meta, and a wide range of start-ups around the world, are also vying for AI leadership.

    Anthropic’s CEO, Dr. Dario Amodei, was vice president of research at OpenAI, but he had concerns over safety and thought Sam Altman was moving too fast to commercialize the technology. He quit and took a group of OpenAI researchers with him to create Anthropic, a for-profit company that vows to meet certain standards for social impact and accountability.

    The two executives now dislike each other, and their beliefs about how AI should be developed have direct implications for their companies’ businesses.


  • Microsoft Unveils Copilot Health, a Free AI Tool That Can Access Medical Data

    IBL News | New York

    Microsoft unveiled Copilot Health, a free AI health tool in the Copilot app that can access medical records and health data (with the user’s consent) and provide personalized advice about conditions or symptoms, informed by the user’s disease history, test results, medications, doctors’ visit notes, and biometric data recorded by wearable devices, such as Apple Watch and Fitbit.

    The company said that this service could especially benefit those managing chronic medical conditions.

    Imported data is encrypted and firewalled from the rest of the app to address privacy concerns.

    It plugs into information from more than 50,000 U.S. hospitals and provider organizations, including lab results from those institutions or through Function Health.

    Data is pulled by vendor HealthEx, which adheres to the federal initiative known as TEFCA, a nationwide framework for accessing health records. The data is then streamed into Copilot Health. Microsoft said users can manage and delete their information.

    “Data privacy is something that Microsoft is uniquely placed to do with our scale, with our regulatory experience, with the kind of trust and confidence that people have in our security and the history that we have as a mature, stable player,” Microsoft AI Chief Executive Mustafa Suleyman said.

    For users who don’t plug in their personal data, the AI concierge doctor tool can provide more generalized answers.

    Eventually, Microsoft plans to charge users for the feature.

    Healthcare is becoming more competitive in the AI world, with Microsoft’s Copilot trailing competitors such as OpenAI’s ChatGPT and Google’s Gemini.

    Microsoft has been building its AI health capabilities, trying to achieve “medical superintelligence,” an AI capable of providing high-quality insights across medical disciplines.

  • The Trump Administration Releases a National Legislative Framework for AI 

    IBL News | New York

    President Trump unveiled an AI legislative framework yesterday, in pursuit of his stated goal of “winning the AI race” for economic competitiveness and national security. The federal framework also seeks to prevent states from enacting their own AI legislation.

    It explicitly calls on Congress to preempt state laws, create age-gating requirements, streamline permitting to enable data centers to generate power on-site, combat AI-enabled scams, address AI-related national security concerns, and ensure that Americans’ creativity continues to propel the country.

    By preempting state AI laws, the framework would centralize power in Washington once Congress passes it and the President signs it into law.

    The White House admits that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children’s well-being or their monthly electricity bill.

    It also said that it is proposing guardrails to ensure that AI can pursue truth and accuracy without limitation.

    “The Administration wants American workers to participate in and reap the rewards of AI-driven growth, encouraging Congress to further workforce development and skills training programs, expanding opportunities across sectors, and creating new jobs in an AI-powered economy.”

    This light-touch regulatory approach is championed by “accelerationists,” such as the White House AI czar and venture capitalist David Sacks.

    Notably, New York’s RAISE Act and California’s SB-53 seek to ensure that large AI companies have specific safety protocols in place and adhere to them.

    Many in the AI industry are celebrating the Trump administration’s direction because it gives them broader liberty to innovate without the threat of regulation.

  • OpenAI Launches ‘Signals’, a Data Portal to Show How AI Is Being Used Across the Economy

    IBL News | New York

    OpenAI launched a data portal designed to show governments, employers, and workers how generative AI is being used across the global economy.

    This portal, OpenAI Signals, emerges amid rising concern about how AI is changing jobs, productivity, and skill requirements, and about how institutions respond to those shifts.

    Signals draws on data from more than 800 million OpenAI users, one million business customers, and four million developers using its API, including activity related to ChatGPT.

    Chris Lehane, Chief Global Affairs Officer at OpenAI, said the company wants to ground policy discussions and inform workforce policy, training, and access.

    “The goal is to democratize AI so today’s workers can fully access the technology’s tools, shape how it’s used on the job, and share in its economic benefits,” he explained.

    Lehane also acknowledged disruption ahead: “At the same time, we’re clear-eyed about the disruption ahead. Work will change, and some jobs will be lost. Preparing workers for that reality isn’t optional; it’s essential.”

    “We’ve seen what happens when America waits too long to support American workers through technological change,” Lehane wrote. “With AI, we can’t afford to repeat that mistake. Acting early is how we ensure more Americans can use these tools and share in the gains.”

    OpenAI is also working with the US Department of Labor on its upcoming AI Workforce Hub, aiming to inform workforce training and AI literacy initiatives.

    OpenAI is discussing potential future legislation with policymakers that could encourage AI labs to share similar data. The company plans to convene experts and stakeholders in March to discuss how workers can be better supported in an AI-driven economy.

  • Some CEOs Are Delivering Bleak Warnings About the Disruption, Fueling the Anti-AI Movement

    IBL News | New York

    OpenAI’s Sam Altman and Palantir’s Alex Karp both delivered bleak warnings about the disruption AI could bring, fueling the AI-fear narrative.

    Specifically, Altman said AI is unpopular, but it will be treated like a utility someday, one that people will pay for.

    Meanwhile, Palantir’s Karp warned on CNBC of AI’s extreme societal disruption, a negative impact on “the economic and therefore political power of highly educated, often female voters, who vote mostly Democrat,” while boosting the relative position of vocationally trained, working-class people (often men).

    Karp framed the disruption as necessary for national security, linking AI to military superiority and to preserving U.S. power in a global tech race.

    Anthropic CEO Dario Amodei has warned that AI could wipe out huge swaths of white-collar jobs. He argued that the responsible path forward is to build the most powerful AI with strong guardrails before less careful competitors do. Anthropic raised $30 billion in February at a $380 billion valuation.

    Privately, several AI CEOs told Axios they’re nervous an anti-AI wave could hit hard enough to power a “ban AI” movement heading into 2028.

    “They’re scaring the bejeezus out of the public,” White House AI czar David Sacks said on the “All-In Podcast,” referring to a slew of recent comments from AI CEOs.

    Meanwhile, AI is getting scarier and more unpopular as the technology improves and elections approach.

    Only 26% of voters view AI positively, making it even less popular than ICE, according to an NBC News poll of 1,000 voters.

  • Meta, Amazon, and Oracle Plan Mass Layoffs as They Pour Billions Into AI Development

    IBL News | New York 

    Meta, Amazon, and Oracle are collectively planning tens of thousands of layoffs in 2026 as they leverage AI-driven efficiency gains, betting that remaining employees, boosted by AI tools, will offset productivity losses.

    Meta is cutting 20% of its workforce (~15,000 jobs) despite doubling its AI spend to $135 billion, Amazon is planning 14,000 additional cuts via AI efficiency measures, and Oracle is eliminating 10% of its staff while raising $50 billion for AI data center infrastructure.

    No date has been set for the cuts, and the magnitude has not been finalized.

    If Meta settles on the 20% figure, the layoffs will be the company’s most significant since a restructuring in late 2022 and early 2023. It employed nearly 79,000 people as of December 31, according to its latest filing.

    Over the last year, CEO Mark Zuckerberg has been pushing Meta to compete more forcefully in generative AI. The company has offered huge pay packages, some worth hundreds of millions of dollars over four years, to court top AI researchers to a new superintelligence team.

    The company has said it plans to invest $600 billion in building data centers by 2028. Earlier this week, it acquired Moltbook, a social networking platform built for AI agents.

    In December 2025, Meta acquired AI agent startup Manus for over $2 billion to accelerate its AI innovation, specifically to enhance autonomous, multi-step task automation.

  • NVIDIA Launched NemoClaw, Its OpenClaw with Guardrails

    IBL News | New York

    NVIDIA’s annual GTC 2026 keynote kicked off with Jensen Huang announcing new partnerships with OpenClaw, Uber Autonomous Cars, and Disney’s Imagineering Research and Development Lab, where he introduced Disney’s latest lifelike robot, Olaf from the movie Frozen.

    In San Jose, California, at its annual GTC conference, Nvidia yesterday announced the Nvidia Agent Toolkit, which brings together open models, runtimes, open skills, and blueprints to build long-running, secure, and performant autonomous agents. It is the next generation of what the company previously called the Nvidia NeMo Agent Toolkit.

    However, Nvidia’s NemoClaw got most of the attention.

    NVIDIA’s NemoClaw combines the OpenClaw agent platform with components of its Agent Toolkit to add privacy and security controls.

    In its announcement, Nvidia calls it “the Nvidia NemoClaw stack for the OpenClaw agent platform.”

    NVIDIA says it developed NemoClaw in collaboration with OpenClaw founder Peter Steinberger, who remains the maintainer of OpenClaw even after he joined OpenAI earlier this year.

    “Every company now needs to have an OpenClaw strategy,” Nvidia CEO Jensen Huang said in his keynote. To him, “OpenClaw — and claws in general — are going to be as important as Linux, Kubernetes, HTML, and other fundamental tools.”

    NemoClaw can use any coding agent. While OpenClaw handles the runtime, memory, and skills, NemoClaw adds new and existing open source models, tools, and frameworks from NVIDIA.

    It can use, for example, Nvidia’s own Nemotron models (or any other model running locally or in the cloud), the company’s Dynamo inference engine, and a new open-source security runtime called OpenShell that is at the core of the Agent Toolkit.

    “NVIDIA OpenShell is a new open source safety and security runtime for agents,” Kari Briski, Nvidia’s VP of generative AI software, said in a press conference ahead of today’s announcement. “OpenShell provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails.”

    This security layer is at the core of the announcement. As claws gain access to corporate tools and data, OpenShell is designed to be the policy-enforcement layer that keeps them within bounds by combining security, network, and privacy guardrails.

    In its announcement, NVIDIA argues, “This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails.”

    “NemoClaw installs OpenClaw with Nemotron models and the Nvidia OpenShell runtime in a single command,” Briski explained. “This provides a foundation for agents to develop and learn new skills to complete tasks according to defined privacy and security guardrails. It adds security and privacy to run personal, always-on AI assistance anywhere.”

    NVIDIA is working with Cisco, CrowdStrike, Google, Microsoft Security, and TrandAI to bring OpenShell compatibility to their security tools.

    NemoClaw can run in the cloud as well as locally on RTX PCs and Nvidia’s own desktop supercomputers, such as DGX Spark and DGX Station. Some enterprising developers will likely find ways to run this on the Mac minis they bought for OpenClaw, too.

    During a press conference ahead of today’s announcement, Briski framed this move as part of Nvidia’s ongoing engagement with the open source community.

    “Nvidia NemoClaw, which you’ve all already seen in the news, is Nvidia’s contribution to the open claw community to help take the incredible OpenClaw phenomenon to the next level. Just like we’ve done for PyTorch, Kubernetes, OpenGL, and more,” she said.

    Some tech companies have started to provide NemoClaw implementations, among them, ibl.ai, the parent company of the iblnews.org service.


  • Anthropic Launches Its Institute Focused on Economic Implications, but Also on “Democratic Leadership in AI”

    IBL News | New York

    Amid a conflict with the Trump administration, which has resulted in a blacklist and a lawsuit, Anthropic announced this month a new internal think tank focused on researching AI’s large-scale economic and workforce implications, national security, and control.

    The Anthropic Institute will be led by Jack Clark, the company’s co-founder and current head of public policy.

    Anthropic will also open its planned office in Washington, D.C. Its public policy team, now managed by Sarah Heck, formerly head of external affairs, tripled in size in 2025. It focuses on issues such as “democratic leadership in AI”, national security, energy, and AI infrastructure.

    This month, Anthropic sued the US government over its designation as a supply-chain risk, which bars contractors from using Anthropic’s technology in their work with the Department of War.

  • Participants Highlighted How Deeply AI Is Shaping Education At 2026 SXSW EDU

    IBL News | Austin, Texas

    The 2026 SXSW EDU conference wrapped this week in Austin, Texas, with a clear message for schools, colleges, and education companies: AI is already reshaping learning by personalizing student experiences, automating tasks, and helping educators focus on teaching. The challenge now is ensuring that the technology amplifies human connection rather than replacing it.

    The event brought together thousands of educators, edtech founders, students, and policymakers for four days of panels, workshops, competitions, and networking across venues including the Hilton Austin Downtown, Austin Marriott, Courtyard Marriott, Westin, and Fairmont.

    The Austin Convention Center was under construction, forcing attendees to shuttle uncomfortably between hotels to attend talks.

    At this year’s conference, held March 9–12 in Austin, the concern about how deeply and quickly AI is now shaping education was clear. Dozens of panels explored everything from adaptive tutoring and authentic assessment to algorithmic bias, teacher burnout, and AI literacy.

    The educational conference featured 300+ sessions and workshops, 50+ exhibitions, 120+ mentorship and networking opportunities, and 15+ films and performances.

    Highlighted sessions included “Improving Young Minds & Mental Wellbeing in the AI Era” and “Keeping Teachers at the Center of AI in Schools,” which featured MagicSchool AI founder Adeel Khan and Martha Salazar-Zamora and tackled the tension between AI adoption and preserving the human role of teachers.

    Other sessions explored AI literacy (“What Does it Mean to be Literate in the Age of AI?”) and HBCU AI pathways (“Building AI Pathways for HBCU Talent & Community Impact”).

    The 2026 SXSW EDU show spotlighted innovation through its two signature competitions. On March 10, SXSW EDU named Apprentos of New York the winner of its Launch Startup Competition, with ShareTheBoard taking the Impact Award and Rézme winning the Community Choice Award.

    Organizers said the competition drew the largest application pool in its history, suggesting continued investor and founder interest in education technology despite a tougher funding environment.

    A day later, the conference turned to student-led innovation. Immunova AI, a team from Singapore, won the Student Impact Challenge for its nonprofit open-source AI platform designed to support early cancer detection and diagnosis by integrating scans, genetic data, and patient records.

    Beyond AI, the week’s programming reflected a wide definition of what education innovation now includes.

    Panels and talks covered topics ranging from early childhood to public-school optimism, classroom accessibility, workforce development, civic engagement, and the future of literacy.

    Other notable sessions included “EdTech Has Been Building for the Wrong Person,” as well as panels on Gen Z educator recruitment, reimagining learning spaces through play, mental health in leadership, and the intersection of nuclear weapons policy and media education.

    The event closed with a Congress Avenue Block Party, a Crossover Day Mixer, and the SXSW EDU Beats & BBQ Social.

    • Keynote and featured sessions are now available on demand on YouTube.

  • NotebookLM Adds Cinematic Video by Combining Gemini 3, Nano Banana Pro, and Veo 3

    IBL News | New York

    Google introduced a major update to its video-creation capabilities in NotebookLM for AI Ultra subscribers — Google’s highest AI subscription tier — on the web and mobile.

    The new feature, named Cinematic Video Overviews, adds immersive and enriched animated visuals to the clips.

    It uses a combination of three Google models: Gemini 3, Nano Banana Pro, and Veo 3.

    Google explained it this way: “Gemini now acts as a creative director, making hundreds of structural and stylistic decisions to best tell the story with your sources. It determines the best narrative, visual style and format, and even refines its own work to ensure consistency.”