Category: Top News

  • Figma Announces the Beta Release of the Dev Mode MCP Server

    IBL News | New York

    Figma is developing a tool that will translate designs into coded applications using the MCP (Model Context Protocol) with agentic coding systems such as Copilot in VS Code, Cursor, Windsurf, and Claude Code.

    This will reduce the amount of work it takes for AI coding tools to transform designs into functional applications.

    Figma plans to release a series of Dev Mode MCP Server updates “in the coming months,” including remote server capabilities and “deeper codebase integrations.”

    The Dev Mode MCP Server rollout, now in beta, follows the prompt-to-code Figma Make platform introduced in May, which became available to all Full seat Figma users this month.

    Figma Make allows users to create working applications simply by describing them. The Figma Sites Code Layers feature, which provides AI tools for turning designs into interactive website experiences, will roll out on June 12th.

    Until recently, the only way to provide design context to AI tools was to feed an image of a design or an API response to a chatbot. This has changed with the recent advent of the MCP standard for how applications provide context to LLMs.
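    Conceptually, an MCP server advertises structured “tools” that an LLM client can discover and call over JSON-RPC, instead of being handed a flat screenshot. The sketch below is a stdlib-only toy illustrating that discovery step; the tool name and schema are hypothetical, and the real protocol (and Figma’s server) is far richer.

```python
# Toy sketch (not the official MCP SDK): MCP is a JSON-RPC-based protocol in
# which a server advertises "tools" that an LLM client can discover and call
# to pull structured context. The tool below is hypothetical.

TOOLS = {
    "get_design_node": {
        "description": "Return structured data for a design node",
        "inputSchema": {
            "type": "object",
            "properties": {"node_id": {"type": "string"}},
        },
    }
}

def handle(request: dict) -> dict:
    """Answer a JSON-RPC 2.0 'tools/list' request with the advertised tools."""
    if request.get("method") == "tools/list":
        result = {"tools": [{"name": n, **t} for n, t in TOOLS.items()]}
    else:
        result = {"error": "only tools/list is supported in this sketch"}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(resp["result"]["tools"][0]["name"])  # -> get_design_node
```

    With a structured listing like this, a coding agent can ask for exactly the design data it needs rather than inferring it from pixels.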

    “Whether it’s creating new atomic components with the proper variables and stylings or building out multi-layer application flows, we believe this server will provide a more efficient and accurate design-to-code workflow,” said the company.

  • OpenAI Claims It Reached $10 Billion In Annual Revenue

    Mikel Amigot, IBL News | New York

    OpenAI claimed it reached $10 billion in annual recurring revenue (ARR), up from $5.5 billion the previous year, primarily driven by its ChatGPT business products and developer API, with three million paying business customers. The company aims to achieve $125 billion in revenue by 2029.

    An OpenAI spokesperson provided CNBC with these figures and stated that the company, which launched its flagship product ChatGPT two and a half years ago, is currently serving more than 500 million weekly active users.

    However, the San Francisco-based research lab refused to disclose its operating expenses or whether it is close to profitability. Last year, it lost about $5 billion.

    In March, OpenAI closed a $40 billion funding round, marking the largest private tech deal on record.

    According to these metrics, OpenAI is valued at approximately $300 billion, roughly 30 times its revenue.

    This number highlights the growth expectations of some of its largest investors. OpenAI is backed by Japan’s SoftBank, Microsoft, Coatue, Altimeter, Thrive, and other notable venture capital firms.

  • WWDC 25: Apple Focuses on a New User Interface and Fails to Deliver a More Personalized Siri [Watch]

    Mikel Amigot, IBL News | New York

    At its annual Worldwide Developers Conference (WWDC 25), which kicked off yesterday, Apple announced updates to its software and services, including a new glassy, reflective, and transparent visual interface named “Liquid Glass”, as well as the next version of iOS, called iOS 26.

    Experts highlighted that Apple failed to deliver the additional personalization promised for AI-powered Siri last year, when the company said Siri “would be able to understand your ‘personal context,’ like your relationships, communications, routine, and more.”

    Apple’s SVP of Software Engineering, Craig Federighi, said, “This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year.”

    The failure to deliver worried investors, who noticed that Apple’s AI technology was lagging behind that of rivals, such as OpenAI, Google, and Anthropic.

    On the other hand, in the upcoming release of iOS 26, Apple updated its AI image generation app, Image Playground.

    At this year’s WWDC, the company made other AI promises, such as developer access to its on-device foundation models, upgrades to Genmoji, and a “Workout Buddy” fitness feature for Apple Watch.

    The Liquid Glass display is translucent and behaves like glass, with the screen’s color informed by the user’s content, adapting to both light and dark environments. Additionally, alerts appear where the user taps, and context menus expand into a scannable list.

    Apple also announced a new naming scheme. All of its operating systems now carry year-based version numbers: iOS 26, iPadOS 26, macOS 26, watchOS 26, tvOS 26, and visionOS 26.

    Other announced improvements covered CarPlay, AirPods, Apple Wallet, and iMessage.

  • A Y Combinator Startup Launches an On-Demand CTO and Founding Engineer

    IBL News | New York

    Emergent, a Y Combinator-backed company, this month launched a virtual, on-demand CTO and founding engineer that develops production-ready apps with backends, databases, and integrations.

    According to the company, this new vibe coding tool goes beyond agent prototypes and mockups, enabling the production of “full-stack applications with stunning UI and real backend, with no developers required.”

    At its core, Emergent is an integrated platform that autonomously builds, tests, and ships software. The user describes their product in plain language, and the platform handles the architecture, logic, and implementation.

    Emergent aims to democratize access to advanced software creation, claiming its AI can truly understand what’s needed and build complete solutions.

    The start-up claimed that in two weeks users built over 10,000 apps, ranging from landing pages to SaaS tools such as AI notetakers, Slack bots, and Figma plugins.

    Emergent was founded by twin brothers, Indian-born and US-educated engineers, Mukund Jha (CEO) and Madhav Jha (CTO).

  • Web Search, Built on Links, Starts to Shift Away Toward LLM Platforms

    Mikel Amigot, IBL News | New York 

    Web search, built on links, started to shift away from traditional browsers toward LLM platforms in 2025, according to a report by Andreessen Horowitz.

    The foundation of the $80 billion+ SEO market just cracked with Apple’s announcement that AI-native search engines like Perplexity and Claude will be built into Safari, said the VC firm. This puts Google’s distribution chokehold into question.

    “A new paradigm is emerging, one driven not by page rank, but by language models. We’re entering Act II of search: Generative Engine Optimization (GEO),” stated the report.

    Page ranks are determined by indexing sites based on keyword matching, content depth and breadth, backlinks, and user experience engagement.
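    The keyword-matching part of those signals can be caricatured with a toy scorer. This is illustrative only; real search ranking combines hundreds of signals, and the documents and query below are invented.

```python
from collections import Counter

def keyword_score(query: str, document: str) -> float:
    """Toy relevance score: total query-term frequency in the document,
    normalized by the number of query terms."""
    q_terms = query.lower().split()
    doc_counts = Counter(document.lower().split())
    if not q_terms:
        return 0.0
    return sum(doc_counts[t] for t in q_terms) / len(q_terms)

docs = {
    "a": "figma ships dev mode mcp server beta",
    "b": "recipe for sourdough bread at home",
}
query = "figma mcp server"
ranked = sorted(docs, key=lambda d: keyword_score(query, docs[d]), reverse=True)
print(ranked)  # -> ['a', 'b']
```

    A generative engine, by contrast, never shows this ranked list: it synthesizes an answer, and a brand is only “visible” if it appears inside that answer.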

    Today, however, it’s no longer about ranking high on a results page. LLMs are the new interface for how people find information, and visibility comes from showing up directly in the answers of models like GPT-4, Gemini, and Claude.

    Users’ queries are longer (averaging 23 words vs. 4), sessions are deeper (averaging 6 minutes), and responses provide personalized, multi-source synthesis, remembering and showing reasoning, rather than just relying on keywords.

    Additionally, the business model and incentives have changed. Google monetizes user traffic through ads; users are paid with their data and attention. In contrast, most LLMs are paywalled, subscription-driven services.

    An ad market may eventually emerge on top of LLM interfaces, but its rules, incentives, and participants would likely look very different from those of traditional search.

    New monitoring platforms, such as Profound, Goodie, and Daydream, enable brands to analyze how they appear in AI-generated responses.

    Tools like Ahrefs’ Brand Radar track brand mentions in AI Overviews, enabling companies to understand how they’re framed and remembered by generative engines. Semrush has a dedicated AI toolkit designed to help brands track perception across generative platforms, optimize content for AI visibility, and respond quickly to emerging mentions in LLM outputs.

  • Mistral Announces Agents API, with Code Execution, Search, and MCP Support

    IBL News | New York

    Mistral last month announced a significant upgrade to its API, matching the essential features now offered by other leading large language model (LLM) providers.

    Mistral’s new Agents API resembles OpenAI’s Responses API, which was released in March 2025. It includes:

    • Code execution. Mistral’s new Code Interpreter mechanism runs Python in a server-side sandbox.
    • Web search. Mistral offers two versions: a basic web_search, and web_search_premium, which adds access to both a search engine and two news agencies, AFP and AP.
    • Document library. Mistral’s version of hosted RAG over user-uploaded documents.
    • Model Context Protocol support. Users can now include details of MCP servers in their API calls. The same feature rolled out across OpenAI (May 21st), Anthropic (May 22nd), and now Mistral (May 27th), all within days of each other.
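    As a sketch of what wiring these four capabilities into a single request could look like, here is a hypothetical JSON payload. The field names and values are illustrative assumptions, not Mistral’s documented schema.

```python
import json

# Hypothetical agents-style request body; every field name below is an
# illustrative assumption, not Mistral's documented API schema.
payload = {
    "model": "mistral-medium-latest",
    "tools": [
        {"type": "code_interpreter"},    # server-side Python sandbox
        {"type": "web_search_premium"},  # search engine plus AFP and AP
        {"type": "document_library"},    # hosted RAG over uploaded documents
        {"type": "mcp", "server_url": "https://example.com/mcp"},
    ],
    "messages": [{"role": "user", "content": "Summarize today's AI news."}],
}

# Serialize as it would travel over HTTP, then verify it round-trips.
body = json.dumps(payload)
print(len(json.loads(body)["tools"]))  # -> 4
```

    The point is the shape, not the names: each capability is declared as a tool the model may call, and MCP entries point at an external server rather than a built-in feature.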

  • Manus Introduces Slides, Which Creates Structured Presentations Exportable to Google Slides

    IBL News | New York

    This month, Manus introduced a feature that allows users to create structured slide presentations instantly.

    With a single prompt, Manus generates tailored slide decks.

    Interestingly, slides can be edited on the same screen and later exported as Google Slides.

  • Google Infuses in Gemini 2.5 Its Family of Models Fine-Tuned for Education, LearnLM

    IBL News | New York

    To make the learning process more active, engaging, and effective, Google announced at the I/O 2025 annual event that it’s infusing LearnLM, its family of models fine-tuned for education, into Gemini 2.5.

    Gemini’s multimodality allows remixing information into any format: audio, video, images, and text.

    Another Google tool that has been enhanced is NotebookLM, which lets users upload sources for research, turns the system into an expert on them, and presents the outcome as Audio Overviews and Mind Maps.

     

    Additionally, Google announced that it will soon introduce a feature that enables users to convert the content of their notebooks into educational videos.

    Google continues to enhance its new search modality, called AI Mode, with advanced reasoning, multimodality, web links, the ability to ask follow-up questions, and soon, Deep Search.

    In April, Google gave U.S. college students a free Gemini upgrade through 2026 final exams. The company is now expanding its offerings to students in Brazil, Indonesia, Japan, and the United Kingdom. Students in these countries will receive free access to the Google AI Pro plan for 15 months, helping them fine-tune their writing, study for exams, and get homework help, along with 2 TB of free storage, NotebookLM, and more.

    Students globally will also have the ability to create custom quizzes to help them prepare for exams by simply asking Gemini to “create a practice quiz…” on any topic, or base them on uploaded documents such as class notes.

    The quiz experience provides hints, offers explanations for both right and wrong answers, and provides a helpful summary at the end, highlighting areas of strength as well as those that may benefit from further study.

    Later this year, the search giant will introduce Sparkify, which will turn users’ questions or ideas into short animated videos through the latest Gemini and Veo models.

     

    With Project Astra, Google is prototyping a conversational, personalized tutor that can help with homework. The tool walks users through problems step-by-step, identifies mistakes, and even generates diagrams to explain concepts when they get stuck. This research project will come to Google products later this year; Android Trusted Testers can sign up for the waitlist to see a preview.

  • Google Announced an Initiative to Invest in AI Startups

    Mikel Amigot, IBL News | New York

    Google announced its AI Futures Fund this month. The fund will invest, from seed to late stage, in startups that use AI tools developed by DeepMind, the company’s R&D lab.

    Google’s support will also include early access to Google AI models from DeepMind, working with experts from DeepMind and Google Labs, and Cloud credits. Some startups will also have the opportunity to receive direct investment from Google.

    “When we come across companies that align with the fund’s thesis, we may choose to invest,” said a Google representative.

    AI Futures Fund already has some case studies, such as the meme-making platform Viggle and the webtoon app Toonsutra.

    Over the past few months, Google has committed to supporting the next generation of AI talent and scientific breakthroughs.

    • In November 2024, Google.org, the company’s charitable wing, announced a $20 million cash commitment to researchers and scientists working in AI.

    • In September, Google CEO Sundar Pichai announced the company was creating a $120 million Global AI Opportunity fund to help bring AI education and training to more places worldwide.

    • Google.org also launched a $20 million generative AI accelerator program to cut checks to nonprofits developing AI tech.

    • Google for Startups Founders Fund supports founders from an array of industries and backgrounds building companies, including AI companies.

  • An Avatar of Legendary Novelist Agatha Christie Teaches a Course at BBC

    IBL News | New York

    The legendary British novelist Agatha Christie, who died in 1976, has been re-created to teach an online course on BBC Maestro, using a team of researchers and an AI-made digital prosthetic fitted over the performance of actor Vivien Keene.

    This Agatha Christie avatar has been created with permission from Christie’s family, who manage her estate.

    The course on BBC Maestro is an online lecture series similar to MasterClass, priced at $105.

    Amid a heated debate about the limits and ethics of AI, the chief executive of BBC Maestro, Michael Levine, told The New York Times, “We are not trying to pretend, in any way, that this is Agatha somehow brought to life; this is just a representation of Agatha to teach her own craft.”

    “We’re not speaking for her,” Agatha Christie’s family said. “We are collecting what she said and putting it out in a digestible and shareable format.”

    Some academics pointed out that even if the author’s family consented, Christie herself did not and cannot agree to the course; therefore, they argue, it is a deepfake.

    The NYT reported that AI technology used to talk to the dead has become a cottage industry for wealthy nostalgics.
