Category: Top News

  • President Trump Will Soon Dismantle the U.S. Education Department

    IBL News | New York

    President Trump will soon sign an executive order to shut down the U.S. Department of Education, a move that would put Linda McMahon, whom the Senate confirmed as Education Secretary on Monday, out of a job.

    Eliminating the department will also require an act of Congress.

    Linda McMahon told employees her final mission was to eliminate bureaucratic bloat and turn over the agency to states.

    Trump adviser Elon Musk’s Department of Government Efficiency (DOGE) has already cut dozens of contracts dismissed as “woke and wasteful.”

    It also gutted the Institute of Education Sciences, which gathers data on the nation’s academic progress.

    The U.S. Department of Education’s main role is financial. Annually, it distributes billions in federal money to colleges and schools, manages the $1.5 trillion federal student loan portfolio, and oversees the Pell Grant, which provides aid to students below a certain income threshold.

    These tasks are expected to be assigned to another agency.

    The Education Department also plays an essential regulatory role in services for students, ranging from those with disabilities to low-income and homeless kids.

    Trump has vowed to cut off federal money for schools and colleges that push “critical race theory, transgender insanity, and other inappropriate racial, sexual or political content” and to reward states and schools that end teacher tenure and support universal school choice programs.

    Federal funding makes up a small portion of public school budgets — roughly 14%.

    Colleges and universities rely more on it through research grants and federal financial aid that helps students pay their tuition.

  • Amazon Unveiled Its New, Re-Architected, AI-Powered ‘Alexa+’

    IBL News | New York

    Aiming to catch up in generative AI for everyday users, Amazon unveiled Alexa+, its enhanced, re-architected AI virtual assistant, set to be “more conversational, helpful in booking concert tickets, coordinating calendars and suggesting food to be delivered.”

    The e-commerce giant said that Alexa is undergoing its most significant overhaul since debuting in 2014, when it became a symbol of Amazon’s innovation.

    Alexa+ will cost $19.99 a month, or it will be free for Amazon Prime members (Prime costs $14.99 monthly). It will begin rolling out next month.

    In recent years, Alexa has fallen behind other virtual assistants. Its growth has stagnated in the United States as people have been turning to the assistant for only a few main tasks, such as setting timers and alarms, playing music, and asking questions about the weather and sports scores.

    At a demo event, Amazon executives demonstrated how Alexa+ could identify who was speaking and know the person’s preferences, such as favorite sports teams, musicians, and foods. They also showed how a device powered by Alexa+ can suggest a restaurant, book a reservation on OpenTable, order an Uber, and send a calendar invitation.

    Bringing generative AI to Alexa poses challenges that a chatbot does not face, such as serving multiple users in a household, distinguishing who is speaking, and personalizing responses.

  • OpenAI Provides $50M in Compute, API Access, Tools, and Research Funds to Fifteen Universities

    IBL News | New York

    OpenAI announced yesterday that it is committing $50 million in research grants, computing funding, and API access to a consortium of 15 research institutions called NextGenAI.

    The initiative follows ChatGPT Edu, the company’s commercial offering for universities, launched in May 2024.

    The institutions in the NextGenAI consortium are Caltech, the California State University system, Duke University, the University of Georgia, Harvard University, Howard University, Massachusetts Institute of Technology (MIT), the University of Michigan, the University of Mississippi, the Ohio State University, the University of Oxford, Sciences Po, Texas A&M University, Boston Children’s Hospital, and the Boston Public Library.

    OpenAI gave the following examples of institutions using its tools and funding; the distribution of funds was not specified.

    • “The Ohio State University is leveraging AI to accelerate the fields of digital health, advanced therapeutics, manufacturing, energy, mobility, and agriculture, while educators are using AI to create advanced learning models.
    • Harvard University and Boston Children’s Hospital researchers use OpenAI tools and NextGenAI funding to reduce patients’ time to find the correct diagnosis, especially for rare orphan diseases, and improve AI alignment with human values in medical decision-making.
    • Duke University scientists are using AI to pioneer metascience research, identifying the fields of science where AI can have the greatest benefit.
    • Texas A&M is using NextGenAI resources to fuel its Generative AI Literacy Initiative, providing hands-on training to enhance the responsible use of AI in academic settings.
    • MIT students and faculty will be able to use OpenAI’s API and compute funding to train and fine-tune their own AI models and develop new applications.
    • Howard will use AI to develop curricula, experiment with new teaching methods, improve university operations, and give students hands-on AI experience to prepare them as future leaders.
    • University of Oxford is leveraging AI for a broad research agenda, education, and university operations—its renowned Bodleian Library is digitizing rare texts and using OpenAI’s API to transcribe them, making centuries-old knowledge newly searchable by scholars worldwide.
    • University of Mississippi is exploring new ways to integrate AI into their core mission of education, research, and service, and to advance AI-driven solutions that benefit their students, faculty, and the broader community.
    • Boston Public Library, America’s first large free municipal public library, is digitizing public domain materials and using AI to make their information more accessible to patrons from all walks of life.”

  • An AI Platform for Teaching American Sign Language (ASL) Helps Break Down Communication Barriers

    IBL News | New York

    Nvidia unveiled an AI platform for teaching American Sign Language (ASL), the third most prevalent language in the country after English and Spanish. The goal is to break down communication barriers between the deaf and hearing communities.

    In the U.S., around 11 million people are deaf or have significant hearing loss.

    Developed in partnership with the American Society for Deaf Children and the creative agency Hello Monday, this interactive web platform, named Signs, supports ASL learning and the development of accessible AI applications.

    A 3D avatar demonstrates signs, while an AI tool analyzes webcam footage to give learners real-time feedback on their signing.

    Users of any skill level can contribute by signing specific words to help build a video dataset for ASL.

    Nvidia aims to grow this dataset to 400,000 video clips representing 1,000 signed words, building a high-quality visual dictionary and teaching tool. Fluent ASL users and interpreters are participating to ensure the accuracy of each sign.

    This dataset, which starts with an initial set of 100 signs, will be used to further develop AI applications. It will be available to the public as a resource for building AI agents, digital applications, and video conferencing tools.

    The dataset behind Signs is planned for release later this year.

    [Image: a person signing the word "vegetable" using the Signs AI platform]

    While Signs currently focuses on hand movements and finger positions for each sign, ASL also incorporates facial expressions and head movements to convey meaning. The Nvidia team behind Signs is exploring how these non-manual signals can be tracked and integrated into future platform versions.

    This team is also investigating how other nuances, like regional variations and slang terms, can be represented in Signs to enrich its ASL database. It is working with researchers at the Rochester Institute of Technology’s Center for Accessibility and Inclusion Research to evaluate and further improve the user experience of the Signs platform for deaf and hard-of-hearing users.

    “Improving ASL accessibility is an ongoing effort,” said Anders Jessen, founding partner of Hello Monday/DEPT, which built the Signs web platform and previously worked with the American Society for Deaf Children on Fingerspelling.xyz, an application that taught users the ASL alphabet. “Signs can serve the need for advanced AI tools that help transcend communication barriers between the deaf and hearing communities.”


    • Start learning or contributing with Signs at signs-ai.com.
    • Learn more about Nvidia’s trustworthy AI initiatives.
    • See a live demo of Signs at Nvidia GTC, March 17-21, in San Jose.

  • OpenAI Released GPT-4.5, Its Largest AI Model Yet

    IBL News | New York

    OpenAI announced the release of GPT‑4.5, its largest model to date, trained on Microsoft Azure AI supercomputers, as a research preview. Subscribers to ChatGPT Pro, OpenAI’s $200-a-month plan, can access the service now.

    OpenAI CEO Sam Altman said that the company was forced to stagger the rollout of its newest model, GPT-4.5, because OpenAI is “out of GPUs.”

    The San Francisco-based research lab explained that GPT‑4.5 has “an improved ability to recognize patterns, draw connections, and generate creative insights without reasoning.”

    “Early testing shows that interacting with GPT‑4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater ‘EQ’ make it useful for tasks like improving writing, programming, and solving practical problems. We also expect it to hallucinate less.”

    Compared to OpenAI o1 and OpenAI o3‑mini, GPT‑4.5 is a more general-purpose, innately smarter model.

    OpenAI believes that reasoning will be a core capability of future models and that the two approaches to scaling, pre-training and reasoning, will complement each other.

    GPT-4.5 is wildly expensive to run, OpenAI says. The company is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by it. That is 30x the input cost and 15x the output cost of OpenAI’s workhorse GPT-4o model, which costs just $2.50 per million input tokens and $10 per million output tokens.
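
    For a concrete sense of the gap, here is a minimal sketch that computes the cost of a single request under both published price lists (the request size is an invented example):

    ```python
    # Published API prices in USD per million tokens, as cited above.
    PRICES = {
        "gpt-4.5": {"input": 75.00, "output": 150.00},
        "gpt-4o":  {"input":  2.50, "output":  10.00},
    }

    def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
        """Dollar cost of one request at per-million-token rates."""
        p = PRICES[model]
        return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

    # Hypothetical request: a 10,000-token prompt producing a 1,000-token reply.
    for model in PRICES:
        print(f"{model}: ${request_cost(model, 10_000, 1_000):.3f}")
    # gpt-4.5: $0.900 vs. gpt-4o: $0.035, reflecting the 30x/15x gap above.
    ```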

  • Microsoft Announces It Will Close the Skype Service on May 5, 2025

    IBL News | New York

    Microsoft announced it will shut down Skype, the internet-based phone and video service that was the dominant way of staying connected in the mid-2000s, on May 5, 2025.

    The company will give users ten weeks to decide what to do with their accounts while encouraging them to move to Microsoft Teams.

    Skype users will be able to enter their account credentials into Teams, and chats, contacts, and history will automatically appear in the app.

    During the transition period, Teams users can call and chat with Skype users, and vice versa.

    “With Teams, users have access to many of the same core features they use in Skype, such as one-on-one calls and group calls, messaging, and file sharing. Additionally, Teams offers enhanced features like hosting meetings, managing calendars, and building and joining communities for free,” the company explained.

    To get started:

    • Download Teams from the official Microsoft Teams website.
    • Log in with your Skype credentials.
    • Start using Teams with all Skype chats and contacts ready to go.

    Microsoft acquired the messaging and calling app 14 years ago, eight years after its creation, for $8.5 billion in cash, in what was then its largest acquisition ever. Skype launched in 2003 in Estonia and quickly caught on as a way to make free calls worldwide.

    Microsoft integrated the service into its other products, such as Office and its ill-fated mobile operating system, Windows Phone.

  • Google Offers An Expansive Free Coding AI Tool to Compete with GitHub Copilot

    IBL News | New York

    Google announced a free version of Gemini Code Assist for individuals, in public preview. It is powered by the Gemini 2.0 model “with the latest AI capabilities,” can generate entire code blocks, and supports 38 programming languages.

    The free coding tool can be installed in Visual Studio Code, GitHub, and JetBrains developer environments.

    Developers can instruct Gemini Code Assist using a chat interface by asking, for example, to “build me a simple HTML form with fields for name, email, and message, and then add a ‘submit’ button.”

    With this offer, Google targets GitHub Copilot, Gemini Code Assist’s most direct competitor. GitHub Copilot also provides a free tier, with 2,000 code completions and 50 Copilot Chat messages per month.

    Meanwhile, Google offers up to 180,000 code completions per month, “a ceiling so high that even today’s most dedicated professional developers would be hard-pressed to exceed it,” said Ryan J. Salva, Google’s senior director of product management.

    The free Individual tier doesn’t include advanced business-focused features available in the Standard and Enterprise versions, such as productivity metrics, integrations with Google Cloud BigQuery services, or customized responses using private code data.

  • Anthropic Introduced ‘Claude 3.7 Sonnet’ and ‘Claude Code’

    IBL News | New York

    Anthropic released Claude 3.7 Sonnet, its most advanced AI model, this week.

    According to the company:

    • “Claude 3.7 Sonnet, the first hybrid reasoning model on the market, can produce near-instant responses or extended, step-by-step thinking that is made visible to the user.”

    • “API users also have fine-grained control over how long the model can think for.”

    • “Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development.”

    Reasoning models like o3-mini, R1, Google’s Gemini 2.0 Flash Thinking, and xAI’s Grok 3 (Think) use more time and computing power before answering questions.

    Claude 3.7 Sonnet is now available on all Claude plans (Free, Pro, Team, and Enterprise) as well as through the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. Extended thinking mode is available on all surfaces except the free Claude tier.

    Its price is the same as its predecessors’ (the company skipped a number): $3 per million input tokens and $15 per million output tokens, with thinking tokens counted as output.

    That makes it more expensive than OpenAI’s o3-mini ($1.10 per 1 million input tokens/$4.40 per 1 million output tokens), and DeepSeek’s R1 (55 cents per 1 million input tokens/$2.19 per 1 million output tokens), but o3-mini and R1 are strictly reasoning models — not hybrids like Claude 3.7 Sonnet.
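
    The “fine-grained control” over thinking time that Anthropic mentions is exposed in the API as a token budget, and those thinking tokens bill at the output rate quoted above. A minimal sketch using Anthropic’s Python SDK; the model ID and budget figure are illustrative assumptions:

    ```python
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # assumed Claude 3.7 Sonnet model ID
        max_tokens=4096,                     # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": 2048},  # cap on reasoning tokens
        messages=[{"role": "user", "content": "How many prime numbers are below 100?"}],
    )

    # The response interleaves visible "thinking" blocks with the final "text" answer.
    for block in response.content:
        if block.type == "thinking":
            print("[thinking]", block.thinking[:200])
        elif block.type == "text":
            print("[answer]", block.text)
    ```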

    In addition, Anthropic introduced, as a preview, Claude Code, a command line tool for agentic coding.

    The company said:

    “Early testing demonstrated Claude’s leadership in coding capabilities across the board: Cursor noted Claude is once again best-in-class for real-world coding tasks, with significant improvements in areas ranging from handling complex codebases to advanced tool use. Cognition found it far better than any other model at planning code changes and handling full-stack updates. Vercel highlighted Claude’s exceptional precision for complex agent workflows, while Replit has successfully deployed Claude to build sophisticated web apps and dashboards from scratch, where other models stall. In Canva’s evaluations, Claude consistently produced production-ready code with superior design taste and drastically reduced errors.”


  • YouTube Integrated an AI Video Generator for Its Shorts Creators

    IBL News | New York

    This month, YouTube integrated Google DeepMind’s latest text-to-video model, Veo 2, for its Shorts creators.

    Veo 2, Google’s response to OpenAI’s Sora, allows users to generate AI backgrounds for their Shorts through a feature called Dream Screen.

    To use Veo 2 in YouTube Shorts, creators can open the Shorts camera, select Green Screen, and then navigate to Dream Screen, where they can input a text prompt to generate a video.

    YouTube uses a watermark tool called SynthID to indicate that videos are generated using AI.

    YouTube is also launching another capability powered by Veo 2, which allows users to generate standalone video clips via text prompts that can be added to any Shorts.

    To create a clip to add to any Short, users can open the Shorts camera, tap Add, and Create at the top. After inputting their prompt, they select their image, tap Create video, and choose their desired length.

    These features are available in the U.S., Canada, Australia, and New Zealand, and YouTube plans to expand access later.

  • Researchers at Stanford and the University of Washington Trained a Model Similar to OpenAI’s o1 and DeepSeek’s R1

    IBL News | New York

    Researchers at Stanford and the University of Washington said in a paper released this month that they were able to train an AI reasoning model called s1, which performed similarly to OpenAI’s o1 and DeepSeek’s R1 on math and coding.

    The s1 model, along with its data and code, is available on GitHub. According to the researchers, training it cost less than $50 in cloud computing credits.

    This team started with an off-the-shelf base model and then fine-tuned it through distillation, a process for extracting the “reasoning” capabilities from another AI model by training on its answers.

    The model was distilled from Gemini 2.0 Flash Thinking Experimental, offered for free via the Google AI Studio platform.

    Distillation is the same approach Berkeley researchers used to create an AI reasoning model for around $450 last month.

    OpenAI has accused DeepSeek of improperly harvesting data from its API for model distillation.

    Distillation is a suitable method for cheaply re-creating an AI model’s capabilities, but it doesn’t produce models that advance beyond the original.

    The s1 paper suggested that reasoning models can be distilled with a relatively small dataset using supervised fine-tuning (SFT), in which an AI model is explicitly instructed to mimic certain behaviors in a dataset.

    More specifically, s1 is based on a small, free AI model from Qwen, the AI lab owned by China’s Alibaba. To train s1, the researchers created a dataset of just 1,000 carefully curated questions, each paired with an answer and the “thinking” process behind it, drawn from Google’s Gemini 2.0 Flash Thinking Experimental.
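
    A minimal sketch of how such a distillation dataset might be assembled; the field names and the <think> delimiter below are illustrative assumptions, not details from the paper:

    ```python
    # Hypothetical records: each curated question paired with the teacher's
    # (Gemini 2.0 Flash Thinking Experimental) reasoning trace and final answer.
    curated_records = [
        {
            "question": "What is 12 * 13?",
            "teacher_reasoning": "12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156.",
            "teacher_answer": "156",
        },
        # ... roughly 1,000 such records in s1's actual dataset
    ]

    def to_sft_example(record: dict) -> dict:
        # Supervised fine-tuning target: the student model learns to imitate
        # the teacher's "thinking" followed by its final answer.
        completion = f"<think>{record['teacher_reasoning']}</think>\n{record['teacher_answer']}"
        return {"prompt": record["question"], "completion": completion}

    sft_dataset = [to_sft_example(r) for r in curated_records]
    print(sft_dataset[0]["completion"])
    ```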

    Training s1 took less than 30 minutes on 16 Nvidia H100 GPUs, after which the model achieved strong performance on specific AI benchmarks.

    Per the paper, the researchers used a nifty trick to get s1 to double-check its work and extend its “thinking” time: they told it to wait. Adding the word “wait” during s1’s reasoning helped the model arrive at slightly more accurate answers.
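
    A minimal sketch of how this “wait” trick could work at decoding time; the generate interface and the end-of-thinking delimiter are hypothetical stand-ins, not the paper’s actual code (which is in the GitHub repository):

    ```python
    END_OF_THINKING = "</think>"  # hypothetical delimiter closing the reasoning trace

    def generate_with_wait(model, prompt: str, extensions: int = 2) -> str:
        """Sketch of the 'wait' trick: suppress the end-of-thinking delimiter
        and append 'Wait' so the model keeps reasoning and double-checks its
        work. `model.generate` stands in for any completion call that stops
        before emitting a string listed in `stop`."""
        trace = model.generate(prompt, stop=[END_OF_THINKING])
        for _ in range(extensions):
            trace += " Wait"  # the nudge the paper reports improves accuracy
            trace += model.generate(prompt + trace, stop=[END_OF_THINKING])
        # Close the reasoning trace and let the model emit its final answer.
        return trace + END_OF_THINKING + model.generate(prompt + trace + END_OF_THINKING)
    ```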

    Experts said that s1 raises fundamental questions about the commoditization of AI models.