Category: Views

  • Elon Musk’s xAI Releases Its First Coding Assistant “Grok Code Fast 1”

    IBL News | New York

    Elon Musk’s xAI released its first coding assistant model, Grok Code Fast 1, this week, marking the company’s entry into the competitive software development market segment. It is available for free use for a limited time, with select launch partners, including GitHub Copilot and Windsurf. 

    Grok Code Fast 1 is one of the fastest coding models currently available.

• It reaches processing speeds of up to 92 tokens per second.

    • It features a 256,000-token context window.

    • It is powered by a mixture-of-experts architecture with 314 billion parameters, designed specifically for agentic coding workflows with visible reasoning traces.
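Since launch partners such as GitHub Copilot integrate the model through a standard chat-completions interface, a call would look roughly like the sketch below. This is a minimal illustration, assuming xAI exposes an OpenAI-compatible endpoint at `api.x.ai` and a `grok-code-fast-1` model identifier; both the URL and the identifier are assumptions inferred from the product name, not confirmed by this article.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against xAI's API docs.
XAI_API_URL = "https://api.x.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "grok-code-fast-1") -> dict:
    """Build an OpenAI-style chat-completion payload for a coding prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
        # Stream tokens as they are generated -- relevant for a model
        # marketed on its tokens-per-second speed.
        "stream": True,
    }

def send(payload: dict) -> bytes:
    """Send the payload. Requires an XAI_API_KEY env var; not executed here."""
    req = urllib.request.Request(
        XAI_API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['XAI_API_KEY']}",
        },
    )
    return urllib.request.urlopen(req).read()

payload = build_request("Write a function that reverses a linked list.")
```

The payload-building step is separated from the network call so the request shape can be inspected (or tested) without an API key.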

xAI also positioned it as an alternative to existing coding assistants.

    This release aligns with xAI’s broader strategy of open-sourcing various versions of its Grok models, including the base models of Grok-1 and Grok 2.5.

  • Anthropic Creates a Higher Ed Advisory Board and AI Fluency Courses

    Anthropic, the company behind the AI chatbot Claude, announced the creation of a Higher Education Advisory Board made up of academic leaders, along with three new AI Fluency courses.

    This Higher Education Advisory Board will be chaired by Rick Levin, who previously led Yale University and Coursera. He said, “Our role is to advise the company as it develops ethically sound policies and products that will enable learners, teachers, and administrators to benefit from AI’s transformative potential while upholding the highest standards of academic integrity and protecting student privacy.”

    Other Board members come from academia as well:

    • David Leebron, Former President of Rice University.
    • James DeVaney, Special Advisor to the President, Associate Vice Provost for Academic Innovation, and Founding Executive Director of the Center for Academic Innovation at the University of Michigan.
    • Julie Schell, Assistant Vice Provost of Academic Technology at the University of Texas, Austin.
    • Matthew Rascoff, Vice Provost for Digital Education at Stanford University.
    • Yolanda Watson Spiva, President of Complete College America.

    Anthropic has also developed three new courses that build on its existing AI Fluency course. These classes are designed to address the need for practical frameworks for thoughtful AI integration.

    Each course, co-developed with Professor Rick Dakan of Ringling College of Art and Design and Professor Joseph Feller of University College Cork, is available under a Creative Commons license, so any institution can adapt them.

    • AI Fluency for Educators helps faculty integrate AI into their teaching practice, from creating materials and assessments to enhancing classroom discussions. Built on experience from early adopters, it shows what works in real classrooms.

• AI Fluency for Students teaches responsible AI collaboration for coursework and career planning. Students learn to work with AI while developing their own critical thinking skills, and they write their own personal commitment to responsible AI use.

    • Teaching AI Fluency supports educators who want to bring AI literacy to their campuses and classrooms. It includes frameworks for instruction and assessment, plus curriculum considerations for preparing students for a more AI-enhanced world.

    Anthropic is not alone in targeting higher education. OpenAI launched ChatGPT Edu, a version of its chatbot customized for universities. It includes administrative controls, enterprise-grade authentication, and features like “Study Mode,” which walks students through problems step by step.

Microsoft, highlighting its “commercial data protection” framework, embedded Copilot for Education into Office 365.

    Google doubled down on its education footprint with Gemini in Classroom and Gemini for Education, designed to help teachers generate differentiated materials and give students tutoring experiences.

  • Grok 4 Was Made Freely Accessible to All Users

Elon Musk-owned xAI made Grok 4 freely accessible to all users worldwide this month, in an attempt to compete with rival offerings such as OpenAI’s GPT-5.

    However, the company said that access to Grok 4 Heavy, its most advanced model, remains exclusive to SuperGrok Heavy subscribers.

    Grok 4 features a dual system: Auto Mode and Expert Mode. In Auto Mode, the AI automatically decides if a user prompt requires deeper reasoning or a simple response. Expert Mode allows users to manually trigger a more in-depth answer if the initial response isn’t satisfactory.

    In addition to Grok 4, xAI also rolled out Grok Imagine, a free AI video generation tool, with a limited number of queries and currently available only in the United States.

The rollout has also raised misuse concerns: the BBC highlighted Grok’s use in creating explicit deepfake videos of celebrities such as Taylor Swift and Sydney Sweeney, prompting questions about content moderation and responsible AI use.

  • China’s Leadership In Open-Source AI Technology Raises Alarm in the U.S.

China’s adoption of and leadership in open-source AI technology is worrying U.S. policymakers and Silicon Valley companies, most of which keep their models proprietary.

Chinese advances in open source have come one after another this year, from DeepSeek, Alibaba’s Qwen, Moonshot, Z.ai, and MiniMax.

These open-source, or open-weight, models all have versions that are free for users to download and modify.

In the past, Microsoft’s Windows operating system for desktops, Google’s search engine, and the iOS and Android smartphone operating systems were prominent examples of proprietary software’s dominance.

    In its AI action plan released in July, the Trump administration acknowledged that open-source models “could become global standards in some areas of business and in academic research.”

    The report called on the U.S. to build “leading open models founded on American values.”

For now, open-source initiatives have made only slim gains, as makers of proprietary models have spent hundreds of millions of dollars providing free access to their models.

    • Many businesses like open-source AI because they can freely adapt it and put it on their computer systems, keeping sensitive information in-house. Moreover, they can avoid being locked into any one model.

    • Researchers have long embraced open source as a way of accelerating the development of emerging technology, since it allows every user to see the code and suggest improvements.

    • Fearing being cut off from American technologies, the Chinese government has encouraged open-source research and development not only in AI but also in operating systems, semiconductor architecture, and engineering software.

• Meanwhile, the Trump administration worries that if Chinese AI models dominate the globe, Beijing will figure out a way to exploit that dominance for geopolitical advantage.

    • Engineers in Asia said Chinese models were often more sophisticated in understanding their local languages and catching cultural nuances, as they are trained with more data in Chinese, which shares similarities with some other Asian languages.

    WSJ: China’s Lead in Open-Source AI Jolts Washington and Silicon Valley

• Like ChatGPT and Claude, Gemini Will Remember Users’ Past Chats

Google rolled out an update to Gemini that allows the chatbot to remember users’ past conversations.

The new feature matches the memory features of OpenAI’s ChatGPT and Anthropic’s Claude.

    With the setting turned on, Google’s Gemini automatically recalls users’ key details and preferences and uses them to personalize the output, with more natural and relevant conversations.

    In addition, the Gemini app also introduced a new privacy feature called Temporary Chats, which gives more control over data.

At I/O, Google introduced its vision for a Gemini assistant that learns and truly understands the user, rather than one that just responds to a prompt the same way it would to anyone else’s.

At first, personalized conversations will be available with the 2.5 Pro model in select countries, and Google plans to expand the feature to the 2.5 Flash model and more countries in the weeks ahead.

Anthropic has also introduced a similar feature for Claude, which solves the problem of referencing information from other conversations with the AI chatbot.

Anthropic said Claude users can toggle the behavior in settings.

    Claude’s memory feature is only available for Enterprise, Team, and Max subscribers for now.

  • Claude.ai Introduces a “Learning Style” on Its Chatbot

    Anthropic’s Claude.ai chatbot introduced a Learning style this week, making it available to everyone.

    When users turn the Learning style feature on, the Claude.ai chatbot employs a Socratic approach, guiding students through questions instead of providing them with straight answers.

    The experience here is similar to the one Anthropic offers with Claude for Education.

    OpenAI features a similar solution, called Study Mode, and Google does so with the Guided Learning functionality.

    In addition, Anthropic is offering on Claude Code an Explanatory mode, which generates summaries as it works, allowing the user a chance to better understand what it’s doing.

    Drew Bent, education lead at Anthropic, explained, “Learning mode is designed to help all of those audiences not just complete tasks, but also help them grow and learn in the process and better understand their code base.”

In practice, a “really good engineering manager” won’t necessarily write most of the code on a project but will develop a keen eye for how everything fits together and which sections of code might need more work.

  • OpenAI and Anthropic Offered Free Access to Their Chatbots to the U.S. Government

    OpenAI’s ChatGPT Enterprise and Anthropic’s Claude will be offered for free to the Federal Government.

The move comes after OpenAI, Anthropic, and Google DeepMind were added to the General Services Administration’s list of approved AI vendors that can sell their services to civilian federal agencies.

The companies have also been granted contracts of up to $200 million by the Department of Defense to advance U.S. national security capabilities.

Anthropic has extended its offer to “all three branches” of the U.S. government, including the legislative and judicial branches, for one year.

    “We believe the U.S. public sector should have access to the most advanced AI capabilities to tackle complex challenges, from scientific research to constituent services,” Anthropic said in a statement.

Anthropic will offer both Claude for Enterprise and Claude for Government. The latter supports workloads at the FedRAMP High security baseline, so federal workers can use Claude for sensitive unclassified work.

In addition to being certified for FedRAMP High, Claude runs on existing secure infrastructure via partnerships with AWS, Google Cloud, and Palantir.

    In its press release, the company noted that Claude is already being used at Lawrence Livermore National Laboratory to accelerate scientific discoveries, and also by the District of Columbia Department of Health to help residents access health services in multiple languages.

OpenAI’s official FedRAMP High offering is tied to the Azure Government Cloud only. The company said it is working to reduce its reliance on Azure and embrace a more diversified infrastructure approach.

• OpenAI Reactivates 4o in the Model Picker for Paid Users

OpenAI introduced new “Auto,” “Fast,” and “Thinking” settings for GPT-5 this week, walking back its promise of a simplified, “one-size-fits-all” experience built around a single AI model.

ChatGPT users can now choose among these settings in the model picker. The Auto setting works like the GPT-5 model router that OpenAI initially announced, but the company is also giving users options to bypass it, letting them access fast- and slow-responding AI models directly.

Sam Altman, CEO of OpenAI, made the announcement in a post on X on Tuesday.

    Alongside GPT-5’s new modes, 4o is back in the model picker for all paid users by default.

    These paid users also now have a “Show additional models” toggle in ChatGPT web settings, which will add models like o3, 4.1, and GPT-5 Thinking mini.

4.5 is available only to Pro users because of its high GPU cost.

    The deprecation of GPT-4o and other AI models in ChatGPT sparked a backlash among users who had grown attached to the AI models’ responses and personalities in ways that OpenAI had not anticipated.

Altman says the company will give users plenty of advance notice if it ever deprecates GPT-4o: “If we ever do deprecate it, we will give plenty of notice.”

• OpenAI Releases Two “Best-In-Class” Open-Weight Models

    OpenAI released two open-weight language models this week, available to download for free and under the Apache 2.0 license on Hugging Face: gpt-oss-120b and gpt-oss-20b. The last open-weight model released by OpenAI was GPT-2, back in 2019.

    “These models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware,” said OpenAI CEO Sam Altman.

    • “The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU.”

    • “The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory, making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure.”

    • “These models are compatible with our Responses API and are designed to be used within agentic workflows with exceptional instruction following, tool use like web search or Python code execution, and reasoning capabilities.”
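The hardware claims above can be sanity-checked with quick arithmetic. The sketch below assumes the weights are quantized to roughly 4.25 bits per parameter (an assumption about low-bit quantization overhead, not a figure stated in this article); at that density, the weights alone for both models fit the hardware OpenAI cites.

```python
def weight_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

# At an assumed ~4.25 bits/param (4-bit weights plus scaling overhead):
print(round(weight_gib(120, 4.25), 1))  # 59.4 GiB -> fits a single 80 GB GPU
print(round(weight_gib(20, 4.25), 1))   # 9.9 GiB  -> fits in 16 GB of memory

# At full 16-bit precision, the 120B model's weights alone would not fit:
print(round(weight_gib(120, 16), 1))    # 223.5 GiB
```

Real deployments also need memory for activations and the KV cache, so these figures are a lower bound, not a full requirement.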

    OpenAI trained the models on a mostly English, text-only dataset, with a focus on STEM, coding, and general knowledge.

    Therefore, these new text-only models are not multimodal, but they can browse the web, call cloud-based models to help with tasks, execute code, and navigate software as an AI agent. The smaller of the two models, gpt-oss-20b, is compact enough to run locally on a consumer device with more than 16 GB of memory.

    OpenAI defined its release as “best-in-class open models”, highlighting that they work anywhere: locally, on-device, or through third-party inference providers. The company partnered with several deployment platforms such as Azure, Hugging Face, vLLM, Ollama, llama.cpp, LM Studio, AWS, Fireworks, Together AI, Baseten, Databricks, Vercel, Cloudflare, and OpenRouter.

    The fact that the “weights” are publicly available means that any developer can peek at the internal parameters to get an idea of how it processes information. They can work as a complement to OpenAI’s paid services. Unlike ChatGPT, you can run a gpt-oss model without an internet connection and behind a firewall.
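As a concrete illustration of that offline use, the sketch below targets a locally running Ollama server, which exposes an OpenAI-compatible endpoint on `localhost:11434`; the `gpt-oss:20b` model tag is an assumption based on the model's name and should be checked against the registry before use.

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint on the local machine; no internet
# connection is needed once the weights are downloaded.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def local_chat_request(prompt: str, model: str = "gpt-oss:20b") -> urllib.request.Request:
    """Build a chat-completion request against the local server."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

req = local_chat_request("Summarize this repository's README.")
# Sending requires pulling the model and running the server first, e.g.
# `ollama pull gpt-oss:20b` then:
#   reply = json.load(urllib.request.urlopen(req))
#   print(reply["choices"][0]["message"]["content"])
```

Because the endpoint is local, the same code works behind a firewall, which is the deployment property the paragraph above highlights.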

    With Apache 2.0, models can be used for commercial purposes, redistributed, and included as part of other licensed software.

Open-weight model releases from Alibaba’s Qwen and from Mistral also operate under Apache 2.0.

To try the models, OpenAI released an open model playground, alongside guides.

    In the U.S., the open-weight leader has been Meta. The tech giant released the first of its Llama series of models back in 2023, with Meta’s most recent release, Llama 4, arriving a few months ago.

Also, the Chinese startup DeepSeek released its cheap-to-run, open-weight model this year.

    Microsoft Azure: OpenAI’s open‑source model: gpt‑oss on Azure AI Foundry and Windows AI Foundry

    NVIDIA: OpenAI’s New Open Models Accelerated Locally on NVIDIA GeForce RTX and RTX PRO GPUs

  • Microsoft Prepares the Launch of a Virtual Character that Interacts With The User

    Microsoft is preparing the launch of a new Copilot virtual character that will interact in real-time with the user.

    It will be a highly personalized AI assistant that will have an identity, expressions, voice, and conversational memory, according to Microsoft’s AI CEO, Mustafa Suleyman.

    The virtual character responds to queries, smiles, nods, and even acts surprised depending on the conversation.

The company provided a glimpse of the Copilot character’s identity.

Mustafa Suleyman previously worked at Inflection AI on a personalized chatbot called Pi. Most of the Inflection AI team joined Microsoft.