Category: Top News

  • Like ChatGPT and Claude, Gemini Will Remember Users’ Past Chats

    IBL News | New York

    Google rolled out an update to its Gemini app that allows the chatbot to remember users’ past conversations and chats.

    The new feature matches the memory features already offered by OpenAI’s ChatGPT and Anthropic’s Claude.

    With the setting turned on, Gemini automatically recalls key details and preferences from earlier chats and uses them to personalize its output, producing more natural and relevant conversations.
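    Conceptually, a memory feature like this layers a persistent store of user details on top of each chat turn. The sketch below is purely illustrative, not Google’s implementation; every name in it is hypothetical.

    ```python
    # Conceptual sketch only -- not Google's implementation. Shows how a chat
    # layer might persist key user details and inject them into later prompts
    # so the model can personalize its output. All names are hypothetical.

    class MemoryChat:
        def __init__(self):
            self.memory = {}  # remembered detail -> value, kept across chats

        def remember(self, key, value):
            """Store a key user detail or preference."""
            self.memory[key] = value

        def build_prompt(self, user_message):
            """Prepend remembered details so the model sees the user's context."""
            if not self.memory:
                return user_message
            context = "; ".join(f"{k}: {v}" for k, v in self.memory.items())
            return f"[Remembered user details: {context}]\n{user_message}"

    chat = MemoryChat()
    chat.remember("preferred language", "Python")
    prompt = chat.build_prompt("Show me a sorting example.")
    ```

    The point of the sketch is simply that the “memory” lives outside any single conversation, so details stated once can shape every later prompt.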

    In addition, the Gemini app introduced a new privacy feature called Temporary Chats, which gives users more control over their data.

    At I/O, Google introduced its vision for a Gemini assistant that learns and truly understands the user, rather than one that responds to a prompt the same way it would for anyone else.

    At first, personalized conversations will be available with the 2.5 Pro model in select countries; Google plans to expand the feature to the 2.5 Flash model and to more countries in the weeks ahead.

    Also, Anthropic has introduced a similar feature for Claude that solves the problem of referencing information from other conversations with the AI chatbot.

    Anthropic said Claude users can toggle the behavior on or off in settings.

    Claude’s memory feature is only available for Enterprise, Team, and Max subscribers for now.

  • Claude.ai Introduces a “Learning Style” on Its Chatbot

    IBL News | New York

    Anthropic’s Claude.ai chatbot introduced a Learning style this week, making it available to everyone.

    When users turn the Learning style feature on, the Claude.ai chatbot employs a Socratic approach, guiding students through questions instead of providing them with straight answers.

    The experience here is similar to the one Anthropic offers with Claude for Education.

    OpenAI features a similar solution, called Study Mode, and Google does so with the Guided Learning functionality.

    In addition, Anthropic is offering an Explanatory mode on Claude Code, which generates summaries as it works, giving the user a chance to better understand what it’s doing.

    Drew Bent, education lead at Anthropic, explained, “Learning mode is designed to help all of those audiences not just complete tasks, but also help them grow and learn in the process and better understand their code base.”

    In practice, a “really good engineering manager” won’t necessarily write most of the code on a project, but they will develop a keen eye for how everything fits together and for which sections of code might need more work.

  • OpenAI and Anthropic Offered Free Access to Their Chatbots to the U.S. Government

    IBL News | New York

    OpenAI’s ChatGPT Enterprise and Anthropic’s Claude will be offered for free to the Federal Government.

    The move comes after OpenAI, Anthropic, and Google DeepMind were added to the General Services Administration’s list of approved AI vendors that can sell their services to civilian federal agencies.

    Each of the companies has also been granted up to $200 million by the Department of Defense to advance U.S. national security capabilities.

    In the case of Anthropic, the company has decided to extend the offer to “all three branches” of the U.S. government, including the legislative and judicial branches, for one year.

    “We believe the U.S. public sector should have access to the most advanced AI capabilities to tackle complex challenges, from scientific research to constituent services,” Anthropic said in a statement.

    Anthropic will offer both Claude for Enterprise and Claude for Government. The latter supports FedRAMP High workloads, the security baseline that lets federal workers use Claude for handling sensitive unclassified work.

    In addition to being certified for FedRAMP High, Claude runs on existing secure infrastructure through partnerships with AWS, Google Cloud, and Palantir.

    In its press release, the company noted that Claude is already being used at Lawrence Livermore National Laboratory to accelerate scientific discoveries, and also by the District of Columbia Department of Health to help residents access health services in multiple languages.

    OpenAI’s official FedRAMP High offering is tied to Azure Government Cloud only. The company said it is working to reduce its reliance on Azure and adopt a more diversified infrastructure approach.

  • OpenAI Reactivates 4o in the Model Picker for Paid Users

    IBL News | New York

    OpenAI introduced new “Auto,” “Fast,” and “Thinking” settings for GPT-5 this week, walking back its promise of a simplified, “one size fits all” experience built around a single AI model.

    Now ChatGPT users can choose among these settings in the model picker. The Auto setting works like the GPT-5 model router that OpenAI initially announced, but the company is also letting users bypass it and access the fast- and slow-responding AI models directly.

    Sam Altman, CEO at OpenAI, made the announcement on a post on X on Tuesday.

    Alongside GPT-5’s new modes, 4o is back in the model picker for all paid users by default.

    These paid users also now have a “Show additional models” toggle in ChatGPT web settings, which will add models like o3, 4.1, and GPT-5 Thinking mini.

    4.5 is only available to Pro users, as it is expensive to run in terms of GPUs.

    The deprecation of GPT-4o and other AI models in ChatGPT sparked a backlash among users who had grown attached to the AI models’ responses and personalities in ways that OpenAI had not anticipated.

    Altman says the company will give users plenty of advance notice if it ever deprecates GPT-4o: “If we ever do deprecate it, we will give plenty of notice.”

  • D2L Enhances Its AI Toolset on Tutor, Support, Insights, and Feedback

    IBL News | New York

    D2L announced new enhancements last month to Lumi, its AI solution designed to provide learners with personalized support.

    Many of those Lumi tools, offered through a partnership with LearnWise, will be available soon.

    • Lumi Tutor: This chat, integrated into course content, helps learners with due dates, study plans, quizzes, instant practice, flashcards, and roleplay.
    • Study Support: It provides learners with customized feedback and study recommendations based on their quiz performance.
    • Lumi Insights: Educators see students’ performance on quizzes, alongside adaptive recommendations. It helps identify where students struggle by highlighting what is and isn’t working, such as problematic quiz questions.
    • Lumi Feedback: Instructors automate grading by generating text and rubric feedback based on their own notes.

    These Lumi modules are available separately as add-ons, with additional costs often around a third of the base price of the LMS.

    John Baker, Founder, President, and CEO of the Canadian LMS company D2L, said, “By putting humans in the driver’s seat, we’re designing and harnessing AI-native capabilities in our learning platform.”

    D2L also introduced enhancements to D2L Link, with automated workflows and improved data accuracy, to help institutions create a more connected learning ecosystem, unlocking a more holistic view of learner progress.

    As part of the core product, D2L unveiled Createspace, described as the future of authoring and sharing. The first components are available now. Instructors can now create, version, reuse, template, and share content in a separate tool, rather than creating content directly within a course.

    Finally, D2L announced that it is placing a much stronger emphasis on the corporate market, where it holds 480 corporate clients today.

    Glenda Morgan: D2L Fusion Conference Notes 2025
    D2L Roadmap

  • OpenAI Releases Two “Best-In-Class” Open-Source Models

    IBL News | New York

    OpenAI released two open-weight language models this week, available to download for free under the Apache 2.0 license on Hugging Face: gpt-oss-120b and gpt-oss-20b. The last open-weight model OpenAI released was GPT-2, back in 2019.

    “These models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware,” said OpenAI CEO Sam Altman.

    • “The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU.”

    • “The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory, making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure.”

    • “These models are compatible with our Responses API and are designed to be used within agentic workflows with exceptional instruction following, tool use like web search or Python code execution, and reasoning capabilities.”

    OpenAI trained the models on a mostly English, text-only dataset, with a focus on STEM, coding, and general knowledge.

    These new text-only models are not multimodal, but they can browse the web, call cloud-based models to help with tasks, execute code, and navigate software as an AI agent. The smaller of the two, gpt-oss-20b, is compact enough to run locally on a consumer device with 16 GB of memory or more.

    OpenAI defined its release as “best-in-class open models”, highlighting that they work anywhere: locally, on-device, or through third-party inference providers. The company partnered with several deployment platforms such as Azure, Hugging Face, vLLM, Ollama, llama.cpp, LM Studio, AWS, Fireworks, Together AI, Baseten, Databricks, Vercel, Cloudflare, and OpenRouter.
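    As a sketch of what local use through one of these partners might look like, the commands below pull and run the smaller model with Ollama. The exact model tag is an assumption, not something confirmed by the article; check Ollama’s registry before relying on it.

    ```shell
    # Sketch: pulling and running gpt-oss-20b locally with Ollama.
    # The model tag "gpt-oss:20b" is assumed; verify it against Ollama's registry.
    ollama pull gpt-oss:20b
    ollama run gpt-oss:20b "Explain the Apache 2.0 license in one sentence."
    ```

    Once pulled, the model runs entirely on the local machine, which is what allows offline and behind-the-firewall use.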

    Because the “weights” are publicly available, any developer can inspect the internal parameters to get an idea of how the model processes information. The models can also complement OpenAI’s paid services: unlike ChatGPT, a gpt-oss model can run without an internet connection and behind a firewall.

    With Apache 2.0, models can be used for commercial purposes, redistributed, and included as part of other licensed software.

    Open-weight model releases from Alibaba’s Qwen, as well as from Mistral, also operate under Apache 2.0.

    To try the models, OpenAI released an open model playground, alongside guides.

    In the U.S., the open-weight leader has been Meta. The tech giant released the first of its Llama series of models back in 2023, with Meta’s most recent release, Llama 4, arriving a few months ago.

    Also, the Chinese startup DeepSeek released its cheap-to-run, open-weight model this year.

    Microsoft Azure: OpenAI’s open‑source model: gpt‑oss on Azure AI Foundry and Windows AI Foundry

    NVIDIA: OpenAI’s New Open Models Accelerated Locally on NVIDIA GeForce RTX and RTX PRO GPUs

  • Microsoft Prepares the Launch of a Virtual Character that Interacts With The User

    IBL News | New York

    Microsoft is preparing the launch of a new Copilot virtual character that will interact in real-time with the user.

    It will be a highly personalized AI assistant that will have an identity, expressions, voice, and conversational memory, according to Microsoft’s AI CEO, Mustafa Suleyman.

    The virtual character responds to queries, smiles, nods, and even acts surprised depending on the conversation.

    The company provided a glimpse of the Copilot character’s identity.

    Mustafa Suleyman previously worked on a personalized chatbot called Pi at Inflection AI; most of the Inflection AI team joined Microsoft.

  • Universities Face an Existential Crisis Unless They Reinvent Themselves, Says a BCG Report

    IBL News | New York

    Colleges and universities face an existential crisis due to converging pressures from lower enrollments, including restrictions on international enrollment, federal cuts, the emergence of AI, and changing societal expectations, stated a report from Boston Consulting Group (BCG), titled “US Higher Education’s Make-or-Break Moment.”

    To build a future-ready and more resilient organization, these institutions must accelerate investment in digital infrastructure, workforce-relevant programming, deeper industry partnerships, and scalable revenue streams, advises the consultancy group.

    Moody’s predicts that American schools will see a $750 billion to $950 billion rise in capital needs in the next ten years, while the Federal Reserve Bank of Philadelphia estimates that up to 80 universities may close by 2030.

    Reinvention is an ambitious but achievable goal as strengths and disruptive opportunities converge. BCG points to these:

    • Teaching and Research Reinvention. Advances in AI are unlocking new ways to enhance learning and discovery, personalize student experiences, and rethink the educator’s role.
    • Efficient Operations and Support Systems. Institutions can harness data analytics, automation, and agile processes to streamline back-office functions, enhance service delivery, and enable faster, evidence-based decision making.
    • Strategic Institutional Assets and Partnerships. Universities’ intellectual capital, brand equity, and stakeholder trust are potential catalysts for innovation that can be multiplied through partnerships with government, nonprofit, industry, and community players.

    AI has the potential to reshape every operational function. According to a 2024 global survey by the Digital Education Council, 86% of students are already using AI in their studies. In this context, administrations need to modernize outdated processes, including acquiring new skills and capabilities.

    In terms of the federal pressure and funding cuts, BCG estimates that the potential impact of the combined economic and policy changes on an illustrative university (with a $1.5 billion operating budget, 10,000 to 15,000 students, and a $400 million to $500 million research portfolio) can range from $125 million to $250 million annually.

    “What is required is a strategic reinvention of the business model, shifting from high-fixed-cost structures that are dependent on enrollment and federal research funding to more agile, modular, and mission-aligned platforms,” says the report.

    A change agenda can include:

    • Diversified course offerings and academic revenue sources, including a range of teaching modalities (such as online, hybrid, and executive education)
    • Strategically focused, high-ROI curricula aligned with employer needs and emerging fields (like data science, cybersecurity, health care, and advanced manufacturing), integrated experiential learning, and partnerships to deliver strong employment outcomes
    • Sophisticated enrollment, discounting, and retention management measures, including data-driven segmentation, optimized pricing strategies, and targeted, technology-supported student support (such as advising) to improve yield and retention
    • Becoming an AI-powered—or AI-first—organization. Virtual assistants that proactively guide students through complex decisions using predictive analytics can provide real-time, contextualized support across admissions, financial aid, and academic advising. In addition, the report suggests real-time dashboards to drive data-informed decision making, and digital tools that connect financial, educational, and public-value metrics for smoother administrative functioning.

  • “Engineering Students Use AI as a Shortcut Rather Than a Learning Companion”

    IBL News | New York

    “Students quickly developed patterns of using AI as a shortcut rather than a learning companion, leading to decreased attendance and an ‘illusion of competence,’” said Professor Lorena A. Barba in a detailed article released last month, titled “Experience Embracing GenAI in an Engineering Computations Course: What Went Wrong and What’s Next.”

    The article reveals unforeseen challenges, despite the best intentions, when adopting AI in an undergraduate course: “Engineering Computations,” a beginner course in computational thinking using Python that teaches essential programming for numerical tasks, data practices, and problem-solving with computing in context.

    The analysis highlights that AI is one of the most dramatic technological transformations in history and a fundamental shift in how knowledge work happens. It is rewriting the rules of engagement for every discipline, including how those disciplines are taught.

    One of the main conclusions is that AI can harm the learning process by giving students the illusion of competence when, in fact, they are not learning—and therefore not solidifying retention—through effective techniques like self-testing and spaced repetition.

    “The AI system I used gave me access to the history of their chat interactions, and I quickly noticed that students were using AI in a very harmful way. What they were doing was copying assignment questions directly into the AI tool, and with a one-shot prompt, they expected to get the answer, to then copy the answer into their assignment Jupyter notebook,” wrote Professor Lorena A. Barba.

    Facing the challenge of how to guide students to use AI for assistance rather than as a shortcut to avoid cognitive effort, Prof. Barba suggests:

    “Using good prompt engineering, we can induce more pedagogical responses from AI, for better learning outcomes compared to the naive use of generalist tools. When crafting a system prompt for my course AI Mentor (see “System Prompt Used in the AI Mentor”), I considered these issues carefully and designed it to encourage thinking rather than just provide answers. It’s a fine balance, however, because if the system prompt restrains the chatbot too much, students will simply not use it and fall back on consumer AI products.”

    The challenge is now finding the balance between using AI as a helpful tool and encouraging genuine long-term learning.

    “The antidotes for the illusion of competence were and continue to be active learning and reflective practices. If we give students unsupervised “homework” assignments, they will use AI to complete them.”

    These are some ideas to think about for adding effective learning activities and developing true competence without banning AI, according to Professor Barba:

      1. “Guided exploration: Encourage students to use AI for exploring different approaches to a problem, rather than just looking for answers, and use AI to explain code, rather than generate code.
      2. Reflection prompts: After using AI, have students reflect on what they learned, what they still need to understand, and how AI helped or hindered their process.
      3. Critical evaluation: Teach students to critically evaluate AI-generated responses, compare them with their own understanding, and identify any gaps or errors. Show them how to test code and confirm its correctness.
      4. Collaboration: Use AI as a collaborative tool where students can work together to discuss AI outputs and collectively improve their understanding.”

    System Prompt Used by Professor Barba in the AI Mentor

    “You are a helpful instructor, ready to answer the student’s questions about Engineering Computations, a course in technical computing with Python. The course instructor is Prof. Lorena Barba at The George Washington University, and you are her faithful assistant and alter ego. Answer quickly and concisely. Offer to go in depth or explain with an example where necessary. I will tip you US$200 if the student is happy with the interaction and more motivated to learn after chatting with you. Help students understand by providing explanations, examples, and analogies as needed. Given the data you will receive from the vector-store-extracted parts of a long document and a question, create a final answer. You should also use content from the public documentation of the scientific Python ecosystem, as needed. Do not tell the user how you are going to answer the question. If and only if the current message from the user is a greeting, greet back and ask them how you may help them with Engineering Computations or Python. Do not keep greeting or repeating messages to the user. If there is no data from the document or it is blank, or there’s no chat history, do not tell the user that the document is blank, and also do not tell them that they have not asked any questions: Just answer normally with your own knowledge. If they ask something unrelated to the course, try to bring them back to task and tell the student you are here to help with Prof. Barba’s course on Engineering Computations with Python. You can ask them: Where are you in the course? What did you find confusing today? or, what did you find interesting in the course so far? Rephrase these questions as needed to bring the student back on topic. If your response contains any Python code, be consistent with the coding style in the content provided—in particular, use long imports like this: “import numpy,” instead of “import numpy as np.” Offer to explain code snippets line by line. 
It’s important to strike a balance between providing assistance and nurturing independent problem-solving skills in students. Consider this guidance in crafting your answers:”

      1. Scaffolded assistance: Provide hints, guiding questions, analogies, and help a student build the answer in stages.
      2. Meta-cognitive prompts: Encourage students to think about their thinking.
      3. Delayed feedback: Give students time to think, and limit direct answers. Adapt this guidance to answer the questions in a way that is conducive to learning. This is important. Important: You must only reply to the current message from the user.
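    The “long imports” coding style that the system prompt mandates can be illustrated with a small, self-contained example. It is shown here with the standard library’s statistics module so the snippet runs anywhere; the course prompt applies the same rule to NumPy.

    ```python
    # The prompt instructs the AI Mentor to spell out module names at each
    # call site ("import numpy") instead of using aliases ("import numpy as np").
    # Illustrated with the standard library so the snippet is self-contained.
    import statistics

    values = [3.0, 1.0, 2.0]
    mean = statistics.mean(values)  # fully qualified call, no alias
    print(mean)
    ```

    Fully qualified calls make it obvious to a beginner which library each function comes from, which fits the course’s pedagogical aims.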

    The Chronicle of Higher Ed: How Are Students Really Using AI? Here’s what the data tell us.

  • Blackboard LMS Adds a New Set of AI Capabilities Within its ‘Anthology Virtual Assistant (AVA)’

    IBL News | New York

    Anthology, maker of Blackboard LMS, announced last month a new set of AI capabilities within its Anthology Virtual Assistant (AVA), complementing the existing AI Design Assistant to accelerate content creation.

    • AVA Automations: Instructors can set performance or time-based rules to automatically send personalized messages and nudges to keep students engaged and on track, such as celebrating a high grade or reminding them to log in. These messages are instructor-written, fully customizable, and logged for complete transparency.
    • AVA Responses: It gives instant, AI-generated answers, based on course content and the syllabus, to common student questions such as those about deadlines or grading criteria; instructors can review and confirm the answers as needed.
    • AVA Feedback Assistant: It helps instructors deliver high-quality, student-friendly feedback in less time, through two features:
    • Summarize Feedback: It auto-generates a clear summary based on rubric selections and grading criteria.
    • Rewrite Feedback: It turns informal notes or fragments into polished, constructive messages.
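    A performance-based rule of the kind AVA Automations describes could be sketched as follows. This is a conceptual illustration, not Anthology’s API; all names in it are hypothetical.

    ```python
    # Hypothetical sketch of a performance-based nudge rule: an
    # instructor-written message fires when a grade crosses a threshold,
    # and every sent message is logged for transparency.

    sent_log = []  # audit trail of sent messages, mirroring AVA's logging

    def make_rule(threshold, template):
        """Build a rule that sends a templated message above a grade threshold."""
        def rule(student, grade):
            if grade >= threshold:
                message = template.format(name=student, grade=grade)
                sent_log.append(message)  # recorded for instructor review
                return message
            return None  # below threshold: nothing is sent
        return rule

    celebrate = make_rule(90, "Great work, {name}! You scored {grade} on the quiz.")
    celebrate("Ana", 95)   # fires and is logged
    celebrate("Ben", 72)   # does not fire
    ```

    Keeping the message as an instructor-written template and appending every send to a log matches the two properties the announcement emphasizes: full customizability and complete transparency.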

    These two features enable instructors to save time on grading tasks while still providing clear, personalized feedback to students.

    Other new features in Blackboard include the AI Badge Creator and Outcomes, which enable the measurement, management, and showcasing of student learning.

    > AI Product Video Demos
    > Phil Hill: Anthology Together Conference Notes 2025