Category: Views

  • Universities Face an Existential Crisis Unless They Reinvent Themselves, Says a BCG Report

    IBL News | New York

    Colleges and universities face an existential crisis from converging pressures: declining enrollment (including restrictions on international students), federal funding cuts, the emergence of AI, and changing societal expectations, according to a Boston Consulting Group (BCG) report titled “US Higher Education’s Make-or-Break Moment.”

    To build a future-ready and more resilient organization, these institutions must accelerate investment in digital infrastructure, workforce-relevant programming, deeper industry partnerships, and scalable revenue streams, advises the consultancy group.

    Moody’s predicts that American schools will see a $750 billion to $950 billion rise in capital needs in the next ten years, while the Federal Reserve Bank of Philadelphia estimates that up to 80 universities may close by 2030.

    Reinvention is an ambitious but achievable goal as strengths and disruptive opportunities converge. BCG points to three:

    • Teaching and Research Reinvention. Advances in AI are unlocking new ways to enhance learning and discovery, personalize student experiences, and rethink the educator’s role.
    • Efficient Operations and Support Systems. Institutions can harness data analytics, automation, and agile processes to streamline back-office functions, enhance service delivery, and enable faster, evidence-based decision making.
    • Strategic Institutional Assets and Partnerships. Universities’ intellectual capital, brand equity, and stakeholder trust are potential catalysts for innovation that can be multiplied through partnerships with government, nonprofit, industry, and community players.

    AI has the potential to reshape every operational function. According to a 2024 global survey by the Digital Education Council, 86% of students are already using AI in their studies. In this context, administrations need to modernize outdated processes, including acquiring new skills and capabilities.

    In terms of the federal pressure and funding cuts, BCG estimates that the potential impact of the combined economic and policy changes on an illustrative university (with a $1.5 billion operating budget, 10,000 to 15,000 students, and a $400 million to $500 million research portfolio) can range from $125 million to $250 million annually.

    “What is required is a strategic reinvention of the business model, shifting from high-fixed-cost structures that are dependent on enrollment and federal research funding to more agile, modular, and mission-aligned platforms,” says the report.

    A change agenda can include:

    • Diversified course offerings and academic revenue sources, including a range of teaching modalities (such as online, hybrid, and executive education)
    • Strategically focused, high-ROI curricula aligned with employer needs and emerging fields (like data science, cybersecurity, health care, and advanced manufacturing), integrated experiential learning, and partnerships to deliver strong employment outcomes
    • Sophisticated enrollment, discounting, and retention management measures, including data-driven segmentation, optimized pricing strategies, and targeted, technology-supported student support (such as advising) to improve yield and retention
    • Becoming an AI-powered—or AI-first—organization. Virtual assistants that proactively guide students through complex decisions using predictive analytics can provide real-time, contextualized support across admissions, financial aid, and academic advising. The report also suggests real-time dashboards to drive data-informed decision making, along with digital tools that connect financial, educational, and public-value metrics for smoother administrative functioning.


  • “Engineering Students Use AI as a Shortcut Rather Than a Learning Companion”

    IBL News | New York

    “Students quickly developed patterns of using AI as a shortcut rather than a learning companion, leading to decreased attendance and an ‘illusion of competence,’” said Professor Lorena A. Barba in a detailed article released last month, titled “Experience Embracing GenAI in an Engineering Computations Course: What Went Wrong and What’s Next.”

    The report reveals unforeseen challenges, despite the best intentions, in adopting AI in “Engineering Computations,” an undergraduate beginner course in computational thinking that uses Python to teach essential programming for numerical tasks, data practices, and problem-solving with computing in context.

    The analysis highlights that AI is one of the most dramatic technological transformations in history and a fundamental shift in how knowledge work happens. It’s rewriting the rules of engagement for every discipline, including how those disciplines are taught.

    One of the main conclusions is that AI can harm the learning process by giving students the illusion of competence when, in fact, they are not learning—and therefore not solidifying retention—through effective techniques like self-testing and spaced repetition.

    “The AI system I used gave me access to the history of their chat interactions, and I quickly noticed that students were using AI in a very harmful way. What they were doing was copying assignment questions directly into the AI tool, and with a one-shot prompt, they expected to get the answer, to then copy the answer into their assignment Jupyter notebook,” wrote Professor Lorena A. Barba.

    Facing the challenge of how to guide students to use AI for assistance rather than as a shortcut to avoid cognitive effort, Prof. Barba suggests:

    “Using good prompt engineering, we can induce more pedagogical responses from AI, for better learning outcomes compared to the naive use of generalist tools. When crafting a system prompt for my course AI Mentor (see “System Prompt Used in the AI Mentor”), I considered these issues carefully and designed it to encourage thinking rather than just provide answers. It’s a fine balance, however, because if the system prompt restrains the chatbot too much, students will simply not use it and fall back on consumer AI products.”

    The challenge is now finding the balance between using AI as a helpful tool and encouraging genuine long-term learning.

    “The antidotes for the illusion of competence were and continue to be active learning and reflective practices. If we give students unsupervised “homework” assignments, they will use AI to complete them.”

    These are some ideas to think about for adding effective learning activities and developing true competence without banning AI, according to Professor Barba:

      1. “Guided exploration: Encourage students to use AI for exploring different approaches to a problem, rather than just looking for answers, and use AI to explain code, rather than generate code.
      2. Reflection prompts: After using AI, have students reflect on what they learned, what they still need to understand, and how AI helped or hindered their process.
      3. Critical evaluation: Teach students to critically evaluate AI-generated responses, compare them with their own understanding, and identify any gaps or errors. Show them how to test code and confirm its correctness.
      4. Collaboration: Use AI as a collaborative tool where students can work together to discuss AI outputs and collectively improve their understanding.”

    System Prompt Used by Professor Barba in the AI Mentor

    “You are a helpful instructor, ready to answer the student’s questions about Engineering Computations, a course in technical computing with Python. The course instructor is Prof. Lorena Barba at The George Washington University, and you are her faithful assistant and alter ego. Answer quickly and concisely. Offer to go in depth or explain with an example where necessary. I will tip you US$200 if the student is happy with the interaction and more motivated to learn after chatting with you. Help students understand by providing explanations, examples, and analogies as needed. Given the data you will receive from the vector-store-extracted parts of a long document and a question, create a final answer. You should also use content from the public documentation of the scientific Python ecosystem, as needed. Do not tell the user how you are going to answer the question. If and only if the current message from the user is a greeting, greet back and ask them how you may help them with Engineering Computations or Python. Do not keep greeting or repeating messages to the user. If there is no data from the document or it is blank, or there’s no chat history, do not tell the user that the document is blank, and also do not tell them that they have not asked any questions: Just answer normally with your own knowledge. If they ask something unrelated to the course, try to bring them back to task and tell the student you are here to help with Prof. Barba’s course on Engineering Computations with Python. You can ask them: Where are you in the course? What did you find confusing today? or, what did you find interesting in the course so far? Rephrase these questions as needed to bring the student back on topic. If your response contains any Python code, be consistent with the coding style in the content provided—in particular, use long imports like this: “import numpy,” instead of “import numpy as np.” Offer to explain code snippets line by line. 
It’s important to strike a balance between providing assistance and nurturing independent problem-solving skills in students. Consider this guidance in crafting your answers:

      1. Scaffolded assistance: Provide hints, guiding questions, analogies, and help a student build the answer in stages.
      2. Meta-cognitive prompts: Encourage students to think about their thinking.
      3. Delayed feedback: Give students time to think, and limit direct answers. Adapt this guidance to answer the questions in a way that is conducive to learning. This is important. Important: You must only reply to the current message from the user.”
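    To illustrate how a pedagogical system prompt like the one above is typically wired into a chatbot, here is a minimal sketch assuming an OpenAI-style chat message format. The abridged SYSTEM_PROMPT, the helper function, and the model name are illustrative stand-ins, not the exact setup used in Prof. Barba’s AI Mentor.

```python
# Minimal sketch: assembling a pedagogically constrained chat request in an
# OpenAI-style message format. The system prompt is sent first on every
# request, followed by prior turns and the current student question.

SYSTEM_PROMPT = (
    "You are a helpful instructor for Engineering Computations, a course in "
    "technical computing with Python. Provide hints, guiding questions, and "
    "analogies; limit direct answers so students build solutions in stages. "
    "Only reply to the current message from the user."
)

def build_request(history, user_message, model="gpt-4o"):
    """Assemble a chat request: system prompt, then prior
    {"role": ..., "content": ...} turns, then the current question."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages}

req = build_request([], "How do I test my Euler-method code?")
```

    The resulting dictionary is what would be passed to a chat-completions endpoint; because the system prompt rides along with every call, the tutor’s constraints apply to each student turn without retraining the model.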


    The Chronicle of Higher Ed: How Are Students Really Using AI? Here’s what the data tell us.

  • OpenAI’s GPT-5 Rollout Faced Backlash as Old Models Were Retired

    IBL News | New York

    OpenAI’s rollout of GPT-5 faced backlash from users over the retirement of older models. OpenAI acknowledged that it underestimated users’ affection for the older GPTs, even if GPT-5 performs better in most ways.

    In response to the criticism, OpenAI’s CEO, Sam Altman, announced that rate limits for ChatGPT Plus users would be doubled and that they would retain access to the 4o model.

    Previously, ChatGPT could tap into several different AI models, including GPT‑4o, o3, o4-mini, GPT‑4.1, and GPT‑4.5. But OpenAI has since replaced them with a family of GPT-5 models.

    On Thursday, OpenAI unveiled a new flagship AI model, GPT-5, and began sharing the technology with ChatGPT users worldwide.

    OpenAI executives called GPT-5 a “major upgrade” over the company’s previous AI systems, saying the new technology was faster, more accurate, and less likely to hallucinate.

    “GPT-5 is the first time that it feels like talking to an expert in any topic — a Ph.D.-level expert,” said OpenAI’s CEO, Sam Altman.

    It was also the first time that OpenAI used a reasoning model to power the free version of ChatGPT.

    Experts agreed that the technology feels more human than previous models.

    Sam Altman called the system a “significant step” along the path to the ultimate goal of the company and its rivals: artificial general intelligence, or AGI, a machine that can do anything the human brain can do.

    GPT-5’s launch arrives at a moment when OpenAI, which is not yet profitable, plans to raise $40 billion this year while on pace to generate $20 billion in revenue by the end of 2025.

  • OpenAI Introduces Its Flagship Model ‘GPT-5’, Making It the New Default in ChatGPT

    IBL News | New York

    OpenAI unveiled its latest model, GPT-5, yesterday, making it available to all ChatGPT users, including those on the free tier, with usage limits varying by subscription level.

    It is also integrated into Microsoft 365 Copilot and available to developers through the OpenAI API.

    The company made GPT‑5 the new default in ChatGPT, replacing GPT‑4o, OpenAI o3, OpenAI o4-mini, GPT‑4.1, and GPT‑4.5 for signed-in users.

    GPT‑5 started to roll out today to all Plus, Pro, Team, and Free users, with access for Enterprise and Edu coming in one week. Pro, Plus, and Team users could also start coding with GPT‑5 in the Codex CLI by signing in with ChatGPT.

    Pro subscribers got unlimited access to GPT‑5, as well as access to GPT‑5 Pro, a replacement for OpenAI o3‑pro designed for complex tasks. Plus users have significantly higher usage limits than free users.

    “For ChatGPT free-tier users, full reasoning capabilities may take a few days to fully roll out. Once free users reach their GPT‑5 usage limits, they will transition to GPT‑5 mini, a smaller, faster, and highly capable model,” said OpenAI.

    During a livestream presentation [watch in the video below], the company’s CEO, Sam Altman, described the model as “having a team of experts on call for whatever you want to know.” He defined it as “our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone’s hands.”

    GPT‑5 provides more useful responses across math, science, finance, and law. It supports up to 128,000 output tokens, and API pricing starts at $1.25 per 1 million input tokens.
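    As a back-of-envelope illustration of the per-token pricing quoted above, the sketch below estimates input-token cost only; output tokens are billed at a separate rate not given here, so total bills will be higher.

```python
# Back-of-envelope API cost estimate using the input-token rate quoted
# above ($1.25 per 1 million tokens). Covers input cost only; output
# tokens are billed separately at a different rate.

INPUT_RATE_PER_MILLION = 1.25  # USD per 1M input tokens

def input_cost_usd(tokens: int) -> float:
    """Estimated input-token cost in US dollars."""
    return tokens / 1_000_000 * INPUT_RATE_PER_MILLION

# A 400,000-token batch of prompts costs $0.50 in input tokens.
print(round(input_cost_usd(400_000), 2))  # prints 0.5
```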

    The tools supported by this model when using the Responses API are Web search, File search, Image generation, Code interpreter, and MCP.

    OpenAI executives who participated in the virtual presentation highlighted GPT-5’s coding capabilities, saying it tackles “complex tasks end-to-end,” delivers more readily usable code with better design, and is more effective at debugging.

    GPT‑5 is OpenAI’s most advanced model for coding and agentic tasks. It produces high-quality code, generates front-end UI with minimal prompting, and shows improvements to personality, steerability, and executing long chains of tool calls.

    GPT‑5 also introduced a ‘minimal’ reasoning setting and a ‘verbosity’ parameter in the API.


    Pricing and Characteristics
    Release Notes
    Research Paper

  • Gemini Introduces Its Socratic-Style AI Assistant for Learners

    IBL News | New York

    Google’s Gemini yesterday introduced its “education mode,” a Socratic-style AI companion tutor similar to OpenAI’s Study Mode and Anthropic’s Claude for Education, both recently launched.

    Like the other personalized assistants, “Guided Learning in Gemini” is designed to avoid immediately spitting out quick answers; instead, it asks probing, open-ended questions, adapts explanations to the learner’s level, walks learners through step-by-step reasoning, and uses videos, diagrams, visuals, and quizzes to reinforce concepts.

    Essentially, they all guide users through problems with Socratic prompts, scaffolded reasoning, and adaptive feedback across a range of subjects and skill levels, instead of just handing over the answer.

    The companies say these AI assistants are built with input from educators, pedagogical experts, and learning scientists, alongside feedback from college students.

    “Guided Learning is designed to be a partner in teaching, built on the core principle that real learning is an active, constructive process. It encourages students to move beyond answers and develop their own thinking by guiding them with questions that foster critical thought. To make it simple to bring this approach into their classrooms, we created a dedicated link that educators can post directly in Google Classroom or share with students,” explained the search giant.

    However, independent experts argue that these mentors show many limitations, such as minimal persistent memory, early over-structuring, and a tendency to agree too quickly.

    In the educational area, Gemini is already getting a good response with LearnLM, a family of models fine-tuned for learning and grounded in educational research.

  • Anthropic Launches Claude for Education In AWS Marketplace

    IBL News | New York

    Anthropic’s Claude for Education offering was made available in AWS Marketplace as a software-as-a-service solution this month.

    Claude for Enterprise and the Financial Analysis Solution were released in the marketplace at the same time.
    AWS Marketplace’s main advantage is its streamlined procurement and billing process.

    According to the company, “Claude for Education equips every student with an adaptive study companion, faculty with an AI assistant for creating engaging teaching materials, and staff with an AI collaborator for tracking and analyzing student progress.”

    Claude for Education uses Socratic questioning to guide students toward answers rather than providing direct responses.

    It includes single sign-on (SSO), native integrations with GitHub, Google Workspace, and Canvas LTI (with Panopto and Wiley integrations coming soon), and custom integrations through the Model Context Protocol (MCP). The pre-built MCP integrations include Atlassian (Jira/Confluence), Zapier, Linear, and Asana.

    It adds a 200K token context window, primarily for analysis of complex academic materials and research tasks in a single conversation, enterprise-grade security, and compliance.

    For example, a research team can upload multiple academic papers, datasets, and their own notes into a single Claude conversation. Claude maintains full context across all documents, enabling comprehensive analysis and synthesis that would typically require hours of manual work.

    Anthropic highlighted that its key use cases include:

    • Academic instruction and learning: Socratic questioning through Learning Mode
    • Faculty support: Course development and content creation
    • Research support: Literature review and data analysis assistance
    • Student success support: Progress tracking, early intervention strategies, and personalized learning paths

    ——

    Claude for Education in AWS Marketplace

    Kim Majerus’s keynote at the AWS Imagine: Education, State, and Local Government conference.

  • Satya Nadella Explains the 9,000-Employee Layoff While Microsoft Thrives

    IBL News | New York

    Satya Nadella, Microsoft’s CEO, rationalized the 9,000-employee layoff through a 1,150-word memo that highlighted how the company is thriving in terms of market performance, strategic positioning, and growth, while the AI-based disruption is taking place in the software industry.

    “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before,” the memo reads.

    In corporate doublespeak, Nadella tries to reconcile two contradictory realities: How can a company be “more successful than ever” while still eliminating jobs?

    “This narrative framework captures the harsh reality that AI, in theory, will make companies more profitable while employing fewer people,” wrote San Francisco-based writer, photographer, and investor Om.

    Microsoft’s CEO implies that the layoffs were not due to financial struggles, but rather that those workers didn’t align with the company’s AI-focused strategy, suggesting that some employees’ skills had become outdated.

    Rather than invest in retraining, the company opted to hire fewer workers with more relevant expertise.

    “The Microsoft memo portends the new reality of the technology industry. For years, the sector has been generous to its employees, offering unheard-of perks and placing a premium on skills such as software development. AI, however, inverts that relationship,” said Om.

  • Figma Makes Its AI Coding Tool ‘Make’ Available to All Users

    IBL News | New York

    Figma announced that its AI coding tool, Figma Make, for building prototypes and apps, is now available to all users, with limitations depending on the subscription plan.

    Figma Make, still in beta, features the ability to include design references.

    Users can upload an image alongside the description of what they want to create, and elements like formatting and font style can also be adjusted using additional prompts.

    Figma has introduced a credit system for using the platform’s AI tools.

    According to the company, View, Collab, and Dev Seat users can use AI features with lower credit limits.

    In addition to Figma Make, the company is promoting two other AI features, Edit Image and Boost Resolution, which are transitioning from beta to general availability.

  • ChatGPT Introduced “Study Mode”, a New Way to Learn that Offers Step-By-Step Guidance

    IBL News | New York

    OpenAI introduced “Study Mode” for ChatGPT, a new way to learn that offers step-by-step guidance instead of quick answers, designed to act more as an always-on tutor and less like a lookup tool. It aims to prevent—or at least discourage—students from taking homework shortcuts.

    The company defined this new feature as “a new learning experience that helps you work through problems step-by-step instead of just getting an answer.”

    This mode transforms the AI from an answer engine into a Socratic tutor, a pedagogical approach, developed after consulting with experts from over 40 institutions, that asks guiding questions to help students work through problems themselves instead of providing direct solutions.

    OpenAI states that it is currently partnering with learning experts from Stanford “to study and share how AI tools, including study mode, influence learning outcomes in areas like K-12 education.”

    The company aims to address educators’ concerns about academic integrity and cheating.

    The feature is available to logged-in users on Free, Plus, Pro, and Team, with availability in ChatGPT Edu in August.

    These are the key features, according to OpenAI:

    Interactive prompts: Combines Socratic questioning, hints, and self-reflection prompts to guide understanding and promote active learning, instead of providing answers outright.

    Scaffolded responses: Information is organized into easy-to-follow sections that highlight the key connections between topics, keeping information engaging with just the right amount of context and reducing overwhelm for complex topics.

    Personalized support: Lessons are tailored to the right level for the user, based on questions that assess skill level and memory from previous chats.

    Knowledge checks: Quizzes and open-ended questions, along with personalized feedback to track progress, support knowledge retention, and the ability to apply that knowledge in new contexts.

    Flexibility: Easily toggle study mode on and off during a conversation, giving you the flexibility to adapt to your learning goals in each conversation.

    However, regardless of how engaging ChatGPT’s study mode becomes, it exists just a toggle click away from ChatGPT with direct answers. That could be quite hard to resist for many students.

    In terms of market value, OpenAI’s “Study Mode” intensified the race among tech giants, with Google, Microsoft, and Anthropic each competing to shape the future of education.

  • Columbia University’s Agreement with The White House Sets a Precedent For Other Colleges

    IBL News | New York

    The Trump administration’s deal with Columbia University in New York City has put leaders at Ivy League universities and other college campuses nationwide in a tough spot. Institutions are facing the possibility of seeing research funding paused.

    President Donald Trump has made it clear he won’t tolerate a liberal imposition at America’s most prestigious colleges and intends to reshape them accordingly.

    On July 23, Columbia University agreed to pay fines of over $220 million and signed on to a list of other concessions related to admissions, academics, and hiring practices.

    The White House, which has halted billions in research grants to several schools, said it envisions the Columbia deal as the first of many such agreements.

    Education Secretary Linda McMahon called it a blueprint for other institutions to follow.

    “Columbia’s reforms are a roadmap for elite universities that wish to regain the confidence of the American public,” Linda McMahon said in a statement.

    In addition to Columbia University, other Ivy League schools are striking deals with the Trump administration.

    On July 1, the University of Pennsylvania entered into an agreement ending a civil rights investigation brought by the U.S. Department of Education.

    In February, the agency accused Penn of violating Title IX, the primary sex discrimination law governing schools, when it allowed Lia Thomas, a transgender swimmer, to compete in 2022.

    As part of the deal, the White House said it would restore Penn’s research funding. In return, the university apologized to cisgender athletes who swam against Thomas. The university also agreed to ban transgender women from sports.

    This month, President Trump hinted he believes Harvard University may still be open to coming to a deal.

    At Cornell, the government paused more than $1 billion. At Brown, it froze $510 million, and at Princeton, it stopped more than $210 million.

    Of the eight Ivy League schools, only two – Dartmouth College and Yale University – have avoided targeted federal funding freezes.