Category: Platforms

  • “The Transition Into AI Is Going to Be Really Hard,” Said Paul J. LeBlanc, Former President of SNHU

    IBL News | Washington, D.C.

    “The transition into AI is going to be really hard,” said Paul J. LeBlanc, former President of Southern New Hampshire University (SNHU), during the ACE Experience 2026 (ACEx2026) conference, which took place last week in Washington, D.C., gathering hundreds of higher education leaders. 

    “Have you seen the latest technology, OpenClaw, which creates a personal agent? All of the workflows are automated overnight,” he explained. “We are not prepared for AI.”

    Regarding the impact of AI, John O’Brien, President of Educause, encouraged attendees during his talk on Thursday to innovate “as AI creates new opportunities.” “AI will do things for you soon,” he explained.

    Bryan Alexander, a futurist author and Senior Scholar at Georgetown University, said, “We have to figure out how to compete with AI. Everyone is figuring out their economic model.”

    During the ACEx2026 event, presidents and chancellors, senior campus leaders, policy experts, and advocates confronted higher education’s challenges and examined how the industry can lead through uncertainty.

    “We will not retreat, we will not surrender independence,” ACE President Ted Mitchell told attendees in his address titled “Truth, Trust, and Leadership: Higher Education’s Inflection Point” on Feb. 26. “It has been a hard year. We’ve been assaulted, punished for doing the right thing.”

    Addressing the audience, Ted Mitchell said, “You continue providing the world’s best education, helping to build America even in these trying times.”

    “To do that, we must improve, we must innovate, and we must inspire the public,” he stated.

    Freeman A. Hrabowski III, president emeritus of the University of Maryland, Baltimore County, also helped set the tone at the welcoming reception. “We represent the future of our society. And when we are most depressed or challenged or uncertain, when we can come together and see what people are doing and be inspired by other people, it makes all the difference.”

    Arne Duncan, former Secretary of Education, and David Pressman, former Ambassador to Hungary, stressed the rising tide of authoritarianism and its implications for higher education, underscoring the stakes of the current moment.

    Nicholas Kent, Under Secretary of Education, offered the Trump administration’s perspective on federal priorities shaping the sector, particularly stressing the need for institutional accountability in areas such as student outcomes and campus climate. “My goal is not for us to agree on everything, but to ensure that we understand where we see challenges, what steps we are taking to address them, and how we can work together to move forward,” he said.

    Throughout ACEx2026, participants discussed responses to policy challenges and exchanged strategies for building future-ready institutions capable of addressing AI, structural change, and shifting student demographics, among other factors.

    ACE President Ted Mitchell unveiled a new development in the Higher Education Builds America campaign, highlighting the wide impact of American colleges and universities, all featured in a new video.


    Another plenary session featured a panel, sponsored by Deloitte Services, on the 2026 Higher Education Trends report, as reported by IBL News this week.

    ACE honored institutions and leaders through its Annual Awards for advancing ideas and delivering results for students and communities.

  • The OpenClaw Explosion in Usage, with IronClaw as an Enterprise-Grade Security Agent

    IBL News | New York

    With over 200,000 GitHub stars in 84 days, OpenClaw has become the fastest-growing software repo in history. It heralds an AI agent explosion. Just as LLM agents were a new layer on top of LLMs, claws are a new layer on top of LLM agents, elevating orchestration, scheduling, context, tool calls, and persistence to the next level.

    Created by the Austrian programmer Peter Steinberger, now working for OpenAI, the software runs fully autonomous agents 24/7 and continues to grow rapidly.

    An increasing number of developers are running AI agents on Mac minis and old gaming PCs, posting viral TikToks, managing their entire inboxes, and even controlling smart homes.

    Security, a huge concern from the beginning, has been dramatically improved for production use with IronClaw, written in Rust by security researchers. IronClaw was created for high-security environments handling sensitive data, where prompt injection and data exfiltration are real threats. It promises enterprise-grade security in an open-source agent.

    The security architecture has five layers, each a hard boundary:

    • Layer 1: Network. TLS 1.3 encryption, SSRF protection, and rate limiting per tool.
    • Layer 2: Request filtering. Endpoint allowlisting (HTTP requests restricted to explicitly approved hosts/paths), prompt injection pattern detection, and content sanitization.
    • Layer 3: Credential management. Secrets encrypted with AES-256-GCM, injected at host boundaries. Tools never see raw credentials. 22 regex patterns with Aho-Corasick optimization scan all requests and responses for credential leaks in real time.
    • Layer 4: WASM sandbox. Untrusted tools run in isolated WebAssembly containers with capability-based permissions. No ambient access to the system.
    • Layer 5: Docker isolation. Intensive tasks run in Docker containers with per-job resource limits (CPU, memory, execution time).
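
    The Layer 3 idea (scan everything crossing the host boundary for credential-shaped strings and redact them before they leak) can be sketched in a few lines of Python. The patterns and helper names below are illustrative assumptions, not IronClaw's actual rule set, which compiles 22 patterns into an Aho-Corasick automaton for speed:

```python
import re

# Illustrative credential-shaped patterns; IronClaw ships its own set.
LEAK_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # generic "sk-" API key
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]

def scan_for_leaks(payload: str) -> list[str]:
    """Return every credential-shaped substring found in the payload."""
    hits = []
    for pattern in LEAK_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(payload))
    return hits

def redact(payload: str) -> str:
    """Replace any detected credential with a fixed placeholder."""
    for secret in scan_for_leaks(payload):
        payload = payload.replace(secret, "[REDACTED]")
    return payload
```

    Run against both requests and responses, a filter like this catches a secret whether the agent is about to send it out or a tool has just echoed it back.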

    Essentially, OpenClaw is an LLM that can do things. The Claude Agent SDK manages the entire loop (reason, act, observe, repeat) until the task is done, with output delivered through platform-specific messaging (Telegram, WhatsApp, Discord, etc.). The agent loop can chain together complex multi-step tasks (read a file, find a bug, fix it, run the tests, check whether they pass, and report back), with a hard cap of 20 iterations. A memory agent has its own identity, personality, and hard boundaries (“never execute financial transactions”), and persists knowledge across conversations in human-readable, human-editable markdown files.
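
    The reason-act-observe loop with its hard iteration cap can be sketched as follows. The action format, tool names, and finish signal here are hypothetical; in OpenClaw the real loop is driven by the Claude Agent SDK:

```python
MAX_ITERATIONS = 20  # hard cap, mirroring the limit described above

def run_agent(task, llm, tools):
    """Minimal reason-act-observe loop: ask the model for the next
    action, execute it, feed the observation back, and repeat."""
    history = [f"Task: {task}"]
    for step in range(MAX_ITERATIONS):
        action = llm(history)            # "reason": model picks the next action
        if action["type"] == "finish":
            return action["result"]      # task complete
        tool = tools[action["tool"]]     # "act": call the chosen tool
        observation = tool(action["input"])
        history.append(f"Step {step}: {action['tool']} -> {observation}")  # "observe"
    return "Stopped: iteration cap reached"
```

    The cap matters because an agent that never emits a finish action would otherwise loop (and bill) forever.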

    Users define new capabilities as SKILL.md files: markdown documents with instructions that the agent reads and executes as a workflow.
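
    Because a skill is just markdown, loading one can be trivial. The format below (a heading for the name, then the instruction body) is an illustrative assumption, not OpenClaw's actual schema:

```python
def load_skill(text: str) -> dict:
    """Parse a minimal SKILL.md: the first '#' heading is the skill name,
    and the rest is the instruction body handed to the agent verbatim."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("#"):
        name = lines[0].lstrip("# ").strip()
    else:
        name = "unnamed"
    body = "\n".join(lines[1:]).strip()
    return {"name": name, "instructions": body}

# A hypothetical skill file:
example = """# Summarize
Fetch the page at the given URL and reply with a three-bullet summary."""
```

    The point is that no code changes are needed to teach the agent something new; the instructions themselves are the capability.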

    This architecture (agent loop + memory + skills) is the shared DNA, and it’s MCP-first. Web search, file operations, image generation, and code execution are all external MCP tool servers that the agent connects to at startup. Adding a new capability means plugging in a new MCP server rather than modifying the core codebase.
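
    The payoff of MCP-first design is that a new capability is one registration call rather than a core change. A toy sketch of that pattern (this models the idea only, not the real MCP protocol or any actual SDK):

```python
class ToolRegistry:
    """Toy model of MCP-first design: capabilities live behind named
    servers that register at startup; the core loop never changes."""

    def __init__(self):
        self.servers = {}

    def connect(self, name, handler):
        # "Plugging in a new MCP server" = one registration call.
        self.servers[name] = handler

    def call(self, name, request):
        if name not in self.servers:
            raise KeyError(f"no tool server named {name!r}")
        return self.servers[name](request)

registry = ToolRegistry()
registry.connect("web_search", lambda query: f"results for {query}")
registry.connect("image_gen", lambda prompt: f"image: {prompt}")
```

    Dropping in a hypothetical `pdf_edit` server later is the same one-line `connect` call; nothing in `ToolRegistry` or the agent loop is touched.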

    The personality system uses seven Markdown files (AGENTS.md, SOUL.md, USER.md, TOOLS.md, IDENTITY.md, HEARTBEAT.md, MEMORY.md) that define the agent’s behavior. Changing your agent’s personality means editing a text file, not writing code.
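
    One plausible way such a file-driven persona works is simple concatenation into a system prompt. The file names come from the article; the loader itself is a hypothetical sketch of the mechanism:

```python
from pathlib import Path

# The seven persona files named in the article, in a fixed load order.
PERSONA_FILES = ["AGENTS.md", "SOUL.md", "USER.md", "TOOLS.md",
                 "IDENTITY.md", "HEARTBEAT.md", "MEMORY.md"]

def build_system_prompt(workspace: Path) -> str:
    """Concatenate whichever persona files exist into one prompt.
    Editing any of these text files changes the agent's behavior."""
    sections = []
    for name in PERSONA_FILES:
        path = workspace / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text().strip()}")
    return "\n\n".join(sections)
```

    Under this scheme, "changing your agent's personality" really is just editing SOUL.md in a text editor and letting the next prompt rebuild pick it up.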

    OpenClaw supports several models (Claude, GPT, DeepSeek, Ollama, Mistral, etc.) and eleven messaging platforms (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Matrix, Teams, Google Chat, Zalo, WebChat).

    The ecosystem presents over 5,700 skills on ClawHub, with agents able to do almost anything: manage Gmail and Calendar (Gog skill), summarize web pages and PDFs (Summarize skill), automate GitHub workflows, generate images, edit PDFs, control smart home devices, and track crypto portfolios. Skills install with a single command and extend the agent without touching core code.

  • An Intelligence Explosion Accelerates the AI Shift, Soon Hitting Everyday Jobs

    IBL News | New York

    An article by AI startup builder and investor Matt Shumer, arguing that a rapid, irreversible shift is underway that will hit everyday jobs and institutions, has gone viral, reaching over 77 million views.

    Titled “Something Big Is Happening”, this post sounds the alarm as tech workers have already experienced AI moving from “helpful assistant” to “it can do my job better than I can,” suggesting the nature of knowledge work is changing fast.

    A core claim is that AI labs intentionally prioritized coding because better code accelerates AI development itself, creating a compounding feedback loop.

    Current AI is now helping build a self-reinforcing cycle described as an “intelligence explosion,” which is why progress feels like it’s accelerating rather than improving gradually.

    The author cites benchmark-style evidence (e.g., METR task-length measures) to argue that models are completing longer end-to-end tasks without human intervention and cites industry leaders predicting major disruption—especially to entry-level white-collar roles—within 1–5 years.

    Currently, skepticism often comes from outdated experiences with earlier AI (hallucinations, weak performance) and from using free tiers that lag behind frontier models. Paid models already perform meaningful work in law, finance, writing, software, medical analysis, and customer service.

    Author Matt Shumer advises readers to start using the best available models seriously in real workflows; become the person who can demonstrate time savings and new capabilities at work; build financial resilience; lean into skills that are slower to automate (relationships, accountability, regulated sign-off, physical presence); teach kids adaptability and “building” with AI rather than rigid career tracks; and develop a daily habit of experimenting, because the tools will keep changing rapidly.

    Beyond jobs, AI could unlock huge medical and scientific gains, but it also raises profound safety and national security risks.

    These are some of the author’s selected thoughts:

    • I think we’re in the phase of something much, much bigger than Covid.
    • The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others.
    • I’ve always been an early adopter of AI tools. But the last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.
    • The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else.
    • The experience that tech workers have had over the past year, of watching AI go from ‘helpful tool’ to ‘does my job better than I do’, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, and customer service. Not in ten years. The people building these systems say it will take 1 to 5 years. Some say less.
    • The gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.
    • Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on ChatGPT’s free tier is like evaluating smartphones using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what’s coming.
    • In 2022, AI couldn’t reliably perform basic arithmetic. It would confidently tell you that 7 × 8 = 54. By 2023, it could pass the bar exam. By 2024, it could write working software and explain graduate-level science. By late 2025, some of the world’s best engineers said they had handed over most of their coding work to AI. On February 5th, 2026, new models arrived that made everything before them feel like a different era.
    • Amodei has said that AI models “substantially smarter than almost all humans at almost all tasks” are on track for 2026 or 2027. Let that land for a second. If AI is smarter than most PhDs, do you really think it can’t do most office jobs? Think about what that means for your work.
    • AI is now building the next AI. On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

    “GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

    The AI helped build itself.

    This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

    Dario Amodei, the CEO of Anthropic, says AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”

    The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.

    • Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.
    • This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that, too. Almost all knowledge work is being affected:
      • Legal work. AI can already read contracts, summarize case law, draft briefs, and conduct legal research at a level that rivals that of junior associates. The managing partner I mentioned isn’t using AI because it’s fun. He’s using it because it’s outperforming his associates on many tasks.
      • Financial analysis. Building financial models, analyzing data, writing investment memos, and generating reports. AI handles these competently and is improving fast.
      • Writing and content. Marketing copy, reports, journalism, and technical writing. The quality has reached a point where many professionals can’t distinguish AI output from human work.
      • Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.
      • Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.
      • Customer service. Genuinely capable AI agents… not the frustrating chatbots of five years ago… are being deployed now, handling complex multi-step problems.
    • Many people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can’t replace human judgment, creativity, strategic thinking, and empathy. I used to say this too. I’m not sure I believe it anymore. I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job involves a screen (if the core of what you do is reading, writing, analyzing, deciding, or communicating through a keyboard), then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started. Eventually, robots will handle physical work too. They’re not quite there yet. But “not quite there yet” in AI terms has a way of becoming “here” faster than anyone expects.
    • The single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt. Start using AI seriously, not just as a search engine. Right now, that’s GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay up to date on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what’s actually worth using.
    • This might be the most important year of your career. Work accordingly. I don’t say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this.
    • The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It’s not. No field is.
    • Get your financial house in order. I’m not a financial advisor, and I’m not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.
    • Think about where you stand, and lean into what’s hardest to replace. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time.
    • Nobody knows exactly what the job market will look like in 10 years. But the people most likely to thrive are those who are deeply curious, adaptable, and effective at using AI to accomplish what they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.
    • The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.
    • Amodei wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it’s creating.
    • AI that behaves in ways its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled. The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.
    • I know the next two to five years are going to be disorienting in ways most people aren’t prepared for. This is already happening in my world. It’s coming to yours.

    –––

    Something Big Is Happening

    Matt Shumer


    Think back to February 2020.

    If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper, you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.

    I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.

    I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

    I should be clear about something up front: even though I work in AI, I have almost no influence over what’s about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn’t lay. We’re watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.

    But it’s time now. Not in an “eventually we should talk about this” way. In a “this is happening right now and I need you to understand it” way.

    I know this is real because it happened to me first

    Here’s the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.

    For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

    Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.

    I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

    Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

    I’m not exaggerating. That is what my Monday looked like this week.

    But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

    I’ve always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.

    And here’s why this matters to you, even if you don’t work in tech.

    The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first. My job started changing before yours not because they were targeting software engineers… it was just a side effect of where they chose to aim first.

    They’ve now done it. And they’re moving on to everything else.

    The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

    “But I tried AI and it wasn’t that good”

    I hear this constantly. I understand it, because it used to be true.

    If you tried ChatGPT in 2023 or early 2024 and thought “this makes stuff up” or “this isn’t that impressive”, you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

    That was two years ago. In AI time, that is ancient history.

    The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over. It’s done. Anyone still making that argument either hasn’t used the current models, has an incentive to downplay what’s happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don’t say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.

    Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what’s coming.

    I think of my friend, who’s a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won’t work. It’s not built for his specialty, it made an error when he tested it, it doesn’t understand the nuance of what he does. And I get it. But I’ve had partners at major law firms reach out to me for advice, because they’ve tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it’s like having a team of associates available instantly. He’s not using it because it’s a toy. He’s using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it’ll be able to do most of what he does before long… and he’s a managing partner with decades of experience. He’s not panicking. But he’s paying very close attention.

    The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They’re blown away by what it can already do. And they’re positioning themselves accordingly.

    How fast this is actually moving

    Let me make the pace of improvement concrete, because I think this is the part that’s hardest to believe if you’re not watching it closely.

    In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

    By 2023, it could pass the bar exam.

    By 2024, it could write working software and explain graduate-level science.

    By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

    On February 5th, 2026, new models arrived that made everything before them feel like a different era.

    If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.

    There’s an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.

    But even that measurement hasn’t been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR’s graph to show another major leap.

    If you extend the trend (and it’s held for years with no sign of flattening) we’re looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
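
    The extrapolation in that last paragraph is plain exponential growth: a horizon of about 5 hours today, doubling every 7 months. A back-of-envelope check of the claimed milestones, using the article's own figures rather than METR's methodology:

```python
# Back-of-envelope projection of task horizons from the figures above:
# a ~5-hour horizon today, doubling roughly every 7 months.
BASE_HOURS = 5.0
DOUBLING_MONTHS = 7.0

def horizon_hours(months_from_now: float) -> float:
    """Projected autonomous task length after the given number of months."""
    return BASE_HOURS * 2 ** (months_from_now / DOUBLING_MONTHS)

# Milestones from the text, expressed in 8-hour workdays:
one_year = horizon_hours(12) / 8    # ~2 workdays   -> "days within a year"
two_years = horizon_hours(24) / 8   # ~7 workdays   -> "weeks within two"
three_years = horizon_hours(36) / 8 # ~22 workdays  -> "month-long within three"
```

    On these assumptions the trend does land where the author says: roughly two working days of autonomous work in a year, about a working week in two, and about a working month in three.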

    Amodei has said that AI models “substantially smarter than almost all humans at almost all tasks” are on track for 2026 or 2027.

    Let that land for a second. If AI is smarter than most PhDs, do you really think it can’t do most office jobs?

    Think about what that means for your work.

    AI is now building the next AI

    There’s one more thing happening that I think is the most important development and the least understood.

    On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

    “GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

    Read that again. The AI helped build itself.

    This isn’t a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

    Dario Amodei, the CEO of Anthropic, says AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”

    Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.


    What this means for your job

    I’m going to be direct with you because I think you deserve honesty more than comfort.

    Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

    This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.

    Let me give you a few specific examples to make this tangible… but I want to be clear that these are just examples. This list is not exhaustive. If your job isn’t mentioned here, that does not mean it’s safe. Almost all knowledge work is being affected.

    Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn’t using AI because it’s fun. He’s using it because it’s outperforming his associates on many tasks.

    Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.

    Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can’t distinguish AI output from human work.

    Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.

    Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.

    Customer service. Genuinely capable AI agents… not the frustrating chatbots of five years ago… are being deployed now, handling complex multi-step problems.

    A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can’t replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I’m not sure I believe it anymore.

    The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of the right call, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

    Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don’t know. Maybe not. But I’ve already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.

    I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started.

    Eventually, robots will handle physical work too. They’re not quite there yet. But “not quite there yet” in AI terms has a way of becoming “here” faster than anyone expects.


    What you should actually do

    I’m not writing this to make you feel helpless. I’m writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

    Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It’s $20 a month. But two things matter right away. First: make sure you’re using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that’s GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what’s actually worth using.

    Second, and more important: don’t just ask it quick questions. That’s the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you’re a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you’re in finance, give it a messy spreadsheet and ask it to build the model. If you’re a manager, paste in your team’s quarterly data and ask it to find the story. The people who are getting ahead aren’t using AI casually. They’re actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

    And don’t assume it can’t do something just because it seems too hard. Try it. If you’re a lawyer, don’t just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you’re an accountant, don’t just ask it to explain a tax rule. Give it a client’s full return and see what it finds. The first attempt might not be perfect. That’s fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here’s the thing to remember: if it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly. The trajectory only goes one direction.

    This might be the most important year of your career. Work accordingly. I don’t say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says “I used AI to do this analysis in an hour instead of three days” is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what’s possible. If you’re early enough, this is how you move up: by being the person who understands what’s coming and can show others how to navigate it. That window won’t stay open long. Once everyone figures it out, the advantage disappears.

    Have no ego about it. The managing partner at that law firm isn’t too proud to spend hours a day with AI. He’s doing it specifically because he’s senior enough to understand what’s at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It’s not. No field is.

    Get your financial house in order. I’m not a financial advisor, and I’m not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

    Think about where you stand, and lean into what’s hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn’t happening.

    Rethink what you’re telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I’m not saying education doesn’t matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they’re genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

    Your dreams just got a lot closer. I’ve spent most of this section talking about threats, so let me talk about the other side, because it’s just as real. If you’ve ever wanted to build something but didn’t have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I’m not exaggerating. I do this regularly. If you’ve always wanted to write a book but couldn’t find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month… one that’s infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you’ve been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you’re passionate about. You never know where they’ll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

    Build the habit of adapting. This is maybe the most important one. The specific tools don’t matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won’t be the ones who mastered one tool. They’ll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.

    Here’s a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new… something you haven’t tried before, something you’re not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what’s coming better than 99% of the people around you. That’s not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.


    The bigger picture

    I’ve focused on jobs because it’s what most directly affects people’s lives. But I want to be honest about the full scope of what’s happening, because it goes well beyond work.

    Amodei has a thought experiment I can’t stop thinking about. Imagine it’s 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

    Amodei says the answer is obvious: “the single most serious national security threat we’ve faced in a century, possibly ever.”

    He thinks we’re building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it’s creating.

    The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer’s, infectious disease, aging itself… these researchers genuinely believe these are solvable within our lifetimes.

    The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

    The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.


    What I know

    I know this isn’t a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.

    I know the next two to five years are going to be disorienting in ways most people aren’t prepared for. This is already happening in my world. It’s coming to yours.

    I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.

    And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it’s too late to get ahead of it.

    We’re past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn’t knocked on your door yet.

    It’s about to.

    ——

    Related video: 

  • How AI Will Shape Higher Education in 2026, According to Experts


    IBL News | New York

    What will happen in Higher Ed in 2026 regarding AI? Inside Higher Ed interviewed a handful of experts to answer this question.

    Experts believe the future will depend largely on what happens to the AI bubble.

    “If the bubble pops, we might see internal demands for AI slow, from faculty members to career services staff and governing boards,” said Bryan Alexander, a higher education scholar, futurist, and author. Also, any significant adverse developments or disasters could similarly reduce academic appetite for AI, he pointed out.

    • “Curricular implementations will consider offering additional campus-wide AI literacy programs, such as Ohio State University’s AI Fluency initiative.”

    • “Research into AI will continue, starting with the computer science field, but also in disciplines such as economics, political science, new media studies, and psychology, as each school applies its distinct intellectual methods to the topic.”

    Lindsay Wayt, senior director of business intelligence for the National Association of College and University Business Officers, said, “Institutions will continue to work to scale AI strategies and uses to the enterprise level.”

    • “The pace of change is the biggest challenge confronting colleges and universities when it comes to fully leveraging AI.”

    “Leaders continue to make sure AI use supports the institutional mission, priorities, and students, looking for effective ways to measure and communicate the return on investment in AI tools and resources.”

    Rebecca M. Quintana, clinical associate professor at the Marsal Family School of Education at the University of Michigan, said, “Higher ed should be prepared for growing AI disillusionment, as we grapple with the costs associated with AI use, including environmental and societal impacts.”

    • “Today’s AI-powered tools are still relatively underdeveloped and are likely to change rapidly in the months and years ahead.”

    • “Faculty, students, and administrators should also be prepared for a growing resistance to AI use within higher education contexts.”

    • “Faculty may be observing that students are using AI in ways that do not support their learning and growth. Students are also sensing that extended use of AI does not align with their personal educational goals and ethical stances.”

    Mark McCormack, senior director of research and insights at Educause, noted, “In 2026, it will require leaders who can educate and train users in the safe, effective adoption of these tools, while also partnering closely with academic and programmatic leaders to ensure students gain the skills they need for their educational journeys and future careers.”

    • “Faculty will remain on the front lines of AI adoption, navigating their own use while also guiding and supporting students’ use of these tools.”

    • “Beyond the classroom, AI has the potential to drive administrative efficiency and more sophisticated decision-making.”

    • “Across all these institutional contexts, our technology teams will have to remain connected—present and responsive, providing guidance, listening to concerns, and building trust through sustained, human-centered support.”

    Joe Abraham, CEO of Intellicampus, explained, “Institutions will work to end system fragmentation and use AI to boost efficiency and automation across departments, platforms, and offices.”

    • “In 2026, higher education institutions will increasingly prioritize ending the fragmentation of systems that were never designed to work together.”

    • “Advising platforms, enrollment tools, financial aid, billing, and LMS data often operate in isolation, creating complexity, cost, and blind spots.”

    • “Institutions will need to find ways to unify data, workflows, and insights without replacing existing systems. Specifically, exploring agentic orchestration and workflow automation to enhance speed, coordination, and accuracy without adding new tools for staff to learn or manage.”

    “This will ensure institutionwide impact: stronger student and faculty experiences, simpler operations, and measurable outcomes that demonstrate the value of connected, intelligent systems.”

  • OpenAI Releases a List of the 15 Impactful GPT Templates in Education


    IBL News | New York

    OpenAI issued fifteen impactful GPT templates — specialized versions of ChatGPT — for faculty, students, and staff.

    These interactive tutors can be deployed across ChatGPT Edu campuses and OpenAI-hosted GPTs.

    They are AI assistants for building lecture slides, auto-generating student feedback, summarizing complex datasets, and instantly answering HR questions.

    Each one comes with detailed, ready-to-use instructions that are available for personalization, allowing users to upload relevant files and select the desired tools. Each template is plug-and-play.

     

    Top 5 GPTs for Faculty

    1. Class Companion – Acts as a 24/7 course assistant, turning course files into an interactive tutor that provides explanations, examples, and guided practice based solely on approved materials.

    2. Quiz & Exam Creator – Designs quizzes and exam questions in multiple formats, ready to use and tailored to learning objectives.

    3. Lesson Planner – Builds structured lesson plans and teaching materials in minutes, aligned to curriculum goals.

    4. Research Simplified – Breaks down academic papers and highlights key insights for faster review.

    5. Feedback Helper – Drafts constructive, personalized comments on student work to speed up grading.

     

    Top 5 GPTs for Students

    1. Personal Tutor – Explains complex concepts step by step and provides guided practice.

    2. Smart Quiz Partner – Creates unlimited practice quizzes that adapt to skill level and topic mastery.

    3. Career Coach – Crafts strong resumes, cover letters, and tailored interview prep strategies for target companies.

    4. Code Helper – Reviews, debugs, and explains code with clear examples and solutions.

    5. Writing Coach – Guides brainstorming, outlining, and refining essays or projects with actionable feedback.

     

    Top 5 GPTs for Staff and Administrators

    1. HR & Policy Assistant – Provides instant answers to HR questions and campus policies.

    2. Tech Support Bot – Troubleshoots IT issues and checks accessibility of digital content.

    3. Prompt Coach – Helps staff craft better prompts to save time on daily tasks.

    4. Data Reporter – Turns raw data into concise summaries and simple charts.

    5. Email Assistant – Drafts polished emails and announcements that match institutional tone.

    Inside each template, these are the fields to be completed:

    • Purpose & Impact – Why the GPT exists and the value it delivers.
    • Who Uses It – Intended audience.
    • Build Checklist – Name, description, conversation starters, knowledge files, and tool toggles.
    • Core Instructions (System Prompt) – Step-by-step guidance for the GPT’s behavior.
    • Safety & Guardrails – How to maintain compliance, academic integrity, and privacy.
    • Starter User Prompts – Examples to help users get started quickly.
    • Metrics – Suggestions for measuring success.
    • Maintenance – Tips for keeping the GPT relevant.
    • Extensions – Optional upgrades and advanced uses.

    An example is Class Companion GPT:

    Purpose & Impact: Acts as a 24/7 course assistant, turning course files into an interactive tutor that provides explanations, examples, and guided practice based solely on approved materials. Increases engagement, supports independent learning, and reinforces concepts outside class time.

    Who Uses It: Faculty (builders), students (users), TAs (moderators).

    Build Checklist
    1. Name: [Insert Course Code] Class Companion for [Insert Course Name].
    2. Description: “Interactive tutor for [Insert Course Name] that explains concepts, gives step-by-step reasoning, and asks reflective follow-up questions—using only the uploaded course materials.”
    3. Conversation Starters:
    • “Explain today’s lecture topic in simple terms for [Insert Course Name].”
    • “Summarize key points from Week [Insert Number].”
    • “Create a practice problem on [Insert Topic].”
    4. Knowledge: Upload syllabus, slides (PDF), lecture notes, readings, lab manuals—no answer keys.
    5. Toggles:
    • Browsing: OFF
    • Code Interpreter: ON if quantitative
    • Image Generation: Optional
    • File Uploads: ON
    • Conversation History: ON

    Core Instructions (System Prompt)

    You are the Class Companion for [Insert Course Name] ([Insert Course Code]). Use only the uploaded course files.
    1. Clarify the student’s goal before answering.
    2. Provide layered explanations: (a) Overview, (b) Key Steps, (c) Worked Example.
    3. After each answer, offer one: (a) Comprehension Check, (b) Reflection Question, or (c) Practice Problem.
    4. Cite the source (file name + page/section). If unsure, acknowledge uncertainty and request the relevant file.
    5. If asked to solve active graded work, refuse and switch to hints or a study pathway.
    6. For quantitative questions, show concise reasoning, then the final answer.
    Tone: Encouraging, concise, academically rigorous.
    Safety & Guardrails
    • No external sources unless browsing is explicitly enabled.
    • Never provide solutions to active graded assessments.

    Starter User Prompts
    • “Explain how [Insert Concept] works using our Week [Insert Number] slides.”
    • “Give me a mini-quiz on [Insert Topic].”
    • “Walk me through the lab setup from the [Insert File Name] PDF.”

    Metrics
    • Weekly active students
    • % of answers with proper citations
    • Student satisfaction (1–5)

    Maintenance
    • Upload new PDFs weekly
    • Bi-weekly quality spot checks

    Extensions
    • Create a separate browsing-enabled variant for “real-world context” connections.

  • An Anthropic and Northeastern Study Shows How Faculty Use AI to Automate Tasks


    Mikel Amigot, IBL News | New York

    Anthropic released new research, in collaboration with Northeastern University, this week on how educators utilize AI. The company analyzed 74,000 anonymized conversations from higher education professionals on Claude.ai in May and June of this year.

    The findings reveal how AI adoption is expanding and driving a pedagogical shift, as educators use these tools to create tangible educational resources.

    A recent Gallup survey noted that AI tools save teachers an average of 5.9 hours per week.

    • The report found that educators use AI for designing lessons and developing course materials, writing grant proposals, advising students, and managing administrative tasks such as admissions and financial planning.

    • The most prominent use is curriculum development, followed by conducting academic research and assessing student performance as the second and third most common uses. Educators find AI useful for providing students with individualized, interactive learning experiences that go beyond what one instructor could offer.


    Grading

    However, AI for grading and evaluation is less frequently used, as it is perceived as the least effective, and it remains an ethically contentious issue. “Students are not paying tuition for the LLM’s time; they’re paying for my time. It’s my moral obligation to do a good job (with the assistance, perhaps, of LLMs),” said an instructor.

    Even so, some educators use AI heavily to automate assessment tasks, which emerge as the second most automation-heavy category.

    This includes subtasks such as providing feedback on student assignments and grading their work using rubrics.


    Claude Artifacts

    Faculty are using Claude Artifacts to create interactive and engaging educational materials for student development, such as chemistry simulations, data visualization dashboards, grading rubrics, podcasts, and videos.

    The report shows the following creations:

    > Data visualization: interactive displays to help students visualize everything from historical timelines to scientific concepts

    > Assessment and evaluation tools: HTML-based quizzes with automatic feedback systems, CSV data processors for analyzing student performance, and comprehensive grading rubrics

    > Subject-specific learning tools: specialized resources like chemistry stoichiometry games, genetics quizzes with automatic feedback, and computational physics models

    > Interactive educational games: web-based games, including escape rooms, platform games, and simulations that teach concepts through gamification across various subjects and levels

    > Academic calendars and scheduling tools: interactive calendars that can be automatically populated, downloaded as images, or exported as PDFs for displaying class periods, exam times, professional development sessions, and institutional events

    > Budget planning and analysis tools: budget documents for educational institutions with specific expense categories, cost allocations, and budgetary management tools

    > Academic documents: meeting minutes, emails for grade-related communications and academic integrity issues, recommendation letters for faculty awards, tenure appeals, grant applications, interview invitations, and committee appointments


    Other Uses

    Other interesting uses discovered in the Claude.ai data include:

    • Create mock legal scenarios for educational simulations;
    • Develop vocational education and workforce training content;
    • Draft recommendation letters for academic or professional applications;
    • Create meeting agendas and related administrative documents.


    Trends

    Claude.ai data point to tasks that AI will increasingly augment, such as creating educational and practice materials, writing grant proposals to secure external funding, academic advising and student organization mentorship, and supervising student academic work.

    In addition, educators will likely delegate tasks to AI, including managing institutional finances and fundraising, maintaining student records, evaluating academic performance, advising on doctoral-level academic research, and managing academic admissions and enrollment.

    Many AI interactions require significant context, and thus collaboration between the AI and the professor.

    Many educators recognize that AI is putting pressure on them to change what they teach, how they teach it, and how they conduct assessments.

    In coding, for example, according to one professor, “AI-based coding has completely revolutionized the analytics teaching/learning experience. Instead of debugging commas and semicolons, we can spend our time talking about the concepts around the application of analytics in business.”

    In one particular Northeastern professor’s case, they shared that they “will never again assign a traditional research paper” after struggling with too many students submitting AI-written assignments. Instead, they shared: “I will redesign the assignment so it can’t be done with AI next time.”

     

    Campus Technology: Top 3 Faculty Uses of Gen AI

    • Developing curricula (57%). Common requests included designing educational games, creating interactive tools, and creating multiple-choice assessment questions.
    • Conducting academic research (13%). Common requests included supporting bibliometric analysis and academic database operations, implementing and interpreting statistical models, and revising academic papers in response to reviewer feedback.
    • Assessing student performance (7%). Common requests included providing detailed feedback on student assignments, evaluating academic work against assessment criteria, and summarizing student evaluation reports.
  • Instructure Launched ‘Canvas Career’, a Platform for Non-Credit, Continuing Education and Workforce Development Programs


    IBL News | New York

    Instructure announced last month the beta launch, for select customers, of Canvas Career, its workforce-aligned, employee-centric, skills-first LMS. General availability of the platform is expected in January 2026.

    This platform is oriented toward upskilling and reskilling adult learners, helping them build in-demand skills, advance in their careers, and stay competitive in a rapidly changing job market.

    A recent survey conducted by The Harris Poll and commissioned by Instructure found that 73% of U.S. workers feel unprepared to adapt to changes or disruptions in their careers over the next five years.

    Additionally, about 50% expressed uncertainty about which skills, certifications, or credentials employers value.

    Canvas Career is explicitly built for non-credit, continuing education, career switchers, and training for internal workforces and external customers, including short courses and skills-based learning programs.

    With built-in AI tools, credentialing, video content, and enterprise integrations, Canvas Career focuses on what to teach and how to deliver it effectively.

    The platform’s predecessor was Bridge, which Instructure eventually sold.

  • D2L Enhances Its AI Toolset on Tutor, Support, Insights, and Feedback

    D2L Enhances Its AI Toolset on Tutor, Support, Insights, and Feedback

    IBL News | New York

    D2L announced last month new enhancements to Lumi, its AI solution designed to provide learners with personalized support.

    Many of those Lumi tools, offered through a partnership with LearnWise, will be available soon.

    • Lumi Tutor: A chat assistant, integrated into course content, that helps learners with due dates, study plans, quizzes, instant practice, flashcards, and roleplay.
    • Study Support: Provides learners with customized feedback and study recommendations based on their quiz performance.
    • Lumi Insights: Shows educators students’ quiz performance alongside adaptive recommendations, helping identify where students struggle by highlighting what is and isn’t working, such as problematic quiz questions.
    • Lumi Feedback: Lets instructors automate grading by generating text and rubric feedback based on their own notes.

    These Lumi modules are available separately as add-ons, at additional costs that typically run around a third of the LMS’s base price.

    John Baker, Founder, President, and CEO of Canadian LMS maker D2L, said, “By putting humans in the driver’s seat, we’re designing and harnessing AI-native capabilities in our learning platform.”

    D2L also introduced enhancements to D2L Link, with automated workflows and improved data accuracy, to help institutions create a more connected learning ecosystem, unlocking a more holistic view of learner progress.

    As part of the core product, D2L unveiled Createspace, described as the future of authoring and sharing; its first components are available now. Instructors can create, version, reuse, template, and share content in a dedicated tool rather than authoring it directly within a course.

    Finally, D2L announced a much stronger push into the corporate market, where it today serves 480 corporate clients.

    Glenda Morgan: D2L Fusion Conference Notes 2025
    D2L Roadmap

  • Blackboard LMS Adds a New Set of AI Capabilities Within its ‘Anthology Virtual Assistant (AVA)’

    Blackboard LMS Adds a New Set of AI Capabilities Within its ‘Anthology Virtual Assistant (AVA)’

    IBL News | New York

    Anthology, maker of Blackboard LMS, announced last month a new set of AI capabilities within its Anthology Virtual Assistant (AVA), complementing the existing AI Design Assistant to accelerate content creation.

    • AVA Automations: Instructors can set performance or time-based rules to automatically send personalized messages and nudges to keep students engaged and on track, such as celebrating a high grade or reminding them to log in. These messages are instructor-written, fully customizable, and logged for complete transparency.
    • AVA Responses: Instant, AI-generated answers to common student questions, such as those about deadlines or grading criteria, based on course content and the syllabus. Instructors can review and confirm these answers as needed.
    • AVA Feedback Assistant: Helps instructors deliver high-quality, student-friendly feedback in less time, via two features:
    • Summarize Feedback: Auto-generates a clear summary based on rubric selections and grading criteria.
    • Rewrite Feedback: Turns informal notes or fragments into polished, constructive messages.

    These two features enable instructors to save time on grading tasks while still providing clear, personalized feedback to students.

    Other new features in Blackboard include the AI Badge Creator and Outcomes, which enable the measurement, management, and showcasing of student learning.

    > AI Product Video Demos
    > Phil Hill: Anthology Together Conference Notes 2025 


  • Canvas LMS Adds to Its Platform an Agentic Solution, ‘IgniteAI’

    Canvas LMS Adds to Its Platform an Agentic Solution, ‘IgniteAI’

    IBL News | New York

    Instructure, the maker of Canvas LMS, announced last month the launch of its native AI solution called IgniteAI.

    Powered by AWS Bedrock, IgniteAI is embedded within Canvas and Mastery to conduct tasks such as creating quizzes, generating rubrics, summarizing discussions, and aligning content to outcomes.

    It also leverages the Model Context Protocol (MCP) standard and extends the LTI framework, allowing Canvas LMS’s ecosystem of 1,100 edtech partners and LLM providers such as Anthropic and OpenAI to integrate their agentic AI solutions.
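    To make the MCP integration concrete: the standard defines a JSON-RPC 2.0 message exchange in which an external agent invokes server-side tools via a `tools/call` request. The sketch below illustrates the shape of that exchange with a hypothetical LMS tool named `create_quiz`; the tool name, its arguments, and the dispatch logic are illustrative assumptions, not Instructure’s actual API.

    ```python
    # Hedged sketch of an MCP-style "tools/call" exchange (JSON-RPC 2.0).
    # The tool "create_quiz" and its arguments are hypothetical examples
    # of the kind of LMS task an external agent might invoke.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "create_quiz",
            "arguments": {"title": "Week 3 Review", "num_questions": 5},
        },
    }

    def handle(req: dict) -> dict:
        # A real MCP server dispatches on the method and tool name;
        # here we handle only our single illustrative tool.
        args = req["params"]["arguments"]
        text = f"Created quiz '{args['title']}' with {args['num_questions']} questions"
        # MCP tool results are returned as a list of content items.
        return {
            "jsonrpc": "2.0",
            "id": req["id"],
            "result": {"content": [{"type": "text", "text": text}]},
        }

    response = handle(request)
    ```

    In a real deployment, this exchange would travel over an MCP transport (stdio or HTTP), with the LMS exposing its tools through a server that agents discover via `tools/list`.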

    IgniteAI emphasizes data protection compliance with COPPA, FERPA, and GDPR. In terms of accessibility, Canvas LMS and other Instructure products are working toward WCAG 2.2 AA compliance, as documented in a recent Voluntary Product Accessibility Template (VPAT).

    In addition, Instructure announced several updates to its Canvas product suite, featuring redesigned dashboards and modules, an improved mobile app, specialized STEM items, enhanced proctoring capabilities in New Quizzes, and new student portfolios that showcase diverse learning progress.