Author: IBL News

  • CEO of San Francisco tech company apologizes after AI chatbot goes rogue

The CEO of San Francisco tech company Replit apologized after the company's AI deleted a production database — and then lied about it.

    Source: Youtube

  • Why you should care about AI interpretability

    The goal of mechanistic interpretability is to reverse engineer neural networks. Having direct, programmable access to the internal neurons of models unlocks new ways for developers and users to interact with AI — from more precise steering to guardrails to novel user interfaces.
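The "steering" mentioned above can be sketched in miniature: once interpretability work identifies a direction in a model's activation space associated with a concept, a developer can nudge the hidden state along that direction at inference time. A minimal, hypothetical illustration (the function name and toy vectors are ours, not from the video):

```python
import numpy as np

def steer(hidden, direction, alpha=2.0):
    """Nudge a hidden-state vector along a normalized 'concept' direction.

    hidden:    a model activation at some layer (1-D array)
    direction: a vector that interpretability work associates with a concept
    alpha:     steering strength (a negative value suppresses the concept)
    """
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit

# Toy example: a 4-dimensional activation pushed along the first axis.
h = np.array([0.5, -1.0, 0.0, 2.0])
concept = np.array([1.0, 0.0, 0.0, 0.0])
steered = steer(h, concept, alpha=2.0)  # first component rises by 2.0
```

Real steering applies this kind of shift inside a network's forward pass (for example via a layer hook), but the arithmetic is the same.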

    Source: Youtube

  • Alibaba Cloud founder expects big AI shakeup after OpenAI hype

    Source: Youtube

  • Columbia University’s Agreement with The White House Sets a Precedent For Other Colleges

    IBL News | New York

    The Trump administration’s deal with Columbia University in New York City has put leaders at Ivy League universities and other college campuses nationwide in a tough spot. Institutions are facing the possibility of seeing research funding paused.

    President Donald Trump has made it clear he won’t tolerate a liberal imposition at America’s most prestigious colleges and intends to reshape them accordingly.

    On July 23, Columbia University agreed to pay fines of over $220 million and signed on to a list of other concessions related to admissions, academics, and hiring practices.

    The White House, which has halted billions in research grants to several schools, said it envisions the Columbia deal as the first of many such agreements.

    Education Secretary Linda McMahon called it a blueprint for other institutions to follow.

    “Columbia’s reforms are a roadmap for elite universities that wish to regain the confidence of the American public,” McMahon said in a statement.

    In addition to Columbia University, other Ivy League schools are striking deals with the Trump administration.

    On July 1, the University of Pennsylvania entered into an agreement ending a civil rights investigation brought by the U.S. Department of Education.

    In February, the agency accused Penn of violating Title IX, the primary sex discrimination law governing schools, when it allowed Lia Thomas, a transgender swimmer, to compete in 2022.

    As part of the deal, the White House said it would restore Penn’s research funding. In return, the university apologized to cisgender athletes who swam against Thomas. The university also agreed to ban transgender women from sports.

    This month, President Trump hinted he believes Harvard University may still be open to coming to a deal.

    At Cornell, the government paused more than $1 billion in funding. At Brown, it froze $510 million, and at Princeton, more than $210 million.

    Of the eight Ivy League schools, only two – Dartmouth College and Yale University – have avoided targeted federal funding freezes.

  • AI news videos blur line between real and fake reports

    Hyper-realistic AI-generated news videos are flooding social media, making it harder to tell real reports from fakes. Experts warn the technology is advancing so quickly that misinformation can spread before it’s verified, raising new concerns about trust in what we see online.

    Source: Youtube

  • The level of investment in artificial intelligence among the big tech companies

    Jason Thomas, head of global research and investment strategy at Carlyle, examines the level of investment in artificial intelligence among the big tech companies and the significance of that spending on equity and bond markets and the overall US economy.

    Source: Youtube

  • How AI is transforming Arabic language learning

    Rasha takes us inside a classroom of the future—where artificial intelligence isn’t replacing teachers, but reimagining how Arabic is taught and learned.

    Source: Youtube

  • OpenAI Embeds Its Tool Into Canvas LMS, Allowing Instructors to Create Assignments With AI

    IBL News | New York

    OpenAI announced this week a partnership with Instructure’s Canvas LMS, under Instructure’s IgniteAI program, that allows teachers to create AI-powered assignments and other instructional activities.

    Meanwhile, students can engage with the AI assistant, and as they interact, learning evidence is captured and returned to the Gradebook.

    Steve Daly, CEO of Instructure, said, “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education.”

    The first tool integrated into Canvas LMS is a new type of assignment called the LLM-Enabled Assignment, which allows teachers to define, through text prompts, how AI interacts with students, set specific learning goals and objectives, and determine what evidence of learning it should track.

    Through this tool, students submit their assignments and generate visible learning evidence that teachers can use, since it is mapped to the learning objectives, rubrics, and skills.
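The description above suggests an LLM-Enabled Assignment is essentially a structured definition: a teacher-written behavior prompt, a set of learning goals, and a list of evidence to capture. As a purely hypothetical sketch (these field names are invented for illustration; Instructure has not published a schema), such a definition might look like:

```python
# Hypothetical shape of an LLM-Enabled Assignment definition.
# Field names are illustrative only, not Canvas's real schema.
llm_assignment = {
    # How the AI should interact with students (teacher-written prompt)
    "interaction_prompt": (
        "Tutor the student through supply-and-demand scenarios; "
        "ask guiding questions instead of giving final answers."
    ),
    # Learning goals the conversation should target
    "learning_objectives": [
        "Explain how price responds to a demand shift",
        "Interpret a supply curve from data",
    ],
    # What evidence of learning to capture and return to the Gradebook
    "evidence_to_track": ["objective_coverage", "reasoning_steps"],
}
```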

    Shiren Vijiasingam, Chief Product Officer at Instructure, said that “teachers will gain a high-level view of overall progress, key learning indicators, and potential gaps, each supported by clear evidence. They can then dive into specific indicators to see exactly where and how a student demonstrated the required understanding in the conversation.”

    “What’s powerful about this tool is that it enables educators to assess the student’s learning process — not just the final outcome,” said Vijiasingam. “This is only the first in a set of tools we will develop with OpenAI over the coming quarters.”

    Instructure announces the launch of the IgniteAI agent at InstructureCon 25.

    r/Professors: I watched Instructure’s Canvas AI demo last week. I have thoughts (Reddit, July 31, 2025)

    “I’ve seen this topic discussed a few times now in relation to Instructure’s recent press release about partnering with OpenAI on a new integration. I attended the InstructureCon conference last week, where among other things Instructure gave a tech demo of this integration to a crowd of about 2,500 people. I don’t think they’ve released video of this demo publicly yet, but it’s not like they made us sign an NDA or anything, so I figured I’d write up my notes. I’m recreating this based on hastily-written notes, so they may not be perfectly accurate recreations of what we were shown.

    During the demonstrations they made it clear that these were very much still in development, were not finished products, and were likely to change before being released. It was also a carefully controlled, partially pre-programmed tech demo. They did disclose which parts were happening live and which parts were pre-recorded or simulated.

    In the tech demo they showed off three major examples.

    1. Course Admin Assistant. This demo had a chat interface similar to every LLM, but its function was specifically limited to Canvas functions. The example they showed was typing in a prompt like, “Emily Smith has an accommodation for a two-day extension on all assignments, please adjust her access accordingly,” and the AI was able to understand the request, access the “Assign To” function of every assignment in the class, and give Emily extended access.

    In the demo it never took any action without explicitly asking the instructor to approve the action. So it gave a summary of what it proposed to do, something like “I see twenty-five published assignments in this class that have end dates. Would you like me to give Emily separate ‘Assign To’ Until Dates with two extra days of access in each of these assignments?” It’s not clear what other functions the AI would have access to in a Canvas course, but I liked the workflow, and I liked that it kept the instructor in the loop at every stage of the process.

    The old “AI Sandwich” principle: every interaction with an AI tool should begin with a human and end with a human. I also liked that it was not engaging with student intellectual property at any point in this process; it was targeted solely at course administration settings.

    My analysis: I think this feature could be genuinely cool and useful, and a great use case for AI agents in Canvas. Streamline the administrative busywork so that the instructor can spend more time on instruction and feedback. Interesting. Promising. Want to see more.

    2. AI Assignment Assistant. Another function was a little more iffy, and again a tightly controlled demo that didn’t provide many details. The demo tech guy created a new blank Assignment in Canvas and opened an AI assistant interface within that assignment. He prompted it with something like, “here is a PDF document of my lesson. turn it into an assignment that focuses on the Analysis level of Bloom’s Taxonomy,” and then he uploaded his document.

    We were not shown what the contents of the document looked like, so this is very vague, but it generated what looked like a competent-enough analysis paper assignment. One thing that I did like about this is that whenever the AI assistant generates any student-facing content, it surrounds it with a purple box that denotes AI-generated content, and that purple box doesn’t go away unless and until the instructor actually interacts with that content and modifies or approves it. So AI Sandwich again, you can’t just give it a prompt and walk away.

    The demo also showed the user asking for a grading rubric for the assignment, which the AI also populated directly into the Rubric tool, and again every level, criteria, etc. was highlighted in purple until the user interacted with that item.

    My analysis: This MIGHT be useful in some circumstances, with the right guardrails. Plenty of instructors are already doing things like this anyway, in LLMs that have little to no privacy or intellectual property protections, so this could be better, or at least less harmful. But there’s a very big, very scary devil in the details here, and we don’t have any details yet. My unanswered questions about this part surround data and IP. What was the AI trained on in order to be able to analyze and take action on a lesson document? What did it do with that document as it created an assignment? Did that document then become part of its training data, or not? All unknown at this point.

    3. AI Conversation Assignment. They showed the user creating an “AI Conversation” assignment, in which the instructor set up a prompt, something like “You are to take on the role of the famous 20th century economist John Keynes, and have a conversation with the student about Supply and Demand.” Presumably you could give it a LOT of specific guidance on how the AI is to guide and respond to the conversation, but they didn’t show much detail.

    Then they showed a sequence of a student interacting with the AI Keynes inside of an LLM chat interface within a Canvas assignment. It showed the student trying to just game the AI and ask for the answer to the fundamental question, and the AI responded that the goal was learning, not getting the answer, or something like that. Of course, there’s nothing here that would stop a student from just copying and pasting the Canvas AI conversation into a different AI tool, and pasting the response back into Canvas. Then it’s just AI talking to AI, and nothing worthwhile is being accomplished.

    Then the part that I disliked the most was that it showed the instructor SpeedGrader view of this Conversation assignment, which showed a weird speedometer interface showing “how engaged” the student was in the conversation. It did allow the instructor to view the entire conversation transcript, but that was hidden underneath another button. Grossest of all, it gave the instructor the option of asking for the AI’s suggested grade and written feedback for the assignment. Again, the AI output was purple and awaited instructor refinement, but… gross.

    My analysis: This example, I think, was pure fluff and hype. The worst impulses of AI boosterism. It wasn’t doing anything that you can’t already do in Copilot or ChatGPT with a sufficient starting prompt. It paid lip service to academic integrity but didn’t show any actual integrity guardrails. The amount of AI agency being used was gross. The faith it put in the AI’s ability to actually generate accurate information without oversight is negligent. I think there’s a good chance that this particular function is either going to never see the light of day, or is going to be VERY different after it goes through some refinement and feedback processes.”
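The propose-then-approve workflow from the first demo above (the agent summarizes the changes it intends to make, and nothing is applied without the instructor's explicit approval) can be sketched as follows. This is a purely hypothetical illustration; none of these names are Canvas's real API.

```python
# Hypothetical sketch of the propose-then-approve ("AI Sandwich") workflow.
# All function and field names are invented for illustration.

def propose_extensions(assignments, student, extra_days):
    """Agent step: plan due-date changes without applying anything."""
    return [
        {
            "assignment": a["name"],
            "action": f"extend due date for {student} by {extra_days} days",
        }
        for a in assignments
        if a.get("due_date") is not None  # skip undated assignments
    ]

def apply_if_approved(plan, approved):
    """Human step: changes happen only after explicit instructor approval."""
    if not approved:
        return []
    return [f"APPLIED: {p['action']} on {p['assignment']}" for p in plan]

# Usage: the agent proposes, the instructor reviews the summary, then approves.
assignments = [
    {"name": "Essay 1", "due_date": "2025-09-01"},
    {"name": "Ungraded reading", "due_date": None},
]
plan = propose_extensions(assignments, "Emily Smith", 2)
applied = apply_if_approved(plan, approved=True)
```

The point of the split is that the planning function is pure (it only returns a description of intended changes), so the human checkpoint sits between planning and execution by construction.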


  • Lipscomb University offers campus-wide AI access

    Lipscomb University has become one of the first private universities in the U.S. to launch campus-wide access to generative AI tools.

    Source: Youtube

  • Maryland store turns to AI to catch potential shoplifters

    Some local stores are turning to artificial intelligence to tackle the recent rise in shoplifting.

    Source: Youtube