Category: Top News

  • MIT Launches ‘Learn’, an AI-Enabled Non-Degree Platform with Free Courses and Resources


    IBL News | New York

    The Massachusetts Institute of Technology this month introduced MIT Learn, an AI-based platform featuring a personalized website with over 12,700 non-degree learning resources, most of which are available for free.

    “MIT Learn marks the beginning of an ambitious project that aims to redefine how MIT distributes knowledge and connects with learners worldwide,” explained the institution.

    Created by MIT Open Learning, this lifelong learning platform features introductory and advanced courses, upskilling and reskilling programs, and other resources, including videos, podcasts, “all for every stage of your learning journey,” according to the institution.

    Open Learning’s product offerings comprise OpenCourseWare, MITx, and MicroMasters programs.

    The AI-enabled assistant, called “AskTim”, helps learners find courses and resources aligned with their personal and professional goals. It also provides a summary of a course’s structure, topics, and expectations, enabling more informed decisions before enrollment.

    This AI agent can answer users’ questions about lectures, create flashcards of key concepts, and provide instant summaries.

    The tutor guides the learner through problem sets, leading them toward the next step without revealing the answers.

    The platform features sophisticated search, browsing, and discovery capabilities, complemented by the “AskTim” bot.

    However, the AI assistant has been introduced in a limited set of courses and modules as the MIT Open Learning team wants to gather insights and improve the learning experience before expanding more broadly.

    For example, in signature courses such as Molecular Biology: DNA Replication and Repair, Genetics: The Fundamentals, and Cell Biology: Transport and Signaling, learners can interact with an AI assistant by asking questions about a lecture, requesting flashcards of key concepts, and obtaining instant summaries.

    Dimitris Bertsimas, Vice Provost for Open Learning, explained, “MIT Learn elevates learning with personalized recommendations powered by AI, guiding each learner toward deeper understanding. It is a stepping stone toward a broader vision of making these opportunities even more accessible to global learners through one unified learning platform.”

    “MIT Learn is a whole new front door to the Institute,” added Christopher Capozzola, Senior Associate Dean for Open Learning. “It transforms how people engage with what we offer digitally.”

    Former Provost Cynthia Barnhart, under whose direction MIT Learn was developed in cooperation with Sloan Executive Education and Professional Education, stated that this project is “the latest step in a long tradition of the Institute providing innovative ways for learners to access knowledge.” “This AI-enabled platform delivers on the Institute’s commitment to help people launch into learning journeys that can unlock life-changing opportunities.”

    In 2001, MIT became the first higher education institution to provide educational resources free of charge to anyone in the world. Today, 24 years later, the institution advances its “mission to disseminate knowledge globally” with MIT Learn.

  • Columbia University’s Agreement with The White House Sets a Precedent For Other Colleges


    IBL News | New York

    The Trump administration’s deal with Columbia University in New York City has put leaders at Ivy League universities and other college campuses nationwide in a tough spot. Institutions are facing the possibility of seeing research funding paused.

    President Donald Trump has made it clear he won’t tolerate a liberal imposition at America’s most prestigious colleges and intends to reshape them accordingly.

    On July 23, Columbia University agreed to pay fines of over $220 million and signed on to a list of other concessions related to admissions, academics, and hiring practices.

    The White House, which has halted billions in research grants to several schools, said it envisions the Columbia deal as the first of many such agreements.

    Education Secretary Linda McMahon called it a blueprint for other institutions to follow.

    “Columbia’s reforms are a roadmap for elite universities that wish to regain the confidence of the American public,” McMahon said in a statement.

    In addition to Columbia University, other Ivy League schools are striking deals with the Trump administration.

    On July 1, the University of Pennsylvania entered into an agreement ending a civil rights investigation brought by the U.S. Department of Education.

    In February, the agency accused Penn of violating Title IX, the primary sex discrimination law governing schools, when it allowed Lia Thomas, a transgender swimmer, to compete in 2022.

    As part of the deal, the White House said it would restore Penn’s research funding. In return, the university apologized to cisgender athletes who swam against Thomas. The university also agreed to ban transgender women from sports.

    This month, President Trump hinted he believes Harvard University may still be open to coming to a deal.

    At Cornell, the government paused more than $1 billion in research funding. At Brown, it froze $510 million, and at Princeton, it stopped more than $210 million.

    Of the eight Ivy League schools, only two – Dartmouth College and Yale University – have avoided targeted federal funding freezes.

  • OpenAI Embeds Its Tool Into Canvas LMS, Allowing Instructors to Create Assignments With AI


    IBL News | New York

    OpenAI announced this week a partnership with Instructure’s Canvas LMS, under Instructure’s IgniteAI program, that allows teachers to create AI-powered assignments and other instructional activities.

    Meanwhile, students can engage with the AI assistant, and as they interact, learning evidence is captured and returned to the Gradebook.

    Steve Daly, CEO of Instructure, said, “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education.”

    The first tool integrated into Canvas LMS is a new type of assignment called the LLM-Enabled Assignment, which allows teachers to define, through text prompts, how AI interacts with students, set specific learning goals and objectives, and determine what evidence of learning it should track.

    Through this tool, students submit their assignments and create visible learning evidence that teachers can use, as it’s mapped to the learning objectives, rubrics, and skills.

    Shiren Vijiasingam, Chief Product Officer at Instructure, said that “teachers will gain a high-level view of overall progress, key learning indicators, and potential gaps, each supported by clear evidence.” “They can then dive into specific indicators to see exactly where and how a student demonstrated the required understanding in the conversation.”

    “What’s powerful about this tool is that it enables educators to assess the student’s learning process — not just the final outcome,” said Vijiasingam. “This is only the first in a set of tools we will develop with OpenAI over the coming quarters.”

    Instructure announced the launch of its IgniteAI agent at InstructureCon 25.

    r/Professors: I watched Instructure’s Canvas AI demo last week. I have thoughts (Reddit, July 31, 2025)

    “I’ve seen this topic discussed a few times now in relation to Instructure’s recent press release about partnering with OpenAI on a new integration. I attended the InstructureCon conference last week, where among other things Instructure gave a tech demo of this integration to a crowd of about 2,500 people. I don’t think they’ve released video of this demo publicly yet, but it’s not like they made us sign an NDA or anything, so I figured I’d write up my notes. I’m recreating this based on hastily-written notes, so they may not be perfectly accurate recreations of what we were shown.

    During the demonstrations they made it clear that these were very much still in development, were not finished products, and were likely to change before being released. It was also a carefully controlled, partially pre-programmed tech demo. They did disclose which parts were happening live and which parts were pre-recorded or simulated.

    In the tech demo they showed off three major examples.

    1. Course Admin Assistant. This demo had a chat interface similar to every LLM, but its function was specifically limited to canvas functions. The example they showed was typing in a prompt like, “Emily Smith has an accommodation for a two-day extension on all assignments, please adjust her access accordingly,” and the AI was able to understand the request, access the “Assign To” function of every assignment in the class, and give the Emily student extended access.

    In the demo it never took any action without explicitly asking the instructor to approve the action. So it gave a summary of what it proposed to do, something like “I see twenty-five published assignments in this class that have end dates. Would you like me to give Emily separate “Assign to” Until Dates with two extra days of access in each of these assignments?” It’s not clear what other functions the AI would have access to in a canvas course, but I liked the workflow, and I liked that it kept the instructor in the loop at every stage of the process.

    The old “AI Sandwich” principle: every interaction with an AI tool should begin with a human and end with a human. I also liked that it was not engaging with student intellectual property at any point in this process; it was targeted solely at course administration settings.

    My analysis: I think this feature could be genuinely cool and useful, and a great use case for AI agents in Canvas. Streamline the administrative busywork so that the instructor can spend more time on instruction and feedback. Interesting. Promising. Want to see more.

    2. AI Assignment Assistant. Another function was a little more iffy, and again a tightly controlled demo that didn’t provide many details. The demo tech guy created a new blank Assignment in Canvas, and opened an AI assistant interface within that assignment. He prompted it with something like, “here is a PDF document of my lesson. turn it into an assignment that focuses on the Analysis level of Bloom’s Taxonomy,” and then he uploaded his document.

    We were not shown what the contents of the document looked like, so this is very vague, but it generated what looked like a competent-enough analysis paper assignment. One thing that I did like about this is that whenever the AI assistant generates any student-facing content, it surrounds it with a purple box that denotes AI-generated content, and that purple box doesn’t go away unless and until the instructor actually interacts with that content and modifies or approves it. So AI Sandwich again, you can’t just give it a prompt and walk away.

    The demo also showed the user asking for a grading rubric for the assignment, which the AI also populated directly into the Rubric tool, and again every level, criteria, etc. was highlighted in purple until the user interacted with that item.

    My analysis: This MIGHT be useful in some circumstances, with the right guardrails. Plenty of instructors are already doing things like this anyway, in LLMs that have little to no privacy or intellectual property protections, so this could be better, or at least less harmful. But there’s a very big, very scary devil in the details here, and we don’t have any details yet. My unanswered questions about this part surround data and IP. What was the AI trained on in order to be able to analyze and take action on a lesson document? What did it do with that document as it created an assignment? Did that document then become part of its training data, or not? All unknown at this point.

    3. AI Conversation Assignment. They showed the user creating an “AI Conversation” assignment, in which the instructor set up a prompt, something like “You are to take on the role of the famous 20th century economist John Keynes, and have a conversation with the student about Supply and Demand.” Presumably you could give it a LOT of specific guidance on how the AI is to guide and respond to the conversation, but they didn’t show much detail.

    Then they showed a sequence of a student interacting with the AI Keynes inside of an LLM chat interface within a Canvas assignment. It showed the student trying to just game the AI and ask for the answer to the fundamental question, and the AI told it that the goal was learning, not getting the answer, or something like that. Of course, there’s nothing here that would stop a student from just copying and pasting the Canvas AI conversation into a different AI tool, and pasting the response back into Canvas. Then it’s just AI talking to AI, and nothing worthwhile is being accomplished.

    Then the part that I disliked the most was that it showed the instructor SpeedGrader view of this Conversation assignment, which showed a weird speedometer interface showing “how engaged” the student was in the conversation. It did allow the instructor to view the entire conversation transcript, but that was hidden underneath another button. Grossest of all, it gave the instructor the option of asking for the AI’s suggested grade and written feedback for the assignment. Again, AI output was purple and wanted instructor refinement, but… gross.

    My analysis: This example, I think, was pure fluff and hype. The worst impulses of AI boosterism. It wasn’t doing anything that you can’t already do in Copilot or ChatGPT with a sufficient starting prompt. It paid lip service to academic integrity but didn’t show any actual integrity guardrails. The amount of AI agency being used was gross. The faith it put in the AI’s ability to actually generate accurate information without oversight is negligent. I think there’s a good chance that this particular function is either going to never see the light of day, or is going to be VERY different after it goes through some refinement and feedback processes.”


  • The White House Unveiled an AI Action Plan Aiming to Boost Innovation in the U.S.


    IBL News | New York

    The Trump administration unveiled a 28-page AI Action Plan that outlines over 90 policy actions for rapidly developing AI technology, aiming to boost U.S. innovation while removing “bureaucratic red tape” and “ideological bias.”

    The White House has positioned the expansion of AI infrastructure and investments in the United States as a way to stay ahead of China.

    “We believe we’re in an AI race, and we want the United States to win that race,” said David Sacks, the Trump administration’s Crypto Czar.

    This AI plan promises to build data center infrastructure and promote American technology, but was panned by critics who consider it an ideological flex by the White House.

    The plan also calls for federal agencies to review and repeal policies that hinder AI development, and to encourage the use of AI in both the government and the private sector.

    President Donald Trump signed three related executive orders on Wednesday. One order promotes the international export of U.S.-developed AI technologies, while another aims to root out what the administration describes as “woke” or ideologically biased AI systems.

    “American development of AI systems must be free from ideological bias or engineered social agendas,” the White House said. “With the right government policies, the United States can solidify its position as the leader in AI and secure a brighter future for all Americans.”

    Crypto Czar Sacks added that the plan is partially focused on preventing AI technology from being “misused or stolen by malicious actors” and will “monitor for emerging and unforeseen risks from AI”.

    “AI is a revolutionary technology that’s going to have profound ramifications for both the economy and national security,” Sacks said. “It’s just very important that America continues to be the dominant power in AI.”

    Critics argued that the plan was a giveaway to Big Tech. “The White House AI Action plan was written by and for tech billionaires, and will not serve the interests of the broader public,” said Sarah Myers West, co-executive director of the AI Now Institute.

    In 2023, Trump’s predecessor, Joe Biden, signed an executive order that established safety and security standards governing the use of AI in the federal government—an order that Trump rescinded on the first day of his presidency in January.

    Days later, Trump signed an executive order calling for accelerated AI development and the removal of ideological bias, as well as for the drafting of this AI Action Plan, for which the administration sought public comment.

    Last month, Trump allowed technology giant Nvidia to resume sales of its high-end AI chips to China, reversing his administration’s prior ban on sales of Nvidia’s H20 chips to Beijing.

  • Google Makes Its Video Generation Model Veo 3 Available Via The Gemini API


    IBL News | New York

    Google made Veo 3, its high-resolution video and synchronized-audio generation model, available to developers via the Gemini API.

    For now, the API is limited to text-to-video, but image-to-video support—already live in the Gemini app—is on the way.

    To help developers get started, Google AI Studio offers an SDK template and a starter app for quick prototyping. Access requires an active Google Cloud project with billing enabled.

    Veo 3 is Google’s first model that can generate high-resolution video and synchronized audio from a single text prompt. It creates visuals, dialogue, music, and sound effects simultaneously.

    Veo 3 handles a range of video generation tasks, from cinematic narratives to dynamic character animations, and also incorporates audio elements such as dialogue, music, and sound effects. Additionally, the model can simulate real-world physics for motion.

    Google posted several Veo 3 examples in Google AI Studio.

    Priced at $0.75 per second of combined video and audio output, Veo 3 supports 720p, 24fps video in 16:9 format, up to 8 seconds long, making it one of the most expensive AI video options on the market.

    Videos generated by Veo 3 models include a digital SynthID watermark.
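
    The listed price and clip limits, and the API's asynchronous workflow, can be sketched in a few lines of Python. This is a rough sketch rather than official sample code: the `google-genai` SDK calls and the `veo-3.0-generate-preview` model id follow Google's documented pattern at the time of writing and should be treated as assumptions to verify against the current docs.

```python
# Sketch: generating a Veo 3 clip via the Gemini API and estimating its cost.
# Requires `pip install google-genai` and a Google Cloud project with billing.
import time


def estimate_cost(seconds: int, rate_per_second: float = 0.75) -> float:
    """Cost of a clip at the listed $0.75 per second of video-with-audio."""
    if not 1 <= seconds <= 8:
        raise ValueError("Veo 3 clips are capped at 8 seconds")
    return round(seconds * rate_per_second, 2)


def generate_clip(prompt: str, api_key: str, out_path: str = "clip.mp4") -> str:
    """Submit a text-to-video job and poll the long-running operation."""
    from google import genai  # imported here so the cost helper needs no SDK

    client = genai.Client(api_key=api_key)
    operation = client.models.generate_videos(
        model="veo-3.0-generate-preview",  # assumed model id; check current docs
        prompt=prompt,
    )
    while not operation.done:  # generation is asynchronous, so poll
        time.sleep(10)
        operation = client.operations.get(operation)
    video = operation.response.generated_videos[0]
    client.files.download(file=video.video)
    video.video.save(out_path)
    return out_path


if __name__ == "__main__":
    # A maximum-length 8-second clip at the listed rate.
    print(estimate_cost(8))
```

    Polling is needed because video generation runs as a long-running operation rather than a synchronous request.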

    Prompt: Fluffy Characters Stop Motion: Inside a brightly colored, cozy kitchen made of felt and yarn. Professor Nibbles, a plump, fluffy hamster with oversized glasses, nervously stirs a bubbling pot on a miniature stove, muttering, “Just a little more… ‘essence of savory,’ as the recipe calls for.” The camera is a mid-shot, capturing his frantic stirring. Suddenly, the pot emits a loud “POP!” followed by a comical “whoosh” sound, and a geyser of iridescent green slime erupts, covering the entire kitchen. Professor Nibbles shrieks, “Oh, dear! Not again!” and scurries away, leaving a trail of tiny, panicked squeaks.


    Prompt: The sequence begins with an extreme close-up of a single gear, slowly turning and reflecting harsh sunlight. The camera gradually pulls back in a continuous movement, revealing this is but one component of a colossal, mechanical heart half-buried in a desolate, rust-colored desert. A sweeping aerial shot establishes its enormous scale and isolation in the barren landscape. The camera descends to capture pipes hissing steam and the rhythmic thumping that echoes across the empty plains. A subtle shake effect synchronizes with each massive heartbeat. A lateral tracking shot discovers tiny, robed figures scurrying across the metallic surface. The camera follows one such figure in a detailed tracking shot as they perform meticulous maintenance, polishing brass valves and tightening immense bolts. A complex movement circles the entire structure, capturing different maintenance teams working in precarious positions across its rusted exterior. The final shot begins tight on the meticulous work of one tiny figure before executing a dramatic pull-out that reveals the true scale of the heart and the minuscule size of its caretakers, tending to the vital organ of an unseen, sleeping giant that extends beyond the frame.

  • Anthropic Introduced ‘Claude for Financial Services’, With a Tool That Unifies Data


    IBL News | New York

    Anthropic, the maker of the Claude.ai chatbot, this month introduced Claude for Financial Services, a solution for financial professionals to analyze markets, conduct research, and make investment decisions.

    The so-called Financial Analysis Solution unifies users’ financial data—from market feeds to internal data stored in platforms like Databricks and Snowflake—into a single interface. It allows access to critical data sources with direct hyperlinks to source materials.

    It also comes with pre-built MCP connectors to access financial data providers and enterprise platforms.

    Developers can build custom applications via the company’s API.
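
    As a rough illustration of building on the API, the sketch below assembles and sends a single research question through Anthropic's general-purpose Messages API. The model id, system prompt, and helper names are illustrative assumptions; they are not part of the Financial Services product or its MCP connectors.

```python
# Sketch: a minimal analyst query against Anthropic's Messages API.
# The model id and system prompt below are illustrative assumptions.


def build_request(question: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble a Messages API payload for a single analyst question."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": "You are a financial research assistant. Cite your sources.",
        "messages": [{"role": "user", "content": question}],
    }


def ask_claude(question: str) -> str:
    """Send the payload; requires `pip install anthropic` and ANTHROPIC_API_KEY."""
    from anthropic import Anthropic  # imported here to keep build_request pure

    client = Anthropic()
    response = client.messages.create(**build_request(question))
    return response.content[0].text
```

    Keeping payload assembly separate from the network call makes it easy to unit-test the request shape without an API key.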

    Anthropic said it includes the company’s Claude 4 models, Claude Code, and Claude for Enterprise with expanded usage limits, implementation support, and other features.

    “Claude provides the complete platform for financial AI—from immediate deployment to custom development,” Anthropic said in a release.

    According to Anthropic, these real-time financial data providers include:

    • Box enables secure document management and data room analysis.
    • Daloopa supplies high-quality fundamentals and KPIs from all public filings, disclosures and presentations.
    • FactSet provides comprehensive equity prices, fundamentals, and consensus estimates.
    • Morningstar contributes valuation data and research analytics.
    • Palantir builds AI-driven platforms that help governments and enterprises integrate, analyze, and act on large-scale data to make critical operational decisions.
    • PitchBook delivers industry-leading private capital market data and research, empowering users to source investment and fundraising opportunities, conduct due diligence and benchmark performance, faster and with greater confidence.
    • S&P Global enables access to Capital IQ Financials, earnings call transcripts, and more, essentially your entire research workflow.
    • Databricks offers unified analytics for big data and AI workloads.
    • Snowflake provides an easy, connected, and trusted data and AI platform that allows global enterprises to unlock value across all of their data – including structured, unstructured, and semi-structured.

    The solution is available on AWS Marketplace for streamlined procurement and consolidated billing, while Google Cloud Marketplace availability is coming soon.


    Mike Dion: The 3 Best Ways to Use Claude For Finance

  • Virtual YouTubers Get Millions of Views with AI-Powered Characters


    IBL News | New York

    Virtual YouTubers, or VTubers, are gaining traction with the advances in video generation AI tools, fueling a new wave of creators.

    An example is Bloo, a fully AI-powered personality who plays popular games like Grand Theft Auto, Roblox, and Minecraft, boasting 2.5 million subscribers, 700 million views, and over seven figures in revenue.

    Bloo resembles a figure from a Pixar film or the video game Fortnite.

    This virtual character was created by Jordi van den Bussche, a 29-year-old YouTuber from Amsterdam, also known as Kwebbelkop. He created Bloo after finding himself unable to keep up with the demands of content creation. “It’s all about good vibes and engaging content,” he says.

    Van den Bussche uses AI technology from ElevenLabs, OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude.

    Startup Hedra offers an AI tool that generates videos up to five minutes long for virtual characters that speak. It has raised $32 million in a funding round led by Andreessen Horowitz.

    Hedra’s Character-3 and Google’s Veo 3 tools are already being used in many viral faceless channels, such as comedian Jon Lajoie’s Talking Baby Podcast, which features a hyper-realistic animated baby talking into a microphone. Another is Milla Sofia, a virtual singer and artist whose AI-generated music videos attract thousands of views.

    A Spain-based creator named GoldenHand publishes up to 80 videos per day across his network of channels. His content is audio-driven storytelling, paired with AI-generated images and subtitles. Everything after the initial idea is created entirely by AI. He recently launched a new platform, TubeChef, which provides creators with access to his system to automatically generate faceless AI videos starting at $18 per month. Noah Morris is another creator, with 18 faceless YouTube channels.

    There is also an increased volume of low-effort, low-quality, randomly generated content created using AI, which is flooding platforms such as TikTok, YouTube, and Instagram. This type of material is often referred to as “AI slop.”


  • Cluely, the Startup Behind the “Cheat On Everything” AI Tool, Sees Its Revenue Skyrocket


    IBL News | New York

    Cluely, the controversial startup that provides AI tools to “cheat on everything,” from marketing to job interviews, reported that its revenue skyrocketed to $7 million in annual recurring revenue (ARR) in recent weeks, making the company profitable.

    It recently obtained funding from big-league VCs like Andreessen Horowitz, Abstract Ventures, and Susa Ventures, and toned down its marketing to “Everything You Need. Before You Ask, this feels like cheating.”

    It offers an AI tool that, after analyzing online conversations, delivers real-time notes and suggests questions to ask by discreetly displaying the information on the user’s screen, invisible to others.

    For weeks leading up to the product reveal, Cluely’s founder, Roy Lee, boasted that the company’s ARR exceeded $3 million and that the startup was profitable.

    Cluely itself was born of controversy after its founder posted a viral X thread saying Columbia University suspended him because he and a co-founder developed a tool to cheat on job interviews for software engineers.

    The enterprise version of the product emphasizes the usefulness of taking notes during online conversations. This real-time notetaker is similar to the consumer offering, but it comes with additional features, including team management and enhanced security settings.

    The feature has been copied by other companies, such as Glass, which offers an open-source, free product with very similar functionality to Cluely.

  • OpenAI Debuted ChatGPT Agent, a Tool that Combines ‘Operator’ and ‘Deep Research’


    IBL News | New York

    OpenAI debuted ChatGPT Agent yesterday, joining the hyped trend of agents: tools that autonomously complete multi-step tasks and workflows.

    This new agent, still in beta, follows users’ instructions and performs tasks on its own OpenAI virtual computer: intelligently navigating websites, filtering results, requesting secure logins when needed, running code, conducting analysis, and even delivering editable slideshows and spreadsheets that summarize its findings.

    Some examples of handled requests provided by OpenAI include:

    • “Look at my calendar and brief me on upcoming client meetings based on recent news.”
    • “Plan and buy ingredients to make a Japanese breakfast for four.”
    • “Analyze three competitors and create a slide deck.”

    At work, users can automate repetitive tasks, such as converting screenshots or dashboards into presentations composed of editable vector elements, rearranging meetings, planning and booking offsites, and updating spreadsheets with new financial data while maintaining the same formatting. In personal life, users can use it to plan and book travel itineraries, design and book entire dinner parties, or find specialists and schedule appointments.

    ChatGPT Agent is a unified agentic system that combines the strengths of three earlier breakthroughs: Operator’s ability to interact with websites, Deep Research’s skill in synthesizing information, and ChatGPT’s intelligence and conversational fluency.

    The agent can also leverage ChatGPT connectors, which enable it to integrate with apps like Gmail and GitHub, allowing ChatGPT to find information relevant to your prompts and utilize it in its responses.

    OpenAI’s Pro, Plus, and Team paid users can activate ChatGPT Agent directly through the tools dropdown by selecting ‘agent mode’. Enterprise and Education users will gain access in the coming weeks. Pro users receive 400 messages per month. In comparison, other paid users are limited to 40 messages monthly, with additional usage available via flexible credit-based options.

    The service isn’t available in the EU or Switzerland.

  • Google Introduced ‘Gemini CLI’, an Open-Source AI Agent for Developers


    IBL News | New York

    Google introduced an open-source AI agent for developers called Gemini CLI this month.

    It brings Gemini directly into the terminal for coding, problem-solving, deep research, video creation, and task management through prompts.

    Developers can access Gemini 2.5 Pro (and its massive 1 million token context window) free of charge with a personal Google account, or use a Google AI Studio or Vertex AI key for more access. The free tier allows 60 model requests per minute and 1,000 requests per day.
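
    The free-tier numbers above can be turned into a quick capacity check. The helper below is plain illustrative arithmetic based on the published limits, not part of Gemini CLI itself.

```python
# Illustrative helper: does a planned batch of prompts fit the free tier's
# published quotas (60 model requests per minute, 1,000 per day)?

FREE_TIER_PER_MINUTE = 60
FREE_TIER_PER_DAY = 1000


def fits_free_tier(requests_per_minute: int, total_requests: int) -> bool:
    """True if a planned workload stays within both free-tier quotas."""
    return (requests_per_minute <= FREE_TIER_PER_MINUTE
            and total_requests <= FREE_TIER_PER_DAY)


def minutes_to_drain_daily_quota(rate_per_minute: int = FREE_TIER_PER_MINUTE) -> float:
    """How long the daily quota lasts at a given request rate."""
    return FREE_TIER_PER_DAY / rate_per_minute
```

    At the full 60 requests per minute, the daily quota is exhausted in under 17 minutes, so sustained batch workloads will need a paid AI Studio or Vertex AI key.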

    Gemini CLI, now in preview, can be extended with Model Context Protocol (MCP) servers or bundled extensions.

    Because Gemini CLI is fully open source (Apache 2.0), developers can inspect the code to understand how it works and verify its security implications.

    Google encourages developers to contribute to this project by reporting bugs, suggesting features, continually improving security practices, and submitting code improvements.