Category: Top News

  • Colleges and Universities Redesign their Institutional Content to Instruct Students on AI Literacy

    IBL News | New York

    Colleges and universities are redesigning their institutional content to address the need for students to learn AI literacy, as machines may soon outperform humans in specific tasks.

    • The University of Pennsylvania offers bachelor’s and master’s engineering degrees in AI for careers in AI-powered industries.
    • The University of Florida is taking a more integrated approach by incorporating AI education into nearly every major. It also offers AI certificates in hospitality, geography, and public health.

    AI agents are expected to join the workforce in 2025. Educators are presenting AI as a collaborative partner in the workplace.

    In this new world, fostering critical thinking and resilience will be crucial to success.

    Learning about digital ethics, data privacy, and the societal impact of technology will equip students to make informed decisions about AI’s role in their lives and communities.

    Overall, students are encouraged to view education not as a finite process but as an infinite, lifelong learning journey of skill acquisition and adaptation.

    Higher Ed Dive: Why more colleges are embracing AI offerings

  • Hugging Face Researchers Launch a Community Project to Fully Open-Source DeepSeek’s R1

    IBL News | New York

    Hugging Face researchers, led by Leandro von Werra, launched Open-R1, a community project that seeks to build and fully open-source a replica of DeepSeek’s R1, since some of the tools behind the Chinese model have not been publicly released.

    Their goal is to replicate R1 in a few weeks, relying in part on Hugging Face’s Science Cluster, a dedicated research server with 768 Nvidia H100 GPUs.

    This team of Hugging Face engineers plans to solicit help from AI communities on Hugging Face and GitHub, where the Open-R1 project is being hosted, to build a training pipeline.
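    For a sense of what such a pipeline could involve, below is a minimal Python sketch assuming the GRPO-style reinforcement-learning API in Hugging Face’s TRL library, which Open-R1 reportedly builds on; the base model, dataset, and reward function are illustrative placeholders rather than the project’s actual recipe.

```python
# Sketch of an R1-style RL training run, assuming TRL's GRPOTrainer API
# (placeholders throughout; not Open-R1's actual recipe or hyperparameters).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Any prompt dataset works; "trl-lib/tldr" is just an illustrative choice.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy rule-based reward in the spirit of R1's format rewards:
# favor completions that close a reasoning block before answering.
def format_reward(completions, **kwargs):
    return [1.0 if "</think>" in completion else 0.0 for completion in completions]

training_args = GRPOConfig(output_dir="open-r1-sketch", logging_steps=10)

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # small stand-in base model
    reward_funcs=format_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```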

    Open-R1 comes a week after the not-so-open DeepSeek released its R1 reasoning AI model. The project intends to open-source all of R1’s components, including the data used to train the model.

    “The R1 model is impressive, but there’s no open dataset, experiment details, or intermediate models available, which makes replication and further research difficult,” Elie Bakouch, one of the Hugging Face engineers on the Open-R1 project, told TechCrunch. “Having control over the dataset and process is critical for deploying a model responsibly in sensitive areas, and it helps with understanding and addressing biases in the model.”

    “We need to make sure that we implement the algorithms and recipes correctly, and this is something a community effort is perfect at tackling, where you get as many eyes on the problem as possible,” said Von Werra.

    The Open-R1 project attracted 10,000 GitHub stars in just three days, signaling strong interest in the effort.

    “We’re really excited about the recent open source releases that are strengthening the role of openness in AI. It’s an important shift for the field that changes the narrative that only a handful of labs can progress and that open source is lagging.”

    Funded by a Chinese quantitative hedge fund, DeepSeek’s R1 matches — and even surpasses — the performance of OpenAI’s o1 reasoning model in several benchmarks.

    Being a reasoning model helps R1 avoid pitfalls that generally trip up models, because it effectively fact-checks itself, although it takes additional seconds to respond. The upside is that reasoning models tend to be more reliable in domains such as physics, science, and math.

    DeepSeek R1 Explained

  • Nvidia Lost $600B In Market Cap After DeepSeek Deployed a Low-Cost, High-Quality AI Model

    IBL News | New York

    The Chinese upstart DeepSeek sparked a freakout on Wall Street after proving that an advanced AI model can be deployed without the most advanced gear provided by Nvidia and others. As a result, investors suddenly started questioning the outlook for AI spending.

    On Monday, Nvidia lost $600 billion in market cap after tumbling 17%, while the tech-heavy Nasdaq Composite sank 3.4%. Microsoft and Google parent Alphabet both fell more than 2.5%. Bonds climbed as investors sought safety.

    Since DeepSeek released an open-source version of its reasoning model R1 at the beginning of last week, many in the tech industry have praised what the company achieved and what it means for AI.

    Venture capitalist Marc Andreessen, for example, posted that DeepSeek is “one of the most amazing and impressive breakthroughs I’ve ever seen.”

    R1 seemingly matches or beats OpenAI’s o1 model on certain AI benchmarks.

    The Chinese company claimed one of its models only cost $5.6 million to train, compared to the hundreds of millions of dollars that leading American companies pay to train theirs.

    DeepSeek’s iOS app is now #1 on the “Top Free Apps” chart in Apple’s App Store in the U.S., just ahead of ChatGPT.

  • Chinese DeepSeek Rolled Out an Open-Source Model that Rivals OpenAI’s o1

    IBL News | New York

    The Chinese AI startup DeepSeek rolled out an open-source reasoning model called DeepSeek-R1 on Monday and said that, despite a smaller development budget, it rivals OpenAI’s o1 and Google’s systems.

    Other Chinese firms have unveiled reasoning models in recent weeks, including Moonshot AI, Minimax, iFlyTek, and TikTok owner ByteDance with its Doubao-1.5-pro.

    These models claim to be capable of reasoning through complex tasks and solving challenging problems in science, coding, and math.

    They also offer better pricing than the American models. For example, DeepSeek-R1, which matches OpenAI’s most powerful available model, o1, costs $2.20 per million tokens.
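    At that rate, per-workload costs are easy to estimate. A back-of-the-envelope Python sketch using the quoted price and a made-up monthly token volume:

```python
# Back-of-the-envelope cost at the quoted rate of $2.20 per million tokens;
# the 50M-token monthly workload below is a made-up figure for illustration.
PRICE_PER_MILLION_TOKENS = 2.20  # USD, as quoted for DeepSeek-R1

def cost_usd(tokens: int, price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """Cost of processing `tokens` tokens at a flat per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

print(f"${cost_usd(50_000_000):,.2f} per month")  # -> $110.00 per month
```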

    Marc Andreessen, general partner at Andreessen Horowitz (a16z) venture capital firm, posted on X:

    Yann LeCun, the Chief AI Scientist for Meta’s Fundamental AI Research (FAIR) division, posted on his LinkedIn account:

    The DeepSeek engineers said they needed only about $6 million in raw computing power to build their new system.

    The world’s leading AI companies train their chatbots using supercomputers that use as many as 16,000 chips, if not more. DeepSeek’s engineers, on the other hand, said they needed only about 2,000 specialized computer chips from Nvidia.

    That is about a tenth of what the tech giant Meta spent building its latest AI technology.

    DeepSeek is run by a quantitative stock trading firm called High-Flyer. By 2021, the firm had channeled its profits into acquiring thousands of Nvidia chips, which it used to train its earlier models.

    The company has become known in China for scooping up talent fresh from top universities with the promise of high salaries and the ability to follow their own research questions.

    DeepSeek has open-sourced the underlying code of its latest AI model. It has also released a chat website and an API.
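    For developers, the hosted model is reachable through an OpenAI-compatible interface. A minimal Python sketch, assuming DeepSeek’s published base URL and the “deepseek-reasoner” model name for R1 (check the official API documentation before relying on either):

```python
# Minimal call to the hosted R1 model; base URL and model name are taken from
# DeepSeek's public docs but should be verified, as should current pricing.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # identifier for DeepSeek-R1
    messages=[{"role": "user", "content": "How many prime numbers are below 100?"}],
)
print(response.choices[0].message.content)
```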

     

    “The center of gravity of the open source community has been moving to China,” said Ion Stoica, a professor of computer science at the University of California, Berkeley. “This could be a huge danger for the U.S.” because it allows China to accelerate the development of new technologies.

    Last week, OpenAI CEO Sam Altman said the company had finalized a version of its new reasoning AI model, o3-mini, and would launch it in a couple of weeks.

  • Meta Will Spend $65 Billion on Data Centers, Ramping Up Hiring, and Building an AI Engineer

    IBL News | New York

    Meta’s CEO, Mark Zuckerberg, announced that the company would spend $65 billion this year to expand its AI data center infrastructure, ramp up hiring, and bolster its position against OpenAI and Google in the race to dominate the technology.

    The company also plans to build an AI engineer that will start writing its code.

    Meta is bringing one gigawatt of computing power online and building a $10 billion, four-million-square-foot data center in Louisiana, the latest of its 27 data centers worldwide.

    The company expects to end the year with more than 1.3 million graphics processing units, commonly known as GPUs.

    “This will be a defining year for AI,” Zuckerberg said in a Facebook post.

    Meta’s announcement comes just days after President Trump announced that OpenAI, SoftBank, and Oracle would form a venture called Stargate and invest $500 billion in AI infrastructure across the U.S.

    Also, last week, President Trump signed an executive order aiming “to sustain and enhance America’s dominance in AI to promote human flourishing, economic competitiveness, and national security.”

    Earlier this month, Microsoft said it was planning to invest about $80 billion in 2025 to develop data centers, while Amazon.com has said its 2025 spending will exceed the $75 billion it spent in 2024.

    Elon Musk has said xAI built a data center in Memphis, Tenn., in just a few months, with plans to increase its computing capacity to one million GPUs.

    Meta has emerged as a significant player in the AI race with its AI assistant — with 600 million monthly active users — the Ray-Ban smart glasses, and its open-source approach based on Llama AI models.

  • AI Will Generate Better Student Learning Outcomes as Teaching Models Change, Says AAC&U

    IBL News | New York

    A national survey of 338 university presidents and other senior leaders about generative AI, conducted by AAC&U and Elon University, finds that they expect these tools to enhance and customize learning and create better student learning outcomes as teaching models change.

    In addition, most university presidents and senior leaders think the tools will improve students’ research skills, creativity, and writing ability.

    The survey, conducted Nov. 4–Dec. 7, 2024, covered the current situation on their campuses, their struggles, the changes they anticipate, and the sweeping impacts they foresee. The results, covered in a new report, Leading Through Disruption (PDF), were released at the annual AAC&U meeting, held January 22–24 in Washington, DC.

    The main finding points out that the spread of AI tools in education has disrupted key aspects of teaching and learning on the nation’s campuses and will likely lead to significant changes in classwork, student assignments, and even the role of colleges and universities in the country.

    Other outcomes indicate:

    • High student adoption of GenAI but lower faculty uptake: at least half of students use the tools, while fewer than half of faculty do.
    • The most common uses by academic leaders are writing and communications, information gathering and summarization, idea generation, and data analysis.
    • Unpreparedness: majorities of these college and university leaders believe their institutions are unprepared to use GenAI.
    • Cheating has increased on campuses since GenAI tools became widely available, but detection tools don’t work reliably.
    • Most leaders say that the spread of GenAI tools will affect students’ academic integrity.
    • Decreased attention spans: 66% think GenAI will diminish student attention spans, including 24% who think the tools will significantly impact this.
    • The challenges most often cited for avoiding adoption include faculty unfamiliarity with or resistance to GenAI, distrust of GenAI tools and their outputs, and concerns about diminished student learning outcomes.

    “The overall takeaway from these leaders is that they are working to make sense of the changes they confront and looking over the horizon at a new AI-infused world they think will be better for almost everyone in higher education,” said Lee Rainie, director of Elon University’s Imagining the Digital Future Center.

    C. Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities (AAC&U), said, “The fact that 44% of institutions have already created AI-specific courses shows both the urgency and opportunity before us. The challenge now is turning today’s disruption into tomorrow’s innovation in teaching and learning.”

    A persistent concern on campus relates to jobs. These college and university leaders say some reductions in employment levels could occur, though mostly minor ones: 29% expect reductions in the number of staff at their schools (only 3% say the cuts will be major), while 11% expect reductions in faculty and teaching assistants (only 1% say those will be significant).

  • OpenAI Issued Its First AI Agent, Which Takes Control of Web Browsers and Performs Actions

    IBL News | New York

    OpenAI yesterday introduced a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform some actions. It is available to paid users in the U.S. on the $200-per-month Pro subscription plan.

    This move is OpenAI’s first step into the coming agentic economy, with tools that automate tasks and take actions on behalf of humans.

    “Powering Operator is a Computer-Using Agent (CUA), a model that combines GPT-4o’s vision capabilities with advanced reasoning through reinforcement learning. CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen—just as humans do. This gives it the flexibility to perform digital tasks without using OS- or web-specific APIs,” the San Francisco–based research lab explained.

    Operator combines advanced GUI perception with structured problem-solving. It breaks tasks into multi-step plans and adaptively self-corrects. The model seeks user confirmation to enter login details or respond to CAPTCHA forms.

    In other words, Operator can use buttons, navigate menus, and fill out forms on a web page much like a human would.
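    Operator itself is a hosted product rather than a public API, but the loop described above (look at the page, choose a UI action, execute it, repeat) can be illustrated with a short Python sketch built on the Playwright browser-automation library, with the vision-and-reasoning model replaced by a placeholder function:

```python
# Illustrative computer-using-agent loop, not OpenAI's actual Operator code:
# capture the page, ask a model for the next UI action, execute it, repeat.
from playwright.sync_api import sync_playwright

def propose_next_action(screenshot_png: bytes, goal: str) -> dict:
    """Placeholder for a vision+reasoning model. It would return an action such as
    {"type": "click", "selector": "text=Sign in"} or {"type": "done"}."""
    return {"type": "done"}

def run_agent(goal: str, start_url: str, max_steps: int = 10) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            action = propose_next_action(page.screenshot(), goal)
            if action["type"] == "click":
                page.click(action["selector"])
            elif action["type"] == "fill":
                page.fill(action["selector"], action["text"])
            elif action["type"] == "done":
                break  # the model judges the task complete or needs user input
        browser.close()

run_agent("Find today's top story", "https://example.com")
```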

    OpenAI says it’s collaborating with companies like DoorDash, eBay, Instacart, Priceline, StubHub, and Uber to ensure that Operator respects these businesses’ terms of service agreements.

    These are some of the prompts that OpenAI provided to illustrate the reach of Operator.

    • “Search Britannica for a detailed map view of bear habitats. Now please check out the black, brown and polar bear links and provide a concise general overview of their physical characteristics, specifically their differences. Oh and save the links for me so I can access them quickly.”

    • “I want one of those target deals. Can you check if they have a deal on poppi prebiotic sodas? If they do, I want the watermelon flavor in the 12fl oz can. Get me the type of deal that comes with this and check if it’s gluten free.”

    • “I am planning to shift to Seattle and I want you to search Redfin for a townhouse with at least 3 bedrooms, 2 bathrooms, and an energy-efficient design (e.g., solar panels or LEED-certified). My budget is between $600,000 – $800,000 and it should ideally be close to 1500 sq ft.”

    Last week, OpenAI released Tasks, giving ChatGPT simple automation features such as setting reminders and scheduling prompts to run at a set time every day.

    “The next challenge space we plan to explore is expanding the action space of agents,” said OpenAI.

    OpenAI has been slow to develop an AI agent compared to rivals like Google or Anthropic.

     

  • Elon Musk Cast Doubt on Stargate’s Initiative to Create $100 Billion in AI Infrastructure

    IBL News | New York

    Elon Musk cast doubt on President Trump’s first significant tech investment announcement: a joint venture between OpenAI, SoftBank, and Oracle to create at least $100 billion in computing infrastructure to power artificial intelligence.

    Mr. Musk said that the venture, dubbed Stargate, did not have the financing to achieve the promised investment levels.

    “They don’t have the money,” Mr. Musk wrote in reply to an OpenAI post on the announcement. “SoftBank has well under $10B secured. I have that on good authority.”

    President Trump claimed that those companies decided to spend up to $500 billion building data centers, making the United States a global leader in the technology, beating out China.

    According to the announcement, Stargate already has $100 billion in hand, with SoftBank, OpenAI, Oracle, and MGX, an investment group in the United Arab Emirates that focuses on AI, providing the financing.

    Elon Musk has been battling with OpenAI’s chief executive, Sam Altman. Mr. Musk, who helped found the company, has sued OpenAI and Mr. Altman for antitrust violations.

    Mr. Altman took to X on Wednesday morning to dispute Mr. Musk’s assertions.

  • Salesforce Presented Its Agentic Platform for Building a Limitless Digital Workforce

    IBL News | New York

    Salesforce, the leading CRM company, is preparing to launch Agentforce 2.0 in February 2025, an AI toolset billed by the company as the “digital labor platform for building a limitless workforce for the enterprise.”

    It’s a complete AI system for augmenting teams with autonomous AI agents in the flow of work.

    One example is the new agent skills for Sales Development and Sales Coaching, with pricing starting at $2 per conversation.

    With its release, Salesforce introduced a library of pre-built skills and workflow integrations for rapid customization and the ability to deploy Agentforce in Slack.

    Marc Benioff, Chair and CEO of Salesforce, said, “We’re seamlessly bringing together AI, data, apps, and automation with humans to reshape how work gets done. Agentforce 2.0 cements our position as the leader in digital labor solutions, allowing any company to build a limitless workforce that can truly transform their business.”

    Agentforce 2.0 features a library of pre-built agent skills, spanning CRM, Slack, Tableau, and partner-developed skills on the AppExchange.

    This latest release lets customers extend Agentforce to any system or workflow, using MuleSoft to create low-code workflows.

    It also features an enhanced Agent Builder capable of interpreting natural language instructions, such as ‘Onboard New Product Managers,’ to auto-generate new agents. These agents seamlessly combine pre-made skills with custom logic built in Salesforce, offering unparalleled flexibility and speed.

    Agentforce includes Tableau Skills for analytics and insights.

  • AI-Created Personalized Reports with Visuals and Graphics Help Students Improve Test Scores

    IBL News | New York

    Researchers at South China Normal University in Guangzhou, an institution highly focused on teacher education and training, explored how AI-driven visual reports can transform assessment in K-12. In a 13-week study, half of the students received personalized reports with clear visuals, personalized feedback, and actionable insights, while the other half received oral feedback from instructors, and their performance was compared.

    The “AI reports” group demonstrated a 12.8% improvement in test scores over those receiving only oral feedback.

    In addition, the vast majority of these students expressed interest in using the reports in other classes.

    Experts highlighted that AI tools give learners faster, more efficient personalized feedback and actionable guidance.

    Students can see their learning journey in these reports and act on it. They review areas of strength, identify learning gaps, and plan where to focus their effort.
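    A minimal Python sketch of how such a personalized visual report could be assembled from raw scores; the topics, numbers, and mastery threshold below are invented for illustration and are not taken from the study:

```python
# Illustrative assembly of a personalized visual report from test scores;
# topic names, scores, and the mastery threshold are invented for the example.
import matplotlib.pyplot as plt

scores = {"Algebra": 92, "Geometry": 58, "Statistics": 74, "Functions": 81}
MASTERY = 80  # assumed cut-off separating strengths from learning gaps

strengths = [topic for topic, s in scores.items() if s >= MASTERY]
gaps = [topic for topic, s in scores.items() if s < MASTERY]

fig, ax = plt.subplots(figsize=(6, 3))
colors = ["tab:green" if s >= MASTERY else "tab:red" for s in scores.values()]
ax.bar(list(scores.keys()), list(scores.values()), color=colors)
ax.axhline(MASTERY, linestyle="--", color="gray", label=f"Mastery ({MASTERY})")
ax.set_ylabel("Score")
ax.set_title("Personalized progress report")
ax.legend()
fig.tight_layout()
fig.savefig("student_report.png")

print("Strengths:", ", ".join(strengths))
print("Focus next on:", ", ".join(gaps))
```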