Author: IBL News

  • The Open edX Platform Reaches 4.5K Deployments, with 70K Courses and 77M Users


    IBL News | Cambridge, Massachusetts

    The Open edX Platform has reached 4,500 deployments, hosts 70,000 courses, and has 77 million registered users worldwide, including 45 million at 2U’s edx.org.

    The organization behind Open edX, now renamed Axim Collaborative, presented the state of this international community during its annual conference at MIT’s Stata Center in Cambridge, Massachusetts last week.

    Ed Zarecor, Vice President of Engineering at Axim Collaborative, explained that there are currently 24 contributing organizations and 333 individuals providing code.

    Jenna Makowski, Senior Product Manager at Axim Collaborative, presented current and near-future priorities, including the Open edX platform roadmap.

    She highlighted the Learner Analytics (OARS) project, which aligns with open data standards and provides near real-time statistics.

    The goal of Axim Collaborative is to leverage open-source technology to democratize education and drive advancements in learning.

     

    During the opening talk at the 2023 Open edX Conference, Anant Agarwal, Chief Platform Officer at 2U, Founder of edX, and Professor, presented his ideas on “Reimagining the ‘3 Rs’ for Higher Ed in 2023.”

    Agarwal highlighted the top priorities that should be emphasized to help adult learners thrive in today’s world.

    “We should offer programs that teach today’s most in-demand tech skills, such as coding and data science, or develop human skills for the digital age, like resilience, storytelling and negotiation. Use a mix of live instruction, rich multimedia, and asynchronous learning. And give them options like part-time/full-time and online/in-person—or both.”

    He also said that “the world of education has been completely changed by AI.”

  • Udacity Incorporates an OpenAI-Provided Chatbot


    IBL News | New York

    Udacity became the first MOOC learning platform to incorporate an AI chatbot. Powered by OpenAI’s GPT-3.5 Turbo model, it is intended to provide a real-time complement to the platform’s human mentors.

    Udacity says it seeks to enhance personalized support and guidance for learners.

    “We created an intelligent virtual tutor that can handle thousands of interactions at once,” added the company. “We’re thrilled to be at the forefront of this change in education.”

    Udacity’s chatbot is able to:

    • Summarize concepts, helping learners understand complex material and retain information more effectively.

    • Pose deeper questions, asking the bot for definitions, examples, and alternative explanations to deepen understanding of a given topic.

    • Translate specific words, phrases, exercises, and quizzes into another language. This can be a game-changer for non-native English speakers who feel limited by a language barrier.

    • Fix errors in code, asking the bot to help debug errors, suggest improvements, or correct mistakes in coding exercises, allowing learners to code more efficiently and effectively.

    The Udacity chat icon appears in the lower-right corner of the screen. Udacity warned that learners should review the output and the advice provided by the chatbot.

  • TCRIL Changes Its Name to Axim Collaborative and Names a CEO


    IBL News | Cambridge, Massachusetts

    The MIT and Harvard non-profit organization — the Center for Reimagining Learning (“tCRIL”) — that oversees the Open edX platform named its first CEO: Stephanie Khurana. She assumed her role on April 3.

    In parallel, the organization, started by the two universities with the $800 million in proceeds from the sale of edX Inc. to 2U, changed its name to Axim Collaborative.

    Axim Collaborative’s mission is to make learning more accessible, more relevant, and more effective.

    The name Axim, a hybrid of the two ideas, was selected to underscore the centrality of access and impact.

    Khurana brings two decades of experience in social venture philanthropy and in the technology innovation space. Most recently, she served as managing partner and chief operating officer of the Draper Richards Kaplan Foundation, a global venture philanthropy that identifies and supports innovative social ventures tackling complex societal problems.

    Earlier in her career, Khurana was on the founding teams of two technology start-ups: Cambridge Technology Partners (CTP) and Surebridge, both of which went on to be sold.

    Khurana also served in numerous roles at Harvard University, working on initiatives to support academic progress and build communities of belonging with undergraduates.

    Stephanie Khurana introduced herself to Open edX community members in a town hall-style session held last Friday, March 31, at the end of the annual developers conference.

    The gathering, held at MIT’s Stata Center in Cambridge, Massachusetts, last week, attracted over 250 attendees, a number similar to past editions.

    One of the stories of the event was the acquisition of the France-based company Overhang.IO, creator of the distribution tool Tutor. Pakistani-American Edly purchased it for an undisclosed amount.

    Régis Behmo, the founder and sole developer of Overhang.IO, assumed the role of VP of Engineering at Edly.

    “Edly understands how contributing to open source creates value both for the company and for the whole edTech community. This partnership will help us drive this movement forward to serve learners and educators worldwide,” Behmo said.

    “Régis’s experience and leadership will be invaluable as we increase our impact on educational technology. In coming weeks and months, we’ll be making further announcements around our expanded roadmap for open source contributions to Open edX,” said Yasser Bashir, the founder and CEO of Arbisoft LLC, which operates Edly as its edTech brand.

  • Italy Bans ChatGPT While Elon Musk and 1,100 Signatories Call for a Pause on AI [Open Letter]


    IBL News | New York

    Italy’s data protection authority said on Friday that it would immediately block OpenAI from processing the data of Italian users and open an investigation. The order is temporary until the company complies with the European Union’s landmark privacy law, the General Data Protection Regulation (GDPR).

    Italy’s ban on ChatGPT comes amid calls to block OpenAI’s releases over a range of privacy, cybersecurity, and disinformation risks in both Europe and the U.S.

    The Italian authority also noted that ChatGPT suffered a data breach last week that exposed users’ conversations and payment information.

    Moreover, ChatGPT has been shown producing completely false information about named individuals, apparently making up details its training data lacks.

    Consumer advocacy groups say that OpenAI is engaged in the “mass collection and storage of personal data to train the algorithms of ChatGPT” and is “processing data inaccurately.”

    This week, Elon Musk and dozens of AI experts called for a six-month pause on training systems more powerful than GPT-4.

    Over 1,100 signatories — including Steve Wozniak, Tristan Harris of the Center for Humane Technology, some engineers from Meta and Google, and Stability AI CEO Emad Mostaque — signed an open letter, posted online, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

    • “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

    • “AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

    • “The pause should be public and verifiable, and include all key actors. If it cannot be enacted quickly, governments should step in and institute a moratorium.”

    • “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

    • “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

    No one from OpenAI or Anthropic signed the letter.

    On Wednesday, OpenAI CEO Sam Altman told The Wall Street Journal that OpenAI has not started training GPT-5.

    Pause Giant AI Experiments: An Open Letter:

    AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

    Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

    Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

    AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

    AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

    In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

    Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

     

  • Generative AI Will Impact Labor Market and Have Notable Economic, Social, and Policy Implications


    IBL News | New York

    Generative AI or GPT (Generative Pre-trained Transformer) models will have notable economic, social, and policy implications.

    They will impact 80% of the U.S. workforce, with at least 10% of their work tasks affected. Around 19% of workers will see at least 50% of their tasks impacted.

    This is the main conclusion of a research paper posted online on arXiv, Cornell University’s preprint server, authored by four researchers — Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock.

    According to the research, the influence spans all wage levels, with higher-income jobs potentially facing greater exposure.

    Large language models (LLMs) — via ChatGPT or the OpenAI Playground — can process and produce various forms of sequential data, including assembly language, protein sequences, and chess games, extending beyond natural language applications alone.

  • Adobe Unveils Firefly, a Family of Creative Generative AI Models Coming to Its Products


    IBL News | New York

    Adobe unveiled last week a new family of creative generative AI models called Firefly, focused on the creation of images and text effects.

    The first applications that will include Adobe Firefly integration — now in beta — will be Adobe Express, Adobe Experience Manager, Adobe Photoshop, and Adobe Illustrator.

    The company will also introduce a “Do Not Train” tag for creators who do not want their content used in model training; the tag will remain associated with content wherever it is used, published, or stored.

    “With Firefly, Adobe will bring generative AI-powered ‘creative ingredients’ directly into customers’ workflows, increasing productivity and creative expression for all creators, from high-end creative professionals to the long tail of the creator economy,” said David Wadhwani, President of Digital Media Business at Adobe.

    Adobe said it is planning to make Firefly available via APIs on various platforms to enable customers to integrate it into custom workflows and automation.

    Update May 29, 2023:

    New Photoshop

  • Quora Released Its Poe Chatbot, a Tool That Includes GPT-4, Claude, and ChatGPT


    IBL News | New York

    Quora released its Poe chatbot, a tool powered by models from OpenAI and Anthropic.

    In a blog post announcement, its CEO Adam D’Angelo explained that the company was considering more language models for the future: “Different models will be optimized for different tasks, they will represent different points of view, or they will have access to different knowledge.”

    Poe is currently the only consumer internet product with either Claude or Claude+ available. It is offered as a free app with limited features. The paid tier costs $9.99 per month and gives access to both GPT-4 and Claude+.

    Meanwhile, OpenAI charges $20 per month for ChatGPT Plus.

    Poe, which stands for “Platform for Open Exploration,” is Quora’s attempt to democratize access to AI chatbots and foster curiosity and learning among users.

    Poe allows users to ask questions and have conversations with various AI-powered chatbots, including GPT-4, Claude+, Claude, ChatGPT, and more. Users can try different chatbots, easily switch between them, and compare their responses.

    Users can also request new personalities to be added to Poe’s roster of bots.

  • Zoom Adds New Features Provided by OpenAI to Its IQ Smart Companion


    IBL News | New York

    Zoom is adding new AI-powered features to its video conferencing app to compete with Microsoft Teams, Google Workspace, and Salesforce’s Slack.

    In a blog post published on Monday, the company announced a partnership with OpenAI that will add more tools to its proprietary Zoom IQ AI-powered assistant.

    The new Zoom IQ can summarize what users have missed in real time and ask further questions. If they need to create a whiteboard session for their meeting, Zoom IQ can generate it based on text prompts.

    Once the session ends, Zoom IQ summarizes the meeting and posts the recap to Zoom Team Chat, even suggesting actions for owners to take.

    Zoom IQ chat also drafts and rephrases responses and sends follow-ups to customers over email.

    In addition, the company also launched Zoom IQ for Sales, which uses conversational intelligence to improve seller performance.

  • Databricks Launches Dolly, an Open-Source LLM Clone of Stanford’s Alpaca Model


    IBL News | New York

    The big data analytics firm Databricks open-sourced last week a new AI model called Dolly, along with all of its training code and instructions on how to recreate it.

    “Dolly is a cheap-to-build LLM (large language model) that exhibits a surprising degree of the instruction following capabilities exhibited by ChatGPT,” the company announced in a blog post.

    The model underlying Dolly has only 6 billion parameters, compared to 175 billion in GPT-3. It is only two years old, “making it particularly surprising that it works so well.”

    In February 2023, Meta released the weights for a set of high-quality language models called LLaMA for academic researchers.

    In March 2023, Stanford University built the Alpaca model, which was based on LLaMA, but tuned on a small dataset of 50,000 human-like questions and answers.

    Databricks evaluated Dolly on the instruction-following capabilities described in the InstructGPT paper on which ChatGPT is based.

    Dolly — named after Dolly the sheep, the first cloned mammal — is an open-source clone of an Alpaca, inspired by a LLaMA.

    Instead of creating its own model from scratch or using LLaMA, Databricks took an older, open-source LLM called GPT-J, created by EleutherAI two years earlier.

    GPT-J was the foundation on which Dolly was built.

    Databricks was able to take the EleutherAI model and make it “highly approachable” simply by training it with a small dataset of 50,000 records in less than three hours using a single machine.

    “This shows that the magic of instruction following does not lie in training models on gigantic datasets using massive hardware,” Databricks explained.

    “Rather, the magic lies in showing these powerful open-source models specific examples of how to talk to humans, something anybody can do for a hundred dollars using this small 50,000-record dataset of Q&A examples.”

     

    “It exhibits many of the same qualitative capabilities, including text generation, brainstorming, and open Q&A.”

    “We believe models like Dolly will help democratize LLMs, transforming them from something very few companies can afford into a commodity every company can own and customize to improve their products,” Databricks said.
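Instruction tuning of this kind hinges on formatting each Q&A record into a training prompt before fine-tuning. Below is a minimal sketch assuming an Alpaca-style record layout; the field names, template wording, and example record are illustrative assumptions, not Databricks’ exact format.

```python
# Illustrative Alpaca-style prompt formatting for instruction tuning.
# Field names and template are assumptions, not Databricks' exact format.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def format_record(record: dict) -> str:
    """Turn one Q&A record into the text the model is fine-tuned on."""
    return PROMPT_TEMPLATE.format(
        instruction=record["instruction"], response=record["response"]
    )

# One of the ~50,000 human-written Q&A examples might look like this
# (hypothetical record, for illustration only):
example = {
    "instruction": "Explain what a large language model is in one sentence.",
    "response": "A large language model is a neural network trained on large "
                "amounts of text to predict and generate language.",
}
training_text = format_record(example)
```

The fine-tuning step then trains the base model (GPT-J, in Dolly’s case) on these formatted strings, teaching it to produce the text after “### Response:” when given the text before it.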

  • Microsoft Search Engine Bing Adds DALL-E’s Image AI Creator


    IBL News | New York

    Microsoft announced this week that its new AI-enabled Bing chat will allow users to generate images — one of the most searched categories, second only to general web searches.

    Powered by an advanced version of OpenAI’s DALL-E model, this new feature called Bing Image Creator, now offered to a few users, will allow users of Microsoft’s Edge browser to create an image by typing a description, providing contexts like location or activity, and choosing an art style.

    It means that Bing now can generate both written and visual content within a chat.

    “It’s like your creative copilot. Just type something like ‘draw an image’ or ‘create an image’ as a prompt in chat to start creating a visual for a newsletter to friends or as inspiration for redecorating your living room,” said Yusuf Mehdi, Corporate Vice President & Consumer Chief Marketing Officer at Microsoft.

    In addition to Bing Image Creator, Microsoft is rolling out new AI-powered Visual Stories — similar to Instagram stories — and updated Knowledge Cards, AI-generated infographics that will appear in the new Bing and Edge preview, as shown below.

    The preview experience of Image Creator is now available at bing.com/create for Bing users in English.

    [Images: knowledge cards showing information about corgis and about Rio de Janeiro, Brazil]