Category: Top News

  • TCRIL Changes Its Name to Axim Collaborative and Names a CEO


    IBL News | Cambridge, Massachusetts

    The Center for Reimagining Learning (“tCRIL”), the nonprofit started by MIT and Harvard that stewards the Open edX platform, named its first CEO: Stephanie Khurana [pictured]. She assumed the role on April 3.

    In parallel, the organization, which the two universities started with the $800 million in proceeds from the sale of edX Inc. to 2U, changed its name to Axim Collaborative.

    Axim Collaborative’s mission is to make learning more accessible, more relevant, and more effective.

    The name Axim, a hybrid of “access” and “impact,” was selected to underscore the centrality of those two ideas.

    Khurana brings two decades of experience in social venture philanthropy and in the technology innovation space. Most recently, she served as managing partner and chief operating officer of the Draper Richards Kaplan Foundation, a global venture philanthropy that identifies and supports innovative social ventures tackling complex societal problems.

    Earlier in her career, Khurana was on the founding teams of two technology start-ups: Cambridge Technology Partners (CTP) and Surebridge, both of which went on to be sold.

    Khurana also served in numerous roles at Harvard University, working on initiatives to support academic progress and build communities of belonging with undergraduates.

    Stephanie Khurana introduced herself to Open edX community members at a town-hall-style gathering last Friday, March 31, at the end of the annual developers conference.

    The event, held at MIT’s Stata Center in Cambridge, Massachusetts, last week, attracted over 250 attendees, a number similar to past editions.

    One of the stories of the event was the acquisition of France-based Overhang.IO, creator of the Tutor distribution tool. Pakistani-American firm Edly purchased it for an undisclosed amount.

    Régis Behmo, the founder and sole developer of Overhang.IO, assumed the role of VP of Engineering at Edly.

    “Edly understands how contributing to open source creates value both for the company and for the whole edTech community. This partnership will help us drive this movement forward to serve learners and educators worldwide,” Behmo said.

    “Régis’s experience and leadership will be invaluable as we increase our impact on educational technology. In coming weeks and months, we’ll be making further announcements around our expanded roadmap for open source contributions to Open edX,” said Yasser Bashir, founder and CEO of Arbisoft LLC, which operates Edly as its edTech brand.

  • Italy Bans ChatGPT While Elon Musk and 1,100 Signatories Call for a Pause on AI [Open Letter]


    IBL News | New York

    Italy’s data protection authority said on Friday it would immediately block OpenAI from processing Italian users’ data and open an investigation. The order is temporary until the company complies with the European Union’s landmark privacy law, the General Data Protection Regulation (GDPR).

    Italy’s ban on ChatGPT comes amid calls to block OpenAI’s releases over a range of privacy, cybersecurity, and disinformation risks in both Europe and the U.S.

    The Italian authority also noted that ChatGPT suffered a data breach last week that exposed users’ conversations and payment information.

    Moreover, ChatGPT has been shown to produce completely false information about named individuals, apparently making up details its training data lacks.

    Consumer advocacy groups say that OpenAI is engaged in the “mass collection and storage of personal data to train the algorithms of ChatGPT” and is “processing data inaccurately.”

    This week, Elon Musk and dozens of AI experts called for a six-month pause on training systems more powerful than GPT-4.

    Over 1,100 signatories, including Steve Wozniak, Tristan Harris of the Center for Humane Technology, engineers from Meta and Google, and Stability AI CEO Emad Mostaque, signed an open letter, posted online, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

    • “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

    • “AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

    • “The pause should be public and verifiable, and include all key actors. If it cannot be enacted quickly, governments should step in and institute a moratorium.”

    • “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

    • “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

    No one from OpenAI or Anthropic signed the letter.

    On Wednesday, OpenAI CEO Sam Altman told The Wall Street Journal that OpenAI has not started training GPT-5.

    Pause Giant AI Experiments: An Open Letter:

    AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

    Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

    Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

    AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

    AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

    In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

    Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

     

  • Generative AI Will Impact Labor Market and Have Notable Economic, Social, and Policy Implications


    IBL News | New York

    Generative AI or GPT (Generative Pre-trained Transformer) models will have notable economic, social, and policy implications.

    They will impact 80% of the U.S. workforce, with at least 10% of their work tasks affected. Around 19% of workers will see at least 50% of their tasks impacted.

    This is the main conclusion of a research paper posted on arXiv, authored by four researchers at OpenAI and the University of Pennsylvania: Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock.

    According to the research, the influence spans all wage levels, with higher-income jobs potentially facing greater exposure.

    Large language models (LLMs), accessed via ChatGPT or the OpenAI Playground, can process and produce various forms of sequential data, including assembly language, protein sequences, and chess games, extending beyond natural language applications alone.

  • Adobe Unveils Firefly, a Family of Creative Generative AI Models Coming to Its Products


    IBL News | New York

    Adobe last week unveiled Firefly, a family of creative generative AI models focused on the creation of images and text effects.

    The first applications to include Adobe Firefly integration, now in beta, will be Adobe Express, Adobe Experience Manager, Adobe Photoshop, and Adobe Illustrator.

    The company will also introduce a “Do Not Train” tag for creators who do not want their content used in model training; the tag will remain associated with content wherever it is used, published, or stored.

    “With Firefly, Adobe will bring generative AI-powered ‘creative ingredients’ directly into customers’ workflows, increasing productivity and creative expression for all creators from high-end creative professionals to the long tail of the creator economy,” said David Wadhwani, President of Digital Media Business at Adobe.

    Adobe said it plans to make Firefly available via APIs on various platforms so that customers can integrate it into custom workflows and automations.

    Update May 29, 2023:

    New Photoshop

  • Quora Releases Its Poe Chatbot, a Tool That Includes GPT-4, Claude, and ChatGPT


    IBL News | New York

    Quora released its Poe chatbot, a tool powered by models from OpenAI and Anthropic.

    In a blog post announcement, its CEO Adam D’Angelo explained that the company was considering more language models for the future: “Different models will be optimized for different tasks, they will represent different points of view, or they will have access to different knowledge.”

    Poe is currently the only consumer internet product offering Claude or Claude+. It is available now as a free app (download it here) with limited features. The paid tier costs $9.99 per month and gives access to both GPT-4 and Claude+.

    Meanwhile, OpenAI charges $20 per month for ChatGPT Plus.

    Poe, which stands for “Platform for Open Exploration,” is Quora’s attempt to democratize access to AI chatbots and foster curiosity and learning among users.

    Poe allows users to ask questions and have conversations with various AI-powered chatbots, including GPT-4, Claude+, Claude, ChatGPT and more. This way, users can try different chatbots and easily switch between different bots, comparing their responses.

    Users can also request new personalities to be added to Poe’s roster of bots.

  • Zoom Adds to Its IQ Smart Companion New Features Provided by OpenAI


    IBL News | New York

    Zoom is adding new AI-powered features to its video conferencing app to compete with Microsoft Teams, Google Workspace, and Salesforce’s Slack.

    In a blog post published on Monday, the company announced a partnership with OpenAI that will add more tools to its proprietary Zoom IQ AI-powered assistant.

    The new Zoom IQ can summarize what users have missed in a meeting in real time and let them ask follow-up questions. If they need a whiteboard session for their meeting, Zoom IQ can generate one based on text prompts.

    Once the session ends, Zoom IQ summarizes the meeting and posts the recap to Zoom Team Chat, even suggesting actions for owners to take.

    Zoom IQ chat also drafts and rephrases responses and sends follow-ups to customers over email.

    In addition, the company also launched Zoom IQ for Sales, which uses conversational intelligence to improve seller performance.

  • Databricks Launches Dolly, an Open-Source LLM Clone of Stanford’s Alpaca Model


    IBL News | New York

    The big data analytics firm Databricks open-sourced last week a new AI model called Dolly, along with all of its training code and instructions on how to recreate it.

    “Dolly is a cheap-to-build LLM (large language model) that exhibits a surprising degree of the instruction following capabilities exhibited by ChatGPT,” the company announced in a blog post.

    The model underlying Dolly has only 6 billion parameters, compared to 175 billion in GPT-3. It is only two years old, “making it particularly surprising that it works so well.”

    In February 2023, Meta released the weights for a set of high-quality language models called LLaMA for academic researchers.

    In March 2023, Stanford University built the Alpaca model, which was based on LLaMA, but tuned on a small dataset of 50,000 human-like questions and answers.

    Databricks evaluated Dolly on the instruction-following capabilities described in the InstructGPT paper on which ChatGPT is based.

    Dolly — named after Dolly the sheep, the first cloned mammal — is an open-source clone of an Alpaca, inspired by a LLaMA.

    Instead of creating its own model from scratch or using LLaMA, Databricks took a much older and open-source LLM called GPT-J, which was created by EleutherAI several years earlier.

    GPT-J was the foundation on which Dolly was built.

    Databricks was able to take the EleutherAI model and make it “highly approachable” simply by training it on a small dataset of 50,000 records in less than three hours using a single machine.

    “This shows that the magic of instruction following does not lie in training models on gigantic datasets using massive hardware,” Databricks explained.

    “Rather, the magic lies in showing these powerful open-source models specific examples of how to talk to humans, something anybody can do for a hundred dollars using this small dataset of 50,000 Q&A examples.”
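
    Databricks’ exact training recipe isn’t reproduced here. As a sketch only, instruction-following datasets like the 50,000 Q&A examples described above are commonly rendered into one prompt string per example before fine-tuning; the Alpaca-style template below is an assumption for illustration, not Databricks’ published format.

```python
# Illustrative only: Alpaca-style formatting of an instruction/response pair
# into a single training string. Template and example text are assumptions,
# not Databricks' exact recipe.
def format_example(instruction: str, response: str, context: str = "") -> str:
    """Render one Q&A record into a single prompt string for fine-tuning."""
    parts = [
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.",
        f"### Instruction:\n{instruction}",
    ]
    if context:  # optional supporting input, omitted when empty
        parts.append(f"### Input:\n{context}")
    parts.append(f"### Response:\n{response}")
    return "\n\n".join(parts)

example = format_example(
    instruction="Summarize what Dolly is in one sentence.",
    response="Dolly is a cheap-to-build open-source LLM fine-tuned from GPT-J.",
)
print(example)
```

    Each such string becomes one training sample; the point, as Databricks notes, is the shape of the examples rather than the scale of the hardware.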

     

    “It exhibits many of the same qualitative capabilities, including text generation, brainstorming, and open Q&A.”

    “We believe models like Dolly will help democratize LLMs, transforming them from something very few companies can afford into a commodity every company can own and customize to improve their products,” Databricks said.

  • Microsoft Search Engine Bing Adds DALL-E’s Image AI Creator


    IBL News | New York

    Microsoft announced this week that its new AI-enabled Bing chat will allow users to generate images — one of the most searched categories, second only to general web searches.

    Powered by an advanced version of OpenAI’s DALL-E model, this new feature called Bing Image Creator, now offered to a few users, will allow users of Microsoft’s Edge browser to create an image by typing a description, providing contexts like location or activity, and choosing an art style.

    It means that Bing now can generate both written and visual content within a chat.

    “It’s like your creative copilot. Just type something like ‘draw an image’ or ‘create an image’ as a prompt in chat to get creating a visual for a newsletter to friends or as inspiration for redecorating your living room,” said Yusuf Mehdi, Corporate Vice President & Consumer Chief Marketing Officer of Microsoft.

    In addition to Bing Image Creator, Microsoft is rolling out new AI-powered Visual Stories, similar to Instagram stories, and updated Knowledge Cards with AI-generated infographics in the new Bing and Edge preview.

    The preview experience of Image Creator is now available at bing.com/create for Bing users in English.

    [Images: knowledge cards showing information about corgis and about Rio in Brazil]

  • Google Provides Limited Access to Bard, Its “Early Experiment Chatbot”


    IBL News | New York

    Yesterday, Google began providing limited access to Bard, its rival to ChatGPT, to selected users in the United States and the United Kingdom. The date for full public access has not been announced yet. Users can join a waitlist for Bard at bard.google.com.

    Google stresses that Bard is a “complement to search,” not a replacement, given the tendency of these bots to invent information, or “hallucinate,” as users have noticed in OpenAI’s ChatGPT and Microsoft’s Bing chatbot. It all underscores the experimental nature of the technology.

    The search giant describes Bard as “an early experiment, intended to help people boost their productivity, accelerate their ideas, and fuel their curiosity.”

    In a demo for The Verge, Bard, based on Google’s AI language model LaMDA, generated three responses to each user query, with minimal variation in their content. Underneath each reply is a prominent “Google It” button.

    Bard’s interface is festooned with disclaimers such as “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”

    Bard lacks Bing’s clearly labeled footnotes, which Google says only appear when it directly quotes a source like a news article.
    [GIF: Bard responding to a query about how to introduce your daughter to flyfishing]

    According to a report in TechCrunch, Google’s Bard is lagging behind OpenAI’s GPT-4 and Anthropic’s Claude in a head-to-head comparison on a few example prompts.

    “Overall GPT-4 is unambiguously ahead of the others, though depending on the context Claude and Bard can be competitive. Importantly, however, both Claude and Bard gave factually incorrect answers at times, and Bard even made up a citation to support its assertion about GDPR enforcement.”

  • OpenAI Starts to Roll Out Plugins in ChatGPT


    IBL News | New York

    OpenAI started to roll out plugins in ChatGPT with a small set of users. These plugins, now in alpha mode, extend the bot’s functionality by connecting ChatGPT to third-party applications.

    The plugins enable ChatGPT to interact with APIs defined by developers, performing a wide range of actions, such as:

    • Retrieve real-time information; e.g., sports scores, stock prices, the latest news
    • Retrieve knowledge-base information; e.g., company docs, personal notes
    • Perform actions on behalf of the user; e.g., booking a flight, ordering food
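
    These capabilities are declared to ChatGPT through a plugin manifest plus an OpenAPI specification. The sketch below, in Python, shows what such a manifest might contain, following the ai-plugin.json field names described in OpenAI’s plugin documentation; the to-do service, URLs, and descriptions are invented for illustration.

```python
# Illustrative sketch of a ChatGPT plugin manifest (ai-plugin.json style).
# Field names follow OpenAI's plugin docs; the to-do service, URLs, and
# descriptions are hypothetical.
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO Manager",
    "name_for_model": "todo_manager",
    "description_for_human": "Manage your to-do list.",
    "description_for_model": (
        "Plugin for managing a user's to-do list: add, list, and delete items."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # URL of the OpenAPI spec ChatGPT reads to learn the available calls
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

def validate_manifest(m: dict) -> bool:
    """Check that the minimal fields ChatGPT needs are present."""
    required = {"schema_version", "name_for_model", "description_for_model", "api"}
    return required <= m.keys() and m["api"].get("type") == "openapi"

print(validate_manifest(manifest))  # True
```

    The model reads `description_for_model` and the linked OpenAPI spec to decide when and how to call the plugin’s endpoints.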

    They are available to developers on the waitlist.

    The stated goal of OpenAI is to build a community of plugin developers.

    To foster the creation of new plugins, OpenAI has open-sourced a “retrieval” plugin that enables ChatGPT to access snippets of documents from data sources like files, notes, emails, or public documentation by asking questions in natural language. [Documentation] 

    The first plugins have been created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.

    The OpenTable plugin allows the chatbot to search across restaurants for available bookings, for example, while the Instacart plugin lets ChatGPT place orders from local stores. Zapier connects with apps like Google Sheets, Trello, and Gmail to trigger productivity tasks.

    The most intriguing plugin would be OpenAI’s first-party web-browsing plugin, which would allow ChatGPT to draw data from around the web, as its knowledge is currently limited to information prior to 2021.

    An AI startup called WebGPT built a plugin with access to the live web.

    Beyond the web plugin, OpenAI released a code interpreter for ChatGPT that provides the chatbot with a working Python interpreter in a sandboxed, firewalled environment, along with disk space.

    It supports uploading files to ChatGPT and downloading the results; OpenAI says it’s particularly useful for solving mathematical problems, doing data analysis and visualization, and converting files between formats.
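
    As an illustration of the kind of work described, the snippet below performs a small format conversion and a bit of data analysis in plain Python, the sort of code the interpreter runs behind the scenes; the CSV data here is made up.

```python
# A toy example of format conversion and data analysis in plain Python,
# the kind of task ChatGPT's code interpreter handles. Data is invented.
import csv
import io
import json
import statistics

csv_text = "name,score\nAda,91\nGrace,84\nAlan,88\n"

# Parse the CSV and convert it to JSON (a typical format-conversion request).
rows = list(csv.DictReader(io.StringIO(csv_text)))
as_json = json.dumps(rows, indent=2)

# A small piece of data analysis: the average score.
mean_score = statistics.mean(int(r["score"]) for r in rows)
print(round(mean_score, 2))  # 87.67
```

    A user would upload the CSV, ask for “a JSON version and the average score,” and download the result, with the interpreter writing and running code like this automatically.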