Category: Views

  • Open-Source Initiatives Challenge Closed, Proprietary AI Systems With New LLMs

    IBL News | New York

    Several startups, collectives, and academic groups have released a wave of new large language models (LLMs) as open source, aiming to challenge closed, proprietary AI systems from companies such as OpenAI and Anthropic.

    These private organizations, aware that state-of-the-art LLMs require huge compute budgets (OpenAI reportedly used 10,000 Nvidia GPUs to train ChatGPT) and deep ML expertise, have declined to open up their models, relying instead on API distribution.

    The training data, source code, and model weights remain hidden from public scrutiny.

    Open-source initiatives state that they are seeking to democratize access to LLMs.

    Two weeks ago, Databricks announced the ChatGPT-type Dolly, which was inspired by Alpaca, another open-source LLM released by Stanford in mid-March.

    Stanford’s Alpaca used the weights from Meta’s LLaMA model that was released in late February.

    LLaMA was hailed for its superior performance over models such as GPT-3, despite having ten times fewer parameters.

    Other open-source LLaMA-inspired models have been released in recent weeks, such as:

    – Vicuna, a fine-tuned version of LLaMA that reportedly approaches ChatGPT quality;

    – Koala, a model from the Berkeley AI Research (BAIR) lab;

    – ColossalChat, a ChatGPT-type model that is part of the Colossal-AI project.

    Some of these open-source models have even been optimized to run on the lowest-powered devices, from a MacBook Pro down to a Raspberry Pi and an old iPhone.

    However, none of these open-source LLMs are available yet for commercial use, as the LLaMA model is not released for commercial use.

    In addition, the OpenAI GPT-3.5 terms of use prohibit using the model to develop AI models that compete with OpenAI.

    In March, the free-software community Mozilla announced an open-source initiative for developing AI, saying they “intend to create a decentralized AI community that can serve as a ‘counterweight’ against the large profit-focused companies.”


  • Language Models that Run Themselves Accelerate the Advent of AGI

    IBL News | New York

    Language models that speed up and automate tasks with text or code, also called “autonomous AI,” “self-prompting,” or “auto-prompting,” have become the latest trend in generative AI.

    These models develop and execute prompts whose outputs can feed into new prompts, making them remarkably powerful.

    OpenAI developer Andrej Karpathy said, “Stringing them together in loops creates agents that can perceive, think, and act, their goals defined in English in prompts.”
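    The loop Karpathy describes can be sketched in a few lines of Python. This is a minimal, illustrative sketch only: `call_llm` is a stub standing in for a real model API such as GPT-4, and the prompt wording and completion signal are assumptions, not Auto-GPT's actual implementation.

    ```python
    # Minimal sketch of a self-prompting agent loop (illustrative only).
    # call_llm is a stub standing in for a real hosted-model API call.

    def call_llm(prompt: str) -> str:
        """Stub: a real implementation would call a hosted LLM."""
        return "1. Research the market\n2. Draft a plan\nDONE"

    def run_agent(goal: str, max_steps: int = 5) -> list:
        """Perceive-think-act loop: each model output becomes context
        for the next prompt until the model signals completion."""
        history = []
        prompt = f"Goal: {goal}\nPropose the next step."
        for _ in range(max_steps):
            output = call_llm(prompt)
            history.append(output)
            if "DONE" in output:
                break
            # Feed the model's own output back in as the next prompt.
            prompt = f"Goal: {goal}\nSo far: {output}\nNext step?"
        return history

    steps = run_agent("develop and manage a business idea")
    print(len(steps))  # the stub signals completion on the first step
    ```

    With a real model behind `call_llm`, the history grows step by step, which is what turns a single-shot text generator into an agent.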

    At the moment, the most popular self-prompting example is the experimental open-source application “Auto-GPT”.

    According to its coding team, this Python application is designed to independently develop and manage business ideas and generate income.

    The program plans step by step, justifies its decisions, and documents its progress.

    The system integrates GPT-4 for text generation, accesses the Internet for data retrieval, stores data, and generates speech via the Elevenlabs API. It’s even capable of self-improvement and bug-fixing by generating Python scripts via GPT-4.

    Projects like Baby-AGI and Jarvis (HuggingGPT) follow the same idea as Auto-GPT, chaining model calls to carry out complex tasks autonomously.

    The team behind HuggingGPT explained, “By leveraging the strong language capability of ChatGPT and abundant AI models in Hugging Face, HuggingGPT is able to cover numerous sophisticated AI tasks in different modalities and domains and achieve impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards advanced artificial intelligence.”

    Experts say that, with autonomous AI, GPT-4 is edging toward AGI (Artificial General Intelligence). Models that apply self-improvement to language models, they argue, could get rapidly more powerful as real-life AGIs come within reach.

  • Bloomberg Introduces a 50-Billion Parameter LLM Built For Finance

    IBL News | New York

    Bloomberg released this week a research paper introducing BloombergGPT, a new large language model (LLM) with 50 billion parameters built from scratch for finance.

    The company said that BloombergGPT, which has been specifically trained on a wide range of financial data, outperforms similarly sized models by significant margins (as shown in the table below).

    “It represents the first step in the development and application of this new technology for the financial industry,” said the company.

    “This model will assist Bloomberg in improving existing financial NLP tasks, such as sentiment analysis, named entity recognition, news classification, and question answering, among others. Furthermore, BloombergGPT will unlock new opportunities for marshalling the vast quantities of data available on the Bloomberg Terminal.”

    Bloomberg researchers pioneered a mixed approach that combines financial data with general-purpose datasets to train a model that achieves best-in-class results on financial benchmarks, while also maintaining competitive performance on general-purpose LLM benchmarks.

    Bloomberg’s data analysts drew on financial language documents collected over the span of forty years to create a comprehensive 363-billion-token dataset of English financial documents.

    This data was augmented with a 345 billion token public dataset to create a large training corpus with over 700 billion tokens. Using a portion of this training corpus, the team trained a 50-billion parameter decoder-only causal language model.
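    The mixed-corpus idea above can be sketched as a weighted sampler over the two sources. The token counts come from the figures in this article; everything else (the function names, the sampling scheme) is illustrative, not Bloomberg's actual training pipeline.

    ```python
    import random

    # Illustrative sketch: interleave training documents from a
    # domain-specific corpus and a general-purpose corpus in
    # proportion to their token counts (363B financial vs. 345B
    # public, per the article). The corpora are placeholders.

    FINANCIAL_TOKENS = 363e9
    PUBLIC_TOKENS = 345e9

    def sample_source(rng: random.Random) -> str:
        """Pick which corpus the next document comes from, weighted
        by each corpus's share of the combined token budget."""
        p_financial = FINANCIAL_TOKENS / (FINANCIAL_TOKENS + PUBLIC_TOKENS)
        return "financial" if rng.random() < p_financial else "public"

    rng = random.Random(0)
    draws = [sample_source(rng) for _ in range(10_000)]
    share = draws.count("financial") / len(draws)
    print(round(share, 2))  # close to 0.51, the financial share of tokens
    ```

    Weighting by token share (rather than document count) keeps the domain mix of the training stream aligned with the corpus sizes reported in the paper.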

     

  • OpenAI’s CEO Envisions a Universal Income Society to Compensate for Jobs Replaced by AI

    IBL News | New York

    Sam Altman, the CEO of OpenAI, an organization that has moved at record speed from a small research nonprofit to a multibillion-dollar company with the help of Microsoft, revealed his contradictions in an interview with The Wall Street Journal last week.

    He is portrayed as an entrepreneur who made a fortune investing in young startups, the owner of three mansions in California, and the head of a family office that now employs dozens to manage those properties along with investments in companies such as Worldcoin, Helion Energy, and Retro.

    Sam Altman said he fears what could happen if AI is rolled out into society recklessly, and he argues that it is uniquely dangerous to have profits be the main driver of developing powerful AI models.

    Meanwhile, he says his ultimate mission is to build AGI (artificial general intelligence), stating a goal of forging a new world order in which machines free people to pursue more creative work. In his vision, universal basic income will help compensate for jobs replaced by AI, and humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”

    In the long run, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology.

    “Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life,” writes The Wall Street Journal.

    “OpenAI’s headquarters — with 400 employees — in San Francisco’s Mission District evoke an affluent New Age utopia more than a nonprofit trying to save the world. Stone fountains are nestled amid succulents and ferns in nearly all of the sun-soaked rooms.”

    Elon Musk, one of OpenAI’s critics who co-founded the nonprofit in 2015 but parted ways in 2018 after a dispute over its control and direction, said that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.” 

    Billionaire venture capitalist Peter Thiel, a close friend of Mr. Altman’s and an early donor to the nonprofit, has long been a proponent of the idea that humans and machines will one day merge.

    Behind OpenAI there is a for-profit arm, OpenAI LP, that reports to the nonprofit parent.

    According to some employees, the partnership Sam Altman struck in 2019 with Satya Nadella, the Microsoft CEO, contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world. They saw the deal as a Faustian bargain.

    Microsoft initially invested $1 billion in OpenAI, which agreed to train its AI models exclusively on Microsoft’s giant computer servers via the Azure cloud service; in return, the tech giant obtained the sole right to license OpenAI’s technology for future products.

    Altman’s other projects include Worldcoin, a company he co-founded that seeks to give cryptocurrency to every person on earth.

    In recent years, he has put almost all his liquid wealth into two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to achieving “legitimate net-gain energy in a real demo,” Mr. Altman said.

    He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts.

     

  • Artificial Intelligence Enters a New Phase of Corporate Dominance

    IBL News | New York

    The 2023 AI Index [read in full here] — compiled by researchers from Stanford University along with organizations including Google, Anthropic, McKinsey, LinkedIn, and Hugging Face — suggests that AI is entering an era of corporate control, with industry players dominating academia and government in deploying and safeguarding AI applications.

    Decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players, as we’ve seen over the past years with AI tools, like ChatGPT, Bing, and image-generating software Midjourney, going mainstream.

    The report, released today, states: “Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.”

    Many experts in the AI world, mentioned by The Verge, worry that the incentives of the business world will also lead to dangerous outcomes as companies rush out products and sideline safety concerns.

    As AI tools become more widespread, errors and malicious use cases are increasing. Such incidents include fatalities involving Tesla’s self-driving software; the use of audio deepfakes in corporate scams; the creation of nonconsensual deepfake nudes; and numerous mistaken arrests caused by faulty facial recognition software.

  • TCRIL Changes Its Name to Axim Collaborative and Names a CEO

    IBL News | Cambridge, Massachusetts

    The MIT and Harvard non-profit organization — Center for Reimagining Learning (or “tCRIL”) — that handles the Open edX platform named its first CEO: Stephanie Khurana [in the picture]. She assumed her role on April 3.

    In parallel, the organization, which was started by the two universities with the $800 million in proceeds from the sale of edX Inc. to 2U, changed its name to Axim Collaborative.

    Axim Collaborative’s mission is to make learning more accessible, more relevant, and more effective.

    The name Axim (a hybrid of the two ideas) was selected to underscore the centrality of access and impact.

    Khurana brings two decades of experience in social venture philanthropy and in the technology innovation space. Most recently, she served as managing partner and chief operating officer of Draper Richards Kaplan Foundation, a global venture philanthropy that identifies and supports innovative social ventures tackling complex societal problems.

    Earlier in her career, Khurana was on the founding teams of two technology start-ups: Cambridge Technology Partners (CTP) and Surebridge, both of which went on to be sold.

    Khurana also served in numerous roles at Harvard University, working on initiatives to support academic progress and build communities of belonging with undergraduates.

    Stephanie Khurana introduced herself to Open edX community members at a town-hall-style session last Friday, March 31st, at the end of the annual developers conference.

    The gathering, held at MIT’s Stata Center in Cambridge, Massachusetts, last week, attracted over 250 attendees, a number similar to past editions.

    One of the stories of the event was the acquisition of the France-based company Overhang.IO, creator of the distribution tool Tutor. The Pakistani-American firm Edly purchased it for an undisclosed amount.

    Régis Behmo, the founder and sole developer of Overhang.IO, assumed the role of VP of Engineering at Edly.

    “Edly understands how contributing to open source creates value both for the company and for the whole edTech community. This partnership will help us drive this movement forward to serve learners and educators worldwide,” Behmo said.

    “Régis’s experience and leadership will be invaluable as we increase our impact on educational technology. In coming weeks and months, we’ll be making further announcements around our expanded roadmap for open source contributions to Open edX,” said Yasser Bashir, the founder and CEO of Arbisoft LLC, which operates Edly as its edTech brand.

  • Italy Bans ChatGPT While Elon Musk and 1,100 Signatories Call for a Pause on AI [Open Letter]

    IBL News | New York

    Italy’s data protection authority said on Friday it will immediately block OpenAI from processing Italian users’ data and open an investigation. The order is temporary until the company complies with the European Union’s landmark privacy law, the General Data Protection Regulation (GDPR).

    Italy’s ban on ChatGPT comes amid calls to block OpenAI’s releases over a range of privacy, cybersecurity, and disinformation risks in both Europe and the U.S.

    The Italian authority also noted that ChatGPT suffered a data breach last week that exposed users’ conversations and payment information.

    Moreover, ChatGPT has been shown producing completely false information about named individuals, apparently making up details its training data lacks.

    Consumer advocacy groups say that OpenAI is carrying out a “mass collection and storage of personal data to train the algorithms of ChatGPT” and is “processing data inaccurately.”

    This week, Elon Musk and dozens of AI experts called for a six-month pause on training systems more powerful than GPT-4.

    Over 1,100 signatories — including Steve Wozniak, Tristan Harris of the Center for Humane Technology, engineers from Meta and Google, and Stability AI CEO Emad Mostaque — signed an open letter, posted online, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

    • “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

    • “AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

    • “The pause should be public and verifiable, and include all key actors. If it cannot be enacted quickly, governments should step in and institute a moratorium.”

    • “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

    • “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

    No one from OpenAI or Anthropic signed the letter.

    On Wednesday, OpenAI CEO Sam Altman told the WSJ that OpenAI has not started training GPT-5.

    Pause Giant AI Experiments: An Open Letter:

    AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

    Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

    Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

    AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

    AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

    In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

    Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

     

  • Google Shows What AI-Embedded Writing Will Look Like in Gmail and Google Docs

    IBL News | New York

    Google announced yesterday that it plans to embed generative AI in Gmail and Google Docs, as shown in the video below.

    These features of this “collaborative AI partner” are not out yet. They will be launched via Google’s tester program, starting with English in the U.S., this month.

    “From there, we’ll iterate and refine the experiences before making them available more broadly to consumers, small businesses, enterprises, and educational institutions in more countries and languages,” wrote Johanna Voolich Wright, Vice President of Product at Google Workspace.

    For now, Google says it is only “sharing our broader vision” across Gmail, Docs, Slides, Sheets, Meet, and Chat.

    A “help me write” box in Gmail and Google Docs will let users type what they want,
    and the AI will generate a block of text based on that prompt. In addition, embedding Google’s “collaborative AI partner” into Workspace will enable these features:

    • draft, reply, summarize, and prioritize your Gmail
    • brainstorm, proofread, write, and rewrite in Docs
    • bring your creative vision to life with auto-generated images, audio, and video in Slides
    • go from raw data to insights and analysis via auto completion, formula generation, and contextual categorization in Sheets
    • generate new backgrounds and capture notes in Meet
    • enable workflows for getting things done in Chat

    Google Cloud also announced generative AI support in Vertex AI and Generative AI App Builder, helping businesses and governments build gen apps.

    So far, the company has opened up API access to a language model, but there hasn’t been any real consumer product launch.

    Analysts read this as Google scrambling in response to the rise of ChatGPT and AI-powered text. Just as Google put social features into every product back in the G+ days, the plan going forward is to build ChatGPT-style generative text into every Google product.

     

  • DuckDuckGo Unveils a Feature that Summarizes Information Using Generative AI

    IBL News | New York

    The privacy-focused search engine DuckDuckGo entered the generative AI race this week by announcing DuckAssist, a free AI-powered summarization feature that provides an instant answer rather than a chatbot.

    DuckAssist — in beta now and only available via apps and browser extensions — suggests natural-language answers in English when it recognizes a search query it can answer. When an AI-powered response is available, the user sees a magic wand icon with an “ask me” button in their search results.

    “If this DuckAssist trial goes well, we will roll it out to all DuckDuckGo search users in the coming weeks,” said Gabriel Weinberg, CEO of DuckDuckGo, in a blog post.

    DuckDuckGo says it’s drawing on natural language technology from OpenAI’s Davinci model and Anthropic’s Claude model, combined with its own indexing of Wikipedia — “99%+ is Wikipedia” — and occasionally related sites like the Encyclopedia Britannica, among other sources. The company also notes it is “experimenting” with the new Turbo model OpenAI recently announced.

    Although it’s imperfect, DuckDuckGo considers Wikipedia a relatively reliable source.


    Moreover, Gabriel Weinberg, CEO of DuckDuckGo, said:

    “Generative AI technology is designed to generate text in response to any prompt, regardless of whether it “knows” the answer or not. By asking DuckAssist to only summarize information from Wikipedia and related sources, the probability that it will “hallucinate” — that is, just make something up — is greatly diminished.”

    “In all cases though, a source link, usually a Wikipedia article, will be linked below the summary, often pointing you to a specific section within that article so you can learn more.”

    “Nonetheless, DuckAssist won’t generate accurate answers all of the time.”

    “DuckAssist may also make mistakes when answering especially complex questions, simply because it would be difficult for any tool to summarize answers in those instances.”
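    The source-grounded pattern Weinberg describes can be sketched as follows. This is purely illustrative, not DuckAssist's actual implementation: `summarize` is a stub standing in for a real LLM call, and the prompt wording is an assumption.

    ```python
    # Sketch of retrieval-constrained summarization: instead of asking
    # the model an open question, pass it retrieved source text and
    # instruct it to summarize only that text, reducing the chance of
    # fabricated ("hallucinated") answers.

    def build_prompt(question: str, source_text: str) -> str:
        """Constrain the model to the retrieved passage."""
        return (
            "Answer the question using ONLY the source below. "
            "If the source does not contain the answer, say so.\n"
            f"Source: {source_text}\n"
            f"Question: {question}"
        )

    def summarize(prompt: str) -> str:
        """Stub LLM: echoes the first sentence of the source."""
        source = prompt.split("Source: ")[1].split("\n")[0]
        return source.split(". ")[0] + "."

    wiki_passage = ("DuckDuckGo is a privacy-focused search engine. "
                    "It was founded in 2008.")
    prompt = build_prompt("What is DuckDuckGo?", wiki_passage)
    print(summarize(prompt))  # "DuckDuckGo is a privacy-focused search engine."
    ```

    Because the model is told to answer only from the supplied passage, its output stays traceable to the source link shown below each summary.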

     

  • Snapchat Introduces My AI, a ChatGPT-Powered Artificial Intelligence Bot Into Its App

    IBL News | New York

    Snap Inc. announced yesterday that it was introducing My AI, a ChatGPT-powered artificial intelligence bot, into its Snapchat app. The goal is to allow users to talk with the chatbot as they would with their human friends.

    The Chief Executive of Snap Inc, Evan Spiegel, said that My AI will first roll out to subscribers of the Snapchat+ service — which costs $3.99 a month — but he hopes it will ultimately become available to all Snapchat users.

    The chatbot has been trained to avoid swear words and sexually explicit content and to decline requests to write academic essays. Other than that, at launch, My AI is essentially just a fast mobile-friendly version of ChatGPT inside Snapchat.

    The company, whose Snapchat+ service has 2.5 million subscribers, has been aiming to diversify its revenue base beyond advertising.

    While ChatGPT — the fastest-growing consumer software product in history — has become a productivity tool, Snap’s implementation treats it like a persona, as shown in the picture below.

    The design suggests that My AI is another friend inside of Snapchat to hang out with, not a search engine.

    Snap is one of the first clients of OpenAI’s new enterprise tier called Foundry, which lets companies run its latest GPT-3.5 model with dedicated computing designed for large workloads.