Category: Top News

  • Elon Musk Hires Top Researchers to Build a “Good AGI (Artificial General Intelligence)”

    IBL News | New York

    This weekend, Elon Musk announced the launch of xAI, a new organization that will work closely with Twitter, Tesla, and other companies, with the mission of “building a good AGI (artificial general intelligence) with the purpose of understanding the true nature of the universe.”

    xAI will be led by Elon Musk, CEO of Tesla and SpaceX, along with veterans of DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. The team, shown in the picture above, has an impressive background in AI and is advised by the director of the Center for AI Safety.

    “We have worked on and led the development of some of the largest breakthroughs in the field, including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4,” said the company.

    In a Twitter Space discussion, Elon Musk said that xAI is being built as competition to OpenAI.

    “The goal is to make xAI a useful tool for consumers and businesses, and there is value in having multiple entities and competition.”

    “There is a significant danger in training AI to be politically correct or training it not to say what it thinks is true, so at xAI they will let the AI say what it believes to be true, and it will result in some criticism.”

    “Tesla’s self-driving capabilities will be enhanced because of xAI.”

    “Ray Kurzweil’s prediction of AGI by 2029 is pretty accurate, give or take a year.”

    “I have seen no evidence of aliens whatsoever so far, and it’s possible that other consciousness may not exist in our galaxy.”

  • OpenAI Will Access the AP News Agency’s Text Archive to Train Its Systems

    IBL News | New York

    The Associated Press (AP) reached a two-year deal with OpenAI last week to share access to select news content and technology.

    By striking a deal with OpenAI, AP aims to become an industry leader in developing standards and best practices around generative AI for other newsrooms, which are primarily its customers.

    As part of the deal, OpenAI will license some of the AP’s factual text archive dating back to 1985 to help train its AI systems.

    The AP will have access to OpenAI’s technology and product expertise.

    Earlier this year, AP launched an AI-enabled search tool that makes it easier for its clients to access its vast trove of photos and videos using descriptive language rather than traditional metadata.

    The AP does not yet use generative AI in its news stories.

  • Google’s Bard Allows Users to Listen to Responses and Adds Chats’ Activity

    IBL News | New York

    Google’s Bard chatbot unveiled new features yesterday, allowing users to listen to responses, change the tone (simple, long, short, professional, or casual) and style (shorten/formalize, etc.), and use over 40 languages, including Arabic, Chinese, German, Hindi, and Spanish.

    The company added other familiar features, like a recent-chats sidebar with pinning and renaming of conversations, and the ability to share chats with other people.

    Another update lets users include images in prompts: users can upload an image alongside a prompt, and Bard will analyze the photo to inform its response.

    A new feature will allow exporting Python code to Replit in addition to Google Colab.



  • Anthropic Releases a New Version of Its AI Chatbot ‘Claude 2’

    IBL News | New York

    Anthropic this week announced a new version of its AI chatbot named Claude 2, available in beta at claude.ai in the U.S. and U.K. It also can be accessed via API.

    This AI company — which competes with OpenAI as well as startups such as Cohere and AI21 Labs — said that Claude 2 has longer memory and improved performance in coding, math, and reasoning.

    Like the old Claude (Claude 1.3), Claude 2 can search across documents, summarize, write and code, and answer questions about particular topics.

    “Our latest model scored 76.5% on the multiple-choice section of the Bar exam, up from 73.0% with Claude 1.3. When compared to college students applying to graduate school, Claude 2 scores above the 90th percentile on the GRE reading and writing exams, and similarly to the median applicant on quantitative reasoning.”

    “Think of Claude as a friendly, enthusiastic colleague or personal assistant who can be instructed in natural language to help you with many tasks,” added the company.

    Users can input up to 100K tokens in each prompt, which means that Claude can work over hundreds of pages of technical documentation or even a book. Claude can now also write longer documents, from memos to letters to stories up to a few thousand tokens.

    This context window is the largest of any commercially available model. Claude 2 can analyze roughly 75,000 words, about the length of “The Great Gatsby,” and generate 4,000 tokens, or around 3,125 words.
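    The token-to-word figures above can be sketched as back-of-the-envelope arithmetic. The 0.75 and 0.78125 words-per-token ratios are assumptions implied by the article's numbers (100K tokens ~ 75,000 words; 4,000 tokens ~ 3,125 words); real ratios vary with the text being tokenized.

```python
# Back-of-the-envelope conversion between a token budget and a word count,
# using ratios implied by the article's figures (these are assumptions).

def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * words_per_token)

print(tokens_to_words(100_000))         # 75000 words: roughly a novel
print(tokens_to_words(4_000, 0.78125))  # 3125 words of generated output
```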

    Currently, Claude powers Jasper AI, Sourcegraph’s coding assistant Cody, Poe, Notion, and DuckDuckGo’s Duck Assist.

    Google is among Anthropic’s investors, having pledged $300 million for a 10% stake in the startup. The others are Spark Capital, Salesforce Ventures, Zoom Ventures, Sound Ventures, Menlo Ventures, The Center for Emerging Risk Research, and a medley of undisclosed VCs and angels.

    To date, Anthropic, which launched in 2021 and is led by former OpenAI VP of research Dario Amodei, has raised $1.45 billion at a valuation in the single-digit billions. The company estimates it will need $5 billion over the next two years, mostly for computing power in the form of tens of thousands of GPUs to train its models.

    In addition to its own API, Anthropic plans to make Claude 2 available through Bedrock, Amazon’s generative AI hosting platform, in the coming months.

    https://vimeo.com/844014740/141c021b45

    https://vimeo.com/844019370/d1e5de8aa0

  • Axim Collaborative Releases Palm, the 16th Version of the Open edX Platform

    IBL News | New York

    Axim Collaborative — MIT’s and Harvard University’s non-profit organization that manages the Open edX software and its community — released the 16th version of the platform, called Palm.

    This release spans changes in the code of the edX platform — used at edx.org — from October 11, 2022, to April 11, 2023.

    To date, Open edX releases have been Olive, Nutmeg, Maple, Lilac, Koa, Juniper, Ironwood, Hawthorn, Ginkgo, Ficus, Eucalyptus, Dogwood, Cypress, Birch, and Aspen.

    In Palm, the minimum required versions will be Docker v20.10.15 and Compose v2.0.0. Ecommerce now supports the new Stripe Payment Intents API and no longer uses the Stripe Charges API.

    Palm includes discussion improvements, with posts streamlined, allowing users to see more information at once. In addition, comments and responses can now be sorted in reverse order.

    The iOS and Android apps are seeing an update on the dashboard, header, and course navigation.

    The release notes list additional breaking changes.

  • Meta’s Latest Social App ‘Threads’ Reaches 100 Million Users

    IBL News | New York

    Instagram’s text-based app Threads, a Twitter rival, announced it crossed the milestone of 100 million active users less than a week after its launch on July 5.

    Until now, OpenAI’s ChatGPT had the distinction of being the fastest-growing consumer product by achieving 10 million daily users in 40 days and 100 million monthly users in two months.

    Mark Zuckerberg’s new text-focused social platform still lacks some features: the app has a read-only web interface and no support for post search, direct messages, hashtags, or a “Following” feed.

    Threads, Meta’s new Twitter clone, is deeply tied into Instagram. Instagram accounts now display a Threads user number so the counting is both transparent and happening in real time.

    With Twitter in trouble — and its owner Elon Musk developing a controversial strategy — there’s a massive appetite for a replacement as Mastodon and Bluesky didn’t massively scale.

  • OpenAI’s New Plug-In ‘Code Interpreter’ Allows Anyone to Be a Data Analyst

    IBL News | New York

    OpenAI took its own in-house plug-in Code Interpreter and made it available to all of its ChatGPT Plus subscribers this week.

    Code Interpreter lets ChatGPT run code, optionally with access to files (up to 100MB in size) that the user has uploaded. The user can ask ChatGPT to analyze data, create charts, edit files, perform math, and more. The tool writes its code in Python and can manipulate files.

    It means that Code Interpreter can generate charts, maps, data visualizations, and graphics, analyze music playlists, create interactive HTML files, clean datasets, and extract color palettes from images. The interpreter unlocks a myriad of capabilities, making it a powerful tool for data visualization, analysis, and manipulation.
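    A minimal sketch of the kind of task Code Interpreter automates behind a chat prompt: load a small CSV, drop rows with missing values, and summarize the result. The dataset and column names here are purely illustrative.

```python
# Illustrative data-cleaning task of the sort Code Interpreter performs:
# parse a (hypothetical) CSV, discard incomplete rows, compute a summary.
import csv
import io
import statistics

raw = io.StringIO("month,revenue\nJan,100\nFeb,\nMar,130\nApr,150\n")

rows = list(csv.DictReader(raw))
# Keep only rows with a revenue figure, converting the strings to numbers.
revenues = [float(r["revenue"]) for r in rows if r["revenue"]]

print(len(revenues))                        # 3 rows survive cleaning
print(round(statistics.mean(revenues), 2))  # 126.67
```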

    By unlocking its most powerful feature since GPT-4, OpenAI now allows anyone to be a data analyst, as expert Linas Bellunas showed by presenting “10 mind-blowing examples” of what’s possible with Code Interpreter. “The future of data science has changed forever,” he explained.

    Code Interpreter can operate at an advanced level by automating complex quantitative analyses, merging and cleaning data, and even reasoning about data in a human-like manner.

    The AI can produce visualizations and dashboards, which users can then refine and customize simply by conversing with the AI. Its ability to create downloadable outputs adds another layer of usability to Code Interpreter.

    Experts agree that Code Interpreter is setting a new standard for the future of AI and data science. With this tool, OpenAI is pushing the boundaries of ChatGPT and large language models (LLMs) generally yet again.

  • OpenAI Makes the GPT-4 API Generally Available and Deactivates the Bing Plug-In

    IBL News | New York

    OpenAI this week announced through a blog post that all paying API customers will have access to GPT-4 by the end of this month. GPT-3.5 Turbo, image-generating model DALL·E, and speech-to-text model Whisper APIs are also generally available.

    “Today all existing API developers with a history of successful payments can access the GPT-4 API with 8K context; we plan to open up access to new developers by the end of this month, and then start raising rate limits after that depending on compute availability,” said OpenAI.

    Applications using the stable model names for base GPT-3 models (ada, babbage, curie, davinci) will automatically be upgraded to the new models on January 4, 2024.

    Developers using the old models will have to manually upgrade their integrations by that date.
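    A hedged sketch of a pre-flight check an application might run before the January 4, 2024 cutover: flag configurations that still reference the legacy GPT-3 base-model names listed above. The helper name is illustrative, not part of any OpenAI SDK.

```python
# Flag deprecated GPT-3 base-model names ahead of the January 4, 2024
# upgrade. The model set comes from the article; the helper is hypothetical.
LEGACY_BASE_MODELS = {"ada", "babbage", "curie", "davinci"}

def needs_migration(model_name: str) -> bool:
    """Return True if the configured model is a deprecated GPT-3 base model."""
    return model_name in LEGACY_BASE_MODELS

print(needs_migration("davinci"))  # True: integration must be upgraded
print(needs_migration("gpt-4"))    # False: unaffected by the cutover
```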

    GPT-4 can generate text and code and accept image and text inputs — an improvement over GPT-3.5, its predecessor, which only accepted text.

    Like previous GPT models from OpenAI, GPT-4 was trained using publicly available data, including from public webpages, as well as data that OpenAI licensed.

    However, GPT-4 isn’t perfect. It hallucinates facts and makes reasoning errors, sometimes with confidence. It doesn’t learn from experience, and it can fail at hard problems in the same ways humans do, such as by introducing security vulnerabilities into the code it generates.

    OpenAI said that later this year, it will allow developers to fine-tune GPT-4 and GPT-3.5 Turbo, with their own data.

    Also, OpenAI announced the deactivation of the browsing capability with Bing after it launched the feature for Plus subscribers a few weeks ago.

    “We’ve learned that the browsing beta can occasionally display content in ways we don’t want, e.g. if a user specifically asks for a URL’s full text, it may inadvertently fulfill this request. We are temporarily disabling Browse while we fix this.”

    Several users illustrated on Reddit that they were able to bypass paywalls using ChatGPT by prompting the tool to print the text of an article behind a paywall, and they went on to share other ingenious tips and tricks in the thread.

    It’s unclear when OpenAI will restore the feature.

    Another recent announcement from OpenAI refers to the creation of a new team led by its chief scientist to steer and control “superintelligent” AI systems that could arrive within the decade.

    The prediction is that AI’s intelligence will exceed that of humans.

  • Class Technologies Will Release Its AI Teaching Assistant

    IBL News | New York

    Class.com announced it would release its ChatGPT API-based Teaching Assistant later this year to improve learner engagement, focus, and outcomes in live online courses.

    The chatbot will provide answers based on what was taught in class, highlight the transcript of spoken text, add details, provide a study guide, and supplement instructional materials.

    Class.com’s tool will include the option of turning it on or off in courses, at the instructor’s choice.

    “Class will work closely with the education community to develop best practices and policies for the use of AI in the classroom,” said Michael Chasen, CEO of the company.

    Focused on online synchronous learning, Class’ platform, built on Zoom, claims to serve 1,500+ institutions worldwide with 10M+ users.

  • Andreessen Horowitz: “Predicting the Generative AI Market Is Hard”

    IBL News | New York

    Generative AI is getting real traction from real companies: models like Stable Diffusion and ChatGPT are setting historical records for user growth, and several applications in image generation, copywriting, and code writing have exceeded $100 million in annualized revenue.

    • Infrastructure vendors are the biggest winners in this market so far, capturing the majority of dollars.

    • Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. Many apps are also relatively undifferentiated since they rely on similar underlying AI models and haven’t discovered obvious network effects, or data/workflows, that are hard for competitors to duplicate.

    • Most model providers, though responsible for the very existence of this market, haven’t yet achieved a large commercial scale. However, given the huge usage of these models, large-scale revenues may not be far behind.

    This is what the investors of Andreessen Horowitz have observed after meeting with dozens of startup founders and operators in large companies.

    “Predicting what will happen next is much harder. But we think the key thing to understand is which parts of the stack are truly differentiated and defensible,” states the company.

    “The first wave of generative AI apps are starting to reach scale, but struggle with retention and differentiation.”

    This is Andreessen Horowitz’s preliminary view of the generative AI tech stack.

    It’s estimated that 10-20% of total revenue in generative AI today goes to the big three clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

    The biggest winner in generative AI so far is Nvidia. The company reported $3.8 billion of data center GPU revenue in the third quarter of its fiscal year 2023, including a meaningful portion for generative AI use cases.

    Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, Sambanova, and Graphcore. Intel, late to the game, is also entering the market with its high-end Habana chips and Ponte Vecchio GPUs.

    “Models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.”