Category: Top News

  • OpenAI Announced GPT-4 Turbo, GPTs, and Assistants API, Among Other Improvements [Video]

    OpenAI Announced GPT-4 Turbo, GPTs, and Assistants API, Among Other Improvements [Video]

    IBL News | San Francisco

    OpenAI shared yesterday several new additions and improvements, including GPT-4 Turbo, an improved version of its flagship model, during its first DevDay conference in San Francisco. [OpenAI’s CEO in the picture above].

    The company also introduced GPTs, which allow developers to create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.

    “Anyone can easily build their own GPT — no coding is required. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images, or analyzing data,” explained OpenAI.

    Example GPTs are available today for ChatGPT Plus and Enterprise users, including Canva and Zapier AI Actions.

    More innovations announced at DevDay included:

    • New GPT-4 Turbo with a 128k context window, equivalent to more than 300 pages in text in a single prompt.GPT-4 Turbo has knowledge of world events up to April 2023.OpenAI is offering it at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.It will be a stable, production-ready model in the coming weeks.
    • New Assistants API, which is intended to make it easier for developers to build their own assistive AI apps.
    • New multimodal capabilities in the platform, including vision and image creation (DALL·E 3). Developers can now generate human-quality speech from text via the text-to-speech API.
    • Release of Whisper large-v3, the next version of OpenAI’s open-source automatic speech recognition model (ASR)
    • Open-sourcing the Consistency Decoder, a drop-in replacement for the Stable Diffusion VAE decoder.

    Official Press Release to the Media from OpenAI:

    A few key stats we announced on stage as it’s been a big year for OpenAI:

    • We have more than 2 million developers building on our API for a wide range of use cases.
    • Over 92% of Fortune 500 are building on our products.
    • And we have about 100M weekly active users on ChatGPT.

    Introducing GPTs: 

    We’re introducing GPTs – custom versions of ChatGPT. Anyone can easily build GPTs to help with specific tasks, at work, or at home. We think GPTs take a first step towards an agent-like future. For third-party developers, we’re showing them how to build these agent-like experiences into their own apps as well. Example GPTs are available today for ChatGPT Plus and Enterprise users to try out including Canva and Zapier AI Actions.

    New models and developer products announced at DevDay, including

    • ChatGPT gets a new UI.
    • GPT-4 Turbo: a new model that includes longer context length, better world knowledge because we’re updating the cutoff to April 2023, and other improvements.
    • New Assistants API makes it easier for developers to build their own GPT-like experiences into their own apps and services.
    • New modalities to the API, including vision, DALL·E 3, and text-to-speech with six preset voices to choose from.
    • Dropping the price of all of our models across the board so it’s easier for developers to build and scale on our platform.

    See details in our blog posts, GPTs, and new models/products, for more info. Press images are here, and we’ll add more throughout the day.

  • Legal and Compliance Risks that ChatGPT Presents to Organizations, According to Gartner

    Legal and Compliance Risks that ChatGPT Presents to Organizations, According to Gartner

    IBL News | New York

    The output generated by ChatGPT and other LLMs presents legal and compliance risks that every organization has to face or face dire consequences, according to the consultancy firm Gartner, Inc, which has identified six areas.

    “Failure to do so could expose enterprises to legal, reputational, and financial consequences,” said Ron Friedmann, Senior Director Analyst at Gartner Legal & Compliance Practice.

    • Risk 1: Fabricated and Inaccurate Answers

    ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann.

    Only accurate training of the robot with limited sources will mitigate this tendency to provide incorrect information.

    •  Risk 2. Data Privacy and Confidentiality

    Sensitive, proprietary, or confidential information used in prompts may become a part of its training dataset and incorporated into responses for users outside the enterprise if chat history is not disabled,

    “Legal and compliance need to establish a compliance framework and clearly prohibit entering sensitive organizational or personal data into public LLM tools,” said Friedmann.

    • Risk 3. Model and Output Bias

    “Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias and make sure their guidance is compliant,” said Friedmann.

    “This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls,” he added.

    • Risk 4.  Intellectual Property (IP) and Copyright risks

    As ChatGPT is trained on a large amount of internet data that likely includes copyrighted material, its outputs – which do not offer source references – have the potential to violate copyright or IP protection.

    “Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”

    • Risk 5. Cyber Fraud Risks

    Bad actors are already using ChatGPT to generate false information at scale, like fake reviews, for instance.

    Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which

    A hacking technique known as “prompt injection” brings criminals to write malware codes or develop phishing sites that resemble well-known sites.

    “Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann.

    • Risk 6. Consumer Protection Risks

    Businesses that fail to disclose that they are using ChatGPT as a customer support chatbot run the risk of being charged with unfair practices under various laws and face the risk of losing their customers’ trust.

    For instance, the California chatbot law mandates that in certain consumer interactions, organizations must disclose that a consumer is communicating with a bot.

    Legal and compliance leaders need to ensure their organization’s use complies with regulations and laws.
    .

  • Brave Launches ‘Leo’,  Its AI Assistant Based in Llama 2 and Claude

    Brave Launches ‘Leo’, Its AI Assistant Based in Llama 2 and Claude

    IBL News | New York

    The Brave web browser released its AI-powered assistant, Leo, to all desktop users with version 1.60.

    The chatbot is intended to get questions answered, summarize pages or videos, translate text, and rewrite phrases, among other uses. The Android and iOS app is expected in the coming months.

    The company is also releasing a $15 per month paid version called Leo Premium “with features like access to faster and better large language models (LLMs) and higher-rate limits.”

    Users can access the Leo assistant by clicking the Leo icon in the sidebar or by typing a question in the address bar and clicking the Leo icon to get a direct answer.

    Leo is based on Llama 2 and Anthropic’s Claude LLMs. While free users get the basic version of these models, paying users will get access to models like Llama 2 70B, Code Llama 70B, and Anthropic Claude Instant. These models enable faster and more accurate responses.

    Brave said that all requests to Leo use an anonymous server as a proxy, so they can’t be linked back to a particular IP.  Additionally, the company specified that responses are immediately discarded after generation, and not stored on any server or used to train models. Brave noted that all subscriptions are validated by unlinkable tokens, so the company can’t know about your activity or your email.

    Other browsers like Opera, Microsoft Edge, SigmaOS, and Browser Company’s Arc have also introduced AI assistants in the sidebar.

    Brave, which laid off 9% of its staff in October, launched in May its own search API for clients, with prices starting from $3 per 1,000 queries.

     

  • Israeli Start-Up D-ID Releases Its App that Creates Talking Video Avatars

    Israeli Start-Up D-ID Releases Its App that Creates Talking Video Avatars

    IBL News | New York

    The Israeli start-up D-ID.com launched its iOS and Android apps, which allow users to upload a still image and script and turn it into an AI-generated video.

    Originally available as a web platform, this technology, a mix of proprietary and open-source software, is being used to create digital representations of people, including fictional characters, presenters, or brand ambassadors.

    Like the web version, the app features premade digital characters that D-ID provides or uploads an image from the user’s phone’s photo library. The videos can be up to 10 minutes in length.

    This mobile service is subscription-based, with plans starting at $5.99 per month.

    “At its core lies a foundational model capable of generating video frames based on audio input. All its products are powered by its robust API with the ability to render video at an industry-leading 100 FPS, four times faster than real-time rendering,” Gil Perry, CEO of D-ID, said to TechCrunch.

    D-ID, which raised a $25 million Series B last year, claims that over 150 million videos have been made using the platform.
    .

  • The Poe Chatbot Platform Offers Developers The Ability To Generate Revenue with Bots

    The Poe Chatbot Platform Offers Developers The Ability To Generate Revenue with Bots

    IBL News | New York

    Poe.com launched this week a revenue generation program for developers who create prompt and server bots on this chatbot platform. The service is currently available to US residents.

    Developers can write code and integrate it with Poe’s API to benefit from Poe’s monetization structure.

    When a bot causes a user to subscribe to Poe, the company shares a cut of the revenue they pay.

    In the near future, the user will be able to set a per-message fee.

    “With this latest launch, we believe we are fulfilling our goals for Poe to greatly reduce the amount of work needed for any AI developer to reach a large audience of users,” said the company.

    Since last February, Poe has delivered the ability to build on top of other bots without needing to pay for access. It has also offered a variety of features, such as threading, file uploading, and image generation.

    “With this, we hope Poe unlocks a thriving economy with a wide diversity of AI products. We expect all kinds of bots to do well, across areas like tutoring, knowledge, therapy, entertainment, assistants, analysis, storytelling, roleplay, and image, video, music, and other media generation. Since this is the beginning of a new market, there are lots of opportunities to provide a valuable service for the world and make money at the same time.”

    Poe will host a hackathon at AGI house on 11/4 for people in the San Francisco Bay Area “to experiment with creating bots that will be monetized.” 

    • Poe’s Discord.

  • Gradient Raised $10 Million to Compete in the Custom-Tailored LLMs Segment

    Gradient Raised $10 Million to Compete in the Custom-Tailored LLMs Segment

    IBL News | New York

    Gradient, a Burlingame, California – based start-up that allows developers to build and customize AI apps using LLMs, announced last month it raised $10 million in seed funding led by Wing VC, with participation from Mango Capital, Tokyo Black, The New Normal Fund, Secure Octane, and Global Founders Capital.

    The Gradient platform hosts a number of open-source LLMs — including Meta’s Llama 2, which users can scale and fine-tune to their needs — and tools, such as Hugging Face, LangChain, LlamaIndex, and Pinecone.

    Gradient also offers proprietary healthcare, finance, and law LLMs that customers can use to solve domain-specific problems, like data reconciliation, context-gathering, and paperwork processing.

    With a workforce of 20 employees, Gradient can host and serve models through an API à la Hugging Face, CoreWeave, and other AI infrastructure providers. Or it can deploy AI systems in an organization’s public cloud environment, whether Google Cloud Platform, Azure, or AWS.

    In either case, customers maintain full ownership and control over their data and trained models. Its solution is SOC 2, HIPAA, and GDPR compliant.

    “We’ve seen that the vast majority of businesses understand the value AI can bring to their business, but struggle to realize the value due to the complexity of adoption. Our platform radically simplifies harnessing AI for a business, which is a tremendous value-add,” said the company. 

    Other companies that have emerged in the space of building custom-tailored LLM-powered apps that benefit from the massive influx of capital are Reka, Writer, Contextual AI, Fixie, Cohere, and LlamaIndex.

    Nearly a fifth of total global VC funding this year has come from the AI sector alone. PitchBook expects the generative AI market to reach $42.6 billion in 2023.

    Beyond these start-ups, OpenAI offers a range of model fine-tuning tools, as do incumbents like Google (via Vertex AI), Amazon (via Bedrock), and Microsoft (via the Azure OpenAI Service).
    .

  • Biden Issued an Executive Order Directing Agencies to Develop AI Safety Guidelines

    Biden Issued an Executive Order Directing Agencies to Develop AI Safety Guidelines

    IBL News | New York

    U.S. President Joe Biden signed yesterday an Executive Order that establishes new standards for Generative AI safety, security, and privacy ahead of any legislation coming from lawmakers.

    Biden’s Executive Order responds to the global debate around the need for guardrails to counter the potential pitfalls of giving over too much control to AI systems.

    It builds on the work that led to voluntary commitments from 15 leading companies to reduce the risks of AI.

    The order will:

    • “Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government before companies make them public.
    • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. “The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.”
    • Protect against the risks of using AI to engineer dangerous biological materials.
    • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. “The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”
    • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
    • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff.”

    “Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids,” said the White House.

    Now, the Biden-Harris Administration plans to pursue with Congress to pass bipartisan data privacy legislation by preserving AI development techniques.

    Meanwhile, Europe is on the cusp of passing the first extensive AI regulations.
    .
  • OpenAI Released Advanced Versions of DALL·E 3 and ChatGPT-4

    OpenAI Released Advanced Versions of DALL·E 3 and ChatGPT-4

    IBL News | New York

    OpenAI released and made it available its image model DALL-E 3 to ChatGPT Plus and Enterprise users. DALL-E 3 can create unique, crisper-in-detail images from a simple conversation, providing a selection of visuals for users to refine and iterate upon.

    This model can render intricate details, including text, hands, and faces. It also responds efficiently to extensive and detailed prompts, and it supports both landscape and portrait aspect ratios, as explained in this research paper.

    DALL·E 3 avoids any harmful imagery, including violent, sexual, or hateful content.

    This model is designed to decline requests that ask for an image in the style of a living artist.

    OpenAI offers the option for creators to opt their images out from training of their future image generation models.

    In addition, ahead of the upcoming OpenAI’s DevDay conference next week, where the company is expected to explore new tools with developers, the San Francisco–based research lab released a multimodal version of ChatGPT-4 that allows users to upload and analyze PDFs and various document types.

    The GPT-4 All Tools includes advanced data analysis, DALL·E 3, and built-in browsing capabilities without the need for plugins. These new features may make many third-party ChatGPT plugins obsolete.

    Microsoft’s Bing and Designer also added a more advanced version of DALL·E 3.

    This development pushes the boundaries of generative AI capabilities beyond text-based queries.

     

     


    In other news, OpenAI announced it built a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today’s models to AGI.
    .

     

  • Class.com Expands Its Virtual Learning Platform to Microsoft Teams

    Class.com Expands Its Virtual Learning Platform to Microsoft Teams

    IBL News | New York

    Class.com, the learning platform exclusively built on Zoom since its release in late 2020, launched this month its virtual classroom solution on Microsoft Teams video conferencing platform.

    Class for Teams offers a similar experience to the existing Class for Zoom, with many of the same features and functionalities.

    “With the release of Class for Teams, we can bring Class to even more organizations who are already using Teams, expand the impact of Class, and improve teaching and learning for more individuals around the world,” said the company.

    Class worked together with Microsoft to build Class for Teams.
    .

  • “Open-Source AI Is Taking Over the World,” Says a Key Guide

    “Open-Source AI Is Taking Over the World,” Says a Key Guide

    IBL News | New York

    “Open-source AI represents the future of privacy and ownership of data.” This is the main conclusion of the 2023 State of Open Source AI Book.

    Another key finding is that “open-source AI is taking over the world.”

    In the last year, the open-source community has demonstrated its motivation by delivering quality products and creating different innovations in different fields.

    “This is just the beginning. Many improvements in multiple directions must be made in order to compare the results with centralized solutions,” says the report.

    Experts say that, as it happened with Linux, the world-class operating system, open source will dominate the future of LLMs and image models. Even Google acknowledged that they have no moat in this new world of open source AI.

    The consensus is that open source models are incredibly good at the most valuable tasks, as they can be fine-tuned to cover likely up to 99% of use cases when a product has collected enough labeled data.