Author: IBL News

  • Meta Announced the Release of SeamlessM4T, an AI Model that Translates 100 Languages

    IBL News | New York

Meta this month announced the release of SeamlessM4T, an open-source AI model that can translate and transcribe 100 languages across text and speech.

It’s available along with a new translation dataset named SeamlessAlign. According to Meta, this is a “significant breakthrough” in the field of AI-powered speech-to-speech and speech-to-text translation.

    “Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta said to TechCrunch.

    Several companies, such as Google, Amazon, Microsoft, OpenAI, and a number of startups, are investing resources in developing sophisticated AI translation and transcription tools.

    Google is creating a “Universal Speech Model”, a model that can understand the world’s 1,000 most-spoken languages.

    Mozilla, meanwhile, spearheaded Common Voice, one of the largest multi-language collections of voices for training automatic speech recognition algorithms.

  • OpenAI Brings Fine-Tuning to Its GPT-3.5 Turbo

    IBL News | New York

OpenAI made fine-tuning for GPT-3.5 Turbo available to users.

    According to the company, fine-tuned versions of GPT-3.5 can match, or even outperform, the base capabilities of GPT-4, the company’s flagship model, on “certain narrow tasks.”

According to OpenAI, data sent in and out of the fine-tuning API, as with all of its APIs, is owned by the customer and not used by OpenAI to train models.

    In addition to increased performance, fine-tuning also enables businesses to shorten their prompts while ensuring similar performance.

Fine-tuning with GPT-3.5 Turbo can also handle 4k tokens, double the capacity of OpenAI’s previous fine-tuned models.

    Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs, according to OpenAI.

    Fine-tuning costs are as follows:

    • Training: $0.008 / 1K tokens
    • Usage input: $0.012 / 1K tokens
    • Usage output: $0.016 / 1K tokens

    Fine-tuning is most powerful when combined with other techniques such as prompt engineering, information retrieval, and function calling.
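The pricing above makes the economics easy to estimate. As a rough illustration (the function and token counts here are hypothetical, not from OpenAI's documentation), the total cost of a fine-tuning project can be sketched from the listed per-1K-token prices:

```python
# Rough cost estimator for GPT-3.5 Turbo fine-tuning, using the per-1K-token
# prices listed above: training $0.008, usage input $0.012, usage output $0.016.
# The function name and example token counts are illustrative.

PRICE_PER_1K = {"training": 0.008, "input": 0.012, "output": 0.016}

def fine_tuning_cost(training_tokens: int, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated total cost in USD for a fine-tuning project."""
    cost = (
        training_tokens * PRICE_PER_1K["training"]
        + input_tokens * PRICE_PER_1K["input"]
        + output_tokens * PRICE_PER_1K["output"]
    ) / 1000  # prices are quoted per 1,000 tokens
    return round(cost, 4)

# Example: a 100K-token training file, then 500K input / 200K output usage tokens.
print(fine_tuning_cost(100_000, 500_000, 200_000))  # → 10.0
```

Note that the 90% prompt-size reduction OpenAI reports from early testers would cut the input-token term directly, which is where much of the ongoing cost sits.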

    In other news, OpenAI today made available two updated GPT-3 base models (babbage-002 and davinci-002), which can be fine-tuned as well.

OpenAI said that fine-tuning support for GPT-4 — which, unlike GPT-3.5, can understand images in addition to text — will arrive sometime later this fall, but didn’t say exactly when.

    • OpenAI’s fine-tuning guide.

     

  • McKinsey Introduces ‘Lilli’, Its AI Chat Application For Employees and Clients

    IBL News | New York

McKinsey & Company this month unveiled its generative AI chatbot, named Lilli, designed to summarize key points and provide relevant content to its partner consultants and clients.

    Lilli has been in beta since June 2023, used by 7,000 employees as a “minimum viable product” (MVP), answering 50,000 questions. It will be rolling out across McKinsey this fall.

The chat application is named after Lillian Dombrowski, the first woman McKinsey hired for a professional services role back in 1945.

    The tool accesses the firm’s extensive knowledge base, with over 100,000 documents, interview transcripts, and resources from 40 curated sources and experts in 70 countries.

    “Lilli aggregates our knowledge and capabilities in one place for the first time and will allow us to spend more time with clients activating those insights and recommendations and maximizing the value we can create,” said Erik Roth, a senior partner with McKinsey.

    “I use Lilli to look for weaknesses in our argument and anticipate questions that may arise,” said Adi Pradhan, an associate partner at McKinsey. “I also use it to tutor myself on new topics and make connections between different areas on my projects.”

    With 30,000 employees, McKinsey & Company is one of the largest consulting agencies in the world.

    With an interface similar to ChatGPT and Claude 2, Lilli contains an expandable left-hand sidebar with saved categorized prompts, according to a report in VentureBeat.

It includes two tabs that a user may toggle between: “GenAI Chat”, which sources data from a more generalized large language model (LLM) backend, and “Client Capabilities”, which sources responses from McKinsey’s corpus of documents, transcripts, and presentations. Lilli provides full attribution by citing its sources at the bottom of every response, along with links and even specific page numbers.
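The two-mode design described above — a general LLM chat alongside a corpus-grounded mode that cites sources with page numbers — can be sketched as follows. Everything here (names, the toy retrieval function, the sample corpus) is hypothetical and for illustration only; it is not McKinsey's actual implementation.

```python
# Hypothetical sketch of a two-mode assistant like Lilli: "GenAI Chat" answers
# from a general LLM, while "Client Capabilities" answers from an internal
# document corpus and cites each source with a page number.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_title: str
    page: int
    text: str

def retrieve(corpus: list, query: str, k: int = 2) -> list:
    """Toy keyword-overlap retrieval standing in for a real search backend."""
    scored = [(sum(w in p.text.lower() for w in query.lower().split()), p) for p in corpus]
    return [p for score, p in sorted(scored, key=lambda s: -s[0])[:k] if score > 0]

def answer(mode: str, query: str, corpus: list) -> str:
    if mode == "GenAI Chat":
        # General mode: would call a generic LLM backend (stubbed here).
        return f"[general LLM answer to: {query}]"
    # "Client Capabilities" mode: ground the answer in retrieved passages
    # and append a citation footer, as the article describes.
    hits = retrieve(corpus, query)
    citations = "\n".join(f"- {p.doc_title}, p. {p.page}" for p in hits)
    return f"[answer grounded in {len(hits)} passage(s)]\n\nSources:\n{citations}"

corpus = [Passage("Retail Growth Study", 12, "pricing strategy for retail expansion"),
          Passage("Energy Outlook", 7, "renewable capacity forecasts")]
print(answer("Client Capabilities", "retail pricing strategy", corpus))
```

The design choice worth noting is that the grounded mode constrains answers to retrieved passages and always emits its citation footer, which is what makes the page-level attribution possible.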

    McKinsey’s chatbot leverages models developed by Cohere and OpenAI on the Microsoft Azure platform, although the firm insists that its tool is “LLM agnostic” and is constantly exploring new LLMs.

    Report by McKinsey: The state of AI in 2023: Generative AI’s breakout year
    Paul Tocatilan on LinkedIn: Democratization of Mentors

  • Some Voices Face Disillusionment on ChatGPT and Generative AI

    IBL News | New York

    Fast-scaling and hyped technologies don’t invariably fulfill their promise, especially if they don’t generate significant revenue. In this regard, what if generative AI turns out to be a dud?

    Programmers and undergrads who use generative tools as code assistants and for writing text are not deep-pocketed.

This is the core idea defended in a viral article by Gary Marcus, Co-Founder of the Center for the Advancement of Trustworthy AI, who is well known for his testimony on AI oversight before the US Senate.

    “Neither coding nor high-speed, mediocre quality copy-writing is remotely enough to maintain current valuation dreams,” he wrote.

Other voices, like venture capitalist Benedict Evans, have raised similar concerns in a series of posts.

    “My friends who tried to use ChatGPT to answer search queries to help with academic research have faced similar disillusionment,” Gary Marcus stated. “A lawyer who used ChatGPT for legal research was excoriated by a judge, and basically had to promise, in writing, never to do so again in an unsupervised way.”

“In my mind, the fundamental error that almost everyone is making is in believing that Generative AI is tantamount to AGI (general purpose artificial intelligence, as smart and resourceful as humans if not more so),” he added.

His doubts are rooted in generative AI’s unsolved tendency to confabulate (hallucinate) false information.

    AI researchers still cannot guarantee that any given system will be honest, harmless, and non-biased, as Sam Altman, the CEO of OpenAI [in the picture], recently said.

    “If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year,” Gary Marcus predicted.

  • The U.S. Air Force’s Stealthy XQ-58A Valkyrie Drones Are Successfully Driven by AI

    IBL News | New York

The U.S. Air Force has implemented AI agents on advanced drones like the XQ-58A Valkyrie. Autonomous air combat operations appear to be within reach.

The machine-learning and artificial intelligence algorithms in this uncrewed jet aircraft’s AI brain were trained millions of times in simulated environments before being put to the test in reality.

The first-ever flight was completed successfully at the Wright-Patterson Air Force Base in Ohio on July 25, 2023.

An F-15E Strike Eagle from the 96th Test Wing’s 40th Flight Test Squadron at Eglin AFB, Florida, flies in formation with an XQ-58A Valkyrie flown by artificial intelligence agents developed by the Autonomous Air Combat Operations (AACO) team from AFRL. The algorithms matured over millions of hours in high-fidelity AFSIM simulation events, 10 sorties on the X-62 VISTA, hardware-in-the-loop events with the XQ-58A, and ground test operations. (U.S. Air Force photo)

“This sortie officially enables the ability to develop AI/ML agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to other autonomy programs,” said Col. Tucker Hamilton, chief of AI Test and Operations for the Department of the Air Force.

    “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions,” said Brig. Gen. Scott Cain, AFRL commander. “AI, Autonomous Operations, and Human-Machine Teaming continue to evolve at an unprecedented pace, and we need the coordinated efforts of our government, academia, and industry partners to keep pace.”

  • Legal Services, Filmmaking, and Coding Are Among the Industries Most Impacted by AI

    IBL News | New York

    Generative AI technology is currently shaking up at least three industries: legal services, filmmaking, and coding. The Financial Times wrote a report analyzing the impact of AI tools.

In legal services, firms are regularly using AI technology, though adoption is still in its early days, with Harvey as the main provider. They consider it a productivity gain and a time-saving tool, especially for tasks assigned to junior staff.

    Rather than replacing jobs, experts say AI could intensify work.

    In the filmmaking industry, the impact of AI can be seen in the reduction of work for screenwriters, adoption of AI dubbing technology in foreign-language films, and “digital doubles” for actors.

    Therefore, screenwriters fear that book adaptations or first drafts can be written by AI, while actors worry about losing control of their images, and voice dubbers are concerned that new AI technologies matching mouth movements to different languages will eliminate their jobs.

    For example, two start-ups, Flawless and Papercup, have developed tools that use AI to automate the translation and dubbing process.

    In the software industry, generative AI can suggest lines of code that programmers can run and test. It can also analyze existing code and search for vulnerabilities. The consensus is that AI can boost productivity but struggles with complex software.

Experts note that AI gets things wrong too: it might invent a function that doesn’t exist, yet the output looks perfectly plausible. This underscores the need for developers to review AI-generated responses.

    In other words, AI can boost developers’ productivity, but it is not efficient yet.

  • Microsoft’s Copilot in Teams Chat Summarizes Key Information and Writes Follow-Up Emails

    IBL News | New York

Microsoft announced new capabilities, beyond the initial version released in March, for its Copilot in Teams Phone (both VoIP and PSTN calls) and Teams Chat.

    Users can get real-time written summarization and insights on phone and chat conversations, with draft notes and highlighted key points, such as names, dates, numbers, and tasks.

An example of Microsoft 365 Copilot in Teams shows how the tool summarizes a customer call as the customer speaks, capturing relevant questions and feedback while suggesting next steps and drafting a follow-up email.

  • Amazon Plans AI-Generated Customer Review Highlights on Products

    IBL News | New York

Amazon this week announced it will use generative AI to provide a short paragraph of text right on the product detail page, helping customers understand what other buyers say and feel about a product without reading dozens of individual reviews.

    The e-commerce giant started customer reviews in 1995, allowing users the opportunity to voice their honest opinions on products — the good, the bad, and everything in between.
    In 2019, Amazon enabled customers who purchased an item on Amazon to provide feedback by leaving a quick star rating without having to write a full-text review. In 2022, 125 million customers contributed nearly 1.5 billion reviews and ratings to Amazon stores.

    Customer reviews have become synonymous with online shopping today.

    The AI-generated review feature is available to a subset of mobile shoppers in the U.S. across a broad selection of products, as shown below.

(Image: a smartphone displaying an Amazon product review on the screen.)

Amazon also uses machine learning models to detect and eliminate fake reviews that intentionally mislead customers. These models analyze thousands of data points to detect risk, including relations to other accounts, sign-in activity, review history, and other indications of unusual behavior. In addition, expert investigators use sophisticated fraud-detection tools to analyze and prevent fake reviews from ever appearing in the Amazon store.

In 2021, the company blocked 200 million fake reviews. It has also tried to crack down on the sources of fake reviews for years via lawsuits and other actions, including suing sellers who bought fake reviews. Last year, it sued the administrators of more than 10,000 Facebook groups engaged in fake-review brokering.

Amazon addresses concerns about fake reviews by saying it will only summarize reviews from verified purchases.
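That verified-purchase constraint is straightforward to picture as a pre-filtering step before any summarization happens. The field names and sample data below are hypothetical, purely to illustrate the idea, not Amazon's actual schema:

```python
# Illustrative sketch of restricting AI review highlights to verified
# purchases, as the article describes. Field names are hypothetical.

reviews = [
    {"text": "Great battery life", "verified_purchase": True},
    {"text": "Totally fake product!!", "verified_purchase": False},
    {"text": "Camera is excellent", "verified_purchase": True},
]

def reviews_to_summarize(reviews: list) -> list:
    """Keep only verified-purchase reviews before handing them to a summarizer."""
    return [r["text"] for r in reviews if r["verified_purchase"]]

print(reviews_to_summarize(reviews))  # → ['Great battery life', 'Camera is excellent']
```

Filtering before summarization means unverified (and potentially brokered) reviews never reach the generative model at all.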

• Nvidia Announced Its Support for a Hugging Face Generative AI Service

    IBL News | New York

Nvidia announced yesterday that it will support a new Hugging Face service, called Training Cluster as a Service, with its DGX Cloud AI supercomputer.

    This service, set to roll out “in the coming months,” simplifies the creation of new and custom generative AI models for the enterprise.

    DGX Cloud includes access to a cloud instance with eight Nvidia H100 or A100 GPUs and 640GB of GPU memory, as well as Nvidia’s AI Enterprise software to develop AI apps and large language models and consultations with Nvidia experts.

Companies can subscribe to DGX Cloud on their own at a price starting at $36,999 per instance per month. Training Cluster as a Service, however, integrates DGX Cloud infrastructure with Hugging Face’s platform, a repository for all things related to AI models (over 250,000 models and 50,000 datasets).

“Our collaboration will bring Nvidia’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open-source community easily access the software and speed they need to contribute to what’s coming next,” Hugging Face co-founder and CEO Clément Delangue said.

    Hugging Face’s partnership with Nvidia comes as this AI startup is looking to raise funds at a $4 billion valuation.

    Meanwhile, Nvidia is pushing into cloud services for training and running AI models as the demand for such services grows. In March, the company launched AI Foundations, a collection of components that developers can use to build custom generative AI models for particular use cases.

• SAP Invested in Generative AI Startups Anthropic, Cohere, and Aleph Alpha

    IBL News | New York

    The German giant SAP announced that it invested, through its venture capital firm Sapphire Ventures, in three major AI startups: Anthropic, Cohere, and Aleph Alpha.

The terms of the direct investments weren’t detailed, although SAP disclosed that they build on Sapphire Ventures’ $1 billion-plus commitment to back new AI firms.

    SAP also highlighted several internal efforts around Generative AI, including a digital assistant for customer experience.

    “SAP is committed to creating an enterprise AI ecosystem for the future that complements our world-class business applications suite and helps our customers unlock their full potential,” SAP’s Chief Strategy Officer, Sebastian Steinhaeuser, said in a press release.

    Anthropic’s Claude system processes text within the context of natural conversations. Cohere provides a generative text platform that can be deployed on virtual private clouds or on-site where data resides.

    Aleph Alpha — which was already an SAP Partner — creates and hosts multimodal, multilanguage models focused on interoperability, data privacy, and security.