Category: Top News

  • OpenAI Introduces Its Enterprise-Grade Version of ChatGPT


    IBL News | San Francisco

    OpenAI yesterday launched ChatGPT Enterprise, which offers features similar to those in Microsoft’s Bing Chat Enterprise.

    Experts see this move as an attempt to address the concerns of businesses that have restricted their employees from using the consumer version of ChatGPT.

    Essentially, it adds “enterprise-grade” privacy, data protection, and analysis capabilities on top of vanilla ChatGPT, along with enhanced performance and customization options. OpenAI emphasized that it won’t train models on data sent by businesses.

    ChatGPT Enterprise provides a dashboard with admin tools. It includes integrations for single sign-on, domain verification, usage statistics, and templates to build internal workflows.

    It also comes with unlimited access to Code Interpreter, now called Advanced Data Analysis, which allows users to analyze data, generate charts and insights, and solve math problems, including from uploaded files.
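    For illustration, the kind of script Advanced Data Analysis writes and executes against an uploaded spreadsheet looks roughly like the sketch below; the file name and column names are hypothetical, not taken from OpenAI’s announcement.

```python
# Illustrative only: the sort of pandas/matplotlib script the tool typically
# generates and runs on an uploaded file. "sales.csv", "region", and "revenue"
# are made-up names for the example.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")                       # the uploaded spreadsheet
summary = df.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)                                      # quick insight table

summary["sum"].plot(kind="bar", title="Revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")                # chart returned to the user
```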

    Currently, Code Interpreter is available on ChatGPT Plus, the $20-per-month premium service.

    OpenAI also plans to build more tools for data analysts, marketers, and customer support teams.

    ChatGPT Enterprise delivers GPT-4 at up to twice the speed of the standard version, with an expanded 32,000-token (roughly 25,000-word) context window.

    OpenAI is under pressure to monetize its tools as it reportedly spent $540 million last year to develop ChatGPT.

    Moreover, ChatGPT is costing OpenAI $700,000 a day to run, according to TechCrunch. Yet OpenAI made only $30 million in revenue in fiscal year 2022.

    CEO Sam Altman told investors that his company intends to boost that figure to $200 million this year and $1 billion in 2024.

    In addition, ChatGPT usage is dropping: traffic fell 9.7% from May to June, according to analytics firm Similarweb.

  • OpenAI Acquires a Design Company As Part of Its Strategy to Generate Revenue


    IBL News | New York

    OpenAI, backed by billions from Microsoft and major VC firms, this month announced the first public acquisition in its seven-year history: Global Illumination, a two-year-old, New York–based startup with eight employees that builds AI tools and experiences.

    The company has built products at Instagram and Facebook and has also made significant contributions at YouTube, Google, Pixar, and Riot Games, according to OpenAI.

    Its most recent creation was Biomes, a Minecraft-like open-source sandbox multiplayer online role-playing game.

    The entire team, including founders Thomas Dimson, Taylor Gordon, and Joey Flynn, has joined OpenAI to work on core products like ChatGPT.

    OpenAI spent over $540 million to develop ChatGPT and is now looking for revenue, experts say.

    Last year, it made only $30 million in revenue. CEO Sam Altman reportedly told investors that the company intends to boost that figure to $200 million this year and $1 billion next year.

  • Bing’s Market Share Remains Stagnant Despite Its Huge Investment In OpenAI


    IBL News | New York

    Despite its multi-billion-dollar investment in OpenAI, Microsoft hasn’t shifted Bing’s market share.

    According to data company Statcounter, the market share of Microsoft’s search engine, which includes Bing Chat, has remained stagnant since its debut, at 2.99%, with only a slight deviation from January’s 3.03%.

    YipitData, an analytics firm, said that Bing’s usage skyrocketed from 95.7 million in February to 101.7 million in March, but the bump was short-lived: the figure dropped to 96.4 million in April before recovering to 99.2 million in May.

    The stagnation can be attributed to several factors. First, during its debut the tool was caught giving inaccurate responses. Microsoft also initially limited the chatbot to its Edge browser. Additionally, many organizations are still warming up to the new technology.

    Microsoft has disputed the findings, insisting that the chatbot is still a hit and that the third-party figures are inaccurate.

    Similarweb, for its part, reported that ChatGPT usage decreased by 12 percent between June and July.

    Source: StatCounter Global Stats – Search Engine Market Share

  • OpenAI Partners with Scale for LLM Fine-Tuning Services


    IBL News | New York

    OpenAI this week announced a partnership with San Francisco–based startup Scale to offer enterprise-grade fine-tuning capabilities.

    Through fine-tuning, companies can customize models on proprietary data to optimize LLM performance for their own use cases. The process requires rigorous data enrichment and model evaluation.

    OpenAI recently launched fine-tuning for GPT-3.5 Turbo and will bring fine-tuning to GPT-4 this fall.

    One pilot project involves Brex, a financial services company that has been using GPT-4 for memo generation and now wants to explore whether a fine-tuned GPT-3.5 model can improve cost and latency while maintaining quality.

    Scale explains that it prepares and enhances enterprise data with its Scale Data Engine. It then fine-tunes GPT-3.5 with this data and further customizes models with plugins and retrieval-augmented generation (RAG), the ability to reference and cite proprietary documents in responses. Finally, Scale leverages its Test and Evaluation platform and trained domain experts with the goal of meeting performance and safety requirements.
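    Scale has not published the internals of its pipeline, but retrieval-augmented generation generally follows a simple pattern: embed the proprietary documents, retrieve the most relevant one for each question, and pass it to the model with instructions to cite it. The sketch below illustrates that pattern with the pre-1.0 openai Python package; the model names, documents, and question are illustrative assumptions, not details from the partnership.

```python
# A generic RAG loop, not Scale's actual pipeline. Uses the openai package
# (pre-1.0 interface, API key read from OPENAI_API_KEY) and numpy.
import numpy as np
import openai

docs = [
    "Q2 memo: travel expenses rose 12% quarter over quarter.",
    "Policy: memos must cite the source spreadsheet and its owner.",
]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vecs = embed(docs)
question = "How much did travel expenses grow last quarter?"
q_vec = embed([question])[0]

# Cosine similarity picks the most relevant proprietary document.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best = docs[int(np.argmax(scores))]

answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the context and cite it."},
        {"role": "user", "content": f"Context: {best}\n\nQuestion: {question}"},
    ],
)
print(answer["choices"][0]["message"]["content"])
```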

    “Its AI software package, Nucleus, enables firms to quickly identify and fix mislabeled data, or refine existing data labels to improve algorithmic training and boost an AI system’s performance,” said the company’s founder and CEO, 24-year-old billionaire Alexandr Wang.

  • Meta Announces the Release of SeamlessM4T, an AI Model that Translates 100 Languages


    IBL News | New York

    Meta this month announced the release of SeamlessM4T, an open-source AI model that can translate and transcribe 100 languages across text and speech.

    It’s available along with a new translation data set named SeamlessAlign. According to Meta, the model is a “significant breakthrough” in the field of AI-powered speech-to-speech and speech-to-text translation.

    “Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta told TechCrunch.
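    For readers who want to experiment, Hugging Face’s transformers library later added SeamlessM4T support; the sketch below shows text-to-text translation and assumes the facebook/hf-seamless-m4t-medium checkpoint and a transformers release recent enough to include the model class, both of which postdate the article.

```python
# Text-to-text translation with SeamlessM4T via Hugging Face transformers.
# Checkpoint name and transformers support are assumptions noted above.
from transformers import AutoProcessor, SeamlessM4TForTextToText

model_id = "facebook/hf-seamless-m4t-medium"
processor = AutoProcessor.from_pretrained(model_id)
model = SeamlessM4TForTextToText.from_pretrained(model_id)

inputs = processor(text="Hello, how are you today?", src_lang="eng", return_tensors="pt")
tokens = model.generate(**inputs, tgt_lang="fra")   # target language: French
print(processor.decode(tokens[0].tolist(), skip_special_tokens=True))
```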

    Several companies, such as Google, Amazon, Microsoft, OpenAI, and a number of startups, are investing resources in developing sophisticated AI translation and transcription tools.

    Google is building its “Universal Speech Model”, designed to understand the world’s 1,000 most-spoken languages.

    Mozilla, meanwhile, spearheaded Common Voice, one of the largest multi-language collections of voices for training automatic speech recognition algorithms.

  • OpenAI Brings Fine-Tuning to Its GPT-3.5 Turbo


    IBL News | New York

    OpenAI has made fine-tuning for GPT-3.5 Turbo available to users.

    According to the company, fine-tuned versions of GPT-3.5 can match, or even outperform, the base capabilities of GPT-4, the company’s flagship model, on “certain narrow tasks.”

    According to OpenAI, data sent in and out of the fine-tuning API, as with all of its APIs, is owned by the customer and is not used to train models.

    In addition to increased performance, fine-tuning also enables businesses to shorten their prompts while ensuring similar performance.

    Fine-tuned GPT-3.5 Turbo models can also handle 4K tokens, double the capacity of OpenAI’s previous fine-tunable models.

    Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs, according to OpenAI.
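    As a rough sketch of the workflow (not a verbatim recipe from OpenAI’s announcement), fine-tuning GPT-3.5 Turbo through the API involves uploading a JSONL file of example conversations, starting a job, and then calling the resulting model. The snippet below uses the pre-1.0 openai Python package; the file name, example prompt, and fine-tuned model id are placeholders.

```python
# Sketch of the GPT-3.5 Turbo fine-tuning workflow (pre-1.0 openai package).
# "train.jsonl" is a hypothetical file of chat-formatted examples, one JSON
# object per line: {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
import openai

# 1. Upload the training data.
training_file = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# 2. Start the fine-tuning job.
job = openai.FineTuningJob.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id, job.status)  # poll until the job reports "succeeded"

# 3. Once finished, call the fine-tuned model like any other chat model.
#    The model name below is a placeholder for the id the job returns.
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",
    messages=[{"role": "user", "content": "Summarize this memo in two sentences."}],
)
print(response["choices"][0]["message"]["content"])
```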

    Fine-tuning costs are as follows:

    • Training: $0.008 / 1K tokens
    • Usage input: $0.012 / 1K tokens
    • Usage output: $0.016 / 1K tokens
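    To put those rates in context, here is a back-of-the-envelope estimate for a hypothetical job, assuming the training charge scales with the number of tokens processed across epochs; the dataset size, epoch count, and monthly traffic are invented for illustration.

```python
# Back-of-the-envelope cost estimate using the rates listed above.
TRAIN_RATE, IN_RATE, OUT_RATE = 0.008, 0.012, 0.016       # $ per 1K tokens

training_tokens = 100_000 * 3                 # 100K-token dataset, 3 epochs
monthly_in, monthly_out = 2_000_000, 500_000  # tokens served per month (hypothetical)

training_cost = training_tokens / 1000 * TRAIN_RATE
usage_cost = monthly_in / 1000 * IN_RATE + monthly_out / 1000 * OUT_RATE
print(f"one-time training: ${training_cost:.2f}")   # $2.40
print(f"monthly usage:     ${usage_cost:.2f}")      # $32.00
```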

    Fine-tuning is most powerful when combined with other techniques such as prompt engineering, information retrieval, and function calling.

    In other news, OpenAI today made available two updated GPT-3 base models (babbage-002 and davinci-002), which can be fine-tuned as well.

    OpenAI said that fine-tuning support for GPT-4, which, unlike GPT-3.5, can understand images in addition to text, will arrive sometime later this fall, but didn’t say exactly when.

    • OpenAI’s fine-tuning guide.

     

  • McKinsey Introduces ‘Lilli’, Its AI Chat Application For Employees and Clients


    IBL News | New York

    McKinsey & Company this month unveiled its generative AI chatbot, Lilli, designed to summarize key points and provide relevant content to its partner consultants and clients.

    Lilli has been in beta since June 2023, used by 7,000 employees as a “minimum viable product” (MVP), answering 50,000 questions. It will be rolling out across McKinsey this fall.

    The chat application is named after Lillian Dombrowski, the first woman McKinsey hired for a professional services role back in 1945.

    The tool accesses the firm’s extensive knowledge base, with over 100,000 documents, interview transcripts, and resources from 40 curated sources and experts in 70 countries.

    “Lilli aggregates our knowledge and capabilities in one place for the first time and will allow us to spend more time with clients activating those insights and recommendations and maximizing the value we can create,” said Erik Roth, a senior partner with McKinsey.

    “I use Lilli to look for weaknesses in our argument and anticipate questions that may arise,” said Adi Pradhan, an associate partner at McKinsey. “I also use it to tutor myself on new topics and make connections between different areas on my projects.”

    With 30,000 employees, McKinsey & Company is one of the largest consulting agencies in the world.

    With an interface similar to ChatGPT and Claude 2, Lilli contains an expandable left-hand sidebar with saved categorized prompts, according to a report in VentureBeat.

    It includes two tabs a user can toggle between: “GenAI Chat”, which sources data from a more generalized large language model (LLM) backend, and “Client Capabilities”, which sources responses from McKinsey’s corpus of documents, transcripts, and presentations. Lilli provides full attribution, citing its sources at the bottom of every response, along with links and even specific page numbers.

    McKinsey’s chatbot leverages models developed by Cohere and OpenAI on the Microsoft Azure platform, although the firm insists that its tool is “LLM agnostic” and is constantly exploring new LLMs.

    Report by McKinsey: The state of AI in 2023: Generative AI’s breakout year
    Paul Tocatilan on LinkedIn: Democratization of Mentors

  • Some Voices Express Disillusionment with ChatGPT and Generative AI


    IBL News | New York

    Fast-scaling and hyped technologies don’t invariably fulfill their promise, especially if they don’t generate significant revenue. In this regard, what if generative AI turns out to be a dud?

    Programmers and undergrads who use generative tools as code assistants and for writing text are not deep-pocketed.

    This is the core idea defended in a viral article by Gary Marcus, co-founder of the Center for the Advancement of Trustworthy AI, who is well known for his testimony on AI oversight before the US Senate.

    “Neither coding nor high-speed, mediocre quality copy-writing is remotely enough to maintain current valuation dreams,” he wrote.

    Other voices, such as venture capitalist Benedict Evans, have raised similar concerns in a series of posts.

    “My friends who tried to use ChatGPT to answer search queries to help with academic research have faced similar disillusionment,” Gary Marcus stated. “A lawyer who used ChatGPT for legal research was excoriated by a judge, and basically had to promise, in writing, never to do so again in an unsupervised way.”

    “In my mind, the fundamental error that almost everyone is making is in believing that Generative AI is tantamount to AGI (general purpose artificial intelligence, as smart and resourceful as humans if not more so).”

    His doubts are rooted in an unsolved problem at the core of generative AI: its tendency to confabulate (hallucinate) false information.

    AI researchers still cannot guarantee that any given system will be honest, harmless, and unbiased, as Sam Altman, the CEO of OpenAI, recently acknowledged.

    “If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year,” Gary Marcus predicted.

  • The U.S. Air Force’s Stealthy XQ-58A Valkyrie Drones Are Successfully Driven by AI


    IBL News | New York

    The U.S. Air Force has deployed AI agents on advanced drones like the XQ-58A Valkyrie, suggesting that Autonomous Air Combat Operations (AACO) is approaching operational readiness.

    The machine-learning algorithms that make up this uncrewed jet aircraft’s AI brain were trained over millions of runs in simulated environments before being put to the test in the real world.

    The first-ever flight was completed successfully at Wright-Patterson Air Force Base in Ohio on July 25, 2023.

    An F-15E Strike Eagle from the 96th Test Wing’s 40th Flight Test Squadron at Eglin AFB, Florida, flies in formation with an XQ-58A Valkyrie flown by artificial intelligence agents developed by the Autonomous Air Combat Operations (AACO) team at AFRL; the algorithms matured over millions of hours in high-fidelity AFSIM simulation events, 10 sorties on the X-62 VISTA, hardware-in-the-loop events with the XQ-58A, and ground test operations (U.S. Air Force photo caption).

    “This sortie officially enables the ability to develop AI/ML agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to other autonomy programs,” said Col. Tucker Hamilton, chief of AI Test and Operations for the Department of the Air Force.

    “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions,” said Brig. Gen. Scott Cain, AFRL commander. “AI, Autonomous Operations, and Human-Machine Teaming continue to evolve at an unprecedented pace, and we need the coordinated efforts of our government, academia, and industry partners to keep pace.”

  • Legal Services, Filmmaking, and Coding Are Among the Industries Most Impacted by AI


    IBL News | New York

    Generative AI technology is currently shaking up at least three industries: legal services, filmmaking, and coding, according to a Financial Times report analyzing the impact of AI tools.

    In legal services, firms are regularly using AI technology, with Harvey as the main provider, although adoption is still in its early days. They consider it a productivity gain and a time-saving tool, especially for tasks assigned to junior staff.

    Rather than replacing jobs, experts say AI could intensify work.

    In the filmmaking industry, the impact of AI can be seen in the reduction of work for screenwriters, adoption of AI dubbing technology in foreign-language films, and “digital doubles” for actors.

    Screenwriters fear that book adaptations or first drafts could be written by AI, actors worry about losing control of their images, and voice dubbers are concerned that new AI technologies matching mouth movements to different languages will eliminate their jobs.

    For example, two start-ups, Flawless and Papercup, have developed tools that use AI to automate the translation and dubbing process.

    In the software industry, generative AI can suggest lines of code that programmers can run and test. It can also analyze existing code and search for vulnerabilities. The consensus is that AI can boost productivity but struggles with complex software.

    Experts note that AI also gets things wrong: it might invent a function that doesn’t exist, yet the output looks perfectly plausible. This underscores the need for developers to review its suggestions.

    In other words, AI can boost developers’ productivity, but it cannot yet handle complex software on its own.