Author: IBL News

  • Skills-Based Hiring is Becoming the New Norm Among Corporate Recruiters

    IBL News | New York

    Ongoing talent shortages — underscored by a recent U.S. Department of Labor report citing 9.5 million job openings — are pushing employers to prioritize competencies over credentials.

    Hiring candidates for specific abilities rather than a college degree is becoming the new norm. Degrees, however, remain in demand, particularly among employers offering higher salaries.

    Skills-based hiring — reflected in micro-credentials — is the new trend not only among corporate recruiters but also among state governors and the U.S. House of Representatives.

    As AI and technological change reshape the economy, continuous upskilling for learners and new recruiting strategies are generating a new job market.

    New AI tracking and scanning systems increasingly determine who gets hired, reshaping HR departments' hiring processes.

    Higher education institutions are taking note as well, mindful of their graduates' employability. Digital micro-credentialing can now reflect richer and more granular knowledge among students.

    Many colleges use real-time labor market analytics to keep up with changes in the workplace and tune their curriculum.

  • Challenges Such as Hallucination and Other Research Concerns Delay the Adoption of LLMs

    IBL News | New York

    Hallucination in AI, which happens when the model fabricates information, is the number-one roadblock corporations see to adopting LLMs (Large Language Models), according to companies such as Anthropic, LangChain, Elastic, Dropbox, and others.

    Reducing and measuring hallucination — along with optimizing context, incorporating multimodality, developing GPU alternatives, and improving usability — ranks among the top ten challenges and major research directions today, expert Chip Huyen wrote in an insightful article. Many startups are focusing on these problems.

    1. Reduce and measure hallucinations
    2. Optimize context length and context construction
    3. Incorporate other data modalities
    4. Make LLMs faster and cheaper
    5. Design a new model architecture
    6. Develop GPU alternatives
    7. Make agents usable
    8. Improve learning from human preference
    9. Improve the efficiency of the chat interface
    10. Build LLMs for non-English languages

    Some interesting ideas mentioned in the article:

    • Ad-hoc tips to reduce hallucination include adding more context to the prompt, chain-of-thought prompting, self-consistency, or asking the model to be concise in its response.

    • In-context learning, context length, and context construction have emerged as predominant patterns in the LLM industry, as they are critical for RAG (Retrieval-Augmented Generation).

    • Multimodality promises a big boost in model performance.

    • Designing a new architecture to outperform the Transformer isn't easy, as the Transformer has been heavily optimized over the last six years.

    • Developing GPU alternatives such as quantum computers continues to attract hundreds of millions of dollars.

    • Despite the excitement around Auto-GPT and GPT-Engineer, there is still doubt about whether LLMs are reliable and performant enough to be entrusted with the power to act.

    • The most notable startup in this area is perhaps Adept, which has raised almost half a billion dollars to date.

    • On learning from human preference, DeepMind tries to generate responses that please the majority of people.

    • There are certain areas where the chat interface can be improved for more efficiency. Nvidia’s NeVA chatbot is one of the most mentioned examples.

    NVIDIA's NeVA interface


    • Current English-first LLMs don't work well for many other languages in terms of performance, latency, and speed. Building LLMs for non-English languages opens a new frontier.

    Symato might be the biggest community effort today.
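    The self-consistency tip mentioned above (sampling the model several times and keeping the majority answer) can be sketched in a few lines. Here, `sample_answer` is a hypothetical stand-in for a real LLM call, not any particular API:

```python
from collections import Counter

def self_consistent_answer(sample_answer, prompt, n=5):
    """Sample the model n times and return the most common answer.

    sample_answer: a callable standing in for an LLM call (hypothetical).
    Majority voting helps because repeated hallucinations are less likely
    than repeated correct answers, so agreement filters out noise.
    """
    answers = [sample_answer(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in model: returns a scripted sequence of sampled answers.
_samples = iter(["42", "41", "42", "42", "40"])
answer = self_consistent_answer(lambda p: next(_samples), "What is 6*7?", n=5)
print(answer)  # "42" wins the vote
```

    In practice, the sampling temperature is kept above zero so the runs actually differ; the vote only adds information when samples are independent.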

  • Google Announced ‘Duet AI’ for Gmail, Docs, Drive, and Slides

    IBL News | San Francisco

    Google Cloud announced Duet AI — its collection of generative AI features for text summarization, code writing, and data organization — yesterday at its annual Cloud Next conference in San Francisco.

    Google said that Duet AI will be rolled out across all of its Workspace apps, including Gmail, Drive, Slides, and Docs, at $30 per user per month.

    Duet AI, which directly challenges Microsoft, can turn a Google Docs outline into a deck in Slides or make a chart out of the data in a spreadsheet.

    One of the biggest features of Duet AI will be the ability to take notes in real time in Meet and summarize them afterward. In addition, during a call, the user will be able to talk privately with a Google chatbot to catch up on missed details.

    Another new Meet feature lets Duet “attend” a meeting on your behalf. With this “attend for me” functionality, Google will auto-generate some text about what the user might want to discuss. This note-taking feature will come to Google’s Workspace Labs in the coming months.

     

     • Google Workspace Blog: Now Available: Duet AI for Google Workspace
    • Google Cloud Blog: Expanding Duet AI, an AI-powered collaborator, across Google Cloud

  • OpenAI Introduces Its Enterprise-Grade Version of ChatGPT

    IBL News | San Francisco

    OpenAI yesterday launched ChatGPT Enterprise, with features similar to those in Microsoft's Bing Chat Enterprise.

    Experts see this move as an attempt to combat the fears of businesses that have restricted their employees from using the consumer version of ChatGPT.

    Essentially, it adds “enterprise-grade” privacy and data protection and analysis capabilities on top of the vanilla ChatGPT, along with enhanced performance and customization options. OpenAI emphasized that it won’t train models on data sent by businesses.

    ChatGPT Enterprise provides a dashboard with admin tools. It includes integrations for single sign-on, domain verification, usage statistics, and templates to build internal workflows.

    It also comes with unlimited access to Code Interpreter, now called Advanced Data Analysis, which allows users to analyze data, generate charts and insights, and solve math problems, including from uploaded files.

    Currently, Code Interpreter is available on ChatGPT Plus, the $20-per-month premium service.

    However, OpenAI plans to design more tools for data analysts, marketers, and customer support.

    ChatGPT Enterprise delivers GPT-4 performance at twice the speed of the standard version, with an expanded 32,000-token (around 25,000-word) context window.

    OpenAI is under pressure to monetize its tools as it reportedly spent $540 million last year to develop ChatGPT.

    Moreover, ChatGPT is costing OpenAI $700,000 a day to run, according to TechCrunch. Yet OpenAI made only $30 million in revenue in fiscal year 2022.

    CEO Sam Altman told investors that his company intends to boost that figure to $200 million this year and $1 billion in 2024.

    In addition, usage of ChatGPT is dropping: traffic fell 9.7% from May to June, according to analytics firm Similarweb.

     

  • OpenAI Acquires a Design Company As Part of Its Strategy to Generate Revenue

    IBL News | New York

    OpenAI, backed by billions from Microsoft and major VC firms, this month announced the first public acquisition in its seven-year history: Global Illumination, a two-year-old, New York–based startup with eight employees that builds AI tools and experiences.

    This company has built products at Instagram and Facebook and has also made significant contributions at YouTube, Google, Pixar, and Riot Games, according to OpenAI.

    Its most recent creation was Biomes, a Minecraft-like open-source sandbox multiplayer online role-playing game.

    The entire team, including founders Thomas Dimson, Taylor Gordon, and Joey Flynn, has joined OpenAI to work on core products like ChatGPT.

    OpenAI spent over $540 million to develop ChatGPT and is now looking for revenue, experts say.

    It made only $30 million in revenue last year. CEO Sam Altman reportedly told investors that the company intends to boost that figure to $200 million this year and $1 billion next year.

  • Bing’s Market Share Remains Stagnant Despite Its Huge Investment In OpenAI

    IBL News | New York

    Despite its multi-billion dollar investment in OpenAI’s ChatGPT, Microsoft hasn’t shifted Bing’s market share.

    According to data company Statcounter, the market share of Microsoft’s search engine, which includes Bing Chat, has remained stagnant since its debut, at 2.99%, with only a slight deviation from January’s 3.03%.

    YipitData, an analytics firm, said that Bing's usage rose from 95.7 million visits in February to 101.7 million in March, but the gain was short-lived, as the numbers dropped to 96.4 million in April. Usage rose again in May to 99.2 million.

    The decline can be attributed to several factors. During its debut, the tool was spotted giving inaccurate responses. Microsoft also limited the tool to its Edge browser. Additionally, many organizations are still warming up to the new technology.

    Microsoft has refuted the findings and insists that the chatbot is still a hit, stating that the third-party findings are inaccurate.

    SimilarWeb highlighted that the number of ChatGPT users decreased by 12 percent between June and July.

    Source: StatCounter Global Stats – Search Engine Market Share

  • OpenAI Partners with Scale for Fine-Tuning LLMs Services

    IBL News | New York

    OpenAI this week announced a partnership with San Francisco–based startup Scale in order to offer enterprise-grade fine-tuning capabilities.

    Through fine-tuning, companies can customize models on proprietary data to optimize LLM performance. The process requires rigorous data enrichment and model evaluation.

    OpenAI recently launched fine-tuning for GPT-3.5 Turbo and will bring fine-tuning to GPT-4 this fall.

    A pilot fine-tuning project for GPT-3.5 will be Brex, a financial services company that has been using GPT-4 for memo generation. The firm now wants to explore whether it can improve cost and latency while maintaining quality by using a fine-tuned GPT-3.5 model.

    Scale explains that it prepares and enhances enterprise data with its Scale Data Engine. It then fine-tunes GPT-3.5 with this data and further customizes models with plugins and retrieval-augmented generation — the ability to reference and cite proprietary documents in responses. Finally, Scale leverages its Test and Evaluation platform and trained domain experts to meet performance and safety requirements.
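    The retrieval-augmented generation step described here, referencing and citing proprietary documents in responses, can be illustrated with a toy sketch. This uses a trivial keyword-overlap retriever and made-up document IDs; it is not Scale's actual pipeline:

```python
def retrieve(query, docs):
    """Return the (doc_id, text) pair whose words overlap the query most."""
    q = set(query.lower().split())
    return max(docs.items(), key=lambda kv: len(q & set(kv[1].lower().split())))

# Hypothetical proprietary corpus, keyed by citable document ID.
docs = {
    "memo-001": "Q2 travel expense policy for corporate cards",
    "memo-002": "latency benchmarks for the fine-tuned GPT-3.5 model",
}

doc_id, text = retrieve("fine-tuned model latency", docs)
# A real system would pass `text` to the LLM as context and cite
# `doc_id` alongside the generated answer.
print(doc_id)  # memo-002
```

    Production RAG systems replace the keyword overlap with embedding similarity search, but the shape is the same: retrieve, generate with the retrieved context, cite the source.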

    “Its AI’s software package Nucleus enables firms to quickly identify and fix mislabeled data, or refine existing data labels to improve algorithmic training and boost an AI system’s performance,” said the company’s founder and CEO, 24-year-old billionaire Alexandr Wang.

  • Meta Announced the Release of SeamlessM4T, an AI Model that Translates 100 Languages

    IBL News | New York

    Meta this month announced the release of SeamlessM4T, an AI open-source model that can translate and transcribe 100 languages across text and speech.

    It’s available along with a new translation dataset named SeamlessAlign. According to Meta, this is a “significant breakthrough” in the field of AI-powered speech-to-speech and speech-to-text translation.

    “Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta told TechCrunch.

    Several companies, such as Google, Amazon, Microsoft, OpenAI, and a number of startups, are investing resources in developing sophisticated AI translation and transcription tools.

    Google is creating a “Universal Speech Model”, a model that can understand the world’s 1,000 most-spoken languages.

    Mozilla, meanwhile, spearheaded Common Voice, one of the largest multi-language collections of voices for training automatic speech recognition algorithms.

  • OpenAI Brings Fine-Tuning to Its GPT-3.5 Turbo

    IBL News | New York

    OpenAI has made fine-tuning for GPT-3.5 Turbo available to users.

    According to the company, fine-tuned versions of GPT-3.5 can match, or even outperform, the base capabilities of GPT-4, the company’s flagship model, on “certain narrow tasks.”

    According to OpenAI, data sent in and out of the fine-tuning API, as with all its APIs, is owned by the customer and is not used to train models.

    In addition to improved quality, fine-tuning enables businesses to shorten their prompts without sacrificing performance.

    Fine-tuning with GPT-3.5 Turbo can also handle 4K tokens, double the capacity of OpenAI's previous fine-tuned models.

    Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs, according to OpenAI.
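    Fine-tuning jobs like these consume chat-formatted training examples. Below is a minimal sketch of preparing a JSONL training file, assuming the chat message format OpenAI documents for gpt-3.5-turbo; the filename and example content are illustrative:

```python
import json

# Each training example is one JSON line holding a list of chat messages.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse financial-memo assistant."},
        {"role": "user", "content": "Summarize: Q2 spend rose 8%."},
        {"role": "assistant", "content": "Q2 spend: +8%."},
    ]}
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity check: each line round-trips as valid JSON with the chat roles.
first = json.loads(open("train.jsonl").read().splitlines()[0])
roles = [m["role"] for m in first["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```

    The file is then uploaded through the fine-tuning API, and the resulting model is addressed by its own model ID at inference time.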

    Fine-tuning costs are as follows:

    • Training: $0.008 / 1K tokens
    • Usage input: $0.012 / 1K tokens
    • Usage output: $0.016 / 1K tokens
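    At the listed rates, a job's cost is simple to estimate. The token counts below are made-up figures for illustration:

```python
# Prices per 1K tokens, as listed above.
TRAIN, USAGE_IN, USAGE_OUT = 0.008, 0.012, 0.016

def fine_tune_cost(train_tokens, input_tokens, output_tokens):
    """Total dollars for a training run plus subsequent usage."""
    return (train_tokens * TRAIN
            + input_tokens * USAGE_IN
            + output_tokens * USAGE_OUT) / 1000

# Hypothetical job: 500K training tokens, then 10K input / 5K output usage.
total = fine_tune_cost(500_000, 10_000, 5_000)
print(f"${total:.2f}")  # $4.20
```

    Note that training cost is also multiplied by the number of epochs; a multi-epoch run processes the training tokens once per epoch.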

    Fine-tuning is most powerful when combined with other techniques such as prompt engineering, information retrieval, and function calling.

    In other news, OpenAI today made available two updated GPT-3 base models (babbage-002 and davinci-002), which can be fine-tuned as well.

    OpenAI said that fine-tuning support for GPT-4 — which, unlike GPT-3.5, can understand images in addition to text — will arrive sometime this fall, but did not say when.

    • OpenAI’s fine-tuning guide.

     

  • McKinsey Introduces ‘Lilli’, Its AI Chat Application For Employees and Clients

    IBL News | New York

    McKinsey & Company this month unveiled its generative AI chatbot, named Lilli, designed to summarize key points and provide relevant content to its partner consultants and clients.

    Lilli has been in beta since June 2023, used by 7,000 employees as a “minimum viable product” (MVP), answering 50,000 questions. It will be rolling out across McKinsey this fall.

    The chat application is named after Lillian Dombrowski, the first woman McKinsey hired for a professional services role, back in 1945.

    The tool accesses the firm’s extensive knowledge base, with over 100,000 documents, interview transcripts, and resources from 40 curated sources and experts in 70 countries.

    “Lilli aggregates our knowledge and capabilities in one place for the first time and will allow us to spend more time with clients activating those insights and recommendations and maximizing the value we can create,” said Erik Roth, a senior partner with McKinsey.

    “I use Lilli to look for weaknesses in our argument and anticipate questions that may arise,” said Adi Pradhan, an associate partner at McKinsey. “I also use it to tutor myself on new topics and make connections between different areas on my projects.”

    With 30,000 employees, McKinsey & Company is one of the largest consulting agencies in the world.

    With an interface similar to ChatGPT and Claude 2, Lilli contains an expandable left-hand sidebar with saved categorized prompts, according to a report in VentureBeat.

    It includes two tabs that a user may toggle between: “GenAI Chat”, which sources data from a more generalized large language model (LLM) backend, and “Client Capabilities”, which sources responses from McKinsey’s corpus of documents, transcripts, and presentations. Lilli provides full attribution, citing its sources at the bottom of every response, along with links and even specific page numbers.

    McKinsey’s chatbot leverages models developed by Cohere and OpenAI on the Microsoft Azure platform, although the firm insists that its tool is “LLM agnostic” and is constantly exploring new LLMs.

    Report by McKinsey: The state of AI in 2023: Generative AI’s breakout year
    Paul Tocatilan on LinkedIn: Democratization of Mentors