Category: Top News

  • Hugging Face Launched ‘Chat Assistants’ As An Open Rival to OpenAI’s GPT Store

    IBL News | New York

    Hugging Face announced last week third-party, customizable Chat Assistants as a free, open-source alternative to OpenAI’s custom GPTs — which require a $20-per-month subscription.

    This offering allows users of Hugging Chat to easily create their own customized AI chatbots with specific capabilities. They can choose which of several open-source LLMs they wish to use, including Mistral’s Mixtral and Meta’s Llama 2.

    Like OpenAI, Hugging Face — the New York City-based AI startup — has also created an aggregator and central repository of third-party customized Hugging Chat Assistants, from which users can choose.

    The Assistants aggregator page bears a visual resemblance to the GPT Store page, with custom Assistants displayed in baseball-card-style boxes with circular logos inside.

    OpenAI’s GPTs still outperform Hugging Chat Assistants by supporting web search, retrieval-augmented generation (RAG), and logo generation (through DALL-E 3).

  • BMW Plans to Add Alexa Voice Assistant in Their Cars

    IBL News | New York

    BMW showcased Amazon’s Alexa LLM-powered voice assistant in cars during the CES conference in Las Vegas.

    The new capabilities give users a natural way of getting to know their cars instead of digging through the owner’s manual.

    For example, users can ask the assistant for recommendations on different drive modes and have it activate their chosen mode.

    They can also ask for instructions on how car features work—like the parking assistance system—and hear explanations in easy-to-understand terms through the BMW assistant’s customized voice.

    The demo followed Amazon’s earlier announcement that BMW’s next-generation Intelligent Personal Assistant will be supported through Amazon’s Alexa Custom Assistant (ACA) technology.

    BMW and Amazon said that voice technology can strip away complexity and minimize distractions in the car.

    Amazon also reported that Character.ai conversational chatbots work with Alexa.

  • Google Sets AI as Its Main Corporate Goal for 2024

    IBL News | New York

    Google’s main goal for 2024 is to “deliver the world’s most advanced, safe, and responsible AI,” according to an internal document leaked to Alex Heath of The Verge.

    The main corporate goals for this year are:

    1. Deliver the world’s most advanced, safe, and responsible AI.

    2. Improve knowledge, learning, creativity, and productivity.

    3. Build the most helpful personal computing platforms and devices.

    4. Enable organizations and developers to innovate on Google Cloud.

    5. Provide the world’s most trusted products and platforms.

    6. Build a Google that’s extraordinary for Googlers and the world.

    7. Improve company velocity, efficiency, and productivity, and deliver durable cost savings.

    This last goal on the list points to more layoffs. Additionally, CEO Sundar Pichai has warned employees to expect more “specific resource allocation decisions” (translation: layoffs). Google laid off about 12,000 employees last year and has continued cutting jobs in various areas since the beginning of January. Upcoming layoffs have many employees on edge.

    Google currently lags far behind OpenAI in AI technology and deployment. Google’s Gemini models, unveiled last year, are falling behind OpenAI, which is reportedly already working on the next major upgrade to GPT-4.

  • Meta Released a Specialized version of Llama 2 for Code Generation

    IBL News | New York

    Meta released this month an updated version of its code-generation model, Code Llama 70B. The improved model can write code more accurately in various programming languages, such as Python, C++, Java, and PHP, from natural-language prompts or existing code snippets.

    Based on the open-source Llama 2, Code Llama is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

    The 70B version — which is available on Hugging Face — is designed for general code synthesis and understanding, while Llama 2 is a general-purpose LLM that can generate text in any domain and style, from poetry to news articles.

    Code Llama 70B, like other transformer-based models, relies on a mechanism called self-attention, which allows it to learn the relationships and dependencies between different parts of the code.
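    As background on that mechanism, here is a minimal NumPy sketch of scaled dot-product self-attention. It is illustrative only: real transformer layers add learned query, key, and value projections plus multiple attention heads, and this toy input data is made up.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors.

    x: (seq_len, d) array of token embeddings.
    Returns a (seq_len, d) array where each output vector is a weighted
    mix of all input vectors, with weights based on pairwise similarity.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # (seq_len, seq_len) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ x                              # every token attends to all others

# Three 4-dimensional "token" vectors (toy example)
tokens = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0, 0.0]])
out = self_attention(tokens)
print(out.shape)  # (3, 4)
```

    Because each output row is a weighted average of the inputs, tokens that are similar pull each other’s representations together — the property that lets a model relate, say, a variable’s use back to its definition.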

    Code generation has been a long-standing goal of computer scientists, as it promises to make software development more efficient, accessible, and creative.

    However, unlike natural language, which is often ambiguous and flexible, code is precise and rigid. It has to follow strict rules and syntax, and it has to produce the desired output and behavior.

    Code generation models need to have a lot of data, computing power, and intelligence.

    Code Llama 70B has been trained on 500 billion tokens of code and code-related data, making it more capable and robust than its predecessors, according to Meta.

    It also has a larger context window of 100,000 tokens, which enables it to process and generate longer and more complex code.

    Code Llama 70B also includes CodeLlama-70B-Python, a variant that has been optimized for Python. This variant has been trained on an additional 100 billion tokens of Python code, making it more fluent and accurate in generating Python code. CodeLlama-70B-Python can also handle a range of tasks, such as web scraping, data analysis, machine learning (ML), and web development.

    Code Llama 70B is available for free download under the same license as Llama 2 and previous Code Llama models, which allows both researchers and commercial users to use and modify it.

    The model can be accessed and used through various platforms and frameworks, such as Hugging Face, PyTorch, TensorFlow, and Jupyter Notebook. More information and documentation can be found on GitHub and Hugging Face.

    Meta AI also provides documentation and tutorials on how to use and fine-tune the model for different purposes and languages.

    Mark Zuckerberg, the CEO of Meta, said in a statement posted to his Facebook account: “We’re open-sourcing a new and improved Code Llama, including a larger 70B parameter model. Writing and editing code has emerged as one of the most important uses of AI models today. The ability to code has also proven to be important for AI models to process information in other domains more rigorously and logically. I’m proud of the progress here, and looking forward to including these advances in Llama 3 and future models as well.”

    Code Llama 70B is expected to have a significant impact on the field of code generation and the software development industry, as it offers a powerful and accessible tool for creating and improving code. It can also lower the barrier to entry for people who want to learn coding, as it can provide guidance and feedback based on natural language instructions. Moreover, Code Llama 70B can potentially enable new applications and use cases, such as code translation, code summarization, code documentation, code analysis, and code debugging.

  • OpenAI Drops the Price of API Access for GPT-3.5 Turbo as Open-Source Models Expand

    IBL News | New York

    OpenAI this month reduced prices for GPT-3.5 Turbo, released new embedding models (which convert text into lists of numbers representing concepts), and introduced new ways for developers to manage API keys and understand API usage.

    Essentially, the San Francisco-based company introduced two new embedding models: a smaller, highly efficient text-embedding-3-small, and a larger, more powerful text-embedding-3-large.

    Pricing for text-embedding-3-small has been reduced by 5X compared to text-embedding-ada-002, from $0.0001 to $0.00002 per 1,000 tokens.

    OpenAI also introduced a new GPT-3.5 Turbo model, gpt-3.5-turbo-0125. “For the third time in the past year, we will be decreasing prices on GPT-3.5 Turbo to help our customers scale,” said the company.

    Input prices are dropping by 50% and output prices by 25%, to $0.0005 per 1,000 input tokens and $0.0015 per 1,000 output tokens.
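    As a quick check on the arithmetic, here is a minimal Python sketch that back-computes the previous GPT-3.5 Turbo prices from the stated percentage drops and verifies the 5X embedding reduction. The workload size is a made-up example; all prices come from the figures above.

```python
# New GPT-3.5 Turbo prices, in dollars per 1,000 tokens (as announced)
new_input, new_output = 0.0005, 0.0015

# Back out the previous prices from the stated drops: 50% on input, 25% on output
old_input = new_input / (1 - 0.50)    # $0.0010 per 1k input tokens
old_output = new_output / (1 - 0.25)  # $0.0020 per 1k output tokens

# Hypothetical workload: 1M tokens in, 1M tokens out
tokens_in = tokens_out = 1_000_000
old_cost = tokens_in / 1000 * old_input + tokens_out / 1000 * old_output
new_cost = tokens_in / 1000 * new_input + tokens_out / 1000 * new_output
print(round(old_cost, 2), round(new_cost, 2))  # 3.0 2.0

# Embedding price cut: text-embedding-ada-002 vs. text-embedding-3-small
print(round(0.0001 / 0.00002, 6))  # 5.0
```

    For a text-heavy workload like this, the change cuts the bill by a third.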

    This model will also have various improvements, including higher accuracy at responding in requested formats and a fix for a bug that caused a text encoding issue for non-English language function calls.

    GPT-3.5 Turbo is the model most people interact with, usually through ChatGPT, and it serves as a kind of industry standard now. It’s also a popular API, being lower cost and faster than GPT-4 on a lot of tasks.

    Users are using these APIs for text-intensive applications, such as analyzing entire papers or books. OpenAI needs to make sure its customers don’t leave, attracted to open-source or self-managed models.

    On the other hand, Langfuse — which provides open-source observability and analytics for LLM apps — reported that it has been calculating costs for OpenAI and Anthropic models since October.

  • Figma Launched FigJam AI to Improve Meetings

    IBL News | New York

    Figma launched last month a public beta of FigJam AI, a set of OpenAI-based tools aimed at improving meetings. It summarizes meetings, rewrites notes, and suggests next steps.

    It competes with Google, Microsoft, and Zoom — all of them using AI to make meetings more usable.

    In 2020, during the pandemic, Figma noticed that its users were congregating and chatting on its platform’s design pages. That led the company to launch FigJam, a web-based digital whiteboard, in 2021.

    In August, Figma issued an open beta of Jambot, a very popular AI plug-in.

    Adobe, which is no longer Figma’s future owner, agreed to pay the company a $1 billion breakup fee after their planned merger was canceled.

  • Gartner Predicts that Over 80% of Enterprises Will Use Gen AI by 2026

    IBL News | New York

    By 2026, over 80% of enterprises will have used Gen AI APIs and models and/or deployed Gen AI applications in production environments, predicted Gartner, Inc.

    Demand is especially increasing in healthcare, life sciences, legal, financial services, and the public sector.

    Three innovations projected to have a huge impact on organizations within ten years are GenAI-enabled applications, foundation models, and AI trust, risk, and security management.

    • “The most common pattern for GenAI-embedded capabilities today is text-to-X, which democratizes access for workers, to what used to be specialized tasks, via prompt engineering using natural language,” said Arun Chandrasekaran, VP Analyst at Gartner.

    • “However, these applications still present obstacles such as hallucinations and inaccuracy that may limit widespread impact and adoption.”

    • “Foundation models are an important step forward for AI due to their massive pretraining and wide use-case applicability.”

    • “Foundation models will advance digital transformation within the enterprise by improving workforce productivity, automating and enhancing customer experience and enabling cost-effective creation of new products and services.”

    • “Organizations that do not consistently manage AI risks are exponentially inclined to experience adverse outcomes, such as project failures and breaches. Inaccurate, unethical, or unintended AI outcomes, process errors, and interference from malicious actors can result in security failures, financial and reputational loss or liability, and social harm.”

  • Microsoft Introduced Copilot Pro for $20 Per Month Per User

    IBL News | New York

    Microsoft introduced this week Copilot Pro, a new premium subscription — at $20 per month per user.

    Beyond the normal free version of Copilot, it provides access to Copilot in the Microsoft 365 suite of Word, Excel, PowerPoint, Outlook, and OneNote.

    It also gives users the ability to build their own Copilot GPT — a customized Copilot tailored to a specific topic — with a set of prompts in Microsoft’s Copilot GPT Builder (coming soon).

    Copilot Pro features enhanced AI image creation with Image Creator from Designer (formerly Bing Image Creator), making generation faster with 100 boosts per day while adding more detailed image quality as well as a landscape image format.

    Copilot Pro provides users priority access to the latest OpenAI models, starting with GPT-4 Turbo.

  • Google Released ‘Lumiere’, Which Utilizes Unique Architecture to Generate AI Video

    IBL News | New York

    Google introduced this week Lumiere, a text-to-video generation AI model designed to portray realistic clips. It’s one of the most advanced text-to-video generators yet demonstrated, although it is still in a primitive state.

    Existing AI video models typically synthesize a set of keyframes and then apply temporal super-resolution to fill in the motion between them. Lumiere instead uses a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, in a single pass through the model.

    “We demonstrate state-of-the-art text-to-video generation results, and show that our design easily facilitates a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylized generation,” said the company.

    Lumiere does a good job of creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. It’s worth noting that AI companies often demonstrate video generators with cute animals because generating coherent, non-deformed humans is currently difficult.

    As for training data, Google doesn’t say where it got the videos it fed into Lumiere, writing, “We train our T2V [text to video] model on a dataset containing 30M videos along with their text caption. [sic] The videos are 80 frames long at 16 fps (5 seconds). The base model is trained at 128×128.”

    Other video generators include Meta’s Make-A-Video, Runway’s Gen-2, and Stable Video Diffusion, which can generate short clips from still images.

  • Pennsylvania Will Allow State Agencies to Use Generative AI

    IBL News | New York

    Pennsylvania will be the first state to deploy ChatGPT, initially within a small number of state agencies, to create and edit copy, update policy language, draft job descriptions, and generate code.

    After an initial trial period, Pennsylvania’s Governor Shapiro’s office said that ChatGPT will be used more broadly by other parts of the state government.

    However, no citizens will interact with ChatGPT directly as part of this pilot program.

    The pilot is seen as a test run for other state governments.

    One major consideration in this trial is ChatGPT’s tendency to hallucinate when handling sensitive government policies.

    “Generative AI is here and impacting our daily lives already – and my Administration is taking a proactive approach to harness the power of its benefits while mitigating its potential risks,” said Governor Shapiro in a press release.

    “Our collaboration with Governor Shapiro and the Pennsylvania team will provide valuable insights into how AI tools can responsibly enhance state services,” said OpenAI CEO Sam Altman in the same press release.

    Governor Shapiro signed an executive order in September to allow state agencies to use generative AI in their work. The state is home to Carnegie Mellon, whose researchers have paved the way for AI research.