Author: IBL News

  • The Humanoid Robot Startup Figure AI Attracted the Support of OpenAI, NVIDIA, Microsoft, and Jeff Bezos’ VC

    IBL News | New York

    The final form of ChatGPT may not be a bot, but a humanoid robot.

    Figure AI, a startup working to build humanoid robots that can perform dangerous and undesirable jobs, got support from OpenAI and other large names in AI, such as NVIDIA, Microsoft, and Jeff Bezos’ venture fund.

    The Sunnyvale, California-based company announced on Thursday that it raised $675 million in Series B funding at a $2.6 billion valuation with investments from Microsoft, OpenAI Startup Fund, NVIDIA, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest.

    Focused on deploying humanoid robots to assist people with real-world applications addressing labor shortages, Figure recently announced its first commercial agreement with BMW Manufacturing to bring humanoids into automotive production.

    The Figure team, made up of top AI robotics experts from Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation, has made remarkable progress in the past few months in the key areas of AI, robot development, robot testing, and commercialization. Founded 21 months ago, Figure currently has a team of 80 employees and is led by serial entrepreneur Brett Adcock.

    The new capital will be used to accelerate the timeline for commercial humanoid deployment by scaling up AI training, robot manufacturing, and engineering headcount.

    The collaboration with OpenAI will help to accelerate “Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language,” stated the company.

    Peter Welinder, VP of Product and Partnerships at OpenAI, said: “We’ve always planned to come back to robotics and we see a path with Figure to explore what humanoid robots can achieve when powered by highly capable multimodal models. We’re blown away by Figure’s progress to date and we look forward to working together to open up new possibilities for how robots can help in everyday life.”

    Figure will use Microsoft Azure for AI infrastructure, training, and storage.

    To date, Figure AI has developed a general-purpose robot, called Figure 01, that looks and moves like a human. The company sees its robots being put to use in manufacturing, shipping and logistics, warehousing, and retail, where labor shortages are the most severe.

    Earlier this week, the company released a video showing Figure 01 in action (see below). The robot, attached to a tether, walks on two legs, and uses its five-fingered hands to pick up a plastic crate, then walks several more steps before placing the box on a conveyor belt.

    Figure’s ultimate aim is for Figure 01 to be able to perform “everyday tasks autonomously.” The company says getting there will require it to develop more robust AI systems.

    There is a crowded field of companies vying to make humanoid robots a reality, although the market is nascent. Amazon-backed Agility Robotics plans to open a factory that can produce up to 10,000 of its bipedal Digit robots per year.

    Tesla is also trying to build a humanoid robot, called Optimus, while robotics company Boston Dynamics has developed several models. Norwegian humanoid robot startup 1X Technologies recently raised $100 million with backing from OpenAI.

  • Elon Musk Sues OpenAI and Sam Altman for Abandoning Its Not-For-Profit Mission

    IBL News | New York

    Billionaire entrepreneur Elon Musk, owner of Tesla, SpaceX, and X, this week sued OpenAI and its CEO, Sam Altman, saying they abandoned the startup’s original, not-for-profit mission of developing AI for the benefit of humanity.

    The lawsuit, filed on Thursday in California Superior Court in San Francisco, is the culmination of Musk’s opposition to the startup he co-founded. OpenAI has since become the leading company in generative AI, with the help of $13 billion in funding from Microsoft.

    Musk’s lawsuit alleges a breach of contract, saying Altman and co-founder Greg Brockman originally approached him to make an open-source, non-profit company, but the startup established in 2015 is now focused on making money.

    OpenAI “set the founding agreement aflame” in 2023 when it released its most powerful language model, GPT-4, essentially as a Microsoft product, the lawsuit alleged.

    Musk said OpenAI’s three founders originally agreed to work on artificial general intelligence (AGI), a concept that machines could handle tasks like humans, but in a way that would “benefit humanity,” according to the lawsuit. OpenAI would also work in opposition to Google, which Musk said he believed was developing AGI for profit and would pose grave risks.

    “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft. Under its new board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity,” Musk says in the suit.

    “OpenAI, Inc.’s once carefully crafted non-profit structure was replaced by a purely profit-driven CEO and a board with inferior technical expertise in AGI and AI public policy. The board now has an observer seat reserved solely for Microsoft,” Musk claims.

    Musk is represented in the suit by Los Angeles law firm Irell & Manella.

    According to Reuters, Musk decided to try to seize control of OpenAI from Altman and the other founders in late 2017, aiming to convert it into a commercial entity in partnership with Tesla, utilizing the automaker’s supercomputers.

    Last July, Musk founded his own artificial intelligence startup, xAI.

    An email exchange between Musk and Altman, presented as evidence in the lawsuit. 

  • Hugging Face, ServiceNow, and Nvidia Released ‘StarCoder2’, a Free Code-Generating Model

    IBL News | New York

    The BigCode project, an open scientific collaboration focused on the development of LLMs for code (Code LLMs), this week released StarCoder2, an AI-powered open-source code generator with a less restrictive license than GitHub Copilot, Amazon CodeWhisperer, and Meta’s Code Llama.

    Like most other code generators, StarCoder2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language.

    All StarCoder2 variants are trained on The Stack v2, a new, large, high-quality code dataset. StarCoder2 is a family of open LLMs for code that comes in three sizes: a 3B-parameter model trained by ServiceNow, a 7B model trained by Hugging Face, and a 15B model trained by NVIDIA with NVIDIA NeMo on NVIDIA accelerated infrastructure.

    The largest, StarCoder2-15B, was trained on more than 4 trillion tokens and 600+ programming languages from The Stack v2.
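    As a rough sense of scale, the memory needed just to hold each model’s weights can be estimated from the parameter count alone. The sketch below is a back-of-the-envelope calculation, not a benchmark: it assumes 16-bit (2-byte) weights and ignores activations, KV cache, and framework overhead.

```python
# Back-of-the-envelope estimate of the memory needed to hold model weights,
# assuming 16-bit (2-byte) parameters; runtime overhead is ignored.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

for name, params in [("StarCoder2-3B", 3e9),
                     ("StarCoder2-7B", 7e9),
                     ("StarCoder2-15B", 15e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB")  # ~6, ~14, and ~30 GB
```

    By this estimate, the 3B variant fits comfortably on a single consumer GPU, while the 15B variant needs roughly 30 GB for the weights alone; quantized formats shrink these numbers further.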

    BigCode released all models and datasets, as well as the data-processing and training code, as explained in an accompanying paper.

    Participating universities included Northeastern University, University of Illinois Urbana-Champaign, Johns Hopkins University, Leipzig University, Monash University, University of British Columbia, MIT, Technical University of Munich, Technion – Israel Institute of Technology, University of Notre Dame, Princeton University, Wellesley College, University College London, UC San Diego, Cornell University, and UC Berkeley.

    Beyond academia, the project gathered Kaggle, Roblox, Sea AI Lab, CSIRO’s Data61, Mazzuma, Contextual AI, Cohere, and Salesforce.

    StarCoder 2 can be fine-tuned in a few hours using a GPU like the Nvidia A100 on first- or third-party data to create apps such as chatbots and personal coding assistants. And, because it was trained on a larger and more diverse data set than the original StarCoder (~619 programming languages), StarCoder 2 can make more accurate, context-aware predictions — at least hypothetically.

    Harm de Vries, head of ServiceNow’s StarCoder 2 development team, told TechCrunch in an interview that “with StarCoder2, developers can use its capabilities to make coding more efficient without sacrificing speed or quality.”

    A recent Stanford study found that engineers who use code-generating systems are more likely to introduce security vulnerabilities in the apps they develop. Moreover, a poll from Sonatype, the cybersecurity firm, shows that the majority of developers are concerned about the lack of insight into how code from code generators is produced and “code sprawl” from generators producing too much code to manage.

    StarCoder 2’s license might also prove to be a roadblock for some, according to TechCrunch.

    “StarCoder 2 is licensed under the BigCode Open RAIL-M 1.0, which aims to promote responsible use by imposing “light touch” restrictions on both model licensees and downstream users. While less constraining than many other licenses, RAIL-M isn’t truly “open” in the sense that it doesn’t permit developers to use StarCoder 2 for every conceivable application (medical advice-giving apps are strictly off limits, for example). Some commentators say RAIL-M’s requirements may be too vague to comply with in any case — and that RAIL-M could conflict with AI-related regulations like the EU AI Act.”

    ServiceNow has already used StarCoder to create Now LLM, a code-generation product fine-tuned for ServiceNow workflow patterns, use cases, and processes. Hugging Face, which offers model-implementation consulting plans, is providing hosted versions of the StarCoder 2 models on its platform, while Nvidia is making StarCoder 2 available through an API and a web front-end.

  • The George Washington University Forms AI Advisory Councils From Each School

    IBL News | New York

    The George Washington University (GWU) this year formed advisory councils of faculty from each school to improve the use of AI tools in teaching and student learning and to share best practices.

    Managed by the Instructional Core within GW’s Libraries & Academic Innovation (LAI) office, led by Geneva Henry, these advisory councils provide input on AI resources and training for professors, aligning with the guidelines released by the Office of the Provost in April 2023.

    These guidelines state that using AI to study is permissible, but submitting AI-generated material for an assignment or using AI during an assessment is cheating.

    The decision of whether or not to allow AI in courses is individual to each professor.

    Douglas Crawford, a member of the council and an assistant professor of interior architecture, said to The GW Hatchet that the goal of the council is to act as a knowledge base for faculty looking to implement or restrict AI use in their courses.

    He said he encourages his students to use AI as a starting point for projects because it can provide more tailored inspiration than platforms like Google Images or Pinterest.

    John Helveston, a member of the council and an assistant professor of engineering management and systems engineering, said the introduction of AI into classrooms has pushed educators to rethink what they want to accomplish in classrooms and how they organize course content in order to make students think critically.

    Lorena Barba, a member of the council and a professor of mechanical and aerospace engineering, said the council had a “grassroots” origin that bodes well for faculty’s willingness to collaborate in discussions surrounding AI in a way that hasn’t been commonly seen in universities.

    “GW has a unique opportunity to be at the forefront, and many members of the AI Advisory Council are courageously embracing the challenge,” Lorena Barba said.

  • McKinsey: “Gen AI Will Unleash the Next Wave of Productivity”

    IBL News | New York

    Generative AI is poised to unleash the next wave of productivity, stated McKinsey in a research report titled “The economic potential of generative AI: The next productivity frontier,” released last month.

    The ability of generative AI applications, built using foundation models, to write text, compose music, and create digital art has persuaded consumers and households to experiment on their own.

    AI trained on these models can perform several functions; it can classify, edit, summarize, answer questions, and draft new content, among other tasks.

    McKinsey’s research considers that we are at the beginning of a journey to understand generative AI’s power, reach, and capabilities.

    “Deep learning has powered many of the recent advances in AI, but the foundation models powering generative AI applications are a step-change evolution within deep learning. Unlike previous deep learning models, they can process extremely large and varied sets of unstructured data and perform more than one task.”

    Its research suggests that generative AI is poised to transform roles and boost performance across functions such as sales and marketing, customer operations, and software development.

    In the process, it could unlock trillions of dollars in value across sectors from banking to life sciences.

    Foundation models have enabled new capabilities and vastly improved existing ones across a broad range of modalities, including images, video, audio, and computer code.

  • Mistral Launches ‘Mistral Large’, a New, Non-Open-Source LLM

    IBL News | New York

    Paris-based Mistral AI — which is trying to build an alternative to OpenAI’s GPT-4 and Anthropic’s Claude 2 — released yesterday a new LLM named Mistral Large. The model was not released under an open-source license.

    In addition, the French start-up is launching its alternative to ChatGPT with a new service called Le Chat, now available in beta. Le Chat can’t access the web.

    Founded by Arthur Mensch, Timothée Lacroix, and Guillaume Lample, a trio of former Meta and Google researchers, Mistral AI now has a valuation of €2 billion.

    Mistral Large supports a context window of 32k tokens (generally more than 20,000 words in English) and works in English, French, Spanish, German, and Italian.

    As a comparison, GPT-4 Turbo has a 128k-token context window.
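    The words-per-token figure above follows the common rule of thumb that one token corresponds to roughly 0.75 English words. The sketch below uses that assumed ratio, which in practice varies with the tokenizer and the text.

```python
# Convert a context window measured in tokens to an approximate English word
# count, using an assumed ~0.75 words-per-token ratio (a rule of thumb only).
def approx_words(context_tokens: int, words_per_token: float = 0.75) -> int:
    return int(context_tokens * words_per_token)

print(approx_words(32_000))   # Mistral Large: prints 24000
print(approx_words(128_000))  # GPT-4 Turbo: prints 96000
```

    At that ratio, 32k tokens comes out to roughly 24,000 English words, consistent with the “more than 20,000 words” figure cited above.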

    Mistral AI made Mistral Large available through Microsoft’s Azure, its first distribution partner, after signing a “multi-year partnership.”

    As part of the deal, Microsoft said it would invest in Mistral, although the financial details were not disclosed. The partnership will include a research and development collaboration to build applications for governments across Europe.

    Microsoft has already invested about $13 billion in San Francisco-based OpenAI, which is estimated to be worth $86 billion.

  • Huge Decrease in Jobs in Writing, Customer Service, and Translation

    IBL News | San Diego

    Job-posting data from the freelance marketplace Upwork — publicly available in the form of an RSS feed — shows an overall increase in the number of postings since ChatGPT was released in November 2022, according to an analysis posted at Bloomberry.com.
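    An analysis like this boils down to counting feed items per category over time. Below is a minimal sketch of that counting step, using only Python’s standard library and a made-up feed snippet (the tags and categories are illustrative, not Upwork’s actual feed schema).

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A tiny, made-up RSS snippet standing in for a real job-postings feed.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Blog post writing</title><category>Writing</category></item>
  <item><title>Subtitle translation EN-FR</title><category>Translation</category></item>
  <item><title>Landing page copy</title><category>Writing</category></item>
</channel></rss>"""

def count_jobs_by_category(feed_xml: str) -> Counter:
    """Count <item> entries in an RSS feed, grouped by their <category> tag."""
    root = ET.fromstring(feed_xml)
    return Counter(item.findtext("category", default="Uncategorized")
                   for item in root.iter("item"))

print(count_jobs_by_category(SAMPLE_FEED))  # Counter({'Writing': 2, 'Translation': 1})
```

    Running the same count on feed snapshots taken before and after November 2022 would yield the kind of per-category deltas the analysis reports.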

    However, three categories showed a large decline in jobs: writing, translation, and customer service jobs. The number of writing jobs declined by 33%, translation jobs declined by 19%, and customer service jobs declined by 16%.

    Since November 2022, video editing/production jobs were up 39%, graphic design jobs 8%, and web design jobs 10%. Software development jobs were also up, with backend development jobs at 6% and frontend/web development jobs at 4%.

    “Generative AI tools are already good enough to replace many writing tasks, whether it’s writing an article or a social media post. But they’re not polished enough for other jobs like video and image generation,” said Henley Wing, the author of the analysis.

    Jobs like generating AI content, developing AI agents, integrating OpenAI/ChatGPT APIs, and developing chatbots and AI apps are becoming the norm.

    However, the vast majority of companies are not yet developing their own LLMs or fine-tuning them with training data. They seem to be integrating OpenAI’s API into their existing products and developing chatbots to replace their customer service agents.

  • Google Meet Detects When a User Raises Their Hand

    IBL News | New York

    Google Meet announced last month an upcoming gesture detection feature called “Raise hand” for Q&A sessions and other moderation.

    This is a toolbar button, available for enterprise accounts on Google Workspace, that lets people know when you have something to say: the webcam recognizes a raised hand.

    The feature turns off gesture detection when someone is an active speaker.

    Until now, raising your hand to ask a question in Google Meet was done by clicking the hand-raise icon.

    This feature is toggled off by default and can be enabled from More Options > Reactions > Hand Raise Gesture.

    It’s similar to how Google Camera in Pixel phones can start a selfie timer when you raise your palm.

  • Adobe Launches an AI Assistant for PDF Documents

    IBL News | San Diego

    Adobe introduced this week an AI assistant in beta in Reader and Acrobat, which instantly generates summaries and insights from long PDF documents. It also recommends and answers questions based on a PDF’s content through an intuitive conversational interface.

    The AI assistant generates citations with the source, and as an output, it formats the information for sharing in emails, reports, and presentations. Clickable links help quickly find information in long documents.

    This feature will be sold through a new add-on subscription plan when AI Assistant is out of beta.

    “Our AI Assistant is bringing generative AI to the masses, unlocking new value from the information inside the approximately 3 trillion PDFs in the world,” stated Adobe.

    This assistant leverages the same AI and machine learning models behind Acrobat Liquid Mode, the technology that supports responsive reading experiences for PDFs on mobile.

    PDF was invented by Adobe thirty years ago and today remains the standard format for reading, editing, and transforming documents.

    Currently, the new AI Assistant features are available in beta for Acrobat Standard and Pro Individual and Teams subscription plans on desktop and web in English, with features coming to Reader desktop customers in English over the next few weeks – all at no additional cost. Other languages will follow. A private beta is available for enterprise customers.

    Adobe: How people are using AI Assistant (YouTube videos)

  • Google Open-Sources a Small Model of Gemini

    IBL News | San Diego

    Google yesterday released Gemma 2B and 7B, two lightweight, pre-trained open-source AI models mostly suited to small projects such as simple chatbots or summarization.

    The release also gives developers access to the research and technology used to create the closed Gemini models.

    They are available via Kaggle, Hugging Face, Nvidia’s NeMo, and Google’s Vertex AI, and are designed with Google’s AI Principles at the forefront.

    Gemma supports multiple frameworks: Keras 3.0, native PyTorch, JAX, and Hugging Face Transformers.

    Developers and researchers can work with Gemma using free access in Kaggle, a free tier for Colab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for Google Cloud credits of up to $500,000 to accelerate their projects.

    Each size of Gemma is available at ai.google.dev/gemma.

    Google is also providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow through native Keras 3.0.

    Google’s Gemini comes in several sizes, including Gemini Nano, Gemini Pro, and Gemini Ultra.

    Last week, Google announced a faster Gemini 1.5 intended for business users and developers.