Category: Views

  • Humane Introduced Its Wearable Device ‘Ai Pin’ After $240M In Funding and Much Hype

    IBL News | New York

    The San Francisco-based startup Humane, founded by two former Apple employees, showcased its bold sci-fi venture yesterday—a device named Ai Pin.

    The unveiling followed five years of development, $240 million in funding, the acquisition of 25 patents, significant hype, and partnerships with top tech companies, including OpenAI, Microsoft, and Salesforce.

    Imran Chaudhri and Bethany Bongiorno, Humane’s husband-and-wife founders [in the picture below], envision a future with reduced dependence on screens, a departure from the ubiquity created by their former employer, Apple.

    They promote the pin as the first artificially intelligent device. Control options include speaking aloud, tapping a touchpad, or projecting a laser display onto the palm of a hand.

    In an instant, the device’s virtual assistant can send a text message, play a song, snap a photo, make a call, or translate a real-time conversation into another language. The system relies on AI to help answer questions and can summarize incoming messages.

    Essentially, the device can follow a conversation from one question to the next without needing explicit context.

    The technology is a step forward from Siri, Alexa, and Google Assistant, wrote The New York Times. “To tech insiders, it’s a moonshot. To outsiders, it’s a sci-fi fantasy.” “It’s a gadget that’s reminiscent of the badges worn in Star Trek.”

    Humane plans to commence shipping the pins next year, expecting to sell approximately 100,000 units at a cost of $699 each, requiring a $24 monthly subscription in the first year. The pin comes with a new operating system called Cosmos and its own wireless plan. Users will need new phone numbers for the device.

    “Users will need to dictate rather than type texts and trade a camera that zooms for wide-angle photos. They’ll need to be patient because certain features, like object recognition and videos, won’t be available initially. And the pin can sometimes be buggy, as it was during some of the company’s demos for The New York Times.”

    “The tech industry has a large graveyard of wearable products that have failed to catch on.”

    Sam Altman, OpenAI’s chief executive, has invested in Humane, as well as another AI company, Rewind AI, which plans to create a necklace that records what people say and hear. He has also discussed teaming up with Jony Ive, Apple’s former chief designer, to create an AI gadget with ambitions similar to Humane’s.

  • OpenAI Announced GPT-4 Turbo, GPTs, and Assistants API, Among Other Improvements [Video]

    IBL News | San Francisco

    OpenAI shared yesterday several new additions and improvements, including GPT-4 Turbo, an improved version of its flagship model, during its first DevDay conference in San Francisco. [OpenAI’s CEO in the picture above].

    The company also introduced GPTs, which allow developers to create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.

    “Anyone can easily build their own GPT — no coding is required. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images, or analyzing data,” explained OpenAI.

    Example GPTs are available today for ChatGPT Plus and Enterprise users, including Canva and Zapier AI Actions.

    More innovations announced at DevDay included:

    • New GPT-4 Turbo with a 128K context window, equivalent to more than 300 pages of text in a single prompt. GPT-4 Turbo has knowledge of world events up to April 2023. OpenAI is offering it at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4. It will become a stable, production-ready model in the coming weeks.
    • New Assistants API, which is intended to make it easier for developers to build their own assistive AI apps.
    • New multimodal capabilities in the platform, including vision and image creation (DALL·E 3). Developers can now generate human-quality speech from text via the text-to-speech API.
    • Release of Whisper large-v3, the next version of OpenAI’s open-source automatic speech recognition (ASR) model.
    • Open-sourcing the Consistency Decoder, a drop-in replacement for the Stable Diffusion VAE decoder.
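
    The new model is reachable through OpenAI’s standard Chat Completions endpoint. As an illustration (this helper is a hypothetical sketch, not OpenAI code), the snippet below assembles a request body for the GPT-4 Turbo preview; the identifier “gpt-4-1106-preview” was the name used at launch, so check OpenAI’s model list for the current one.

```python
# Hypothetical helper that assembles a Chat Completions request body for
# the GPT-4 Turbo preview announced at DevDay.

def build_chat_request(system_prompt: str, user_prompt: str,
                       model: str = "gpt-4-1106-preview") -> dict:
    """Assemble the JSON body for a POST to /v1/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# With the official client (`pip install openai`), the same payload is sent as:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**build_chat_request("Be terse.", "Hi"))
```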

    Official Press Release to the Media from OpenAI:

    A few key stats we announced on stage as it’s been a big year for OpenAI:

    • We have more than 2 million developers building on our API for a wide range of use cases.
    • Over 92% of Fortune 500 are building on our products.
    • And we have about 100M weekly active users on ChatGPT.

    Introducing GPTs: 

    We’re introducing GPTs – custom versions of ChatGPT. Anyone can easily build GPTs to help with specific tasks at work or at home. We think GPTs are a first step toward an agent-like future. For third-party developers, we’re showing them how to build these agent-like experiences into their own apps as well. Example GPTs are available today for ChatGPT Plus and Enterprise users to try out, including Canva and Zapier AI Actions.

    New models and developer products announced at DevDay include:

    • ChatGPT gets a new UI.
    • GPT-4 Turbo: a new model that includes longer context length, better world knowledge because we’re updating the cutoff to April 2023, and other improvements.
    • New Assistants API makes it easier for developers to build their own GPT-like experiences into their own apps and services.
    • New modalities to the API, including vision, DALL·E 3, and text-to-speech with six preset voices to choose from.
    • Dropping the price of all of our models across the board so it’s easier for developers to build and scale on our platform.

    See our blog posts on GPTs and the new models and products for more details. Press images are here, and we’ll add more throughout the day.

  • OpenAI Released Advanced Versions of DALL·E 3 and ChatGPT-4

    IBL News | New York

    OpenAI released its image model DALL·E 3 to ChatGPT Plus and Enterprise users. DALL·E 3 can create unique, more detailed images from a simple conversation, providing a selection of visuals for users to refine and iterate upon.

    This model can render intricate details, including text, hands, and faces. It also responds efficiently to extensive and detailed prompts, and it supports both landscape and portrait aspect ratios, as explained in this research paper.

    DALL·E 3 avoids any harmful imagery, including violent, sexual, or hateful content.

    This model is designed to decline requests that ask for an image in the style of a living artist.

    OpenAI offers creators the option to opt their images out of the training of its future image generation models.

    In addition, ahead of OpenAI’s DevDay conference next week, where the company is expected to explore new tools with developers, the San Francisco–based research lab released a multimodal version of ChatGPT-4 that allows users to upload and analyze PDFs and various document types.

    GPT-4 All Tools includes advanced data analysis, DALL·E 3, and built-in browsing capabilities without the need for plugins. These new features may make many third-party ChatGPT plugins obsolete.

    Microsoft’s Bing and Designer also added a more advanced version of DALL·E 3.

    This development pushes the boundaries of generative AI capabilities beyond text-based queries.

    In other news, OpenAI announced it built a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today’s models to AGI.

  • Educause’s 2024 Top 10 Report Encourages Developing an Institutional Approach to AI

    IBL News | Chicago

    “Educational institutions must expand beyond growth and innovation to address risk and to prepare for what may be ahead,” said Susan Grajek, Vice President of Partnerships, Communities, and Research at Educause, last week in Chicago during the association’s annual conference.

    Grajek presented, in a much-awaited session, the “2024 Educause Top 10” IT issues list, in the main auditorium of the McCormick Convention Center in Chicago, filled with thousands of educators, administrators, and IT managers who attended the three-day event.

    Educause’s annual list, a classic report, addresses how higher education IT leaders can contribute to their institution’s overall success.

    These are the 2024 Educause Top Ten Issues:

    1. Cybersecurity as a Core Competency. Adopting a formal risk management framework can help institutions to balance cost, risk, and opportunity.

    2. Driving to Better Decisions. Improving data quality and governance can help lead to more informed decision-making. Data is a strategic asset now.

    3. The Enrollment Crisis. Data can empower decision-makers to determine course offerings, identify prospective students, or spot opportunities and tap into new markets.

    4. Diving Deep into Data. Analytics can help institutions to harness actionable insights to improve learning and student success.

    5. Administrative Cost Reduction. Streamlining processes, data, and technologies can lead to cost savings.

    6. Meeting Students Where They Are. Providing students with universal access to institutional services can lead to better outcomes.

    7. Hiring Resilience. Recruiting and retaining IT talent under adverse circumstances can help human resources leaders.

    8. Financial Keys to the Future. Using technology and data to develop financial models and projections can help higher ed leaders make tough choices.

    9. Balancing Budgets. Taking control of IT costs and vendor management can help institutions build strong relationships with solution providers and industry partners.

    10. Adapting to the Future. Cultivating institutional agility means preparing for a range of possible future scenarios.

    11. Honorary issue: AI. Institutions need to develop an institutional approach to the technology.

    “AI has the potential to help people skill up rapidly, including those who traditionally lacked access to effective educational opportunities and resources,” Grajek said.

    “AI can potentially help reduce administrative costs if applied to administrative processes, job descriptions, project charters, meeting summaries, and onboarding and training. Academic applications may include assessment reform, developing course materials for introductory level courses, and tutoring. We will almost certainly create more and more powerful use cases in the coming months and years.”

    AI buzz dominated the three-day conference, with one out of eight talks focused on this technology, as Inside Higher Ed reported.

    Resources and links: 2024 EDUCAUSE Top 10: Institutional Resilience

  • How to Add Your Own Data to a Large Language Model

    IBL News | New York

    To create a corporate chatbot for customer support, generate personalized posts and marketing materials, or develop a tailored automation application, a Large Language Model (LLM) like GPT-4 has to be able to answer questions about private data.

    However, training or retraining the model is impractical due to the cost, time, and privacy concerns associated with mixing datasets, as well as the potential security risks.

    Usually, the approach taken is “content injection,” a technique built on “embeddings” that involves providing the model with additional information from a desired database of knowledge alongside the user’s query.

    This data collection can include product information, internal documents, or information scraped from the web, customer interactions, and industry-specific knowledge.

    At this stage, it’s essential to consider data privacy and security, ensuring that sensitive information is handled appropriately and in compliance with relevant regulations, as expert Shelly Palmer details in a post.

    The data to be embedded has to be cleaned and structured to ensure compatibility with the AI model.

    Also, it has to be tokenized and converted into a suitable format by setting the correct indexes.

    After the data is preprocessed, the pre-trained AI model can be fine-tuned.

    The next step is to interact with the API. Query vectors will be matched to the database, pulling the content that will be injected.

    The number of tokens is calculated to estimate the cost. Usually, each token corresponds to about four characters, or roughly three-quarters of an English word.
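
    As a rough illustration of that estimate (the four-characters-per-token heuristic and the price used here are approximations, not official figures):

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: one token is about four characters of English text."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, price_per_1k_tokens: float) -> float:
    """Approximate input cost in dollars for a given per-1K-token price."""
    return estimate_tokens(prompt) / 1000 * price_per_1k_tokens

# A 2,000-character prompt is roughly 500 tokens; at an illustrative
# $0.01 per 1K input tokens, that comes to about half a cent.
```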

    To run an effective content injection schema, a prompt must be engineered. This is an example of a prompt:

    “You are an upbeat, positive employee of Our Company. Read the following sections of our knowledge base and answer the question using only the information provided here. If you do not have enough information to answer the question from the knowledge base below, please respond to the user with ‘Apologies. I am unable to provide assistance.’

    Context Injection goes here.

    Questions or input from the user go here.”
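
    The injection step itself is simple string assembly. The sketch below is a hypothetical illustration: a real system would embed the knowledge base and the query (for example, with an embeddings API) and match them by cosine similarity, so the word-overlap scoring here merely stands in for an embedding index, and the knowledge-base entries are made up.

```python
# Minimal context-injection sketch with a toy retriever. Word overlap
# stands in for embedding similarity; the documents are illustrative.

KNOWLEDGE_BASE = [
    "Our Company ships orders within two business days.",
    "Refunds are processed within 7 days of receiving a return.",
    "Support is available by email at all hours.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Inject the retrieved context ahead of the user's question."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return (
        "You are an upbeat, positive employee of Our Company. Read the "
        "following sections of our knowledge base and answer the question "
        "using only the information provided here.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
```

    The assembled string is what gets sent to the model as the prompt, with the user’s question at the end.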

    There are three more considerations for the right implementation: Any personally identifiable information (PII) must be anonymized in order to protect the privacy of your customers and also ensure compliance with data protection regulations like GDPR (General Data Protection Regulation).

    Robust access control measures will help prevent unauthorized access and reduce the risk of data breaches.

    Continuous monitoring should be in place to check for any signs of bias or other unintended consequences before they escalate.

    Blog Replit: How to train your own Large Language Models

    Andreessen Horowitz: Navigating the High Cost of AI Compute

  • Inflection AI, the Year-Old Startup Behind Chatbot Pi, Valued at $4 Billion

    IBL News | New York

    Inflection AI, which has a small team of around 35 employees and is led by ex-DeepMind leader Mustafa Suleyman, closed a $1.3 billion funding round this week. The new capital values the one-year-old startup at $4 billion.

    The round was led by Microsoft, Nvidia, GPU cloud provider CoreWeave, and billionaires Reid Hoffman, Bill Gates, and Eric Schmidt.

    The Palo Alto, California-based Inflection now sits behind OpenAI (which has raised $11.3 billion to date) as the second-best-funded generative AI startup, edging out Anthropic ($1.5 billion). Well behind it are Cohere ($445 million), Adept ($415 million), Runway ($237 million), Character.ai ($150 million), and Stability AI (~$100 million).

    This influx of capital will be used to continue expanding its computing capabilities and to further develop its AI-powered personal chatbot, Pi.

    Specifically, Inflection says it’s working with Nvidia and CoreWeave to build what it claims is one of the largest AI training clusters in the world, comprising 22,000 Nvidia H100 GPUs.

    “Personal AI is going to be the most transformational tool of our lifetimes. This is truly an inflection point,” Suleyman said in a statement.

    According to Inflection, Pi is intended to be a “kind” and “supportive” companion, offering “friendly” advice and info in a “natural, flowing” style.

    Pi is available to test via a messaging app or online.

  • Cohere, which Creates Cloud-Agnostic LLMs, Raised $270M, with Nvidia and Oracle as Investors

    IBL News | New York

    Generative AI startup Cohere, which is developing a model ecosystem for the enterprise, raised $270 million as part of its series C round. A mix of VC and strategic investors, including Nvidia, Oracle, and Salesforce Ventures, among others, participated in the round.

    This Toronto-based company has raised a total of $445 million to date. Only OpenAI ($11.3 billion) and Anthropic ($450 million) have raised more, ahead of rivals Inflection AI ($225 million) and Adept ($415 million). This influx of money has resulted in a valuation of around $2.1 billion, according to Bloomberg.

    Founded in 2019 and with a workforce of 180 employees, Cohere builds, trains, and customizes large language models for enterprise customers. Corporations can use their proprietary data models — which can be expensive to train — to do things like summarize customer emails or help write website copy.

    Nvidia CEO Jensen Huang said of Cohere, “Their service will help enterprises around the world harness these capabilities to automate and accelerate.”

    Cohere’s platform is cloud agnostic, allowing companies to use their preferred cloud provider to increase data privacy and make implementation simpler.

    The platform can be deployed inside public clouds (e.g., Google Cloud, Amazon Web Services), a customer’s existing cloud, virtual private clouds, or on-site.

    The startup works with companies like Jasper and HyperWrite for copywriting generation tasks like creating marketing content, drafting emails, and developing product descriptions. Also, it collaborates with LivePerson, the conversational marketing company, to build fine-tuned LLMs to improve explainability, as well as with several news outlets.

    Cohere said it sees “search and retrieval” as the next core area of growth, so models or chatbots have the ability to expand on their knowledge base and search the web for information that’s relevant to a query.

    The President and COO, Martin Kon, told TechCrunch: “Today, chatbots don’t have access to the world. They don’t know about what happened ten minutes ago. They have to memorize everything within themselves, and they only have a memory of what they saw during training. With search and retrieval, you can require a model to cite sources, so users don’t need to blindly trust a model; everything links out to a site that you can verify and fact-check.”

    Cohere plans to build additional models that can take action and work for customers, like booking a flight, scheduling a meeting, or filing an expense report on a person’s behalf. In that way, it’s chasing after competitors like Adept, Inflection, and OpenAI, all of which are building systems to connect AI with third-party apps, services, and products.

  • Meta Will Provide Generative AI for Advertisers in Instagram or Facebook

    IBL News | New York

    Meta, formerly known as Facebook, announced an AI Sandbox for advertisers to help them improve their ad performance for businesses and spend less time and resources on repurposing creative assets.

    This AI Sandbox, announced last week as a testing ground, will allow the creation of different variations of the same copy for different audiences, background generation through text prompts, and image cropping to different aspect ratios for Instagram or Facebook posts, stories, or short videos like Reels.

    These features, called Meta Advantage, are available today to select advertisers and will gradually expand to more advertisers in July, according to the company.

    Currently, some startups, such as Omneky and Movio, are building on DALL·E 2, GPT-3, and other generative AI models to power ad tools and marketing videos for advertisers.

  • Four Solutions to Integrate ChatGPT Bots on Websites

    IBL News | New York

    A WordPress free plugin, AI Engine, allows for the creation of ChatGPT-like chatbots on websites by adding a shortcode.

    An English developer living in Japan, Jordy Meow, launched this tool through his website.

    Users would need to host a WordPress-based website and an account with OpenAI.

    The Chatbot builder allows the user to provide the AI assistant with a name and a starting message.

    It also allows fine-tuning the bot: the plugin comes with a Dataset Builder used to generate a large number of questions and answers based on the website content. Data is gathered in a Google Sheet with two columns and a minimum of 500 rows. (According to the OpenAI documentation, 3,000 to 5,000 rows are recommended, but it ultimately depends on what you’re trying to achieve.)

    Once you have your dataset, you can import it into AI Engine using the “Import File” button. You can export a CSV file from Google Sheets and use it here, but it also supports JSON and JSONL formats if you prefer. Alternatively, you can type the data manually.
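
    As an illustration of that export step, the hypothetical helper below converts a two-column question/answer CSV into JSONL. The “prompt”/“completion” field names follow OpenAI’s legacy fine-tuning format; AI Engine’s importer may expect different keys, so treat this as a sketch.

```python
import csv
import io
import json

def csv_to_jsonl(csv_text: str) -> str:
    """Convert a two-column (question, answer) CSV export to JSONL lines."""
    rows = []
    for question, answer in csv.reader(io.StringIO(csv_text)):
        rows.append(json.dumps({"prompt": question, "completion": answer}))
    return "\n".join(rows)

sheet_export = "What is AI Engine?,A WordPress plugin for AI chatbots.\n"
print(csv_to_jsonl(sheet_export))
```

    Answers containing commas need to be quoted in the CSV, which Google Sheets does automatically on export.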

    The developer has added a paid Pro version, starting at $49 per site. This functionality allows the chat to read the WordPress page it is embedded on, so customers can ask questions about that page.

    This plugin is free, although access to OpenAI’s API comes at a cost for heavily trafficked sites.

    Another approach is offered by Chatbase.co. This start-up offers an API to create chatbots trained on your data.

    Chatbot UI offers an open-source clone of OpenAI’s ChatGPT user interface. Developers can plug in their API key to use this UI with the OpenAI API.

    Chatshape.com allows the implementation of an AI customer support agent from the user’s website content and adds it as a chat bubble.

    Finally, the website espanol.love uses GPT-4 to accurately translate anything into Spanish with an accent.

  • Google Introduced Its Newest LLM ‘PaLM 2’. It Includes Bard, ChatGPT’s Strongest Competitor Yet [Video]

    IBL News | New York

    Google went all-in on AI during its annual I/O 2023 developer conference on Wednesday.

    The search giant publicly unveiled its newest large language model (LLM), PaLM 2, which, according to the company, is better at reasoning, writing, math, and logic, and performs better than OpenAI’s GPT-4 in coding and debugging.

    The Mountain View, California-based company also introduced a new multimodal LLM called Gemini, which is currently under training.

    PaLM 2 comes in different sizes, which are weirdly named after animal constellations: Gecko, Otter, Bison, and Unicorn.

    Google’s new code completion and code generation tool, named Codey, is the company’s answer to GitHub Copilot.

    Codey is specifically trained to handle coding-related prompts and is also trained to handle queries related to Google Cloud in general.

    Google also made its chatbot Bard, the equivalent of ChatGPT, officially available for everyone, removing any waitlist.

    Built on PaLM 2, Bard allows export to Google Docs, Sheets, Replit, and Gmail, among others.

    Bard’s users will be able to generate images via Adobe Firefly and then modify them using Express.

    In its search business, Google introduced the AI snapshot feature, which takes content from top links and allows for follow-up questions.

    Sponsored ads will appear above while traditional links will be placed below.

    Google’s productivity suite Workspace was improved with an AI sidekick called Duet, designed to provide better prompts.

    It has automatic prompt suggestions and can do a ton of things, such as converting text to tables in Sheets, finding information in Gmail threads, and creating images in Slides.

    “The Sidekick panel will live in a side panel in Google Docs and is constantly engaged in reading and processing your entire document as you write, providing contextual suggestions that refer specifically to what you’ve written,” said Google.

    Another feature of Google Workspace, particularly in Gmail and Docs, is the ‘Help me write’ feature, which allows users to write anything at different lengths.

    The new AI features for Slides and Meet include the ability to type in what kind of visualization the user is looking for, and the AI creates that image. Specifically for Google Meet, that means custom backgrounds.

    Google introduced the Magic Editor in Google Photos.

    Google also announced new AI models heading to Vertex AI, its fully managed AI enterprise service, including a text-to-image model called Imagen.

    Another interesting project introduced by the search giant is Project Tailwind, an AI-powered notebook tool that takes a user’s free-form notes and automatically organizes and summarizes them.

    Essentially, users pick files from Google Drive, then Project Tailwind creates a private AI model with expertise in that information, along with a personalized interface designed to help sift through the notes and docs.

    The tool is available through Labs, Google’s refreshed hub for experimental products.