Category: Top News

  • What Happened at OpenAI: Board Members Believed that Altman Was Dishonest

    IBL News | New York

    Before firing Sam Altman on November 17, Ilya Sutskever, the chief scientist of OpenAI, and three board members had been whispering behind his back for months, The New York Times revealed in an insightful story this Saturday.

    They believed Sam Altman had been dishonest and should no longer lead the company.

    They were worried that ChatGPT’s success was antithetical to creating safe AI.

    In September, Altman met investors in the Middle East to discuss an AI chip project. The board was concerned that he wasn’t sharing all his plans with it.

    Mr. Sutskever also believed that Mr. Altman was bad-mouthing the board to OpenAI executives. Other employees had also complained to the board about Mr. Altman’s behavior.

    The ouster was the culmination of years of tensions and divisions at OpenAI.

    Microsoft, which had committed $13 billion to OpenAI, weighed in to protect its investment. Many top Silicon Valley executives and investors, including the CEO of Airbnb, also mobilized to support Altman.

    From his $27 million mansion in San Francisco’s Russian Hill neighborhood, Sam Altman, driven by a hunger for power more than by money, lobbied through social media and voiced his displeasure in private text threads, according to the NYT, which interviewed more than 25 people with knowledge of the events.

    On Nov. 21, Sam Altman returned as CEO of OpenAI.

    The San Francisco OpenAI lab was founded by Elon Musk, Sam Altman, Ilya Sutskever, and nine others. Its goal was to build AI systems to benefit all of humanity.

    Unlike most tech start-ups, it was established as a nonprofit with a board that was responsible for making sure it fulfilled that mission.

  • Quora Creates a Revenue Program for Chatbot Creators

    IBL News | New York

    Quora, the company behind the Poe.com chatbot, said it intends to create a new market allowing bot creators in areas such as tutoring, knowledge, therapy, entertainment, analysis, storytelling, roleplay, image, video, and music to generate revenue.

    With this stated goal, Poe.com launched this month a creator monetization program to support both prompt bots and server bots, the latter created by developers who write code and integrate with the platform’s API.

    “For anyone training their own models and running their own inference, or using a third-party AI service through an API, operating a bot can entail significant infrastructure costs, and we want them to be able to operate sustainably and profitably,” said Adam D’Angelo, CEO at Quora, the company behind Poe.com.

    Poe.com’s monetization structure, currently available only in the US, has two components:

    • If a bot leads a user to subscribe to Poe (measured in a few ways), the company will share a cut of the revenue that user pays.
    • The bot creator can set a per-message fee, and Poe will pay that fee for every message the bot handles.

    Users can get started at poe.com/creators, or learn about creating a bot at developer.poe.com.
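
    For developers weighing the server-bot route, the sketch below shows roughly what such a bot can look like in Python using Poe’s open-source fastapi_poe package. The class names, the PartialResponse type, and the access-key flow are assumptions drawn from the developer.poe.com documentation, not details from this article.

      # Minimal sketch of a Poe server bot, assuming the fastapi_poe package;
      # names and signatures are assumptions based on developer.poe.com docs.
      import fastapi_poe as fp


      class EchoBot(fp.PoeBot):
          # Stream a response back to Poe: here we simply echo the user's last message.
          async def get_response(self, request: fp.QueryRequest):
              last_message = request.query[-1].content
              yield fp.PartialResponse(text=last_message)


      if __name__ == "__main__":
          # The access key is issued when the bot is registered on Poe.
          fp.run(EchoBot(), access_key="<YOUR_ACCESS_KEY>")

    Per-message fees and subscription revenue sharing would then apply to the traffic such a bot serves through Poe, per the program described above.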

     

  • The ChatGPT App Gets Nearly $30 Million in Revenue

    IBL News | New York

    The ChatGPT app topped 110 million installs, and its $20-per-month paid version reached $28.6 million in revenue, according to TechCrunch and data.ai.

    Anthropic’s competing AI chatbot, Claude, had 1.18 million monthly active users as of September 2023, according to Apptopia.

    The top chatbot by revenue was Ask AI, which offers subscription options ranging from one week up to one year, as well as multiple tiers, like “premium” and “elite.”

    Other successful generative AI apps with lots of downloads, as shown in the graphics, are Character AI, Chai, Open Chat, Nova, ChatBot, AI Mirror, Imagine, Artimind, and ChatBox.

  • Google Introduced Its Multimodal Technology ‘Gemini’ and Added It to Bard

    IBL News | New York

    Google yesterday introduced its long-awaited answer to ChatGPT: a natively multimodal, pre-trained AI technology with reasoning capabilities named Gemini.

    While other multimodal offerings — meaning models that can analyze text, audio, video, images, and code — exist, Gemini was described by Google’s CEO Sundar Pichai as the company’s “most capable and general model yet.”

    “Our first version, Gemini 1.0, is optimized for different sizes: Ultra, Pro, and Nano.”

    Demis Hassabis, CEO and Co-Founder of Google DeepMind, explained that “Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.”

    “This makes it especially good at explaining reasoning in complex subjects like math and physics.”

    Gemini can understand, explain, and generate high-quality code in Python, Java, C++, and Go. “Its ability to work across languages and reason about complex information makes it one of the leading foundation models for coding in the world,” said Demis Hassabis.

    Google said that Gemini 1.0 was now rolling out across a range of its products and platforms.

    For example, the chatbot Bard was upgraded with Gemini Pro, while Gemini Ultra will arrive early next year in a new experience called Bard Advanced.

    Google is also bringing Gemini to Pixel: the Pixel 8 Pro is engineered to run Gemini Nano, powering new features like Summarize in the Recorder app and rolling out in Smart Reply in Gboard, starting with WhatsApp.

    In the coming months, Gemini will be available in more of Google’s products and services, such as Search, Ads, Chrome, and Duet AI.

    Starting on December 13, developers and enterprise customers will be able to access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI.

    (Google AI Studio is a free, web-based developer tool to prototype and launch apps quickly with an API key.)
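
    For reference, a minimal sketch of calling Gemini Pro from Python through the Google AI Studio SDK is shown below; the google-generativeai package name, the gemini-pro model identifier, and the GOOGLE_API_KEY environment variable are assumptions based on Google’s developer documentation at launch, not details from this article.

      # Minimal sketch: calling Gemini Pro via the google-generativeai SDK
      # (Google AI Studio). Package name, model id, and env var are assumptions.
      import os

      import google.generativeai as genai

      # An API key created in Google AI Studio is assumed to be set in the environment.
      genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

      model = genai.GenerativeModel("gemini-pro")
      response = model.generate_content("Explain in two sentences what a multimodal model is.")
      print(response.text)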

    A chart showing Gemini Ultra’s performance on common text benchmarks, compared to GPT-4 (API numbers calculated where reported numbers were missing).

    A chart showing Gemini Ultra’s performance on multimodal benchmarks compared to GPT-4V, with previous SOTA models listed in places where capabilities are not supported in GPT-4V.

     

    The New York Times: Google Updates Bard Chatbot With ‘Gemini’ A.I. as It Chases ChatGPT

  • Hollywood Studios Will Request Consent Before Making Digital Replicas of Actors

    IBL News | New York

    After its tentative agreement with Hollywood studios, the SAG-AFTRA union revealed how studios will handle AI replicas of living and dead actors and how this generative technology will impact the industry for decades.

    According to the deal, companies must request consent before making digital replicas of actors, disclosing what those replicas will be used for. Actors will also receive compensation for the digital replicas.

    The rules will also apply to deceased actors. Heirs or beneficiaries must consent first.

    Regarding synthetic fakes, meaning fake performers based on the image and likeness of an actor, SAG-AFTRA must be notified and will have the right to bargain for fair pay.

    It’s expected that Hollywood film and TV production will resume in January 2024, following months of disruption with a 118-day strike.

    “It allows the industry to go forward. It does not block AI, but it makes sure that performers are protected, the rights of consent are protected, the rights to pay compensation, and the rights of employment are protected,” said the union, which represents 160,000 members.

    In October, the Writers Guild of America (WGA) also officially ended its strike after ratifying a three-year deal with AI terms approved.

    It was agreed that AI cannot be considered a writer within TV and film projects, and AI-generated material is not considered literary material or assigned material.

    On the other hand, writers were given the option to use AI if they wished, but they could not be forced by a company to use AI software.

    The deal also stipulated that if anything is written by AI, then the company must notify the writer in advance.

  • Amazon Will Train 2 Million Workers on AI Skills by 2025

    IBL News | New York

    Amazon announced “AI Ready,” a new program to train two million workers globally in artificial intelligence skills by 2025, as the company falls behind rivals on generative AI.

    The “AI Ready” initiative is in addition to AWS’s commitment to invest hundreds of millions of dollars to provide free cloud computing skills training to 29 million people by 2025, which has already trained more than 21 million people.

    The giant said that workers with AI skills can earn up to 47% more in salaries.

    Currently, hiring AI-skilled talent is a priority among 73% of employers but three out of four who consider it a priority can’t find the AI talent they need.

    The three new initiatives are:

    • Eight new and free AI and generative AI courses, ranging from foundational to advanced.
    • Amazon Web Services (AWS) Generative AI Scholarship, providing $12 million in scholarships to 50,000 high school and university students globally with access to a new generative AI course on Udacity.
    • New Hour of Code Dance Party: AI Edition collaboration with Code.org designed to help students learn about generative AI. During this hour-long introduction to coding and AI, students will create their own virtual music video set to hit songs from artists including Miley Cyrus and Harry Styles.

      “Students will code their virtual dancer’s choreography and use emojis as AI prompts to generate animated backgrounds. The activity will give participants an introduction to generative AI, including learning about large language models and how they are used to power the predictive analytics responsible for creating new images, text, and more,” said Amazon.

      Hour of Code will take place globally during Computer Science Education Week, December 4–10, engaging students and teachers in kindergarten through 12th grade.

      Additionally, AWS is providing up to $8 million in AWS Cloud computing credits to Code.org, which runs on AWS, to further support Hour of Code.

    The above-mentioned courses for business and nontechnical audiences are available on AWS Educate and AWS Skill Builder. Participants can also learn how to use CodeWhisperer, Amazon’s AI code generator, which produces whole lines of code.

    Courses for developers and technical audiences are:

    A photo of a laptop device on a table that shows part of the tool for Hour of Code Dance Party: AI Edition.
  • Stability.AI Introduces a Research Model for Generative Video

    IBL News | New York

    Stability.AI released this week Stable Video Diffusion, a foundation model for generative video based on the image model Stable Diffusion.

    Adaptable to numerous video applications, the released model is intended for research but not for real-world or commercial applications at this stage.

    Stable Video Diffusion was released in the form of two image-to-video models, capable of generating 14 and 25 frames at customizable frame rates between 3 and 30 frames per second.
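
    For context, the released image-to-video weights can be run locally; below is a minimal sketch using the Hugging Face diffusers pipeline. The checkpoint name, pipeline class, and parameters are assumptions based on the public diffusers integration, not details from Stability AI’s announcement.

      # Minimal sketch: image-to-video with Stable Video Diffusion via Hugging Face
      # diffusers; checkpoint, class, and parameters are assumptions, not from this article.
      import torch
      from diffusers import StableVideoDiffusionPipeline
      from diffusers.utils import load_image, export_to_video

      pipe = StableVideoDiffusionPipeline.from_pretrained(
          "stabilityai/stable-video-diffusion-img2vid-xt",
          torch_dtype=torch.float16,
          variant="fp16",
      )
      pipe.to("cuda")

      # Condition on a single input image; SVD expects roughly 1024x576 inputs.
      image = load_image("input_frame.png").resize((1024, 576))

      # fps conditions the generated motion; the released models target 3-30 frames per second.
      frames = pipe(image, decode_chunk_size=8, fps=7).frames[0]
      export_to_video(frames, "generated.mp4", fps=7)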

    “Stable Video Diffusion is a proud addition to our diverse range of open-source models. Spanning across modalities including image, language, audio, 3D, and code, our portfolio is a testament to Stability AI’s dedication to amplifying human intelligence,” said the company.

    In addition, Stability.AI opened a waitlist for access to an upcoming web experience featuring a text-to-video interface. This tool will showcase the practical applications of Stable Video Diffusion in numerous sectors, including advertising, education, entertainment, and beyond.

  • A Year of the Huge Hit of OpenAI’s ChatGPT

    IBL News | New York

    A year ago, on November 30, 2022, OpenAI’s ChatGPT was launched, starting a new era in EdTech through generative AI technology.

    The day represents a turning point in the digital world, as happened with Netscape, Facebook, Netflix, and the iPhone.

    When ChatGPT was launched, nobody took the stage, and no one predicted that this apparently simple chatbot would become the fastest-growing consumer technology in history.

    It had a million users in five days, 100 million after just two months, and now boasts 100 million weekly users.

    ChatGPT, and the model underneath it, also quickly became a billion-dollar business for OpenAI, with the huge backing of Microsoft, which invested over $12 billion.

    In a year otherwise marked by a huge decline in venture capital investing, companies with Generative AI in their pitch have been able to raise $17.9 billion just in the third quarter of 2023, according to Pitchbook.

    A few companies have successfully emerged: Anthropic as the most well-funded competitor; Midjourney and Stable Diffusion as image generators; Character.ai as a free chatbot creator; GitHub Copilot and Microsoft’s Bing copilot; and Google’s Duet.

    AI hardware made Nvidia one of the most valuable companies on earth.

    Within a year, we also saw corporate drama at OpenAI: CEO Sam Altman was briefly forced out in a power play between board members and executives, apparently over a disagreement about safety.

  • OpenAI’s New Board Takes Over, With Microsoft As a Non-Voting Observer

    IBL News | New York

    Microsoft, OpenAI’s largest investor, is getting a non-voting observer seat on the new board, the company stated in a blog post yesterday. Currently, Microsoft holds a 49 percent stake in the for-profit entity that OpenAI’s board controls.

    At the same time, this week, Sam Altman officially returned as OpenAI’s CEO, following the agreement reached last week. In addition, Mira Murati was confirmed as CTO, and Greg Brockman returned as President. [Both in the picture above, along with Sam Altman]

    OpenAI’s new board consists of chair Bret Taylor, Larry Summers, and Adam D’Angelo, the only remaining holdover from the previous board.

    Three of the four board members who abruptly decided to fire Altman are now gone, including Ilya Sutskever, OpenAI’s co-founder and chief scientist, who initially participated in the board coup but changed his mind after nearly all of the company’s employees threatened to quit if Altman didn’t come back.

    In his memo to employees, Altman said that he harbors “zero ill will” toward Ilya Sutskever. “While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” 

    “I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world,” said Sam Altman.

    “We will further stabilize the OpenAI organization so that we can continue to serve our mission. This will include convening an independent committee of the Board to oversee a review of the recent events,” said Bret Taylor, Chair at OpenAI.

  • The US, UK, and 18 Countries Agree on Guidelines to Keep AI Systems Safe

    IBL News | New York

    The United States, Britain, and eighteen countries — including Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore, among others — unveiled this month an international agreement that carries general recommendations on how to keep AI safe from rogue actors, monitor systems, and protect data from tampering.

    The agreement, written in a 20-page nonbinding document, pushes companies to create “secure by design” AI systems, keeping people safe from misuse.

    The director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, told Reuters that “this is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs.”

    The agreement is the latest in a series of initiatives by nations to shape the development of AI.