Author: IBL News

  • Hollywood Studios Will Request Consent Before Making Digital Replicas of Actors


    IBL News | New York

    After its tentative agreement with Hollywood studios, the SAG-AFTRA union revealed how studios will handle AI replicas of living and dead actors and how this generative technology will impact the industry for decades.

    According to the deal, companies must request consent before making digital replicas of actors, disclosing what those replicas will be used for. Actors will also receive compensation for the digital replicas.

    The rules will also apply to deceased actors. Heirs or beneficiaries must consent first.

    For synthetic performers (fake performers based on an actor's image and likeness), SAG-AFTRA must be notified and has the right to bargain for fair pay.

    Hollywood film and TV production is expected to resume in January 2024, following the disruption of the 118-day strike.

    “It allows the industry to go forward. It does not block AI, but it makes sure that performers are protected, the rights of consent are protected, the rights to pay compensation, and the rights of employment are protected,” said the union, which has 160,000 members.

    In October, the Writers Guild of America (WGA) also officially ended its strike after ratifying a three-year deal with AI terms approved.

    It was agreed that AI cannot be considered a writer within TV and film projects, and AI-generated material is not considered literary material or assigned material.

    On the other hand, writers were given the option to use AI if they please, but cannot be forced by a company to use AI software.

    The deal also stipulated that if anything is written by AI, then the company must notify the writer in advance.

  • Amazon Will Train 2 Million Workers in AI Skills by 2025


    IBL News | New York

    Amazon announced “AI Ready”, a new program to train two million workers globally in artificial intelligence skills by 2025, as the company falls behind rivals on generative AI.

    The “AI Ready” initiative is in addition to AWS’s commitment to invest hundreds of millions of dollars to provide free cloud computing skills training to 29 million people by 2025, which has already trained more than 21 million people.

    The giant said that workers with AI skills could earn up to 47% more in salary.

    Currently, hiring AI-skilled talent is a priority for 73% of employers, but three out of four of those employers can’t find the AI talent they need.

    The three new initiatives are:

    • Eight new and free AI and generative AI courses, ranging from foundational to advanced.
    • Amazon Web Services (AWS) Generative AI Scholarship, providing $12 million in scholarships to 50,000 high school and university students globally, with access to a new generative AI course on Udacity.
    • New Hour of Code Dance Party: AI Edition collaboration with Code.org designed to help students learn about generative AI. During this hour-long introduction to coding and AI, students will create their own virtual music video set to hit songs from artists including Miley Cyrus and Harry Styles.

      “Students will code their virtual dancer’s choreography and use emojis as AI prompts to generate animated backgrounds. The activity will give participants an introduction to generative AI, including learning about large language models and how they are used to power the predictive analytics responsible for creating new images, text, and more,” said Amazon.

      Hour of Code will take place globally during Computer Science Education Week, December 4–10, engaging students and teachers in kindergarten through 12th grade.

      Additionally, AWS is providing up to $8 million in AWS Cloud computing credits to Code.org, which runs on AWS, to further support Hour of Code.

    The above-mentioned courses for business and nontechnical audiences are available on AWS Educate and AWS Skill Builder. Participants can also learn how to use CodeWhisperer, Amazon’s AI code generator, which produces whole lines of code.

    Courses for developers and technical audiences are:

    [Photo: a laptop on a table showing part of the tool for Hour of Code Dance Party: AI Edition.]
  • Stability.AI Introduces a Research Model for Generative Video


    IBL News | New York

    This week, stability.ai released Stable Video Diffusion, a foundation model for generative video based on its image model Stable Diffusion.

    Adaptable to numerous video applications, the released model is intended for research but not for real-world or commercial applications at this stage.

    Stable Video Diffusion was released in the form of two image-to-video models, capable of generating 14 and 25 frames at customizable frame rates between 3 and 30 frames per second.
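A quick arithmetic sketch of what those specs imply for clip length (duration = frames / fps):

```python
# Clip lengths implied by Stable Video Diffusion's published specs:
# two models generating 14 or 25 frames, at 3-30 frames per second.
def clip_duration(frames: int, fps: float) -> float:
    """Return the clip length in seconds."""
    return frames / fps

for frames in (14, 25):
    shortest = clip_duration(frames, 30)  # at the fastest frame rate
    longest = clip_duration(frames, 3)    # at the slowest frame rate
    print(f"{frames} frames: {shortest:.2f}s to {longest:.2f}s")
```

At the extremes, that means clips from just under half a second (14 frames at 30 fps) to roughly 8.3 seconds (25 frames at 3 fps).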

    “Stable Video Diffusion is a proud addition to our diverse range of open-source models. Spanning across modalities including image, language, audio, 3D, and code, our portfolio is a testament to Stability AI’s dedication to amplifying human intelligence,” said the company.

    In addition, stability.ai opened a waitlist for an upcoming web experience featuring a text-to-video interface. This tool showcases the practical applications of Stable Video Diffusion in numerous sectors, including advertising, education, entertainment, and beyond.

  • A Year of the Huge Hit of OpenAI’s ChatGPT


    IBL News | New York

    A year ago, on November 30, 2022, OpenAI’s ChatGPT was launched, starting a new era in EdTech through generative AI technology.

    The day represents a turning point in the digital world, as happened with Netscape, Facebook, Netflix, and the iPhone.

    When ChatGPT was launched, nobody took the stage, and no one predicted that this apparently simple chatbot would become the fastest-growing consumer technology in history.

    It had a million users in five days, 100 million after just two months, and now boasts 100 million weekly users.

    ChatGPT, and the model underneath it, also quickly became a billion-dollar business for OpenAI, with the huge backing of Microsoft, which invested over $12 billion.

    In a year otherwise marked by a huge decline in venture capital investing, companies with Generative AI in their pitch have been able to raise $17.9 billion just in the third quarter of 2023, according to Pitchbook.

    A few companies have successfully emerged: Anthropic as the most well-funded competitor; Midjourney and Stable Diffusion as image generators; Character.ai as a free chatbot creator; GitHub’s and Microsoft’s Bing copilots; and Google’s Duet.

    AI hardware made Nvidia one of the most valuable companies on earth.

    Within a year, we also saw corporate drama at OpenAI: CEO Sam Altman was briefly forced out in a power play between board members and executives, apparently over a disagreement about safety.

  • OpenAI’s New Board Takes Over, With Microsoft As a Non-Voting Observer


    IBL News | New York

    Microsoft, OpenAI’s largest investor, is getting a non-voting observer seat on the new board, the company stated in a blog post yesterday. Currently, Microsoft holds a 49 percent stake in the for-profit entity that OpenAI’s board controls.

    At the same time, this week, Sam Altman officially returned as OpenAI’s CEO, following the agreement reached last week. In addition, Mira Murati was confirmed as CTO, and Greg Brockman returned as President. [Both in the picture above, along with Sam Altman]

    OpenAI’s new board consists of chair Bret Taylor, Larry Summers, and Adam D’Angelo, the only member remaining from the previous board.

    Three of the four board members who decided to suddenly fire Altman are now gone, including Ilya Sutskever, OpenAI’s co-founder and chief scientist, who initially participated in the board coup but changed his mind after nearly all of the company’s employees threatened to quit if Altman didn’t come back.

    In his memo to employees, Altman said that he harbors “zero ill will” toward Ilya Sutskever. “While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” 

    “I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world,” said Sam Altman.

    “We will further stabilize the OpenAI organization so that we can continue to serve our mission. This will include convening an independent committee of the Board to oversee a review of the recent events,” said Bret Taylor, Chair at OpenAI.

  • The US, UK, and 18 Countries Agree on Guidelines to Keep AI Systems Safe


    IBL News | New York

    The United States, Britain, and eighteen countries — including Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore, among others — unveiled this month an international agreement that carries general recommendations on how to keep AI safe from rogue actors, monitor systems, and protect data from tampering.

    The agreement, written in a 20-page nonbinding document, pushes companies to create “secure by design” AI systems, keeping people safe from misuse.

    The director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, said to Reuters that “this is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs.”

    The agreement is the latest in a series of initiatives by nations to shape the development of AI.

  • Amazon Unveiled ‘Q’, an AI-Powered Chatbot for AWS Customers


    IBL News | New York

    Amazon Web Services (AWS) unveiled yesterday at its AWS re:Invent conference in Las Vegas an AI assistant named Amazon Q, aimed at businesses and workplaces rather than consumers.

    This chatbot seems like Amazon’s answer to OpenAI’s ChatGPT, Microsoft’s Copilots, and Google’s Bard and Duet.

    Available in public preview, Amazon Q starts at $20 per user per month, while Microsoft and Google both charge $30 a month for each user of the enterprise chatbots that work with their email and other productivity applications.

    It is accessible from the AWS Management Console through a conversational interface — see picture below — and existing chat apps like Slack.

    • “Amazon Q is an expert for customers building, deploying, and operating applications and workloads on AWS,” said the company.

    “Trained on 17 years of AWS knowledge and experience, Amazon Q transforms the way developers and IT professionals build, deploy, and operate applications and workloads on AWS.”

    • “Customers can get crisp answers and guidance by asking questions to learn about AWS capabilities (e.g., “Tell me about Agents for Amazon Bedrock?”), research how an AWS service works (e.g., “What are the scaling limits on a DynamoDB table?”), figure out the best way to architect a solution (e.g., “What are the best practices for building event-driven architectures?”), or identify the best service for their use case (e.g., “What are the ways to build a web app on AWS?”).”

    • “Customers can connect Amazon Q to their business data, information, and systems, so it can synthesize everything and provide tailored assistance to help employees solve problems, generate content, and take actions relevant to their business.”

    Amazon Q comes with over 40 built-in connectors for popular data sources, including Amazon S3, Dropbox, Confluence, Google Drive, Microsoft 365, Salesforce, ServiceNow, and Zendesk, as well as the option to build custom connectors for internal intranets, wikis, and runbooks.

    This means employees can use Amazon Q to complete tasks in popular systems like Jira, Salesforce, ServiceNow, and Zendesk. For example, an employee could ask Amazon Q to open a ticket in Jira or create a case in Salesforce.

    Q indexes all connected data and content, “learning” aspects of a business, including its organizational structure, core concepts, and product names. Customers can upload a file (a Word doc, PDF, spreadsheet, or the like) and ask questions about it.

    Amazon Q provides generative AI-powered assistance across QuickSight — its BI service that offers interactive dashboards, paginated reports, embedded analytics, and natural-language querying capabilities — Amazon Connect and AWS Supply Chain.

    AWS mentioned six brands already using its chatbot in addition to Amazon: Accenture, BMW Group, Gilead, Mission Cloud, Orbit Irrigation, and Wunderkind.

    Unlike ChatGPT and Bard, Amazon Q is not built on a specific AI model. Instead, it uses an Amazon platform known as Bedrock, which connects several AI systems together, including its own Titan, as well as ones developed by Anthropic and Meta.

    The name Q is a play on the word “question,” given the chatbot’s conversational nature, said Adam Selipsky, CEO at AWS (in the picture below).
    [Photo: Adam Selipsky speaks in front of a colorful screen that says “AWS re:Invent.”]

  • AI Developments Will Speed Up with the Defeat of Anti-Altman Allies


    IBL News | New York

    Will AI now move faster and be less controlled?

    It seems that the chaotic events of the last week at OpenAI have sped everything up, according to most observers.

    Those warning about the risks of AI lost the battle for control of the $90 billion start-up OpenAI, as two of the three external board members were replaced and the ousted CEO Sam Altman was reinstated.

    It all took place within a week, and the outcome was sealed by the backing of Microsoft and investors, the money component, along with over 90% of employees siding with Altman.

    Created to build AGI, OpenAI has been pursuing it as fast as possible while, strategically, its CEO has pushed for an anti-competitive regulatory environment by warning that AI innovation is becoming extremely dangerous and that governments should get involved.

    Many in tech think that Sam Altman was just trying to get governments to ban competition, especially the open-source models.

    Those at OpenAI who think the capitalist impulse should slow down and treat AI carefully (the majority of a board hired to do exactly that) mounted a coup against those who think innovation, and therefore profits, should speed up.

    However, the reality is that even machine-learning scientists don’t know when AGI will be achieved; part of the problem in discussing it is that AGI remains a largely abstract concept.

    “ChatGPT might scale all the way to the Terminator in five years, or in five decades, or it might not,” wrote The Financial Times.

    “Failed coups often accelerate the thing that they were trying to prevent.”

  • Anthropic Introduced an Upgraded Version of Its Conversational AI Assistant


    IBL News | New York

    Anthropic released this month Claude 2.1, an upgraded version of its conversational AI assistant, which powers the claude.ai chat experience. It improves pricing and brings gains in areas like context length, accuracy, and integration capabilities.

    Claude 2.1 can now process up to 200,000 context tokens, equivalent to around 150,000 words or 500 pages of material. That’s twice as much as its previous token limit.

    It means that users can now upload technical documentation like entire codebases, financial statements like S-1s, or even long literary works like The Iliad or The Odyssey.
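Those equivalences imply roughly 0.75 words per token; a back-of-the-envelope sketch of whether a document fits the window (the ratio is a rough English average, not Anthropic's actual tokenizer):

```python
# Implied by the article's figures: 200,000 tokens ~ 150,000 words.
WORDS_PER_TOKEN = 0.75

def estimated_tokens(word_count: int) -> int:
    """Rough token estimate for an English text of `word_count` words."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_context(word_count: int, limit: int = 200_000) -> bool:
    """Check whether the estimated token count fits Claude 2.1's window."""
    return estimated_tokens(word_count) <= limit

# The Odyssey runs on the order of 120,000 words in English translation.
print(estimated_tokens(120_000))  # 160000
print(fits_in_context(120_000))   # True
```

By this estimate, a 120,000-word epic uses about 160,000 tokens and fits comfortably, while a 200,000-word manuscript (about 267,000 tokens) would not.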

    “By being able to talk to large bodies of content or data, Claude can summarize, perform Q&A, forecast trends, and compare and contrast multiple documents,” said the company.

    Anthropic stated that Claude 2.1 significantly reduces hallucinated statements and takes in more contextual information, aiming to provide better summaries, question answering, trend forecasting, and other insights.

    Claude 2.1 also expands interoperability for day-to-day business operations, although the tool use feature remains in early development. With it, the model might answer a question by querying a private database rather than guessing, or translate natural-language requests into API calls.

    In other words, with tool use the model decides which tool is required for the task and executes an action on the user’s behalf, such as:

    • Using a calculator for complex numerical reasoning
    • Translating natural language requests into structured API calls
    • Answering questions by searching databases or using a web search API
    • Taking simple actions in software via private APIs
    • Connecting to product datasets to make recommendations and help users complete purchases
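Anthropic has not published the exact mechanics here, but the idea behind the list above can be sketched as a generic tool-dispatch loop; the tool names and the `ToolCall` structure below are hypothetical illustrations, not Anthropic's API:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ToolCall:
    """A hypothetical structured request emitted by the model."""
    name: str
    args: Dict[str, Any]

# Registry of tools the harness is willing to execute on the model's behalf.
TOOLS: Dict[str, Callable[..., Any]] = {
    # Calculator for numerical reasoning (builtins stripped for safety).
    "calculator": lambda expression: eval(expression, {"__builtins__": {}}),
    # Toy stand-in for a private database lookup.
    "db_lookup": lambda table, key: {"users": {"42": "Ada"}}.get(table, {}).get(key),
}

def dispatch(call: ToolCall) -> Any:
    """Execute the tool the model asked for, or fail loudly."""
    tool = TOOLS.get(call.name)
    if tool is None:
        raise ValueError(f"unknown tool: {call.name}")
    return tool(**call.args)

print(dispatch(ToolCall("calculator", {"expression": "17 * 3"})))          # 51
print(dispatch(ToolCall("db_lookup", {"table": "users", "key": "42"})))    # Ada
```

The key design point is that the model only names a tool and its arguments; the surrounding harness decides whether and how to actually run it.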

    The Claude 2.1 upgrade is already live in Anthropic’s hosted chatbot interface at claude.ai, the paid Claude Pro tier, and its API.

    The 200,000-token context limit is for now exclusive to Pro users on its website, priced at $20/month, similar to the (currently paused) ChatGPT Plus subscription.

    Perplexity Pro subscribers can also use Claude 2.1 by changing the model in Settings. The platform is also offering two months free for those who want to give it a try.

    Alongside Claude 2.1, Anthropic introduced system prompts, which let users provide custom instructions and context to improve performance.

    System prompts allow users to set goals, specify Claude’s persona, tone, and role, structure responses, establish rules and constraints, supply relevant background knowledge, and define standards for verifying outputs.

    By prompting Claude in this way, users can shape more accurate, consistent responses that stay in character for role-playing and more reliably follow provided guidelines.

    System prompts ultimately aim to enhance Claude’s capabilities for intended real-world applications. 
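As a sketch of the idea, Claude-2-era prompts placed the system text ahead of the first Human turn; treat the layout below as an assumption for illustration, not Anthropic's definitive format:

```python
def build_prompt(system: str, user_message: str) -> str:
    """Assemble a Claude-2-era prompt with a system preamble.

    Assumed layout: the system text precedes the first Human turn,
    followed by the Human/Assistant markers of the completions format.
    """
    return f"{system}\n\nHuman: {user_message}\n\nAssistant:"

prompt = build_prompt(
    system=(
        "You are a polite customer-support agent for AcmeCo. "
        "Answer only from the provided policy; say you don't know otherwise."
    ),
    user_message="What is your refund window?",
)
print(prompt)
```

Everything the article describes (persona, rules, background knowledge, output standards) goes into that system preamble, shaping every subsequent turn.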

    Founded in 2021, Anthropic develops AI assistant technology focused on safety, honesty, and control. It recently received an investment of $4 billion from Amazon to continue advancing AI with more capable models.

  • AI Agents, the Second Phase of Generative AI


    IBL News | New York

    After the release of the bot ChatGPT a year ago, the second phase of personalized, autonomous AI agents is emerging.

    These agents can perform tasks such as sending emails, scheduling meetings, and booking flights or restaurant tables, or even more involved ones like buying presents for family members or negotiating a raise.

    Personalized chatbots programmed for specific tasks, which creators will be able to release through OpenAI’s upcoming GPT Store, are a prelude.

    For now, these custom GPTs are easy to build without knowing how to code.

    Users just answer a few simple questions about their bot (its name, its purpose, the tone of its responses), and the bot builds itself in a few seconds. Users can upload PDF documents to serve as reference material for Q&A, connect the bot to other apps, or edit its instructions.
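Under the hood, “looking up answers” in uploaded reference material is a retrieval problem. As a toy illustration (not OpenAI's actual mechanism, which uses embeddings), a keyword-overlap retriever captures the idea:

```python
def best_passage(question: str, passages: list) -> str:
    """Return the passage sharing the most words with the question,
    a toy stand-in for the embedding-based retrieval real systems use."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

# Hypothetical handbook passages, echoing the day-care example below.
handbook = [
    "Circle time starts at 9am and lasts twenty minutes.",
    "Lunch is served at noon in the main room.",
    "Pickup ends at 5:30pm; late fees apply after that.",
]

print(best_passage("When does circle time start?", handbook))
```

A real system would embed the passages and the question into vectors and rank by similarity, then hand the best matches to the model as context, but the retrieve-then-answer shape is the same.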

    Although these custom chatbots are far from working perfectly, they can be useful tools for answering repetitive questions in customer service departments.

    Some AI safety researchers fear that giving bots more autonomy could lead to disaster, The New York Times reported. The Center for AI Safety, a nonprofit research organization, listed autonomous agents as one of its “catastrophic AI risks” this year, saying that “malicious actors could intentionally create rogue AI with dangerous goals.”

    For now, these agents look harmless and limited in their scope.

    Their development seems to depend on gradual, iterative deployment: small improvements at a fast pace rather than one big leap.

    At the last OpenAI developer conference, Sam Altman built on stage a “start-up mentor” chatbot to give advice to aspiring founders, based on an uploaded file of a speech he had given years earlier.

    The San Francisco-based research lab envisions a world where AI agents will be extensions of us, gathering information and taking action on our behalf.

    For now, OpenAI’s bots are limited to simple, well-defined tasks, and can’t handle complex planning or long sequences of actions.
    After a day care’s handbook was uploaded to OpenAI’s GPT creator tool, a chatbot could easily look up answers to questions about it.
    [Screenshot: a “Day Care Helper” GPT conversation between the author and the chatbot about circle time.]