Threads, Instagram’s text-based Twitter rival, announced it crossed the milestone of 100 million users less than a week after its launch on July 5.
Until now, OpenAI’s ChatGPT held the distinction of being the fastest-growing consumer product, reaching 10 million daily users in 40 days and 100 million monthly users in two months.
Meta’s new text-focused social platform still lacks some features: the web interface is read-only, and there is no support for post search, direct messages, hashtags, or a “Following” feed.
Threads, Meta’s new Twitter clone, is deeply tied to Instagram. Instagram profiles now display each account’s Threads user number, making the count both transparent and visible in real time.
With Twitter in trouble and owner Elon Musk pursuing a controversial strategy, there is a massive appetite for a replacement, as neither Mastodon nor Bluesky has scaled massively.
Code Interpreter lets ChatGPT write and run Python code, optionally with access to files (up to 100MB in size) that the user has uploaded. The user can ask ChatGPT to analyze data, create charts, edit files, perform math, and more.
In practice, Code Interpreter can generate charts, maps, data visualizations, and graphics; analyze music playlists; create interactive HTML files; clean datasets; and extract color palettes from images. It unlocks a myriad of capabilities, making it a powerful tool for data visualization, analysis, and manipulation.
Code Interpreter can operate at an advanced level by automating complex quantitative analyses, merging and cleaning data, and even reasoning about data in a human-like manner.
The AI can produce visualizations and dashboards, which users can then refine and customize simply by conversing with the AI. Its ability to create downloadable outputs adds another layer of usability to Code Interpreter.
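To make the workflow concrete, here is a minimal sketch of the kind of Python that Code Interpreter typically writes behind the scenes when asked to “analyze this data”: load a CSV, group it, and compute summary statistics. The sales data below is hypothetical, standing in for a user-uploaded file.

```python
# Sketch of Code Interpreter-style analysis: parse a CSV and
# summarize revenue per region. Data is illustrative only.
import csv
import io
import statistics
from collections import defaultdict

# Stand-in for an uploaded file.
uploaded_csv = io.StringIO(
    "region,revenue\n"
    "north,120\n"
    "north,80\n"
    "south,200\n"
    "south,160\n"
)

# Group revenue values by region.
by_region = defaultdict(list)
for row in csv.DictReader(uploaded_csv):
    by_region[row["region"]].append(float(row["revenue"]))

# Compute per-region totals and means.
summary = {
    region: {"total": sum(values), "mean": statistics.mean(values)}
    for region, values in by_region.items()
}
print(summary)
# {'north': {'total': 200.0, 'mean': 100.0}, 'south': {'total': 360.0, 'mean': 180.0}}
```

From a summary like this, the tool can go on to render a chart or offer the cleaned dataset as a downloadable file, refined turn by turn in conversation.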
Experts agree that Code Interpreter is setting a new standard for the future of AI and data science. With this tool, OpenAI is pushing the boundaries of ChatGPT and large language models (LLMs) generally yet again.
OpenAI this week announced through a blog post that all paying API customers will have access to GPT-4 by the end of this month. GPT-3.5 Turbo, image-generating model DALL·E, and speech-to-text model Whisper APIs are also generally available.
“Today all existing API developers with a history of successful payments can access the GPT-4 API with 8K context; we plan to open up access to new developers by the end of this month, and then start raising rate limits after that depending on compute availability,” said OpenAI.
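For developers with access, a GPT-4 request uses the same chat completions format as GPT-3.5 Turbo. Below is a minimal sketch of the request body; it is built as a plain dict (no network call is made here), and the system/user prompts are illustrative examples.

```python
# Sketch of a GPT-4 chat completions request body. The "messages"
# array carries the conversation; "model" selects the 8K-context
# GPT-4 model now open to paying API developers.
import json

request_body = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this quarter's results."},
    ],
    "temperature": 0.2,  # lower values make output more deterministic
}

# Serialized, this is what gets POSTed to the chat completions endpoint.
payload = json.dumps(request_body)
print(len(payload) > 0)
```

In the official `openai` Python library, the equivalent call is `openai.ChatCompletion.create(...)` with these same fields as keyword arguments.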
Applications using the stable model names for base GPT-3 models (ada, babbage, curie, davinci) will automatically be upgraded to the new models on January 4, 2024.
Developers using other older model versions will have to upgrade their integrations manually by that date.
GPT-4 can generate text and code and accept image and text inputs — an improvement over GPT-3.5, its predecessor, which only accepted text.
Like previous GPT models from OpenAI, GPT-4 was trained using publicly available data, including from public webpages, as well as data that OpenAI licensed.
However, GPT-4 isn’t perfect. It hallucinates facts and makes reasoning errors, sometimes with confidence. It doesn’t learn from experience and can fail at hard problems, such as introducing security vulnerabilities into the code it generates.
OpenAI said that later this year, it will allow developers to fine-tune GPT-4 and GPT-3.5 Turbo with their own data.
Also, OpenAI announced the deactivation of the Browse with Bing capability, a few weeks after launching the feature for Plus subscribers.
“We’ve learned that the browsing beta can occasionally display content in ways we don’t want, e.g. if a user specifically asks for a URL’s full text, it may inadvertently fulfill this request. We are temporarily disabling Browse while we fix this.”
Several users on Reddit showed that they could use ChatGPT to bypass paywalls by prompting the tool to print the text of an article behind a paywall, and went on to share other ingenious tips and tricks in the thread.
It’s unclear when OpenAI will restore the feature.
Another recent announcement from OpenAI concerns the creation of a new team, led by its chief scientist, to steer and control “superintelligent” AI systems that could arrive within the decade. The prediction is that AI’s intelligence will eventually exceed that of humans.
Class.com announced it would release its ChatGPT API-based Teaching Assistant later this year to improve learner engagement, focus, and outcomes in live online courses.
The chatbot will provide answers based on what was taught in class, highlight the transcript of spoken text, add details, provide a study guide, and supplement instructional materials.
Instructors will be able to choose whether Class.com’s tool is turned on or off in their courses.
“Class will work closely with the education community to develop best practices and policies for the use of AI in the classroom,” said Michael Chasen, CEO of the company.
Focused on online synchronous learning, Class’ platform, built on Zoom, claims to serve 1,500+ institutions worldwide with 10M+ users.
Generative AI is getting real traction from real companies: models like Stable Diffusion and ChatGPT are setting historical records for user growth and several applications in image generation, copywriting, and code writing have exceeded $100 million of annualized revenue.
• Infrastructure vendors are the biggest winners in this market so far, capturing the majority of dollars.
• Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. Many apps are also relatively undifferentiated since they rely on similar underlying AI models and haven’t discovered obvious network effects, or data/workflows, that are hard for competitors to duplicate.
• Most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale. However, given the huge usage of these models, large-scale revenues may not be far behind.
“Predicting what will happen next is much harder. But we think the key thing to understand is which parts of the stack are truly differentiated and defensible,” states the company.
“The first wave of generative AI apps are starting to reach scale, but struggle with retention and differentiation.”
This is Andreessen Horowitz’s preliminary view of the generative AI tech stack.
It’s estimated that 10-20% of total revenue in generative AI today goes to the big three clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
The biggest winner in generative AI so far is Nvidia. The company reported $3.8 billion of data center GPU revenue in the third quarter of its fiscal year 2023, including a meaningful portion for generative AI use cases.
Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, SambaNova, and Graphcore. Intel, late to the game, is also entering the market with its high-end Habana chips and Ponte Vecchio GPUs.
“Models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.”
San Francisco-based Typeface.ai, which is building generative AI for brands, closed a $100 million Series B round.
Typeface, launched in February 2023, has raised a total of $165 million at a $1 billion valuation. The company says its personalized content creation, offered through a unified one-brand approach, has eliminated the barriers for enterprises to harness generative AI.
The investment was led by Salesforce Ventures with participation from Lightspeed Venture Partners, Madrona, GV (Google Ventures), Menlo Ventures, and M12 (Microsoft’s Venture Fund).
Typeface provides a wide range of workflows across departments, including marketing, sales, product, and HR.
Recently unveiled new features include an advanced Image Studio for high-resolution product photography, video-to-text conversion, and selective image editing and regeneration.
The Typeface platform consists of three key components:
• A content hub where users can upload assets and guidelines for “on-brand” text and image generation.
• Blend, which uses AI to train and personalize content to a brand’s voice and style.
• Flow, which provides templates and workflows designed to integrate into existing apps and systems.
Typeface places emphasis on brand governance, content safety, and privacy. For example, using brand-approved wording and assets, a content marketing manager can generate an Instagram post, repurpose an event video into a blog post, or draft a follow-up email.
A competitor, Jasper AI, recently raised $125 million at a $1.5 billion valuation.
Microsoft, this week, announced new AI-powered shopping tools for its new Bing search engine and the Edge sidebar “to make it easier to discover, research, and complete your purchase all in one place.”
Microsoft’s shopping assistant generates a tailored Buying Guide that tells the user what to look for in each category, offers product suggestions, and shows the specifications of multiple, similar items next to each other in a compare table.
“Price Comparison and Price History are built-in browser features that help ensure you’re buying at the right place and time, and Edge helps you automatically apply coupons and cashback when shopping online,” said the company.
“Price Match will be rolling out soon in the US. Price History, Price Comparison, Coupons, Cashback, and Package Tracking are already available in select markets and built-in to Edge.”
Microsoft will get an affiliate fee when the user buys.
Trying to avoid being left behind in the generative AI race, Amazon Web Services, Inc. (AWS) announced this month it will put $100 million into a new program to connect affiliated data scientists, strategists, engineers, and solutions architects with customers and partners to accelerate enterprise adoption and innovation.
The program, called AWS Generative AI Innovation Center, will include free workshops, engagements, and training. Use cases, best practices, and industry expertise will be part of the initiative.
“With over 100,000 clients having used AWS AI and ML services, now, customers around the globe are hungry for guidance about how to get started quickly and securely with generative AI,” said Matt Garman, Senior Vice President of Sales, Marketing, and Global Services at AWS.
According to AWS, healthcare and life sciences companies can pursue ways to accelerate drug research and discovery; manufacturers can build solutions to reinvent industrial design and processes; and financial services companies can develop ways to provide customers with more personalized information and advice.
AWS offers several generative AI services, such as Amazon CodeWhisperer, an AI-powered coding companion, and Amazon Bedrock, a fully managed service that makes foundation models (FMs) from AI21 Labs, Anthropic, and Stability AI, along with Amazon’s own family of FMs, Amazon Titan, accessible via an API.
In April, AWS launched a 10-week program for generative AI startups and debuted Bedrock, a platform to build generative AI-powered apps via pretrained third- and first-party models. AWS also recently announced that it would work with Nvidia to build “next-generation” infrastructure for training AI models — complementing its in-house Trainium hardware.
Grand View Research estimates that generative AI products and solutions could be worth close to $110 billion by 2030.
Salesforce Ventures, Salesforce’s VC division, plans to pour $500 million into startups developing generative AI technologies. Workday recently added $250 million to its existing VC fund specifically to back AI and machine learning startups. OpenAI, the company behind the viral chatbot ChatGPT, has raised a $175 million fund to invest in AI startups. And just this week, Dropbox launched a $50 million AI-focused venture fund.
Accenture and PwC, meanwhile, have announced that they plan to invest $3 billion and $1 billion, respectively, in AI.
According to GlobalData, AI startups received over $52 billion in funding across more than 3,300 deals in the last year alone.
San Francisco-based data storage and management startup Databricks announced this week that it will pay $1.3 billion to acquire MosaicML, an open-source startup that enables businesses to build low-cost LLMs (large language models) with proprietary data.
Its two models, MPT-7B and the recent release of MPT-30B, had 3.3 million downloads.
The deal is expected to close during Databricks’ second quarter ending July 31.
“Every organization should be able to benefit from the AI revolution with more control over how their data is used. Databricks and MosaicML have an incredible opportunity to democratize AI and make the Lakehouse the best place to build generative AI and LLMs,” said Ali Ghodsi, Co-Founder and CEO of Databricks.
Databricks intends to combine its Lakehouse Platform with MosaicML’s technology to offer customers a way to train and use LLMs with more control and ownership over how their data is used.
According to MosaicML, “combined with near linear scaling of resources, multi-billion-parameter models can be trained in hours, not days, and it will cost thousands of dollars, not millions.”
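The arithmetic behind that claim can be sketched as follows: with near-linear scaling, spreading a fixed training workload over more GPUs cuts wall-clock time almost proportionally, while total GPU-hours (and therefore cost) stay roughly flat. All numbers below are illustrative assumptions, not MosaicML figures.

```python
# Back-of-envelope sketch of near-linear training scaling.
# Assumed, illustrative inputs:
GPU_HOURLY_RATE = 2.0       # $/GPU-hour (assumed cloud price)
TOTAL_GPU_HOURS = 4_000     # assumed work for a multi-billion-param model
SCALING_EFFICIENCY = 0.9    # "near linear": 90% of ideal speedup

def wall_clock_hours(num_gpus: int) -> float:
    # Effective throughput is num_gpus * efficiency times one GPU's rate.
    return TOTAL_GPU_HOURS / (num_gpus * SCALING_EFFICIENCY)

# Cost depends on GPU-hours consumed, not on how they are spread out.
total_cost = TOTAL_GPU_HOURS / SCALING_EFFICIENCY * GPU_HOURLY_RATE

for gpus in (8, 64, 512):
    print(f"{gpus:4d} GPUs -> {wall_clock_hours(gpus):8.1f} h, ~${total_cost:,.0f}")
```

Under these assumptions, 512 GPUs finish in hours what 8 GPUs take days to do, at essentially the same dollar cost — which is the “hours, not days” half of the claim; the “thousands, not millions” half comes from MosaicML’s training-efficiency optimizations reducing the GPU-hours themselves.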
Launched in 2021 and with a workforce of 62 employees today, MosaicML had raised $64 million from investors that included DCVC, AME Cloud Ventures, Lux, Frontline, Atlas, Playground Global, and Samsung Next.
Companies like Anthropic and OpenAI license ready-made language models to businesses, which then build generative AI apps on top of them. MosaicML says it can offer similar AI models at a lower cost, customized with a company’s data. The current cost of training a model on specialized data is estimated at $1 million to $2 million, according to experts. Those kinds of domain-specific models can be more useful for companies than building on top of the entire corpus of data that OpenAI’s models are trained on. Large language models are increasingly fine-tuned for very specific applications, and at that point they are small enough to be embedded into any cellphone.
Some of those models using smaller, pre-trained models are already available in open-source libraries like those offered by machine-learning startup Hugging Face.
OpenAI, this week, announced that Plus users on the iOS and Android ChatGPT apps can now use Browsing for queries about current events and other information beyond its original training data, which ends in 2021.
This feature can be enabled by selecting “GPT-4” and choosing “Browse with Bing.”
The fact that Bing is the only available search engine has drawn criticism, as users have no alternatives to choose from.
TechCrunch wrote: “Limiting ChatGPT’s search capabilities to Bing seems just short of a user-hostile move. The business motivations are obvious — OpenAI has a close partnership with Microsoft, which has invested over $10 billion in the startup — but Bing is far from the be-all and end-all of search engines.”
Recently, a Stanford study showed evidence that Bing’s top search results contained an “alarming” amount of disinformation.
On the other hand, also this week, OpenAI CEO Sam Altman told some developers that his company wants to turn ChatGPT into a “supersmart personal assistant for work.”
With built-in knowledge about an individual and their workplace, this assistant could carry out tasks such as drafting emails or documents in that person’s style and with up-to-date information about their business.
The assistant features could put OpenAI on a collision course with Microsoft, its primary business partner, investor, and cloud provider, as well as with other OpenAI software customers such as Salesforce.
Those firms also want to use OpenAI’s software to build AI “copilots” for people to use at work.