The Quora-owned AI chatbot Poe introduced a new revenue model this month based on per-message pricing. Creators and developers will now generate income every time a user messages their bots.
The new model follows the revenue-sharing program released in October 2023, which gave bot creators a cut of earnings when their users subscribed to Poe’s premium product.
Today we’re introducing a new way for model developers and bot creators to generate revenue on @poe_platform: price per message! Creators can now set a per-message price for their bots and generate revenue every time a user messages them. Thread 👇 pic.twitter.com/yx5mKgGoSQ
Poe lets users choose from OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and other LLMs.
“This pricing mechanism is important for developers with substantial model inference or API costs,” Adam D’Angelo noted in a post on X. “Our goal is to enable a thriving ecosystem of model developers and bot creators who build on top of models, and covering these operational costs is a key part of that, in areas like tutoring, knowledge, assistants, analysis, storytelling, and image generation,” he added.
Alongside the per-message revenue model, Poe also launched an enhanced analytics dashboard that displays bot usage and revenue earnings for creators across paywalls, subscriptions, and messages.
.
OpenAI’s CEO, Sam Altman, is pitching ChatGPT Enterprise services to executives from Fortune 500 companies, including some customers of Microsoft, OpenAI’s main investor and partner.
Altman is hosting roadshow-like events in San Francisco, New York, and London, aiming to add new revenue sources for his company, Reuters reported this week.
At each event, Sam Altman and OpenAI’s COO, Brad Lightcap, offer product demonstrations, including ChatGPT Enterprise, API capabilities, the new text-to-video model Sora, and other AI services.
The Fortune 500 companies span finance, healthcare, energy, and other industries.
Meanwhile, Microsoft offers access to OpenAI’s technology through its Azure cloud and by selling Microsoft 365 Copilot, a productivity tool powered by OpenAI’s models targeting enterprises.
When some executives asked why they should pay for ChatGPT Enterprise if they are already Microsoft customers, Altman and Lightcap responded that the enterprise service lets them work directly with the OpenAI team, get access to the latest models, and have more opportunities for customized AI products, attendees told Reuters.
Valued at $86 billion in a secondary sale, OpenAI has been trying to diversify its revenue streams, and it is expected to reach $1 billion in revenue in 2024, sources have said.
OpenAI says it currently has over 600,000 customers using ChatGPT Enterprise and Team, up from around 150,000 in January.
.
OpenAI announced on its X account that its GPT-4 Turbo with Vision model is now “generally available” through its API, a significant upgrade to its API for the powerful GPT-4 Turbo LLM, experts say.
Vision requests can now also use JSON mode and function calling. These generate JSON snippets that developers can use to streamline workflows by automating actions in connected apps, such as making a purchase or sending an email.
“Previously, developers had to use separate models for text and images, but now, with just one API call, the model can analyze images and apply reasoning,” said OpenAI.
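The single-call pattern the quote describes can be sketched as one request that carries both text and an image, plus a declared function tool. The tool name `log_meal` and the image URL below are hypothetical illustrations; the message and tool structure follows the OpenAI Chat Completions format, and the payload would be sent via the official SDK or a plain HTTPS POST.

```python
# Sketch of a GPT-4 Turbo with Vision request that also declares a
# function tool. "log_meal" and the image URL are hypothetical; the
# structure follows OpenAI's Chat Completions request format.
import json

def build_vision_request(image_url: str, question: str) -> dict:
    """Build a single request mixing text, an image, and a function tool."""
    return {
        "model": "gpt-4-turbo",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "log_meal",  # hypothetical app action
                "description": "Record the foods detected in a meal photo.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "foods": {"type": "array",
                                  "items": {"type": "string"}},
                    },
                    "required": ["foods"],
                },
            },
        }],
    }

payload = build_vision_request("https://example.com/meal.jpg",
                               "What foods are in this photo?")
print(json.dumps(payload, indent=2))
```

With JSON mode or function calling enabled, the model’s reply arrives as structured data an app can act on directly, rather than free text to parse.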
By combining text and images in a single model, GPT-4 Turbo with Vision can take AI applications to new heights.
OpenAI has highlighted several examples of companies using GPT-4 Turbo with Vision:
• The health and fitness app Healthify provides nutritional analysis and recommendations based on photos of users’ meals.
• The UK-based startup tldraw powers its virtual whiteboard with it, converting users’ drawings into functional websites.
.
Devin, built by @cognition_labs, is an AI software engineering assistant powered by GPT-4 Turbo that uses vision for a variety of coding tasks. pic.twitter.com/E1Svxe5fBu
GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling.https://t.co/cbvJjij3uL
The @healthifyme team built Snap using GPT-4 Turbo with Vision to give users nutrition insights through photo recognition of foods from around the world. pic.twitter.com/jWFLuBgEoA
Make Real, built by @tldraw, lets users draw UI on a whiteboard and uses GPT-4 Turbo with Vision to generate a working website powered by real code. pic.twitter.com/RYlbmfeNRZ
OpenAI yesterday made its new GPT-4 Turbo available to paid users of ChatGPT, including Plus, Team, Enterprise, and the API.
GPT-4 Turbo powers the conversational ChatGPT experience with more direct and less verbose responses, according to the company.
It also comes with improved capabilities in writing, math, logical reasoning, and coding.
GPT-4 Turbo is trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off.
Meanwhile, according to The Information, OpenAI this week fired two researchers for allegedly leaking information, including an ally of chief scientist Ilya Sutskever, who was among those who pushed for the ouster of CEO Sam Altman late last year.
.
New York, home to the headquarters of 44 Fortune 500 companies, concentrates top decision-makers and AI buyers.
There are currently 35 AI unicorns in New York, which have raised a total of $17B.
Some investors say that San Francisco’s concentration of tech makes for boom-and-bust cycles, while New York is more balanced due to its diverse array of industries including finance, fashion, biotech, media, and entertainment.
“New York City is currently a bustling hub for AI talent, entrepreneurs, and investors,” said Brian Schechter, Partner at Primary Ventures. “While San Francisco has established its dominance in foundation models, New York City is on the brink of becoming synonymous with commercial prosperity in AI.”
NY VC Activity, 2018-2023
AI and machine learning (ML) companies fall into four categories:
Vertical applications: These companies bake AI into their core products, building on top of language models such as those behind ChatGPT.
AI model development: This core group is enabling the AI revolution by developing the principal technology of algorithms and language models, such as OpenAI’s GPT-3.
Autonomous machines: Factory robots and self-driving vehicles make up the lion’s share of this group, which is applying computer vision algorithms to improve manufacturing and transportation.
Computing infrastructure: Hardware companies are reinventing computing infrastructure for the massive new scale that AI computing demands with graphics processing units (GPUs) and semiconductors.
Current and former venture-backed AI companies
• In 2024, New York state unveiled plans to establish itself as an AI hub through a $400M public-private partnership: the Empire AI consortium.
The consortium consists of seven universities (Columbia, Cornell, New York University, Rensselaer Polytechnic Institute (RPI), the State University of New York (SUNY), the City University of New York (CUNY) and the Flatiron Institute) in collaboration with NVIDIA.
• In 2023, Columbia University received a $20M grant from the National Science Foundation (NSF) to spearhead the AI Institute for Artificial and Natural Intelligence (ARNI).
• The global headquarters of IBM Research, for example, is located just outside the city, where many of IBM’s breakthroughs in AI and semiconductors have taken place.
• Amazon, Google, Notion, and Meta (including its Chief AI Scientist, Yann LeCun) all have AI research talent in New York.
Select 2023 AI/ML VC Deals
Prominent VC firms investing in AI in New York today include Lux Capital, FirstMark Capital, Greycroft, Primary Venture Partners, and Wing.
.
Google announced this week that it will bring the AI-powered editing features of its Photos application to all users for free; these were previously limited to Pixel phones and tablets and paid subscribers. The enhancement, which will start rolling out on May 15, includes Magic Editor, Magic Eraser, Photo Unblur, and Portrait Light.
Magic Editor, the most notable feature, uses generative AI to do more complicated photo edits, like filling in gaps in a photo and repositioning the subject to the foreground or background.
Separately, Google announced that its Gemma family of lightweight, open-source AI models will be extended with two new variants: CodeGemma for code generation, and RecurrentGemma, designed to improve inference at higher batch sizes, which is useful for researchers.
CodeGemma models are available as a 7B pre-trained variant that specializes in code completion and code generation tasks, a 7B instruction-tuned variant for code chat and instruction-following, and a 2B pre-trained variant for fast code completion.
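The pre-trained completion variants are driven with a fill-in-the-middle (FIM) prompt: the model receives the code before and after a gap and generates what belongs in between. The sketch below builds such a prompt; the exact special-token strings follow Google’s published CodeGemma formatting, but treat them as an assumption and confirm against the model card before use.

```python
# Sketch of CodeGemma's fill-in-the-middle (FIM) prompt format, used by
# the pre-trained code-completion variants. The special-token strings
# are assumed from Google's published formatting; verify on the model card.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code between prefix and suffix."""
    return (
        "<|fim_prefix|>" + prefix +
        "<|fim_suffix|>" + suffix +
        "<|fim_middle|>"
    )

prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
print(prompt)
```

The model’s completion after `<|fim_middle|>` is the text an editor would splice into the gap, which is why this format suits the fast 2B completion variant.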
Finally, Google unveiled that it will make a new chip for AI work, named Axion, as it tries to challenge the dominance of NVIDIA and other large companies.
.
“AI could have societal consequences that rival the printing press, the internet, and electricity,” JPMorgan Chase CEO Jamie Dimon told shareholders this week.
JPMorgan Chase, the largest U.S. bank, now includes more than 2,000 AI/machine learning (ML) experts and data scientists.
“We have been actively using predictive AI and ML for years – and now have over 400 use cases in production in areas such as marketing, fraud, and risk – and they are increasingly driving real business value across our businesses and functions,” he explained.
“We’re also exploring the potential that generative AI (GenAI) can unlock across a range of domains, most notably in software engineering, customer service, and operations, as well as in general employee productivity,” Dimon explained.
“In the future, we envision GenAI helping us reimagine entire business workflows. We will continue to experiment with these AI and ML capabilities and implement solutions in a safe, responsible way.”
In terms of the creation or elimination of jobs, Dimon stated, “Over time, we anticipate that our use of AI has the potential to augment virtually every job, as well as impact our workforce composition. It may reduce certain job categories or roles, but it may create others as well. As we have in the past, we will aggressively retrain and redeploy our talent to make sure we are taking care of our employees if they are affected by this trend.”
Dimon said the company is working to “proactively stay in front of AI-related risks, particularly as the regulatory landscape evolves” and that AI is part of JPMorgan Chase’s toolset to stop “bad actors using AI to try to infiltrate companies’ systems to steal money and intellectual property or simply to cause disruption and damage.”
Currently, JPMorgan Chase is using AI to analyze vast troves of data, prevent fraud, manage risk, provide financial insights, and assess security threats.
.
Indeed.com now lets individuals use AI-powered writing tools to fill in their work experience. It also launched a suite of AI products for recruiters, such as candidate summaries and custom messages.
These generative AI features, called Smart Sourcing, will revamp the Recruit Holdings-owned hiring platform Indeed to better compete with rivals like LinkedIn, Talent.com, and ZipRecruiter.
Another notable feature lets job seekers save up to five resumes so they can easily pick the most relevant version when applying for different roles. This feature will roll out soon, Indeed said.
For their part, employers can get instant recommendations of ideal candidates for their open jobs.
The tool recommends and prioritizes qualified candidates based on an employer’s distinct job requirements, focusing on people actively looking for a new job, especially those active on Indeed over the past 30 days.
Smart Sourcing also helps employers quickly review matched candidates, directly connect with them, and ultimately hire faster. The tool generates custom AI-powered messages based on their job criteria.
Indeed said that AI features “make sourcing and hiring talent more efficient for employers.”
.
2U said in a blog post last Friday that it had $140 million in cash to support its operations and is not considering ceasing operations or ending programs for students.
The company said that it is encountering “unfounded attacks from special interest groups seeking to harm our business and scare our partners and students.”
“Currently, these groups are citing our efforts to manage our balance sheet challenges as a pretext to amplify their attacks.”
“These groups’ recent predictions, that 2U is on the verge of an imminent shutdown, are unequivocally false and represent a blatant attempt to confuse students and the public.”
2U argues that its commitment to continue operations is supported by the “high graduation and completion rates” of its partners’ programs.
The stock of 2U has been below $1 since January 10, 2024, signaling to the markets that the company was in serious distress. The stock price has lost 93% in the last year, and the market capitalization currently stands at $31 million.
The Internet is replete with prompt engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.
However, new research suggests that prompt engineering is best done by the model itself, not by a human engineer, wrote Dina Genkina at IEEE Spectrum.
As a consequence, many prompt-engineering jobs may disappear.
Researchers have found that LLM performance is surprisingly unpredictable in response to prompting techniques. For example, asking models to explain their reasoning step by step, a technique called chain-of-thought, improves performance on a range of math and logic questions, yet the gains are strikingly inconsistent across models and tasks.
Given a few examples and a quantitative success metric, automatic prompt-optimization tools will iteratively search for the best phrase to feed into the LLM.
Researchers found that in almost every case, the automatically generated prompt did better than the best prompt found through trial and error, and the process was much faster: a couple of hours rather than several days of searching.
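The search loop these tools run can be illustrated in miniature: score each candidate prompt against labeled examples and keep the highest scorer. The "model" below is a deliberately toy stand-in (a real optimizer would call an actual LLM and also mutate the candidates between rounds), so the specific functions here are illustrative, not any published tool's API.

```python
# Minimal sketch of automatic prompt optimization: score candidate
# prompts on labeled examples and keep the best. The "model" is a toy
# stand-in for a real LLM call; a real optimizer would also generate
# new candidate prompts between scoring rounds.
def optimize_prompt(candidates, examples, run_model):
    """Return the candidate prompt with the highest accuracy on examples."""
    def accuracy(prompt):
        correct = sum(
            run_model(prompt, question) == answer
            for question, answer in examples
        )
        return correct / len(examples)
    return max(candidates, key=accuracy)

# Toy "model": answers arithmetic correctly only when told to reason stepwise.
def toy_model(prompt, question):
    if "step by step" in prompt:
        return eval(question)  # pretend careful reasoning succeeds
    return 0                   # pretend the terse prompt fails

examples = [("2+3", 5), ("7*6", 42)]
candidates = ["Answer directly.", "Think step by step, then answer."]
best = optimize_prompt(candidates, examples, toy_model)
print(best)  # → "Think step by step, then answer."
```

The key ingredients match the article’s description: a few examples, a quantitative metric (accuracy here), and an automated loop in place of a human’s trial and error.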
.