Pro-Palestinian protests swept U.S. college campuses after dozens of students were arrested at Columbia, NYU, and Yale on Monday night over alleged antisemitic messages. Meanwhile, Columbia University canceled in-person classes. [Photos: See scenes of protests.]
Protests over the war in Gaza at a handful of elite American universities have sent officials scrambling to defuse the demonstrations.
In addition to rallies, encampments have been set up at the University of California at Berkeley, MIT, the University of Michigan, Emerson College, and Tufts.
“We stand with Palestine and we stand with the liberation of all people,” one protester said. Others likened the rallies to historic demonstrations against the Vietnam War and apartheid in South Africa.
Recent videos posted online have appeared to show some protesters near Columbia expressing support for the unprecedented Hamas attack on Israel. Democratic Congresswoman Kathy Manning, who toured Columbia on Monday, said she had seen protesters there calling for Israel’s destruction.
The wave of demonstrations has been marred by alleged antisemitic incidents, which the White House has condemned.
When asked about the rallies on Monday, President Joe Biden said he condemned both “the antisemitic protests” as well as “those who don’t understand what’s going on with the Palestinians”.
Students on both sides say there has been a rise in both antisemitic and Islamophobic incidents since Israel began its campaign in Gaza.
The NYU protesters were calling on their institution to disclose and divest its “finances and endowments from weapons manufacturers and companies with an interest in the Israeli occupation”.
The 7 October attack on southern Israel killed about 1,200 Israelis and foreigners, mostly civilians, and saw 253 others taken to Gaza as hostages. Israel responded by launching its most intense war ever in Gaza, aiming to destroy Hamas and free the hostages. More than 34,000 Palestinians in Gaza, most of them children and women, have been killed in the conflict.
.
At its Cloud Next 2024 event this month, Google launched Vertex AI Agent Builder, a no-code console that lets developers create production-grade AI agents using natural language.
It uses open-source frameworks like LangChain on Vertex AI. Developers create agents by defining the goal, providing step-by-step instructions, and sharing conversational examples.
They can stitch together multiple agents, with one agent functioning as the main agent and others as subagents for complex goals. Agents can call functions or connect to applications to perform tasks for the user.
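The main-agent/subagent arrangement described above can be sketched in plain Python. This is an illustrative pattern only, not Vertex AI's actual SDK; all class and function names here (`Agent`, `MainAgent`, `route`) are hypothetical:

```python
# Illustrative sketch of a main agent routing requests to subagents.
# These names are hypothetical, not part of the Vertex AI SDK.

class Agent:
    def __init__(self, goal, handler):
        self.goal = goal          # natural-language goal for this agent
        self.handler = handler    # callable that performs the task

    def run(self, request):
        return self.handler(request)

class MainAgent:
    def __init__(self):
        self.subagents = {}

    def register(self, topic, agent):
        self.subagents[topic] = agent

    def route(self, topic, request):
        # Dispatch the request to the subagent responsible for the topic.
        agent = self.subagents.get(topic)
        if agent is None:
            return "No subagent can handle this request."
        return agent.run(request)

booking = Agent("Book flights", lambda r: f"Booked: {r}")
support = Agent("Answer support questions", lambda r: f"Answer: {r}")

main = MainAgent()
main.register("travel", booking)
main.register("support", support)

print(main.route("travel", "SFO to JFK on Friday"))  # → Booked: SFO to JFK on Friday
```

In a real agent system, the routing decision itself would typically be made by a model call rather than a fixed topic key.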
Vertex AI Agent Builder can improve accuracy and user experience by grounding model outputs using vector search or Google Search to build custom embeddings-based RAG systems.
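The core of an embeddings-based RAG system is straightforward: embed the query, find the most similar stored document, and pass that document to the model as grounding context. A toy sketch, using made-up 3-dimensional vectors in place of real learned embeddings and a dict in place of a vector database:

```python
import math

# Toy sketch of embeddings-based retrieval for grounding (RAG).
# Real systems use learned embeddings and a vector search index;
# these 3-d vectors and documents are made up for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

documents = {
    "pricing": ([0.9, 0.1, 0.0], "Our plan costs $10/month."),
    "support": ([0.1, 0.9, 0.0], "Email help@example.com."),
    "privacy": ([0.0, 0.2, 0.9], "We never sell user data."),
}

def retrieve(query_vec):
    # Return the document text most similar to the query embedding;
    # this text would be injected into the prompt as grounding context.
    best = max(documents.values(), key=lambda d: cosine(query_vec, d[0]))
    return best[1]

# A query embedding close to "pricing" retrieves the pricing snippet.
print(retrieve([0.8, 0.2, 0.1]))  # → Our plan costs $10/month.
```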
“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” said Google Cloud CEO Thomas Kurian [video].
During the same event, Google also announced Gemini 1.5 Pro, which can process up to 1 million tokens, around four times the amount of data that Anthropic’s Claude 3 model can handle and eight times as much as OpenAI’s GPT-4 Turbo.
This allows tasks like analyzing code libraries, reasoning across lengthy documents, and holding long conversations.
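As a rough sense of scale, a common back-of-the-envelope heuristic is about four characters per English token; that heuristic (an approximation, not Gemini's actual tokenizer) lets you estimate whether a document fits in a given context window:

```python
# Rough estimate of token counts using the common ~4 characters/token
# heuristic for English text. This is an approximation for sizing
# purposes only, not Gemini's actual tokenizer.

CHARS_PER_TOKEN = 4

def estimated_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(text: str, window_tokens: int = 1_000_000) -> bool:
    return estimated_tokens(text) <= window_tokens

# A ~2 MB text file is roughly 500,000 tokens, so it fits comfortably
# in a 1M-token window without chunking.
doc = "x" * 2_000_000
print(estimated_tokens(doc), fits_in_window(doc))  # → 500000 True
```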
Google has made Gemini 1.5 Pro available in a public preview via the Gemini API in Google AI Studio.
Meanwhile, Google’s Vertex AI Model Garden is expanding the variety of available open-source models, currently providing developers with over 130 curated models.
.
The next big breakthrough in AI is AI Agents. This is when AI goes from being used as an assistant to chat with, to using AI to accomplish complete tasks that a human might otherwise have to perform. This moves AI from being a "read-only" operation to fundamentally a "read/write"…
The 2024 ASU+GSV Summit drew over 7,000 learning leaders on April 14–17 at the Manchester Grand Hyatt San Diego under the theme ‘Here Comes the Sun.’
This ASU+GSV Summit, which celebrated its 15th year, featured groundbreaking discussions, mostly related to generative AI, that are shaping the future of education. [Watch video talks.]
In addition, about 15,000 attendees participated in the AIR Show, a special event on the AI revolution in education held April 13–15 at the San Diego Convention Center.
This new event was free, while regular access to the ASU+GSV Summit cost $5,000. The AIR Show featured more than 400 speakers, showcased academic and commercial generative AI initiatives for K-12, higher education, and the workforce, and also held live concerts and performances.
The two conferences highlighted the impact and rapid, widespread adoption of generative AI in education, which has spread faster than any other technology in the past 50 years.
The ASU+GSV Summit, co-founded by Michael Moe and Deborah Quazzo, began in 2010 as a collaboration between venture capital firm Global Silicon Valley (GSV) and Arizona State University (ASU).
“Obstacles create opportunities” – Michael Moe (Founder & CEO, GSV).
Meta released Llama 3 this week in two models: Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, with 70 billion parameters. (Models with higher parameter counts are generally more capable.)
Llama 3 models are now available for download and experience at meta.ai. They will soon be hosted in managed form across a wide range of cloud platforms, including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM’s WatsonX, Microsoft Azure, Nvidia’s NIM, and Snowflake. In the future, versions of the models optimized for hardware from AMD, AWS, Dell, Intel, Nvidia, and Qualcomm will also be made available.
Llama 3 models power Meta’s Meta AI assistant on Facebook, Instagram, WhatsApp, Messenger, and the web.
“Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across core [large language model] capabilities such as reasoning and coding,” Meta wrote in a blog post.
The company said that these two 8B and 70B models, trained on two custom-built 24,000 GPU clusters, are among the best-performing generative AI models available today. To support this claim, Meta pointed to the scores on popular AI benchmarks like MMLU (which attempts to measure knowledge), ARC (which attempts to measure skill acquisition), and DROP (which tests a model’s reasoning over chunks of text).
Llama 3 8B bests other open models such as Mistral’s Mistral 7B and Google’s Gemma 7B, both of which contain 7 billion parameters, on at least nine benchmarks: MMLU, ARC, DROP, GPQA (a set of biology-, physics- and chemistry-related questions), HumanEval (a code generation test), GSM-8K (math word problems), MATH (another mathematics benchmark), AGIEval (a problem-solving test set) and BIG-Bench Hard (a commonsense reasoning evaluation).
Llama 3 70B beats Gemini 1.5 Pro on MMLU, HumanEval, and GSM-8K, and — while it doesn’t rival Anthropic’s most performant model, Claude 3 Opus — Llama 3 70B scores better than the second-weakest model in the Claude 3 series, Claude 3 Sonnet, on five benchmarks (MMLU, GPQA, HumanEval, GSM-8K and MATH).
Meta also developed its own test set covering use cases ranging from coding and creative writing to reasoning and summarization. Llama 3 70B came out on top against Mistral’s Mistral Medium model, OpenAI’s GPT-3.5, and Claude 3 Sonnet.
.
Meta Llama 3 is very good, especially for such a small model. We can put in a multi-page prompt like our negotiation simulator (https://t.co/j6BcWh4zFb) & it is able to follow the complexity reasonably well. It doesn’t have the “smarts” of GPT-4 class, but impressive nonetheless. pic.twitter.com/qMaRtwogqA
The more I use Llama 3 the more I think that Zuck may have just killed OpenAI and all other large proprietary AI vendors. The gap between latest GPT4 and Llama 70b is virtually non existent. Even if OpenAI releases GPT5 now, 400b Llama 3 is still training and will most likely be…
College professors and students alike are increasingly automating some tasks with AI to save time and avoid fatigue or boredom, freeing professors for more personalized instruction, despite concerns about accuracy, plagiarism, and ethical integrity.
A report by Tyton Partners and Turnitin found that half of college students used AI tools in fall 2023, while the share of faculty members using them grew to 22% over the same period.
A variety of AI tools and platforms — such as ChatGPT, Writable, Grammarly, and EssayGrader — can assist teachers in grading papers faster and more accurately, writing feedback, developing lesson plans, and creating assignments, quizzes, polls, videos, and interactive pieces for the classroom.
Students, on the other hand, are mostly using ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint, and other products.
Schools have formed AI policies for students, but many still lack guidelines for teachers, CNN reported.
Grading should remain personalized so teachers can provide more specific feedback and get to know a student’s work, and, therefore, progress over time.
In terms of grading, experts suggest using AI to look at certain metrics — such as structure, language use, and grammar — and give a numerical score on those figures. However, teachers should then grade students’ work themselves when looking for novelty, creativity, and depth of insight.
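The division of labor the experts describe could look like the sketch below: simple heuristics score only the mechanical metrics and produce a number, while novelty, creativity, and depth of insight are left to the human grader. The heuristics and weights here are invented for illustration; a real tool would use a model, not regexes:

```python
import re

# Illustrative sketch: score only mechanical metrics (structure,
# surface-level grammar signals, length) with simple heuristics,
# leaving creativity and depth of insight to a human grader.
# The specific checks and point weights are made up.

def mechanical_score(essay: str) -> int:
    score = 0
    paragraphs = [p for p in essay.split("\n\n") if p.strip()]
    if len(paragraphs) >= 3:          # intro / body / conclusion structure
        score += 40
    sentences = [s for s in re.split(r"[.!?]+\s", essay.strip()) if s]
    well_formed = [s for s in sentences if s[0].isupper()]
    if sentences and len(well_formed) / len(sentences) > 0.8:
        score += 30                   # most sentences start capitalized
    if len(essay.split()) >= 150:     # meets a minimum word count
        score += 30
    return score                      # 0-100; human review still needed

essay = ("A strong opening. " * 20 + "\n\n" +
         "Supporting detail. " * 20 + "\n\n" +
         "A clear conclusion. " * 20)
print(mechanical_score(essay))
```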
Uploading a student’s work to ChatGPT, for example, can compromise integrity and potentially breach the student’s intellectual property, since AI tools like ChatGPT use such inputs to train their algorithms on everything from patterns of speech to sentence construction to facts and figures.
Some teachers lean on Houghton Mifflin Harcourt’s Writable, which uses ChatGPT to help grade papers but is “tokenized,” so essays do not include any personal information and are not shared directly with the system.
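The "tokenizing" idea amounts to replacing personal details with placeholder tokens before text leaves the school's systems. A toy redactor sketch (not Writable's actual implementation; the patterns cover only emails and phone numbers for illustration):

```python
import re

# Toy sketch of "tokenizing" an essay before sending it to an AI
# service: identifying details are swapped for placeholder tokens,
# and the token-to-value mapping stays local. This is illustrative
# only, not Writable's actual implementation.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str):
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match      # kept locally, never shared
            text = text.replace(match, token)
    return text, mapping                # only `text` goes to the model

clean, mapping = redact("Contact Jane at jane@school.edu or 555-123-4567.")
print(clean)  # → Contact Jane at [EMAIL_0] or [PHONE_0].
```

A production redactor would also need to catch names, addresses, and student IDs, which is considerably harder than pattern matching.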
.
The Quora-owned AI chatbot platform Poe this month introduced a new revenue model based on per-message pricing, letting creators and developers generate income every time a user messages their bot.
The new model follows the revenue-sharing program launched in October 2023, which gave bot creators a cut of earnings when their users subscribed to Poe’s premium product.
Today we’re introducing a new way for model developers and bot creators to generate revenue on @poe_platform: price per message! Creators can now set a per-message price for their bots and generate revenue every time a user messages them. Thread 👇 pic.twitter.com/yx5mKgGoSQ
Poe lets users choose among OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and other LLMs.
“This pricing mechanism is important for developers with substantial model inference or API costs,” Quora CEO Adam D’Angelo noted in a post on X. “Our goal is to enable a thriving ecosystem of model developers and bot creators who build on top of models and covering these operational costs is a key part of that, in areas like tutoring, knowledge, assistants, analysis, storytelling, and image generation,” he added.
Alongside the per-message revenue model, Poe also launched an enhanced analytics dashboard that displays bot usage and revenue earnings for creators across paywalls, subscriptions, and messages.
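The arithmetic behind the model is simple: a creator's earnings are the per-message price times the message count, summed with any other revenue sources the dashboard tracks. A sketch with entirely made-up figures and field names:

```python
# Hypothetical sketch of per-message earnings under a model like
# Poe's. All prices, message counts, and field names are invented
# for illustration.

def creator_revenue(price_per_message: float, messages: int) -> float:
    return round(price_per_message * messages, 2)

def total_revenue(sources: dict) -> float:
    # Sum earnings across revenue sources, as a dashboard might.
    return round(sum(sources.values()), 2)

# A bot priced at $0.02/message receiving 15,000 messages in a month:
earnings = {
    "messages": creator_revenue(0.02, 15_000),
    "subscription_share": 120.0,
    "paywall": 45.5,
}
print(earnings["messages"], total_revenue(earnings))  # → 300.0 465.5
```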
.
OpenAI CEO Sam Altman is pitching ChatGPT Enterprise services to executives from Fortune 500 companies, including some customers of Microsoft, OpenAI’s main investor and partner.
Altman is hosting roadshow-like events in San Francisco, New York, and London, aiming to add new sources of revenue for his company, Reuters reported this week.
At each event, Altman and OpenAI COO Brad Lightcap offer product demonstrations of ChatGPT Enterprise, API capabilities, the new text-to-video model Sora, and other AI services.
The Fortune 500 attendees come from finance, healthcare, energy, and other industries.
Meanwhile, Microsoft offers access to OpenAI’s technology through its Azure cloud and by selling Microsoft 365 Copilot, a productivity tool powered by OpenAI’s models targeting enterprises.
When some executives asked why they should pay for OpenAI’s ChatGPT Enterprise if they are already Microsoft customers, Altman and Lightcap responded that paying for the enterprise service lets them work directly with the OpenAI team, get access to the latest models, and have more opportunity for customized AI products, attendees told Reuters.
Valued at $86 billion in a secondary share sale, OpenAI has been trying to diversify its revenue streams and is expected to reach $1 billion in revenue in 2024, sources have said.
OpenAI said it currently has over 600,000 customers using ChatGPT Enterprise and Team, up from around 150,000 in January.
.
OpenAI announced on its X account that its GPT-4 Turbo with Vision model is now “generally available” through its API, a significant upgrade for the powerful GPT-4 Turbo LLM, experts say.
GPT-4 Turbo with Vision requests can now also use JSON mode and function calling. These generate JSON snippets that developers can use to streamline workflows by automating actions within their connected apps, such as making a purchase or sending an email.
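A request that pairs an image with a function-calling tool might be assembled like this. The payload shapes follow OpenAI's chat completions format, but the `send_email` function, the image URL, and the prompt are hypothetical, and the request is only constructed here, never sent:

```python
# Assemble a chat completions request combining an image input with a
# function-calling tool. `send_email` and the URLs are hypothetical;
# the dict shapes follow OpenAI's chat completions request format.
# (No API call is made; the request is only constructed.)

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email summarizing the analyzed image.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "body"],
        },
    },
}]

request = {
    "model": "gpt-4-turbo",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this receipt and email it to me."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/receipt.png"}},
        ],
    }],
    "tools": tools,
}

print(request["tools"][0]["function"]["name"])  # → send_email
```

If the model decides to act, its response would include a tool call with JSON arguments matching the `parameters` schema, which the application then executes.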
“Previously, developers had to use separate models for text and images, but now, with just one API call, the model can analyze images and apply reasoning,” said OpenAI.
By combining text and images, this multimodal model can take AI applications to new heights.
OpenAI has highlighted several examples of GPT-4 Turbo with Vision in use:
• The health and fitness app Healthify provides nutritional analysis and recommendations from photos of users’ meals.
• The UK-based startup tldraw uses it to power Make Real, a virtual whiteboard feature that converts users’ drawings into functional websites.
.
Devin, built by @cognition_labs, is an AI software engineering assistant powered by GPT-4 Turbo that uses vision for a variety of coding tasks. pic.twitter.com/E1Svxe5fBu
GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling.https://t.co/cbvJjij3uL
The @healthifyme team built Snap using GPT-4 Turbo with Vision to give users nutrition insights through photo recognition of foods from around the world. pic.twitter.com/jWFLuBgEoA
Make Real, built by @tldraw, lets users draw UI on a whiteboard and uses GPT-4 Turbo with Vision to generate a working website powered by real code. pic.twitter.com/RYlbmfeNRZ
OpenAI yesterday made its new GPT-4 Turbo available to paid users of ChatGPT, including Plus, Team, and Enterprise, as well as through the API.
GPT-4 Turbo powers the conversational ChatGPT experience with more direct and less verbose responses, according to the company.
It also comes with improved capabilities in writing, math, logical reasoning, and coding.
GPT-4 Turbo is trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off.
Separately, according to The Information, OpenAI recently fired two researchers for allegedly leaking information; one was an ally of chief scientist Ilya Sutskever, who was among those who pushed for the ouster of CEO Sam Altman late last year.
.