July 2024
• Groqbook: Generate entire books in seconds using Groq and Llama3 | Story at IBL News
• Vanderbilt University’s Amplify GenAI platform | Story at IBL News
• LLM models | Story at IBL News | Five Top Open Source LLMs

July 2024
• Groqbook: Generate entire books in seconds using Groq and Llama3 | Story at IBL News
• Vanderbilt University’s Amplify GenAI platform | Story at IBL News
• LLM models | Story at IBL News | Five Top Open Source LLMs

IBL News | New York
September 2024
July 2024
• NVIDIA Gen AI Videos
How Educators Are Integrating Generative AI, Simulation and Design Into Their Curricula
• Imagine Live conference videos
• Mindvalley AI Summit 2024 Day 3
• Mindvalley AI Summit 2024 Day 2
• Mindvalley AI Summit 2024 Day 1
January – June 2024
• Forbes: Forging the Future of Business with AI 2024 (YouTube, 38 Videos)
• MIT: Imagination in Action
• Davos 2024: Imagination in Action
• ASU+GSV:
ASU+GSV & Emeritus Summit (YouTube, 45 videos)
2024 ASU+GSV AIR Show (YouTube, 39 videos)
2024 ASU+GSV Summit (YouTube, 111 videos)

IBL News | New York
• Stanford University: Artificial Intelligence Index Report 2024 – Education | News Story at IBL News
• JPMorgan Chase: CEO’s letter to Shareholders
• OpenAI Cookbook with Example Code and Guides for Using Its API
• Coursera: The Business Leader’s Playbook to Generative AI Skills Training
• AI for Education: GenAI Chatbot Prompt Library for Educators
• ASU: Online AI Tools | Assessment Question Generator
• ELAI: Resources of Interest
• Auburn University: Teaching with AI course. Fully online and self-paced. Currently used at 40+ institutions.
• University of Florida: Ai Prompt Cookbook to Enhance Teaching and Learning
• Complete College America: A Compendium of Practical Applications for Generative AI in Higher Education
• Google: 10-Step Guide to Getting Started With Generative AI
• Google for Education: On-Demand Webinars & Distance Learning Series
• SUNY: AI Research
• UK Parliament: Use of AI in Education Delivery and Assessment
• McKinsey: The economic potential of generative AI: The next productivity frontier
• OpenAI: Tools, Guides, and Courses on Generative AI Around the Web

IBL News | New York
NVIDIA released eight free AI courses this month. Five are hosted at NVIDIA’s Deep Learning Institute (DLI) platform, two on Coursera, and one on YouTube.
1. Generative AI Explained
2. Building A Brain in 10 Minutes
3. Augment your LLM with Retrieval Augmented Generation
4. AI in the Data Center
5. Accelerate Data Science Workflows with Zero Code Changes
6. Mastering Recommender Systems
7. Networking Introduction
8. Building RAG Agents with LLMs
1. Generative AI Explained
Learn learn how to:
• Define Generative AI and explain how Generative AI works
• Describe various Generative AI applications
• Explain the challenges and opportunities in Generative AIhttps://t.co/JTNyqVZY5v pic.twitter.com/7UC7Ac3IWI— Roni Rahman (@heyronir) March 24, 2024
2. Getting Started with AI on Jetson Nano
Learn how to:
• Set up your Jetson Nano and camera
• Collect image data for classification models
• Annotate image data for regression models
• Train a neural network on your data to create your modelshttps://t.co/LHFbgLIPNQ pic.twitter.com/pjTLnKWyzS— Roni Rahman (@heyronir) March 24, 2024
3. Building A Brain in 10 Minutes
You’ll learn:
• How neural networks use data to learn
• Understand the math behind a neuronhttps://t.co/E9VEKRKWX1 pic.twitter.com/8VPBXseT15— Roni Rahman (@heyronir) March 24, 2024
4. Building Video AI Applications on Jetson Nano:
Learn to:
• Create DeepStream pipelines for video processing
• Handle multiple video streams
• Use alternate inference engines like YOLOhttps://t.co/RuBQDoXMOU pic.twitter.com/PAxhAtfVVm— Roni Rahman (@heyronir) March 24, 2024
5. Augment your LLM Using RAG
• Understand the basics of RAG
• Learn about the RAG retreival process
• Discover NVIDIA AI Foundations and the key components of a RAG model.https://t.co/cG0AbeyxS5 pic.twitter.com/GqjrLjDuuM— Roni Rahman (@heyronir) March 24, 2024
6. Building RAG Agents with LLMs
Learning Objectives:
• Explore scalable deployment strategies
• Learn about microservices and development
• Experiment with LangChain paradigms for dialog management
• Practice with state-of-the-art modelshttps://t.co/eIqjaBgIU2 pic.twitter.com/MeKd2RGU4r— Roni Rahman (@heyronir) March 24, 2024
7. Accelerate Data Science Workflows with Zero Code Changes
In this course you will:
• Learn the benefits of unified CPU and GPU workflows
• GPU-accelerate data processing and machine learning
• See faster processing times with GPUhttps://t.co/UJ9RJkJTWh pic.twitter.com/XoQ1goBncB— Roni Rahman (@heyronir) March 24, 2024
8. Introduction to AI in the Data Center
Learn about AI, machine learning, deep learning, GPU architecture, deep learning frameworks, and deploying AI workloads.
Understand requirements for multi-system AI clusters and infrastructure planning.https://t.co/u96QGB6rVG pic.twitter.com/yLlUWv1e9N
— Roni Rahman (@heyronir) March 24, 2024
[Disclosure: IBL works for NVIDIA by powering its learning platform]

IBL News | New York
After the release of the bot ChatGPT a year ago, the second phase of personalized, autonomous AI agents is emerging.
These agents can perform complex tasks, such as sending emails, scheduling meetings, booking flights or tables in a restaurant, or even complex tasks like buying presents for family members or negotiating a raise.
Personalized chatbots, programmed for specific tasks, that GPT creators will be able to release through the upcoming OpenAI’s GPT Store, are a prelude.
For now, these custom GPTs are easy to build without knowing how to code.
Users just answer a few simple questions about their bot — its name, its purpose, the tone used to respond — and the bot builds itself in just a few seconds. Users can upload PDF documents they want to use as reference material or easily look up Q&A. They can also connect the bot to other apps or edit its instructions.
Although these custom chatbots are far from working perfectly, they can be useful tools for answering repetitive questions in customer service departments.
Some AI safety researchers fear that giving bots more autonomy could lead to disaster, The New York Times reported. The Center for AI Safety, a nonprofit research organization, listed autonomous agents as one of its “catastrophic AI risks” this year, saying that “malicious actors could intentionally create rogue AI with dangerous goals.”
For now, these agents look harmless and limited in their scope.
Its development seems to be dependent on gradual iterative deployment, that is, small improvements at a fast pace rather than a big leap.
In the last OpenAI developer conference, Sam Atman built on stage a “start-up mentor” chatbot to give advice to aspiring founders, based on an uploaded file of a speech he had given years earlier.
The San Francisco-based research lab envisions a world where AI agents will be extensions of us, gathering information and taking action on our behalf.
.



This is really worth your time – a very solid technical introduction to LLMs, great if you’ve not been paying close attention but I picked up quite a few useful details from it too https://t.co/eOfOyWSxC5
— Simon Willison (@simonw) November 23, 2023
The most clearest and crisp explanation, I've ever heard, of how large language models compress and capture a "world-model" in their weights simply by learning to predict the next word accurately.
Furthermore, how the raw power of these base models can then be tamed by teaching… pic.twitter.com/0g7Z5wXOlc
— Zain Hasan (@ZainHasan6) November 21, 2023

IBL News | New York
The output generated by ChatGPT and other LLMs presents legal and compliance risks that every organization has to face or face dire consequences, according to the consultancy firm Gartner, Inc, which has identified six areas.
“Failure to do so could expose enterprises to legal, reputational, and financial consequences,” said Ron Friedmann, Senior Director Analyst at Gartner Legal & Compliance Practice.
ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann.
Only accurate training of the robot with limited sources will mitigate this tendency to provide incorrect information.
Sensitive, proprietary, or confidential information used in prompts may become a part of its training dataset and incorporated into responses for users outside the enterprise if chat history is not disabled,
“Legal and compliance need to establish a compliance framework and clearly prohibit entering sensitive organizational or personal data into public LLM tools,” said Friedmann.
“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias and make sure their guidance is compliant,” said Friedmann.
“This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls,” he added.
As ChatGPT is trained on a large amount of internet data that likely includes copyrighted material, its outputs – which do not offer source references – have the potential to violate copyright or IP protection.
“Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”
Bad actors are already using ChatGPT to generate false information at scale, like fake reviews, for instance.
Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which
A hacking technique known as “prompt injection” brings criminals to write malware codes or develop phishing sites that resemble well-known sites.
“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann.
Businesses that fail to disclose that they are using ChatGPT as a customer support chatbot run the risk of being charged with unfair practices under various laws and face the risk of losing their customers’ trust.
For instance, the California chatbot law mandates that in certain consumer interactions, organizations must disclose that a consumer is communicating with a bot.
Legal and compliance leaders need to ensure their organization’s use complies with regulations and laws.
.

IBL News | New York
When choosing and orchestrating an LLM, there are many critical technical factors, such as training data, dataset filtering, fine-tuning process, capabilities, latency, technical requirements, and price.
Experts state that implementing an LLM API, like GPT-4 or others, is not the only option.
As a paradigm-shifting technology and with the pace of innovation moving really fast, the LLMs and Natural Language Processing market is projected to reach $91 billion by 2030 growing at a CAGR of 27%.
Beyond the parameter count, recent findings showed that smaller models trained on more data are just as effective, and can even lead to big gains in latency and a significant reduction in hardware requirements. In other words, the largest parameter count is not what matters.
Training data should include conversations, games, and immersive experiences related to the subject rather than creating general-purpose models that knew a little about everything. For example, a model whose training data is 90% medical papers performs better on medical tasks than a much larger model where medical papers only make up 10% of its dataset.
In terms of dataset filtering, certain kinds of content have to be removed to reduce toxicity and bias. OpenAI recently confirmed that for example erotic content has been filtered.
It’s also important to create vocabularies based on how commonly words appear, removing colloquial conversation and common slang datasets.
Models have to be fine-tuned intend to ensure the accuracy of the information and avoid false information in the dataset.
LLMs are not commoditized, and some models have unique capabilities. GPT-4 accepts multimodal inputs like video and photos and writes up 25,000 words at a time while maintaining context. Google’s PaLM can generate text, images, code, videos, audio, etc.
Other models can provide facial expressions and voice.
Inference latency is higher in models with more parameters, adding extra milliseconds between query and response, which significantly impacts real-time applications.
Google’s research found that just half a second of added latency cause traffic to drop by 20%.
For low or real-time latency, many use cases, such as financial forecasting or video games, can’t be fulfilled by a standalone LLM. It’s required the orchestration of multiple models, specialized features, or additional automation, for text-to-speech, automatic speech recognition (ASR), machine vision, memory, etc.

IBL News | New York
Axim Collaborative — MIT’s and Harvard University’s non-profit organization that manages the Open edX software and its community — released the 16th version of the platform, called Palm.
This release spans changes in the code of the edX platform — used at edx.org — from October 11, 2022, to April 11, 2023.
To date, Open edX releases have been Olive, Nutmeg, Maple, Lilac, Koa, Juniper, Ironwood, Hawthorn, Ginkgo, Ficus, Eucalyptus, Dogwood, Cypress, Birch, and Aspen.
In Palm, the minimum required versions will be Docker v20.10.15 and Compose v2.0.0.Ecommerce now supports the new Stripe Payment Intents API and no longer uses the Stripe Charges API.
Palm includes discussion improvements, with posts streamlined, allowing users to see more information at once. In addition, comments and responses can now be sorted in reverse order.
The iOS and Android apps are seeing an update on the dashboard, header, and course navigation.
The release notes feature additional breaking changes.

IBL News | New York
2U’s edX.org released six ChatGPT-related courses this month.
These are one-to-two hours, self-paced, free courses, designed to educate audiences in the characteristics and opportunities around the new technologies pioneered by OpenAI.
These online classes have been developed in partnership with IBL Education, an AI software development company and course production studio based in New York.
The led instructor is IBL’s CTO, Miguel Amigot II. The production took place at the company’s film and video production studio in Brooklyn, New York.
Filming another course on generative AI — this time, with the great Sunder Sai, MPH from Columbia University pic.twitter.com/jbsgwwzMjC
— ibleducation.com🗽 (@ibleducation) May 8, 2023

IBL News & IBL Education | New York
There are many important learning analytics, but some of the most important ones include completion rates, time on task, engagement levels, achievement rates, and the use of learning resources. These metrics can provide valuable insights into how well students are learning and how effective a given teaching method or learning environment is.
By tracking these metrics, educators can identify areas for improvement and make more informed decisions about how to best support student learning.
Other important learning analytics might include:
– Student progress over time: This metric can help educators understand how well students are progressing in their learning, and whether they are making the expected amount of progress given their starting point.
– Student feedback: Gathering and analyzing student feedback can provide valuable insights into how students perceive their learning experience, and can help identify areas where students are struggling or where the learning environment is not meeting their needs.
– Learner demographics: Understanding the demographics of the students in a given class or program can help educators tailor their teaching approach and learning materials to better meet the needs of their students.
– Learner behavior: Analyzing how students interact with learning materials and resources can provide valuable insights into how they approach learning and what strategies are most effective for them.
– Learning outcomes: Tracking learning outcomes can help educators understand the effectiveness of their teaching methods and the overall quality of the learning experience.
By comparing learning outcomes across different classes or programs, educators can identify best practices and make more informed decisions about how to improve student learning.
What’s the best way to track learner feedback?
One of the best ways to track learner feedback is to use surveys or other tools that allow students to provide their opinions and experiences with the learning environment.
Surveys can be administered regularly (e.g., at the end of each unit or course) to gather ongoing feedback from students.
Surveys can be designed to ask specific questions about different aspects of the learning experience, such as the quality of the materials, the effectiveness of the teaching methods, and the overall satisfaction with the learning environment.
A SERIES OF ARTICLES ABOUT ‘AI, CLOUD, AND ADVANCED TECHNOLOGIES IN EDUCATION’ WRITTEN BY THE IBL AI ENGINE IN DECEMBER 2022*
*The IBL AI/ML Engine extends and hosts leading language models (LLMs) via a combination of fine-tuning, customized datasets and REST APIs to provide an all-in-one AI platform for education featuring content recommendations, assessment creation and grading, chatbots and mentors, and predictive analytics.