Category: Platforms

  • NVIDIA Released Eight Free Courses on Generative AI

    IBL News | New York

    NVIDIA released eight free AI courses this month. Five are hosted at NVIDIA’s Deep Learning Institute (DLI) platform, two on Coursera, and one on YouTube.

    1. Generative AI Explained
    2. Building A Brain in 10 Minutes
    3. Augment your LLM with Retrieval Augmented Generation
    4. AI in the Data Center
    5. Accelerate Data Science Workflows with Zero Code Changes
    6. Mastering Recommender Systems
    7. Networking Introduction
    8. Building RAG Agents with LLMs

    [Disclosure: IBL works for NVIDIA by powering its learning platform]

  • AI Agents, the Second Phase of Generative AI

    IBL News | New York

    A year after the release of ChatGPT, a second phase of generative AI is emerging: personalized, autonomous AI agents.

    These agents can perform tasks on a user’s behalf, from sending emails, scheduling meetings, and booking flights or restaurant tables to more complex jobs like buying presents for family members or negotiating a raise.

    Personalized chatbots programmed for specific tasks, which GPT creators will be able to release through OpenAI’s upcoming GPT Store, are a prelude.

    For now, these custom GPTs are easy to build without knowing how to code.

    Users just answer a few simple questions about their bot (its name, its purpose, the tone used to respond), and the bot builds itself in just a few seconds. Users can upload PDF documents they want to use as reference material or for easy Q&A look-ups. They can also connect the bot to other apps or edit its instructions.

    Although these custom chatbots are far from working perfectly, they can be useful tools for answering repetitive questions in customer service departments.

    Some AI safety researchers fear that giving bots more autonomy could lead to disaster, The New York Times reported. The Center for AI Safety, a nonprofit research organization, listed autonomous agents as one of its “catastrophic AI risks” this year, saying that “malicious actors could intentionally create rogue AI with dangerous goals.”

    For now, these agents look harmless and limited in their scope.

    Their development seems to depend on gradual, iterative deployment, that is, small improvements at a fast pace rather than a big leap.

    At the last OpenAI developer conference, Sam Altman built on stage a “start-up mentor” chatbot to give advice to aspiring founders, based on an uploaded file of a speech he had given years earlier.

    The San Francisco-based research lab envisions a world where AI agents will be extensions of us, gathering information and taking action on our behalf.

    For now, OpenAI’s bots are limited to simple, well-defined tasks, and can’t handle complex planning or long sequences of actions.
    After a day care’s handbook was uploaded to OpenAI’s GPT creator tool, a chatbot could easily look up answers to questions about it.
    [Screenshot: a GPT “Day Care Helper” conversation between the author and the chatbot about circle time.]

  • Legal and Compliance Risks that ChatGPT Presents to Organizations, According to Gartner

    IBL News | New York

    The output generated by ChatGPT and other LLMs presents legal and compliance risks that every organization must address or face dire consequences, according to the consultancy firm Gartner, Inc., which has identified six areas.

    “Failure to do so could expose enterprises to legal, reputational, and financial consequences,” said Ron Friedmann, Senior Director Analyst at Gartner Legal & Compliance Practice.

    • Risk 1: Fabricated and Inaccurate Answers

    “ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong and nonexistent legal or scientific citations,” said Friedmann.

    Only careful training of the model on vetted, limited sources will mitigate this tendency to provide incorrect information.

    • Risk 2: Data Privacy and Confidentiality

    Sensitive, proprietary, or confidential information used in prompts may become part of the model’s training dataset and be incorporated into responses for users outside the enterprise if chat history is not disabled.

    “Legal and compliance need to establish a compliance framework and clearly prohibit entering sensitive organizational or personal data into public LLM tools,” said Friedmann.
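As one illustration of a guardrail such a compliance framework might include, here is a minimal prompt-redaction sketch. The patterns and function are hypothetical, not a Gartner recommendation; a real deployment would rely on a vetted data-loss-prevention tool rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real DLP coverage is far broader.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obvious sensitive tokens before a prompt leaves the enterprise."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```

Routing every outbound prompt through a filter like this is one way to operationalize a policy against pasting sensitive data into public LLM tools.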

    • Risk 3: Model and Output Bias

    “Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias and make sure their guidance is compliant,” said Friedmann.

    “This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls,” he added.

    • Risk 4: Intellectual Property (IP) and Copyright Risks

    As ChatGPT is trained on a large amount of internet data that likely includes copyrighted material, its outputs – which do not offer source references – have the potential to violate copyright or IP protection.

    “Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”

    • Risk 5: Cyber Fraud Risks

    Bad actors are already using ChatGPT to generate false information at scale, like fake reviews, for instance.

    Moreover, applications that use LLMs, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which attackers craft inputs that override the model’s instructions, for example to get it to write malware code or produce phishing sites that resemble well-known sites.

    “Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann.

    • Risk 6: Consumer Protection Risks

    Businesses that fail to disclose that they are using ChatGPT as a customer support chatbot run the risk of being charged with unfair practices under various laws, as well as of losing their customers’ trust.

    For instance, the California chatbot law mandates that in certain consumer interactions, organizations must disclose that a consumer is communicating with a bot.

    Legal and compliance leaders need to ensure their organization’s use complies with regulations and laws.

  • Critical Factors When Orchestrating an Optimized Large Language Model (LLM)

    IBL News | New York

    When choosing and orchestrating an LLM, there are many critical technical factors, such as training data, dataset filtering, fine-tuning process, capabilities, latency, technical requirements, and price.

    Experts state that implementing an LLM API, like GPT-4 or others, is not the only option.

    As a paradigm-shifting technology with a fast pace of innovation, the LLM and natural language processing market is projected to reach $91 billion by 2030, growing at a CAGR of 27%.

    Beyond parameter count, recent findings show that smaller models trained on more data can be just as effective, and can even bring big gains in latency and a significant reduction in hardware requirements. In other words, the largest parameter count is not what matters most.

    Training data should include conversations, games, and immersive experiences related to the subject rather than creating general-purpose models that know a little about everything. For example, a model whose training data is 90% medical papers performs better on medical tasks than a much larger model where medical papers only make up 10% of its dataset.

    In terms of dataset filtering, certain kinds of content have to be removed to reduce toxicity and bias. OpenAI recently confirmed that erotic content, for example, has been filtered out.

    It’s also important to create vocabularies based on how commonly words appear, removing colloquial conversation and common slang datasets.

    Models have to be fine-tuned to ensure the accuracy of the information and to avoid false information in the dataset.

    LLMs are not commoditized, and some models have unique capabilities. GPT-4 accepts multimodal inputs like video and photos and writes up to 25,000 words at a time while maintaining context. Google’s PaLM can generate text, images, code, videos, audio, etc.

    Other models can provide facial expressions and voice.

    Inference latency is higher in models with more parameters, adding extra milliseconds between query and response, which significantly impacts real-time applications.

    Google’s research found that just half a second of added latency caused traffic to drop by 20%.

    For low-latency or real-time use cases, such as financial forecasting or video games, a standalone LLM is not enough. These applications require orchestrating multiple models, specialized features, or additional automation, such as text-to-speech, automatic speech recognition (ASR), machine vision, and memory.
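A sketch of what such orchestration can look like in code: several specialized stages chained behind one entry point. The component functions here are stand-ins for real ASR, LLM, and text-to-speech services, not actual APIs.

```python
# A minimal sketch of orchestrating several models behind one entry point.
# Each stage below is a stub; a real system would call dedicated services.

def transcribe(audio: bytes) -> str:
    """ASR stage (stub): turn speech audio into text."""
    return "what is my account balance"

def answer(text: str) -> str:
    """LLM stage (stub): produce a response to the transcribed query."""
    return f"Answering: {text}"

def synthesize(text: str) -> bytes:
    """TTS stage (stub): turn the response text back into audio bytes."""
    return text.encode("utf-8")

def voice_assistant(audio: bytes) -> bytes:
    """Chain ASR -> LLM -> TTS; each stage can be swapped independently."""
    return synthesize(answer(transcribe(audio)))
```

Because each stage is an independent function, a slow general-purpose model can be replaced by a smaller specialized one without touching the rest of the pipeline.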


  • Axim Collaborative Releases Palm, the 16th Version of the Open edX Platform

    IBL News | New York

    Axim Collaborative — MIT’s and Harvard University’s non-profit organization that manages the Open edX software and its community — released the 16th version of the platform, called Palm.

    This release spans changes in the code of the edX platform — used at edx.org — from October 11, 2022, to April 11, 2023.

    To date, Open edX releases have been Olive, Nutmeg, Maple, Lilac, Koa, Juniper, Ironwood, Hawthorn, Ginkgo, Ficus, Eucalyptus, Dogwood, Cypress, Birch, and Aspen.

    In Palm, the minimum required versions are Docker v20.10.15 and Compose v2.0.0. Ecommerce now supports the new Stripe Payment Intents API and no longer uses the Stripe Charges API.

    Palm includes discussion improvements, with posts streamlined, allowing users to see more information at once. In addition, comments and responses can now be sorted in reverse order.

    The iOS and Android apps are seeing an update on the dashboard, header, and course navigation.

    The release notes feature additional breaking changes.

  • edX.org Releases Six Free, Short, Online Courses About ChatGPT

    IBL News | New York

    2U’s edX.org released six ChatGPT-related courses this month.

    These are one- to two-hour, self-paced, free courses designed to educate audiences about the characteristics and opportunities around the new technologies pioneered by OpenAI.

    These online classes have been developed in partnership with IBL Education, an AI software development company and course production studio based in New York.

    The lead instructor is IBL’s CTO, Miguel Amigot II. The production took place at the company’s film and video production studio in Brooklyn, New York.

    • Introduction to ChatGPT
      This course provides a practical introduction to ChatGPT, from signing up to mastering its advanced features. Topics covered include conversing with ChatGPT, customizing it, using it for productivity, and building chatbots, as well as advanced applications like language translation and generating creative content. Best practices and tips for using ChatGPT are also included. To date, the course has attracted over 18,200 enrollments.
    • Prompt Engineering and Advanced ChatGPT
      This course is designed to teach advanced techniques in ChatGPT, an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It covers advanced techniques for prompting ChatGPT, applications for multiple use cases, integrating it with other tools, and developing applications on top while considering its limitations.
    • How to Use ChatGPT in Tech/Coding/Data
      In this course, users will learn how to harness the power of ChatGPT to revolutionize their coding process. From ideation to testing and debugging, ChatGPT can generate code programmatically, saving valuable time and energy.
    • How to Use ChatGPT in Education
      This course is designed for students and instructors to explore the many ways that ChatGPT can be used to enhance the learning experience.
    • How to Use ChatGPT in Business
      This course is designed to introduce learners to the world of ChatGPT and how it can transform various aspects of business operations and take businesses to the next level.
    • How to Use ChatGPT in Healthcare
      This course explores AI’s impact on and transformation of healthcare. It shows ChatGPT use cases, helps navigate ethics and legalities, and demonstrates how to streamline patient care, data access, and administration.

  • What Are the Most Important Learning Analytics?

    IBL News & IBL Education | New York

    There are many important learning analytics, but some of the most important ones include completion rates, time on task, engagement levels, achievement rates, and the use of learning resources. These metrics can provide valuable insights into how well students are learning and how effective a given teaching method or learning environment is.

    By tracking these metrics, educators can identify areas for improvement and make more informed decisions about how to best support student learning.

    Other important learning analytics might include:

    Student progress over time: This metric can help educators understand how well students are progressing in their learning, and whether they are making the expected amount of progress given their starting point.

    Student feedback: Gathering and analyzing student feedback can provide valuable insights into how students perceive their learning experience, and can help identify areas where students are struggling or where the learning environment is not meeting their needs.

    Learner demographics: Understanding the demographics of the students in a given class or program can help educators tailor their teaching approach and learning materials to better meet the needs of their students.

    Learner behavior: Analyzing how students interact with learning materials and resources can provide valuable insights into how they approach learning and what strategies are most effective for them.

    Learning outcomes: Tracking learning outcomes can help educators understand the effectiveness of their teaching methods and the overall quality of the learning experience.

    By comparing learning outcomes across different classes or programs, educators can identify best practices and make more informed decisions about how to improve student learning.
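As a concrete illustration, the first two metrics mentioned above, completion rates and time on task, can be computed directly from enrollment records. The record structure and field names here are illustrative assumptions, not a real platform’s schema.

```python
def completion_rate(enrollments: list[dict]) -> float:
    """Share of enrolled learners who finished the course."""
    completed = sum(1 for e in enrollments if e["completed"])
    return completed / len(enrollments)

def average_time_on_task(enrollments: list[dict]) -> float:
    """Mean minutes learners spent on course activities."""
    return sum(e["minutes_on_task"] for e in enrollments) / len(enrollments)

# Illustrative enrollment records.
records = [
    {"completed": True, "minutes_on_task": 120},
    {"completed": False, "minutes_on_task": 45},
    {"completed": True, "minutes_on_task": 90},
    {"completed": True, "minutes_on_task": 150},
]
```

Comparing these numbers across cohorts or course versions is the kind of baseline analysis the article describes.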

    What’s the best way to track learner feedback?

    One of the best ways to track learner feedback is to use surveys or other tools that allow students to provide their opinions and experiences with the learning environment.

    Surveys can be administered regularly (e.g., at the end of each unit or course) to gather ongoing feedback from students.

    Surveys can be designed to ask specific questions about different aspects of the learning experience, such as the quality of the materials, the effectiveness of the teaching methods, and the overall satisfaction with the learning environment.

    A SERIES OF ARTICLES ABOUT ‘AI, CLOUD, AND ADVANCED TECHNOLOGIES IN EDUCATION’ WRITTEN BY THE IBL AI ENGINE IN DECEMBER 2022*


    *The IBL AI/ML Engine extends and hosts leading large language models (LLMs) via a combination of fine-tuning, customized datasets, and REST APIs to provide an all-in-one AI platform for education featuring content recommendations, assessment creation and grading, chatbots and mentors, and predictive analytics.


  • What’s the Best Way to Measure Learner Engagement?

    IBL News & IBL Education | New York

    There are a few different ways to measure learner engagement, and the best approach will depend on the specific context and goals of your learning program.

    Some common methods for measuring engagement include assessing learners through quizzes and tests, observing their performance on tasks or projects, tracking their participation in online discussions or other collaborative activities, collecting feedback from learners or others who have observed their performance, conducting surveys or polls to gauge learners’ interest and satisfaction, and monitoring learners’ progress through course materials or assessments to see how much they are retaining and applying what they are learning.

    Ultimately, the key is to use a combination of methods to get a well-rounded picture of learners’ engagement and to regularly check in with learners to gather their feedback and make any necessary adjustments to your program.

    It’s important to use a balanced mix of assessment methods and to regularly evaluate and adjust the assessment process to ensure that it accurately reflects learners’ progress and achievement.

    How can I use AI to improve learner performance?

    AI can be used in a variety of ways to improve learner performance. For example, AI-powered tutoring systems can provide personalized instruction and feedback to help learners improve their knowledge and skills.

    AI-powered data analysis can help educators identify trends and patterns in learners’ performance, allowing them to tailor their instruction and support to individual learners’ needs.

    Additionally, AI-powered tools can help automate some of the administrative tasks associated with education, freeing up educators to focus on supporting learners. It’s important to remember, however, that AI is not a substitute for human instruction and support, and should be used in conjunction with other teaching methods.

    How can I predict if a learner is going to become disengaged?

    It can be difficult to predict with certainty if a learner is going to become disengaged, as there are many factors that can affect a person’s level of engagement in a learning program.

    However, there are some warning signs that a learner may be at risk of becoming disengaged, such as a lack of participation in online discussions or other collaborative activities, a lack of progress or a drop in performance on assessments, or a decline in the quality of their work.

    If you notice any of these signs, it may be a good idea to reach out to the learner to see if there is anything you can do to support them and keep them engaged in the learning program.

    You could also try to identify any potential barriers to engagement that the learner may be facing and work with them to address those barriers.
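The warning signs above can be turned into a simple rule-based check. The thresholds and field names here are illustrative, not research-backed; a real early-warning system would calibrate them against historical data.

```python
def disengagement_warnings(learner: dict) -> list[str]:
    """Return which disengagement warning signs apply to a learner.

    Thresholds are illustrative stand-ins for calibrated values.
    """
    warnings = []
    if learner["posts_last_30_days"] == 0:
        warnings.append("no participation in discussions")
    if learner["latest_score"] < learner["average_score"] - 10:
        warnings.append("drop in assessment performance")
    if learner["assignments_submitted"] < learner["assignments_due"]:
        warnings.append("falling behind on work")
    return warnings
```

An educator could run this over a roster nightly and reach out to any learner whose list of warnings is non-empty.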

    What’s the best way to engage a learner?

    The best way to engage a learner will depend on their individual needs, preferences, and learning style.

    Some general strategies for engaging learners include making the learning material relevant and applicable to their lives, providing opportunities for learners to collaborate and connect with each other, and incorporating a variety of teaching methods and learning activities to cater to different learning styles.

    Additionally, providing regular feedback, support, and encouragement can help learners stay motivated and engaged in the learning process.

    It’s also important to regularly check in with learners to gauge their engagement and to gather their feedback on how the learning program can be improved.


  • What Are Language Models (LLMs) and Why Are They Interesting?

    IBL News & IBL Education | New York

    Language models are algorithms that can predict the next word in a sequence of words, based on the words that have come before it.

    They are interesting because they can be used in a variety of natural language processing tasks, such as machine translation, speech recognition, and text generation.

    Language models are typically trained on large amounts of text data, which allows them to capture the statistical patterns and relationships between words in a language. This allows them to make predictions about the next word in a sequence that are often very accurate.

    What kinds of predictions can language models make?

    Language models can make predictions about the next word in a sequence of words, based on the words that have come before it.

    They can also be used to generate text that is similar to a given input, by predicting the next word in a sequence and then using that prediction as the input for the next prediction, and so on.

    This can be used to generate text that is similar to a given input or to complete sentences or paragraphs that are missing some words.
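That feed-the-prediction-back-in loop can be sketched with the simplest possible language model, a bigram table built from a toy corpus. The corpus and function names are illustrative; real models learn far richer patterns than word-pair counts.

```python
from collections import Counter, defaultdict

# Toy training corpus.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram model, the simplest
# version of the statistical patterns the article describes).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

def generate(word: str, length: int) -> list[str]:
    """Generate text by feeding each prediction back in as the next input."""
    out = [word]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return out
```

Here `predict_next("the")` returns "cat" because "cat" follows "the" more often than "mat" does, and `generate` chains such predictions into longer text.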

    Language models can also be used in other natural language processing tasks, such as machine translation, speech recognition, and text summarization.

    What’s the best way to measure the performance of a language model?

    One way to measure the performance of a language model is to evaluate its ability to predict the next word in a sequence, based on the words that have come before it. This can be done with a test set of text data that the model has not seen during training, comparing the model’s predictions to the actual next word in the sequence. The accuracy of those predictions can then be used as a measure of performance.

    Another common metric is perplexity, which measures how well a language model predicts a given test set and is calculated as the exponentiated average of the negative log-probabilities the model assigns to the test data. A lower perplexity score indicates a better-performing language model.
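Under the standard definition, perplexity takes only a few lines to compute. The probabilities passed in are whatever the model assigned to each actual next word in the test set; the function name is our own.

```python
import math

def perplexity(probabilities: list[float]) -> float:
    """Exponentiated average negative log-probability that the model
    assigned to each actual next word in the test set."""
    avg_nll = -sum(math.log(p) for p in probabilities) / len(probabilities)
    return math.exp(avg_nll)
```

For example, a model that assigns probability 0.25 to every correct next word has a perplexity of 4, as if it were choosing uniformly among four equally likely words; a perfect model that assigns probability 1.0 everywhere has a perplexity of 1.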

    What does it mean to fine-tune a language model?

    Fine-tuning a language model means adjusting its parameters to improve its performance on a specific task or dataset. This is typically done by training the language model on a large amount of text data that is relevant to the task or dataset, in addition to the training data that the model was originally trained on. This allows the model to learn the statistical patterns and relationships between words that are specific to the task or dataset and can improve its performance on that task or dataset. Fine-tuning can be a useful technique for adapting a pre-trained language model to a new task or dataset.


  • How AI Can Support Learners

    IBL News & IBL Education | New York

    AI can support learners in a number of ways. For example, AI can be used to create personalized learning plans that cater to the specific needs and abilities of individual learners.

    This can help ensure that each learner is able to learn at their own pace and receive targeted support in areas where they may be struggling.

    AI can also be used to create interactive and engaging learning materials, such as virtual tutors or educational games, which can make the learning process more enjoyable for learners.

    Additionally, AI can be used to analyze data about learners’ progress and performance, providing teachers with valuable insights into how to best support their students.

    Here is an example of how AI can be used to support learners:

    1. A learner logs into a learning platform that uses AI to create personalized learning plans.

    2. The AI system collects data about the learner’s background, abilities, and learning goals, and uses this information to create a customized learning plan for the learner.

    3. The learning platform presents the learner with a series of lessons and activities tailored to their specific needs and abilities. These may include interactive games, videos, quizzes, and other engaging materials.

    4. As the learner progresses through the lessons, the AI system tracks their progress and performance and provides them with real-time feedback and support. For example, if the learner is struggling with a particular concept, the AI system may provide additional explanations or examples to help them understand it better.

    5. The AI system also provides teachers with insights into the learners’ progress and performance, allowing them to identify areas where the learners may need additional support and adjust their teaching accordingly.
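The adaptive step in the flow above, picking the next activity based on where the learner is struggling, can be sketched as a simple rule. A production system would use a trained recommendation model rather than this fixed threshold, and the skill names and 0.5 cutoff are purely illustrative.

```python
def next_activity(learner: dict) -> str:
    """Pick the next activity from the learner's weakest skill.

    Rule-based stand-in for the AI recommendation step: remediate
    weak skills, otherwise offer practice on the lowest one.
    """
    scores = learner["skill_scores"]
    weakest = min(scores, key=scores.get)
    if scores[weakest] < 0.5:
        return f"remedial lesson on {weakest}"
    return f"practice quiz on {weakest}"
```

Each time the learner finishes an activity, the platform would update `skill_scores` and call `next_activity` again, producing the personalized loop the steps describe.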

    Overall, AI can support learners by providing them with personalized and engaging learning experiences, and by providing teachers with valuable data and insights to help them better support their students.
