IBL News


Category: Top News

  • The Video Editing App CapCut Introduced Its Tools for Business

    IBL News | New York

    CapCut, the ByteDance-owned video editing app, introduced CapCut for Business this month, targeting advertisers and marketers with features such as AI ad scripts and AI-generated presenters so they can generate branded content.

    These tools — available across the CapCut app for desktop, mobile, and tablets — help advertisers come up with script ideas based on their product or business description and provide commercially licensed business templates, allowing them to turn product or landing-page URLs into videos.

    Tightly integrated with TikTok, CapCut has been a top consumer video editing app that regularly ranks in the top 20 in the iOS App Store.

    The company is positioning its editing app as a way for consumers to make compelling videos for social media, including TikTok, and for marketers to easily do so as well, without having to spend heavily on advanced video editing software.

    CapCut surpassed Splice to become the most profitable video editing app globally during the first half of 2023, pulling in a record $50 million in that period and becoming ByteDance’s second app to top $100 million in lifetime consumer spending worldwide.

    October 25, 2023
  • A Majority of CEOs Prioritize Investments in Generative AI

    IBL News | New York

    Nearly three in four global CEOs say that investing in generative AI is a top spending priority despite uncertain economic conditions, according to a KPMG survey of 1,300 global CEOs, including 400 in the U.S.

    They expect to see a return on their investment in three to five years.

    They also anticipate increased profitability, new product and market growth opportunities, enhanced innovation, and help with cybersecurity efforts.

    Fewer CEOs — less than one-third — expect a faster ROI of one to three years.

    For CIOs, the focus is on finding real value from implementation, as they have to identify the proper foundation models and characteristics.

    “Increased disruption and structural changes to the economy are compounding risks, requiring CEOs to move forward with long-term growth strategies while remaining agile to take advantage of new opportunities and respond to unforeseen challenges,” said Paul Knopp, KPMG U.S. Chair and CEO.

    October 24, 2023
  • American Federation of Teachers Partners with AI Identification Platform GPTZero

    IBL News | New York

    The American Federation of Teachers (AFT), the second-largest teachers’ union in the U.S., signed a deal with the AI identification platform GPTZero to detect when students use artificial intelligence to do their homework.

    The teachers’ union will be paying for access to more tailored AI detection and certification tools and assistance.

    “If we don’t guard against its perils upfront, we’re going to repeat the terrible transitions that happened with the industrial revolution,” AFT President Randi Weingarten told CBS MoneyWatch. “ChatGPT can be a really important supplement and complement to educators if the guardrails are in place.”

    “You can’t stop technology and innovation. You need to ride it and harness it, and that’s what we are talking to our members about,” she said.

    Founded in January by Princeton graduate Edward Tian, GPTZero is a 15-person company that says “it’s working with teachers to figure out where AI fits into education and empower students to use AI responsibly.”

    • Wired: Kids Are Going Back to School. So Is ChatGPT

    October 23, 2023
  • Nvidia Announced an AI Agent Powered by GPT-4 That Can Teach Robots Complex Skills

    IBL News | New York

    NVIDIA Research announced yesterday that it developed an AI agent called Eureka, powered by the GPT-4 LLM and generative AI. Eureka can teach robots complex skills by writing code that rewards robots during reinforcement learning.
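
    As a rough illustration of the approach (not code from the Eureka paper), an LLM-written reward function for a pen-spinning task might look like the sketch below; all state field names are hypothetical.

    # Illustrative sketch only: the kind of reward function an LLM might write
    # for a reinforcement-learning pen-spinning task. The state keys are
    # hypothetical, not identifiers from the Eureka paper.
    import numpy as np

    def pen_spin_reward(state: dict) -> float:
        # Encourage fast rotation around the pen's long axis.
        spin_reward = float(np.clip(state["pen_angular_velocity"], 0.0, 10.0))
        # Penalize tilting away from the target spin axis (1.0 means aligned).
        alignment_penalty = 1.0 - state["pen_axis_alignment"]
        # Penalize dropping the pen (fingertips too far from it).
        drop_penalty = 5.0 if state["fingertip_distance"] > 0.1 else 0.0
        return spin_reward - 2.0 * alignment_penalty - drop_penalty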

    In one of the 30 tasks, Eureka trained a robotic hand to perform rapid pen-spinning tricks, for the first time as well as a human can.

    Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

    The Eureka research includes a paper and the project’s AI algorithms, which developers can experiment with.

    “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks,” said Anima Anandkumar, Senior Director of AI Research at NVIDIA and an author of the Eureka paper.

    The results from nine Isaac Gym GPU-accelerated simulation environments are showcased in visualizations generated using NVIDIA Omniverse.

    “It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager, an AI agent built with GPT-4 that can autonomously play Minecraft,” the company said in its blog post.

    NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics.

    October 21, 2023
  • LLMs Will Continue to Drive Real-World Breakthroughs, Says ‘State of the AI Report’

    IBL News | New York

    LLMs will continue to drive real-world breakthroughs, especially in the life sciences, with meaningful steps forward in both molecular biology and drug discovery.

    This is one of the conclusions of the State of AI Report, a classic research study produced by AI investor Nathan Benaich and Air Street Capital and reviewed by AI practitioners.

    Other key findings include:

    • GPT-4 is beating every other LLM, validating the power of proprietary architectures and reinforcement learning from human feedback.

    • Efforts are growing to try to clone or surpass proprietary performance through smaller models, better datasets, and longer context.

    • Compute is the new oil, with NVIDIA printing record earnings and startups wielding their GPUs as a competitive edge. As the US tightens its trade restrictions on China and mobilizes its allies in the chip wars, NVIDIA, Intel, and AMD have started to sell export-control-proof chips at scale.

    • Generative AI startups raised over $18 billion from VC and corporate investors, while valuations elsewhere in the tech industry slumped.

    • Safety concerns are prompting action from governments and regulators around the world.

    > Download the Report

    October 20, 2023
  • Stanford, MIT, and Princeton Researchers Rate 10 LLMs on How Transparent They Are

    IBL News | New York

    Stanford University, MIT, and Princeton researchers ranked ten major AI models on how openly they operate after applying a newly created scoring system.

    Included in the index are popular models like OpenAI’s GPT-4 (which powers the paid version of ChatGPT), Google’s PaLM 2 (which powers Bard), and Meta’s LLaMA 2. It also includes lesser-known models like Amazon’s Titan Text and Inflection AI’s Inflection-1, the model that powers the Pi chatbot.

    Three developers in the list (Meta, Hugging Face, and Stability AI) build open foundation models (Llama 2, BLOOMZ, and Stable Diffusion 2, respectively), while the other seven built closed foundation models accessible via an API.

    Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40%.

    The Stanford ‘Foundation Model Transparency Index’ paper scored the models on 100 indicators, covering the social aspects of training foundation models (the impact on labor and the environment, and the usage policy for real-world use) in addition to technical aspects (data, compute, and details about the model training process).

    “The indicators are based on, and synthesize, past interventions aimed at improving the transparency of AI systems, such as model cards, datasheets, evaluation practices, and how foundation models intermediate a broader supply chain,” explained one of the authors in a blog post.

    “Transparency is poor on matters related to how models are built. In particular, developers are opaque on what data is used to train their model, who provides that data and how much they are paid, and how much computation is used to train the model.” 

    The researchers also released all of their analysis in a public GitHub repository.

    The New York Times reported that several AI companies have already been sued by authors, artists, and media companies, accusing them of illegally using copyrighted works to train their models.

    So far, most of the lawsuits have targeted open-source AI projects or projects that disclosed detailed information about their models.

    October 19, 2023
  • Open Source Software Projects Will Dominate LLMs

    IBL News | New York

    Experts say that, as happened with Linux, the world-class open source operating system, open source will dominate the future of LLMs and image models. Even Google acknowledged that it has no moat in this new world of open source AI.

    “If you’re building an AI native product, your primary goal is getting off of OpenAI as soon as you possibly can,” wrote Varun Shenoy in the viral article titled “Why Open Source AI Will Win.”

    Furthermore, using closed-source model providers such as OpenAI or Anthropic for the long haul exposes an AI-native company to undue risk. Every business needs to own its core product, and its core product is a model trained on proprietary data.

    The consensus is that open source models are incredibly good at the most valuable tasks, as they can be fine-tuned to cover up to 99% of use cases once a product has collected enough labeled data.

    While contexts have scaled up, the hardware requirements to run massive models have also scaled down.

    The original Llama has a context length of 2k tokens. Llama 2 has a context length of 4k. However, we still don’t have access to GPT-4 32k. This is the speed of open source.

    Users can now run state-of-the-art large language models on their MacBooks thanks to projects like llama.cpp.
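
    As a minimal sketch of what this looks like in practice, assuming the llama-cpp-python bindings for llama.cpp and a quantized GGUF checkpoint already downloaded to disk (the file path is an assumption):

    # Minimal sketch: running a quantized Llama 2 model locally through the
    # llama-cpp-python bindings for llama.cpp. The model path is an assumed
    # placeholder; any GGUF-format checkpoint on disk would do.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

    output = llm(
        "Q: Why did open source win with Linux? A:",
        max_tokens=128,   # cap the completion length
        stop=["Q:"],      # stop before the model invents a follow-up question
    )
    print(output["choices"][0]["text"])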

    On the image generation side, Stable Diffusion XL (SDXL), the best open source model, is on-par with Midjourney. Hugging Face is the new Red Hat.

    • “Linux succeeded because it was built in the open. Users knew exactly what they were getting and had the opportunity to file bugs or even attempt to fix them on their own with community support. The same is true for open source models.
    • Open source is much harder to use than closed source models. It seems like you need to hire a team of machine learning engineers to build on top of open source as opposed to using the OpenAI API. This is ok and will be true in the short term. This is the cost of control and the rapid pace of innovation. 
    • Closed-source model providers have captured the collective mindshare of this AI hype cycle. People don’t have time to mess around with open source, nor do they have the awareness of what open source is capable of. But they do know about OpenAI, Pinecone, and LangChain.
    • As open source offerings mature and become more user-friendly and customizable, they will emerge as the superior choice for many applications.
    • Rather than getting swept up in the hype, forward-thinking organizations will use this period to deeply understand their needs and lay the groundwork to take full advantage of open source AI. They will build defensible and differentiated AI experiences on open technology. This measured approach enables a sustainable competitive advantage in the long run.
    • The future remains bright for pragmatic adopters who see past the hype and keep their eyes on the true prize: truly open AI.”

      > State of the AI Report Analyzes the Success of Llama Open Source Model

    October 18, 2023
  • Character.AI Launched a Feature for Group Chat that Mixes AI Companions and Humans

    IBL News | New York

    Character.AI — the platform backed by VC a16z that offers AI companions with distinct personalities — launched a group chat functionality this month for paid subscribers ($9.99 per month) on its mobile apps.

    This tool, named ‘Character Group Chat’, allows users to create group chats based on their interests and hobbies, with multiple AI characters and humans in the same room.

    “Imagine a group chat with history’s smartest figures such as Albert Einstein, Marie Curie, Nikola Tesla, and Stephen Hawking, or a discussion with Napoleon, Athena, Genghis Khan, and Julius Caesar to speak about strategy and power,” said the company.

    In education, this feature can be used to create study group sessions, book clubs, language practice, or brainstorming sessions.

    The company said it plans to open up the feature to the general public later.

    The idea of adding AI chatbots into a group chat is not unique to Character.AI. Snapchat’s My AI already offers it. Also, Meta recently introduced AI-powered bots based on celebrities, like MrBeast, Paris Hilton, Tom Brady, Charli D’Amelio, Snoop Dogg, and others, across its WhatsApp, Messenger, and Instagram apps.

    “Meta’s AI personas are here. Buckle up y’all. Meet K̶e̶n̶d̶a̶l̶l̶ ̶J̶e̶n̶n̶e̶r̶ … I mean Billie,” posted Jules Terpak (@julesterpak) on October 10, 2023 (pic.twitter.com/EK1BczSs4B).

    October 17, 2023
  • Educause’s 2024 Top 10 Report Encourages Developing an Institutional Approach to AI

    IBL News | Chicago

    “Educational institutions must expand beyond growth and innovation to address risk and to prepare for what may be ahead,” said Susan Grajek, Vice President of Partnerships, Communities, and Research at Educause, last week in Chicago during the association’s annual conference.

    In a much-awaited session, Grajek presented the “2024 Educause Top 10” IT issues list in the main auditorium of the McCormick Convention Center in Chicago, filled with thousands of educators, administrators, and IT managers who attended the three-day event.

    Educause’s annual list, a classic report, addresses how higher education IT leaders can contribute to their institution’s overall success.

    These are the 2024 Educause Top Ten Issues:

    1. Cybersecurity as a Core Competency. Adopting a formal risk management framework can help institutions to balance cost, risk, and opportunity.

    2. Driving to Better Decisions. Improving data quality and governance can help lead to more informed decision-making. Data is a strategic asset now.

    3. The Enrollment Crisis. Data can empower decision-makers to determine course offerings, identify prospective students, or spot opportunities and tap into new markets.

    4. Diving Deep into Data. Analytics can help institutions to harness actionable insights to improve learning and student success.

    5. Administrative Cost Reduction. Streamlining processes, data, and technologies can lead to cost savings.

    6. Meeting Students Where They Are. Providing students with universal access to institutional services can lead to better outcomes.

    7. Hiring Resilience. Strategies for recruiting and retaining IT talent under adverse circumstances can help human resources leaders.

    8. Financial Keys to the Future. Using technology and data to develop financial models and projections can help higher ed leaders make tough choices.

    9. Balancing Budgets. Taking control of IT costs and vendor management can help institutions build strong relationships with solution providers and industry partners.

    10. Adapting to the Future. Cultivating institutional agility means preparing for a range of possible future scenarios.

    11. Honorary issue: AI. Institutions need to develop an institutional approach to artificial intelligence.

    “AI has the potential to help people skill up rapidly, including those who traditionally lacked access to effective educational opportunities and resources,” Grajek said.

    “AI can potentially help reduce administrative costs if applied to administrative processes, job descriptions, project charters, meeting summaries, and onboarding and training. Academic applications may include assessment reform, developing course materials for introductory level courses, and tutoring. We will almost certainly create more and more powerful use cases in the coming months and years.”

    AI buzz dominated the three-day conference, with one out of eight talks focused on this technology, as Inside Higher Ed reported.

    • Resources and links: 2024 EDUCAUSE Top 10: Institutional Resilience


    October 16, 2023
  • How to Add Your Own Data to a Large Language Model

    IBL News | New York

    To create a corporate chatbot for customer support, generate personalized posts and marketing materials, or develop a tailored automation application, a Large Language Model (LLM) like GPT-4 has to be able to answer questions about private data.

    However, training or retraining the model is impractical due to the cost, time, and privacy concerns associated with mixing datasets, as well as the potential security risks.

    Usually, the approach taken is “content injection,” an embedding-based technique that involves providing the model with additional information, retrieved from a desired database of knowledge, alongside the user’s query.

    This data collection can include product information, internal documents, information scraped from the web, customer interactions, and industry-specific knowledge.

    At this stage, it’s essential to consider data privacy and security, ensuring that sensitive information is handled appropriately and in compliance with relevant regulations, as expert Shelly Palmer details in a post.

    The data to be embedded has to be cleaned and structured to ensure compatibility with the AI model.

    It also has to be tokenized and converted into a suitable format by setting up the correct indexes.
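
    As a minimal sketch of this preparation step, assuming the sentence-transformers library for embeddings (the model name and documents are placeholder assumptions):

    # Minimal sketch: embedding cleaned document chunks and stacking them
    # into a simple in-memory index. The model name and documents are
    # placeholder assumptions.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    documents = [
        "Our return policy allows refunds within 30 days of purchase.",
        "Support is available Monday through Friday, 9am to 5pm ET.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vectors = model.encode(documents, normalize_embeddings=True)  # (n_docs, dim)
    np.save("knowledge_index.npy", doc_vectors)  # persist the index for query time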

    After the data is preprocessed, the AI model has to be pre-trained and fine-tuned.

    The next step is to interact with the API. Query vectors will be matched against the database, pulling the content that will be injected.
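
    Continuing the sketch above, matching a query vector against the indexed documents reduces to a similarity search; with normalized vectors, a dot product gives cosine similarity:

    # Minimal sketch (continuing the index built above): embed the user's
    # query and pull the most similar chunk for injection into the prompt.
    query = "Can I get my money back after two weeks?"
    query_vector = model.encode([query], normalize_embeddings=True)[0]

    scores = doc_vectors @ query_vector              # cosine similarity via dot product
    best_match = documents[int(np.argmax(scores))]   # highest-scoring chunk
    print(best_match)                                # -> the 30-day refund policy chunk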

    The number of tokens is calculated to estimate the cost. Usually, a token corresponds to about three-quarters of an English word, or roughly four characters.
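
    A quick way to count tokens for such an estimate is OpenAI’s tiktoken library; the per-token price below is a placeholder assumption, so check the provider’s current rates:

    # Minimal sketch: counting tokens with tiktoken to estimate request cost.
    # The price per 1K input tokens is an assumed placeholder.
    import tiktoken

    encoding = tiktoken.encoding_for_model("gpt-4")
    prompt = "Read the following sections of our knowledge base and answer the question."
    n_tokens = len(encoding.encode(prompt))

    PRICE_PER_1K_INPUT_TOKENS = 0.03  # USD, placeholder; verify current pricing
    print(f"{n_tokens} tokens, ~${n_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS:.4f}")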

    To run an effective content injection schema, a prompt must be engineered. This is an example of a prompt:

    “You are an upbeat, positive employee of Our Company. Read the following sections of our knowledge base and answer the question using only the information provided here. If you do not have enough information to answer the question from the knowledge base below, please respond to the user with ‘Apologies. I am unable to provide assistance.’

    Context Injection goes here.

    Questions or input from the user go here.”
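
    A minimal sketch of assembling that prompt in code, with the retrieved context and the user’s question filled into the template (the variable names and chat-message layout are illustrative assumptions, not a specific vendor’s API):

    # Minimal sketch: filling the prompt template above with retrieved
    # context and the user's question. Names and message layout are
    # illustrative assumptions.
    SYSTEM_TEMPLATE = (
        "You are an upbeat, positive employee of Our Company. Read the following "
        "sections of our knowledge base and answer the question using only the "
        "information provided here. If you do not have enough information to "
        "answer the question from the knowledge base below, please respond to "
        "the user with 'Apologies. I am unable to provide assistance.'\n\n"
        "Knowledge base:\n{context}"
    )

    def build_messages(context: str, question: str) -> list[dict]:
        return [
            {"role": "system", "content": SYSTEM_TEMPLATE.format(context=context)},
            {"role": "user", "content": question},
        ]

    messages = build_messages(
        context="Our return policy allows refunds within 30 days of purchase.",
        question="Can I get my money back after two weeks?",
    )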

    There are three more considerations for a proper implementation. First, any personally identifiable information (PII) must be anonymized in order to protect the privacy of your customers and ensure compliance with data protection regulations like GDPR (General Data Protection Regulation); a rough sketch of this step appears below.

    Robust access control measures will help prevent unauthorized access and reduce the risk of data breaches.

    Continuous monitoring should be in place to check for any signs of bias or other unintended consequences before they escalate.
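
    As a rough illustration of the first consideration, a regex-based scrub can mask obvious identifiers before documents enter the knowledge base; a production system would use a dedicated PII-detection tool instead:

    # Rough illustration: masking obvious PII (emails, US-style phone numbers)
    # with regular expressions before text enters the knowledge base. Real
    # deployments should use a dedicated PII-detection tool, not regexes.
    import re

    def scrub_pii(text: str) -> str:
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
        return text

    print(scrub_pii("Contact jane.doe@example.com or 555-123-4567."))
    # -> "Contact [EMAIL] or [PHONE]."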

    • Blog Replit: How to train your own Large Language Models

    • Andreessen Horowitz: Navigating the High Cost of AI Compute

    October 14, 2023
