Author: IBL News

  • The IRS Will Use AI to Investigate Sophisticated Tax Evasion

    IBL News | New York

    The IRS (Internal Revenue Service) has deployed AI to investigate tax evasion and open examinations into large hedge funds, private equity groups, real estate investors, and law firms.

    The federal agency announced on Friday that it will use part of the $80 billion allocated through last year’s Inflation Reduction Act to target the wealthiest Americans who use sophisticated accounting maneuvers to avoid paying taxes.

    This agency’s new funding has generated a political fight between Republicans and Democrats.

    Republicans claim that the IRS will use the funding to harass small businesses and middle-class taxpayers, while Democrats say that the funding is primarily enabling the IRS to target wealthy Americans and corporations who may have engaged in tax evasion.

    “These are complex cases for IRS teams to unpack,” Daniel Werfel, the IRS Commissioner, said. “The IRS has simply not had enough resources or staffing to address partnerships; in a real sense, we’ve been overwhelmed in this area for years.”

    Mr. Werfel explained that artificial intelligence is helping the IRS identify patterns and trends, giving the agency greater confidence that it can find where larger partnerships are shielding income. This is leading to the kinds of major audits that the IRS might not have previously tackled.

    The agency said it would open examinations into 75 of the nation’s largest partnerships, which were identified with the help of artificial intelligence, by the end of the month. The partnerships all have more than $10 billion in assets and will receive audit notices in the coming weeks.

    More audits are likely to come, according to The New York Times. In October, the IRS will send 500 notifications, known as compliance alerts, to other large partnerships indicating that the agency has found discrepancies in their balance sheets. These partnerships could also face audits if they cannot explain the differences in their balances from the end of one year to the start of the next.

    The focus on partnerships is part of a broader push by the IRS to scrutinize wealthier taxpayers in 2024. Mr. Werfel said that the agency is dedicating dozens of revenue officers to pursue 1,600 millionaires who the IRS believes owe at least $250,000 in unpaid taxes.

    In the coming year, the IRS said it plans to increase scrutiny of digital assets as a vehicle for tax evasion and investigate how high-income taxpayers are using foreign bank accounts to avoid disclosing their financial information.

    As part of its recruiting strategy, the IRS has been looking to hire data scientists to develop new in-house artificial intelligence tools. Mr. Werfel said that the agency is also collaborating with outside experts and contractors on the project.

  • Anthology Will Add Generative AI to Its Blackboard Learn LMS

    IBL News | New York

    Anthology plans to release generative AI features on its Blackboard Learn LMS in September. These new functionalities, which are being tested in August, will include Course-Building Tools that suggest a possible course structure, generate images, and suggest test questions and grading rubrics.

    Additionally, at its annual Anthology Together 2023 conference, Anthology introduced two new Intelligent Experiences and announced that it has adopted Microsoft’s Azure OpenAI to power its tech solutions.

    These experiences, planned for Fall 2023, will provide “alignment of data across historically siloed systems to deliver personalized and actionable insight to learners and instructors,” Anthology said.

    They will create a data flow between Anthology Occupation Insight, Anthology Milestone, and Anthology Student, and connect Blackboard Learn’s Progress Tracking data with the advising module inside Anthology’s CRM and lifecycle engagement tool.

  • Meta Released a Dataset to Evaluate Fairness in AI Vision Models

    IBL News | San Francisco

    Meta last week released a new open-source benchmark dataset named FACET (Fairness in Computer Vision Evaluation), which is designed to evaluate and improve fairness in AI vision models.

    FACET consists of 32,000 images containing 50,000 people labeled by human annotators. It accounts for classes related to occupations and activities like “basketball player” or “doctor”, as shown in the picture above.

    “Our goal is to enable researchers and practitioners to perform similar benchmarking to better understand the disparities present in their own models and monitor the impact of mitigations put in place to address fairness concerns,” Meta wrote in a blog post.
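    The kind of benchmarking Meta describes can be illustrated with a small sketch that compares a vision model’s accuracy across annotated groups and reports the gap. All of the group names and predictions below are invented for the example; a real evaluation would load FACET’s images and annotations instead.

```python
# Toy sketch of fairness benchmarking: compare a vision model's
# accuracy across annotated groups and report the disparity.
# All data here is invented for illustration.

def accuracy_by_group(records):
    """records: list of (group, predicted_label, true_label)."""
    totals, correct = {}, {}
    for group, pred, true in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == true:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical predictions for the "doctor" class across two groups.
records = [
    ("group_a", "doctor", "doctor"),
    ("group_a", "doctor", "doctor"),
    ("group_a", "nurse", "doctor"),
    ("group_b", "doctor", "doctor"),
    ("group_b", "nurse", "doctor"),
    ("group_b", "nurse", "doctor"),
]

acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
print(acc)                       # per-group accuracy
print(f"disparity: {gap:.2f}")   # the gap a mitigation would aim to shrink
```

    Monitoring this disparity number before and after a mitigation is the “impact of mitigations” measurement the blog post refers to.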

    “It’s unclear whether the people pictured in them were made aware that the pictures would be used for this purpose,” explained TechCrunch.

    In a white paper, Meta said that the annotators were “trained experts” sourced from “several geographic regions”, including North America (United States), Latin America (Colombia), Middle East (Egypt), Africa (Kenya), Southeast Asia (Philippines) and East Asia (Taiwan).

    In addition to the dataset itself, Meta has made available a web-based dataset explorer tool.

  • OpenAI Will Host ‘OpenAI DevDay’, Its First Developer Conference, on November 6

    IBL News | New York

    OpenAI will host its first developer conference, OpenAI DevDay, on November 6, 2023, in San Francisco. It will be a one-day event that is expected to bring together hundreds of developers.

    Today, over two million developers are using GPT-4, GPT-3.5, DALL·E, and Whisper for a wide range of use cases, from integrating smart assistants into existing applications to building new applications.

    Members of OpenAI’s technical staff will showcase new tools, participate in breakout sessions, and hold conversations with in-person attendees.

    Sam Altman, CEO of OpenAI, said, “We’re looking forward to showing our latest work to enable developers to build new things.”

    Prior to the event, OpenAI created a developers’ website.


  • Context.ai Raises $3.5M Seed Investment for Analytics in LLM-Powered Apps

    IBL News | New York

    San Francisco-based Context.ai, which develops product analytics for applications powered by LLMs (Large Language Models), secured $3.5 million in seed funding in a round co-led by Google Ventures and Tomasz Tunguz of Theory Ventures.

    The investment takes place at a time when global companies are racing to implement LLMs into their internal workflows and applications. McKinsey estimates that generative AI technologies can add up to $4.4 trillion annually to the global economy.

    Context.ai provides analytics to help companies understand users’ needs, behaviors, and interactions while measuring and optimizing the performance of AI-enabled products.

    The Context.ai platform analyzes conversation topics and monitors the impact of product changes as well as brand risks.

    It covers basic metrics like the volume of conversations on the application, top subjects being discussed, commonly used languages, and user satisfaction ratings, along with tasks such as tracking specific topics, including risky ones, and transcribing entire conversations to help teams see how the application is responding in different scenarios.

    “We ingest message transcripts from our customers via API, and we have SDKs and a LangChain plugin that make this process take less than 30 minutes of work,” said Henry Scott-Green, Co-Founder and CEO of Context.

    “We then run machine learning workflows over the ingested transcripts to understand the end user needs and the product performance. Specifically, this means assigning topics to the ingested conversations, automatically grouping them with similar conversations, and reporting the satisfaction of users with conversations about each topic.”
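    The workflow Scott-Green describes can be pictured with a simplified sketch: assign a topic to each transcript, group conversations by topic, and report average user satisfaction per topic. The keyword-based topic assignment and the scores below are stand-ins invented for illustration, not Context.ai’s actual API or models, which use machine learning rather than keyword matching.

```python
# Simplified sketch of the described workflow: assign a topic to each
# conversation transcript, group by topic, and report average user
# satisfaction per topic. Real systems would use ML models, not keywords.

TOPIC_KEYWORDS = {
    "billing": ["invoice", "charge", "refund"],
    "onboarding": ["signup", "account", "setup"],
}

def assign_topic(transcript):
    """Return the first topic whose keywords appear in the transcript."""
    text = transcript.lower()
    for topic, words in TOPIC_KEYWORDS.items():
        if any(w in text for w in words):
            return topic
    return "other"

def satisfaction_by_topic(conversations):
    """conversations: list of (transcript, satisfaction score in 0-1)."""
    buckets = {}
    for transcript, score in conversations:
        buckets.setdefault(assign_topic(transcript), []).append(score)
    return {t: sum(s) / len(s) for t, s in buckets.items()}

# Hypothetical ingested transcripts with user satisfaction ratings.
conversations = [
    ("I need a refund for this charge", 0.2),
    ("How do I finish account setup?", 0.9),
    ("The invoice amount looks wrong", 0.4),
]

report = satisfaction_by_topic(conversations)
print(report)  # per-topic average satisfaction
```

    A low-satisfaction topic in such a report is exactly the kind of “problem area” teams would then flag and address.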

    Ultimately, using the insights from the platform, teams can flag problem areas in their LLM products and work towards addressing them and delivering an improved offering to meet user needs.

    According to VentureBeat.com, other solutions for tracking LLM performance include:

    • Arize’s Phoenix, which visualizes complex LLM decision-making and flags when and where models fail, go wrong, give poor responses, or incorrectly generalize.

    • Datadog’s model, which provides monitoring capabilities that can analyze the behavior of a model and detect instances of hallucinations and drift based on data characteristics such as prompt and response lengths, API latencies, and token counts.

    • Product analytics companies such as Amplitude and Mixpanel.

    “The current ecosystem of analytics products is built to count clicks. But as businesses add features powered by LLMs, text now becomes a primary interaction method for their users,” explained co-founder and CTO Alex Gamble to Maginative.com.

    On data privacy, Context.ai says it deletes personally identifiable information from the data it collects. Still, the practice of delving into user conversations for analytics is controversial, as users don’t usually consent to having their data dissected.


    Context.ai blog post: Why you need Product Analytics to build great LLM products

  • Critical Factors When Orchestrating an Optimized Large Language Model (LLM)

    IBL News | New York

    When choosing and orchestrating an LLM, there are many critical technical factors, such as training data, dataset filtering, fine-tuning process, capabilities, latency, technical requirements, and price.

    Experts state that implementing an LLM API, like GPT-4 or others, is not the only option.

    With the pace of innovation in this paradigm-shifting technology moving fast, the LLM and Natural Language Processing market is projected to reach $91 billion by 2030, growing at a CAGR of 27%.

    Beyond the parameter count, recent findings showed that smaller models trained on more data can be just as effective, and can even lead to big gains in latency and a significant reduction in hardware requirements. In other words, the largest parameter count is not what matters most.

    Training data should include conversations, games, and immersive experiences related to the subject, rather than aiming for general-purpose models that know a little about everything. For example, a model whose training data is 90% medical papers performs better on medical tasks than a much larger model where medical papers make up only 10% of its dataset.

    In terms of dataset filtering, certain kinds of content have to be removed to reduce toxicity and bias. OpenAI, for example, recently confirmed that erotic content has been filtered out.

    It’s also important to create vocabularies based on how commonly words appear, removing colloquial conversation and common slang from datasets.

    Models also have to be fine-tuned to ensure the accuracy of the information and to avoid false information in the dataset.

    LLMs are not commoditized, and some models have unique capabilities. GPT-4 accepts multimodal inputs such as images and writes up to 25,000 words at a time while maintaining context. Google’s PaLM can generate text, images, code, videos, audio, etc.

    Other models can provide facial expressions and voice.

    Inference latency is higher in models with more parameters, adding extra milliseconds between query and response, which significantly impacts real-time applications.

    Google’s research found that just half a second of added latency caused traffic to drop by 20%.

    For low-latency or real-time use cases, such as financial forecasting or video games, a standalone LLM is not enough. They require the orchestration of multiple models, specialized features, or additional automation for text-to-speech, automatic speech recognition (ASR), machine vision, memory, etc.
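    An orchestration of that kind can be sketched as a chain of stages with end-to-end latency measured across them. The stage implementations below are stubs invented purely for illustration; a real deployment would call actual ASR, LLM, and TTS services, each potentially a different specialized model.

```python
import time

# Sketch of orchestrating multiple specialized models around an LLM
# for a voice-driven, latency-sensitive application. Each stage is a
# stub; in production these would be real ASR, LLM, and TTS components.

def asr(audio):
    return f"transcript of {audio}"   # speech -> text

def llm(prompt):
    return f"answer to: {prompt}"     # text -> text

def tts(text):
    return f"audio for: {text}"       # text -> speech

def pipeline(audio):
    """Run the full chain and measure end-to-end latency."""
    start = time.perf_counter()
    text = asr(audio)
    reply = llm(text)
    speech = tts(reply)
    latency_ms = (time.perf_counter() - start) * 1000
    return speech, latency_ms

speech, latency_ms = pipeline("user_question.wav")
print(speech)
print(f"end-to-end latency: {latency_ms:.1f} ms")
```

    Because each stage adds its own inference time, real-time budgets are usually set per stage, which is one reason smaller, faster models are attractive inside such pipelines.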


  • Google Released Its New Chat, Which Looks Like Slack and Microsoft Teams

    IBL News | San Francisco

    Google introduced forty new features into its Chat during the Cloud Next conference last week in San Francisco.

    Google Chat, the search giant’s answer to Microsoft Teams and Slack, borrows design elements from these two messaging apps, as well as from Discord and even ChatGPT.

    Google announced that most of the new features for Chat, formerly named ‘Hangouts’, will roll out later this year and early next.

    Chat will include Workspace’s new Duet AI capabilities to search and ask questions about content in Drive and Gmail and to summarize both documents and conversations.

    Users will also be able to use AI-powered autocorrect in Chat.

    Google is adding “huddles” to the app, offering a one-click way to start a video or audio chat rather than going through the full set of steps Google Meet requires. This is a direct rip of Slack Huddles, even with the same name.


  • OpenAI Released a Guide for Teachers Using ChatGPT In Their Classroom

    IBL News | San Francisco

    OpenAI this week released a guide for teachers to use ChatGPT in their classroom. It includes suggested prompts as well as an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.

    Ethan Mollick and Lilach Mollick, both at Wharton Interactive, provided these example prompts to get instructors started:

    A. Come up with lesson plans

    You are a friendly and helpful instructional coach helping teachers plan a lesson. 

    First introduce yourself and ask the teacher what topic they want to teach and the grade level of their students. Wait for the teacher to respond. Do not move on until the teacher responds. 

    Next ask the teacher if students have existing knowledge about the topic or if this is an entirely new topic. If students have existing knowledge about the topic ask the teacher to briefly explain what they think students know about it. Wait for the teacher to respond. Do not respond for the teacher. 

    Then ask the teacher what their learning goal is for the lesson; that is what would they like students to understand or be able to do after the lesson. Wait for a response. 

    Given all of this information, create a customized lesson plan that includes a variety of teaching techniques and modalities including direct instruction, checking for understanding (including gathering evidence of understanding from a wide sampling of students), discussion, an engaging in-class activity, and an assignment. Explain why you are specifically choosing each. 

    Ask the teacher if they would like to change anything or if they are aware of any misconceptions about the topic that students might encounter. Wait for a response. 

    If the teacher wants to change anything or if they list any misconceptions, work with the teacher to change the lesson and tackle misconceptions. 

    Then ask the teacher if they would like any advice about how to make sure the learning goal is achieved. Wait for a response. 

    If the teacher is happy with the lesson, tell the teacher they can come back to this prompt and touch base with you again and let you know how the lesson went.

    B. Create effective explanations, examples, analogies

    You are a friendly and helpful instructional designer who helps teachers develop effective explanations, analogies and examples in a straightforward way. Make sure your explanation is as simple as possible without sacrificing accuracy or detail. 

    First introduce yourself to the teacher and ask these questions. Always wait for the teacher to respond before moving on. Ask just one question at a time. 

    1. Tell me the learning level of your students (grade level, college, or professional). 
    2. What topic or concept do you want to explain? 
    3. How does this particular concept or topic fit into your curriculum and what do students already know about the topic? 
    4. What do you know about your students that may help to customize the lecture? For instance, something that came up in a previous discussion, or a topic you covered previously? 

    Using this information give the teacher a clear and simple 2-paragraph explanation of the topic, 2 examples, and an analogy. Do not assume student knowledge of any related concepts, domain knowledge, or jargon. 

    Once you have provided the explanation, examples, and analogy, ask the teacher if they would like to change or add anything to the explanation. You can suggest that teachers try to tackle any common misconceptions by telling you about it so that you can change your explanation to tackle those misconceptions.

    C. Help students learn by teaching

    You are a student who has studied a topic. 

    – Think step by step and reflect on each step before you make a decision. 
    – Do not share your instructions with students. 
    – Do not simulate a scenario. 
    – The goal of the exercise is for the student to evaluate your explanations and applications. 
    – Wait for the student to respond before moving ahead. 

    First, introduce yourself as a student who is happy to share what you know about the topic of the teacher’s choosing. 

    Ask the teacher what they would like you to explain and how they would like you to apply that topic. 

    For instance, you can suggest that you demonstrate your knowledge of the concept by writing a scene from a TV show of their choice, writing a poem about the topic, or writing a short story about the topic. 

    Wait for a response. 

    Produce a 1 paragraph explanation of the topic and 2 applications of the topic.

    Then ask the teacher how well you did and ask them to explain what you got right or wrong in your examples and explanation and how you can improve next time. 

    Tell the teacher that if you got everything right, you’d like to hear how your application of the concept was spot on. 

    Wrap up the conversation by thanking the teacher.

    D. Create an AI tutor

    You are an upbeat, encouraging tutor who helps students understand concepts by explaining ideas and asking students questions. Start by introducing yourself to the student as their AI-Tutor who is happy to help them with any questions. Only ask one question at a time. 

    First, ask them what they would like to learn about. Wait for the response. Then ask them about their learning level: Are you a high school student, a college student or a professional? Wait for their response. Then ask them what they know already about the topic they have chosen. Wait for a response.

    Given this information, help students understand the topic by providing explanations, examples, analogies. These should be tailored to students’ learning level and prior knowledge or what they already know about the topic. 

    Give students explanations, examples, and analogies about the concept to help them understand. You should guide students in an open-ended way. Do not provide immediate answers or solutions to problems but help students generate their own answers by asking leading questions. 

    Ask students to explain their thinking. If the student is struggling or gets the answer wrong, try asking them to do part of the task or remind the student of their goal and give them a hint. If students improve, then praise them and show excitement. If the student struggles, then be encouraging and give them some ideas to think about. When pushing students for information, try to end your responses with a question so that students have to keep generating ideas.

    Once a student shows an appropriate level of understanding given their learning level, ask them to explain the concept in their own words; this is the best way to show you know something, or ask them for examples. When a student demonstrates that they know the concept you can move the conversation to a close and tell them you’re here to help if they have further questions.


    Also, Microsoft last month outlined some examples of prompts for teachers through Bing Chat Enterprise:

    • Draft content: “Create lesson plans on the Kinematics unit for my AP Physics class. Include the relevant learning objectives, materials, and activities”
    • Personalize learning: “Generate a reading passage sample for my 3rd grade class about the ocean, include three versions for Lexile levels 420L to 650L, 520L to 820L, 740L to 940L”
    • Brainstorm: “List 20 unique project ideas for my secondary school European history class”
    • Summarize a PDF open in Edge: “Recap the findings of this flipped classroom research paper and list three recommendations and three challenges”
    • Improve efficiency: “Act as an elementary school schedule design expert, review the schedule to identify problems and suggest changes that provide additional planning time for educators”
  • Class.com Issues an Update of Its Virtual Classroom with Enhanced Integrations

    IBL News | New York

    Class Technologies Inc. announced an updated release of its virtual classroom platform that combines online and face-to-face learning.

    Class 2.0 includes improved stability and scalability, along with a simplified user interface designed to increase learner engagement and instructor effectiveness.

    • A new collaborative sharing feature lets participants work together in real time on documents in Google Docs and Microsoft Office 365.
    • Enhanced LMS integrations with Blackboard Learn, D2L Brightspace, OpenLMS, Moodle, and Instructure Canvas. Instructors can deploy LMS resources and content without leaving the virtual classroom environment.
    • Inclusion of key feature sets from Class Collaborate, formerly Blackboard Collaborate.

    Separately, Class.com plans to launch its AI Assistant later in the year. Powered by ChatGPT, it will allow learners to receive relevant answers based on what was taught in class.

    Class.com claims to host 10M+ active users from 1,500+ institutions worldwide.

  • Skills-Based Hiring is Becoming the New Norm Among Corporate Recruiters

    IBL News | New York

    Ongoing shortages of talent — underlined in a recent report from the U.S. Department of Labor stating there are 9.5 million job openings — are causing employers to prioritize competencies over credentials.

    Hiring candidates for specific abilities rather than a college degree is becoming the new norm. Degrees, however, continue to be in demand, especially for higher-paying jobs.

    Skills-based hiring — reflected in micro-credentials — is the new trend not only among corporate recruiters but also among state governors and the U.S. House of Representatives.

    As AI and technological changes impact the economy, the need for continuous upskilling and new recruiting strategies is generating a new job market.

    New AI tracking and scanning systems are increasingly determining who to hire, reshaping HR departments’ hiring processes.

    Higher education organizations are taking note as well, aware of the need to improve graduates’ employability. Digital micro-credentialing can now reflect richer and more granular knowledge among students.

    Many colleges use real-time labor market analytics to keep up with changes in the workplace and tune their curriculum.