IBL News | New York
OpenAI released fifteen GPT templates — specialized versions of ChatGPT — for faculty, students, and staff.
These interactive tutors can be deployed across ChatGPT Edu campuses and OpenAI-hosted GPTs.
They are AI assistants for building lecture slides, auto-generating student feedback, summarizing complex datasets, and instantly answering HR questions.
Each template is plug-and-play: it comes with detailed, ready-to-use instructions that can be personalized, and users can upload relevant files and toggle the desired tools.
Top 5 GPTs for Faculty
1. Class Companion – Acts as a 24/7 course assistant, turning course files into an interactive tutor that provides explanations, examples, and guided practice based solely on approved materials.
2. Quiz & Exam Creator – Designs quizzes and exam questions in multiple formats, ready to use and tailored to learning objectives.
3. Lesson Planner – Builds structured lesson plans and teaching materials in minutes, aligned to curriculum goals.
4. Research Simplified – Breaks down academic papers and highlights key insights for faster review.
5. Feedback Helper – Drafts constructive, personalized comments on student work to speed up grading.
Top 5 GPTs for Students
1. Personal Tutor – Explains complex concepts step by step and provides guided practice.
2. Smart Quiz Partner – Creates unlimited practice quizzes that adapt to skill level and topic mastery.
3. Career Coach – Crafts strong resumes, cover letters, and tailored interview prep strategies for target companies.
4. Code Helper – Reviews, debugs, and explains code with clear examples and solutions.
5. Writing Coach – Guides brainstorming, outlining, and refining essays or projects with actionable feedback.
Top 5 GPTs for Staff and Administrators
1. HR & Policy Assistant – Provides instant answers to HR questions and campus policies.
2. Tech Support Bot – Troubleshoots IT issues and checks accessibility of digital content.
3. Prompt Coach – Helps staff craft better prompts to save time on daily tasks.
4. Data Reporter – Turns raw data into concise summaries and simple charts.
5. Email Assistant – Drafts polished emails and announcements that match institutional tone.
Each template includes the following fields to complete:
- Purpose & Impact – Why the GPT exists and the value it delivers.
- Who Uses It – Intended audience.
- Build Checklist – Name, description, conversation starters, knowledge files, and tool toggles.
- Core Instructions (System Prompt) – Step-by-step guidance for the GPT’s behavior.
- Safety & Guardrails – How to maintain compliance, academic integrity, and privacy.
- Starter User Prompts – Examples to help users get started quickly.
- Metrics – Suggestions for measuring success.
- Maintenance – Tips for keeping the GPT relevant.
- Extensions – Optional upgrades and advanced uses.
An example is Class Companion GPT:
Purpose & Impact: Acts as a 24/7 course assistant, turning course files into an interactive tutor that provides explanations, examples, and guided practice based solely on approved materials. Increases engagement, supports independent learning, and reinforces concepts outside class time.
Who Uses It: Faculty (builders), students (users), TAs (moderators).
Build Checklist:
- Name: [Insert Course Code] Class Companion for [Insert Course Name].
- Description: “Interactive tutor for [Insert Course Name] that explains concepts, gives step-by-step reasoning, and asks reflective follow-up questions—using only the uploaded course materials.”
- Conversation Starters:
- “Explain today’s lecture topic in simple terms for [Insert Course Name].”
- “Summarize key points from Week [Insert Number].”
- “Create a practice problem on [Insert Topic].”
- Knowledge: Upload syllabus, slides (PDF), lecture notes, readings, lab manuals—no answer keys.
- Toggles: Browsing OFF; Code Interpreter ON if quantitative; Image Generation optional; File Uploads ON; Conversation History ON.
Core Instructions (System Prompt)
- Clarify the student’s goal before answering.
- Provide layered explanations: (a) Overview, (b) Key Steps, (c) Worked Example.
- After each answer, offer one: (a) Comprehension Check, (b) Reflection Question, or (c) Practice Problem.
- Cite source (file name + page/section). If unsure, acknowledge uncertainty and request the relevant file.
- If asked to solve active graded work, refuse and switch to hints or study pathway.
- For quantitative questions, show concise reasoning, then the final answer.
- Tone: Encouraging, concise, academically rigorous.
Safety & Guardrails
- No external sources unless browsing is explicitly enabled.
- Never provide solutions to active graded assessments.
Starter User Prompts
- “Explain how [Insert Concept] works using our Week [Insert Number] slides.”
- “Give me a mini-quiz on [Insert Topic].”
- “Walk me through the lab setup from the [Insert File Name] PDF.”
Metrics
- Weekly active students
- % of answers with proper citations
- Student satisfaction (1–5)
Maintenance
- Upload new PDFs weekly
- Bi-weekly quality spot checks
Extensions
- Create a separate browsing-enabled variant for “real-world context” connections.
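The Core Instructions above amount to a system prompt that any chat-completion API can accept. A minimal sketch in Python, assuming the official OpenAI SDK; the model name, helper function, and prompt wording are illustrative, not part of OpenAI's published templates:

```python
# Sketch: turning the Class Companion "Core Instructions" into a system
# prompt for a chat-completion call. Helper names and the model choice
# are assumptions for illustration.

CORE_INSTRUCTIONS = """\
You are the Class Companion for {course}.
- Clarify the student's goal before answering.
- Provide layered explanations: Overview, Key Steps, Worked Example.
- After each answer, offer one: Comprehension Check, Reflection Question, or Practice Problem.
- Cite the source (file name + page/section); if unsure, say so and request the file.
- Refuse to solve active graded work; offer hints or a study pathway instead.
- Use only the uploaded course materials. Tone: encouraging, concise, rigorous.
"""

def build_messages(course: str, question: str) -> list[dict]:
    """Assemble the chat payload with the template's system prompt."""
    return [
        {"role": "system", "content": CORE_INSTRUCTIONS.format(course=course)},
        {"role": "user", "content": question},
    ]

# With the OpenAI Python SDK, this payload could then be sent as, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4o",  # assumed model choice
#       messages=build_messages("BIO 101", "Explain osmosis from Week 3."),
#   )
```

Keeping the prompt in one template string makes it easy to reuse across courses by swapping the course name and knowledge files.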

Anthropic and Northeastern Show How Faculty Use AI to Automate Tasks
Mikel Amigot, IBL News | New York
Anthropic released new research, in collaboration with Northeastern University, this week on how educators utilize AI. The company analyzed 74,000 anonymized conversations from higher education professionals on Claude.ai in May and June of this year.
The findings reveal how AI adoption is expanding and driving a pedagogical shift as educators use these tools to create tangible educational resources.
A recent Gallup survey noted that AI tools save teachers an average of 5.9 hours per week.
• The report found that educators use AI for designing lessons and developing course materials, writing grant proposals, advising students, and managing administrative tasks such as admissions and financial planning.
• The most prominent use is curriculum development, followed by conducting academic research and assessing student performance as the second and third most common uses. Educators find AI useful for providing students with individualized, interactive learning experiences that go beyond what one instructor could offer.
Grading
However, AI for grading and evaluation is less frequently used, as it is perceived as the least effective, and it remains an ethically contentious issue. “Students are not paying tuition for the LLM’s time; they’re paying for my time. It’s my moral obligation to do a good job (with the assistance, perhaps, of LLMs),” said an instructor.
Even so, some educators rely on it heavily for automating assessment tasks, which emerge as the second most automation-heavy category.
This includes subtasks such as providing feedback on student assignments and grading their work using rubrics.
Claude Artifacts
Faculty are using Claude Artifacts to create interactive, engaging educational materials for students, such as chemistry simulations, data visualization dashboards, grading rubrics, podcasts, and videos.
The report shows the following creations:
> Data visualization: interactive displays to help students visualize everything from historical timelines to scientific concepts
> Assessment and evaluation tools: HTML-based quizzes with automatic feedback systems, CSV data processors for analyzing student performance, and comprehensive grading rubrics
> Subject-specific learning tools: specialized resources like chemistry stoichiometry games, genetics quizzes with automatic feedback, and computational physics models
> Interactive educational games: web-based games, including escape rooms, platform games, and simulations that teach concepts through gamification across various subjects and levels
> Academic calendars and scheduling tools: interactive calendars that can be automatically populated, downloaded as images, or exported as PDFs for displaying class periods, exam times, professional development sessions, and institutional events
> Budget planning and analysis tools: budget documents for educational institutions with specific expense categories, cost allocations, and budgetary management tools
> Academic documents: meeting minutes, emails for grade-related communications and academic integrity issues, recommendation letters for faculty awards, tenure appeals, grant applications, interview invitations, and committee appointments
Other Uses
Other interesting uses discovered in the Claude.ai data include:
- Creating mock legal scenarios for educational simulations;
- Developing vocational education and workforce training content;
- Drafting recommendation letters for academic or professional applications;
- Creating meeting agendas and related administrative documents.
Trends
The Claude.ai data point to tasks likely to be augmented over time, such as creating educational and practice materials, writing grant proposals to secure external funding, academic advising and student organization mentorship, and supervising student academic work.
In addition, educators will likely delegate tasks to AI, including managing institutional finances and fundraising, maintaining student records, evaluating academic performance, advising on doctoral-level academic research, and managing academic admissions and enrollment.
Many AI interactions require significant context and thus collaboration between the AI and the professor.
Many educators recognize that AI is pressuring them to change what and how they teach, and how they conduct assessments.
In coding, for example, according to one professor, “AI-based coding has completely revolutionized the analytics teaching/learning experience. Instead of debugging commas and semicolons, we can spend our time talking about the concepts around the application of analytics in business.”
In one particular Northeastern professor’s case, they shared that they “will never again assign a traditional research paper” after struggling with too many students submitting AI-written assignments. Instead, they shared: “I will redesign the assignment so it can’t be done with AI next time.”
Campus Technology: Top 3 Faculty Uses of Gen AI
- Developing curricula (57%). Common requests included designing educational games, building interactive tools, and writing multiple-choice assessment questions.
- Conducting academic research (13%). Common requests included supporting bibliometric analysis and academic database operations, implementing and interpreting statistical models, and revising academic papers in response to reviewer feedback.
- Assessing student performance (7%). Common requests included providing detailed feedback on student assignments, evaluating academic work against assessment criteria, and summarizing student evaluation reports.

Instructure Launched ‘Canvas Career’, a Platform for Non-Credit, Continuing Education and Workforce Development Programs
IBL News | New York
Instructure announced last month the beta launch, for select customers, of Canvas Career, a workforce-aligned, employee-centric, skills-first LMS. General availability is expected in January 2026.
This platform is oriented toward upskilling and reskilling adult learners, helping them build in-demand skills, advance in their careers, and stay competitive in a rapidly changing job market.
A recent survey conducted by The Harris Poll and commissioned by Instructure found that 73% of U.S. workers feel unprepared to adapt to changes or disruptions in their careers over the next five years.
Additionally, about 50% expressed uncertainty about which skills, certifications, or credentials employers value.
Canvas Career is explicitly built for non-credit, continuing education, career switchers, and training for internal workforces and external customers, including short courses and skills-based learning programs.
With built-in AI tools, credentialing, video content, and enterprise integrations, Canvas Career focuses on what to teach and how to deliver it effectively.
The platform's predecessor was Bridge, which Instructure eventually sold.

D2L Enhances Its AI Toolset on Tutor, Support, Insights, and Feedback
IBL News | New York
D2L announced last month new enhancements to its AI Lumi solution, designed to provide learners with personalized support.
Many of those Lumi tools, offered through a partnership with LearnWise, will be available soon.
- Lumi Tutor: This chat, integrated into course content, helps learners with due dates, study plans, quizzes, instant practice, flashcards, and roleplay.
- Study Support: It provides learners with customized feedback and study recommendations based on their quiz performance.
- Lumi Insights: Educators see students’ performance on quizzes, alongside adaptive recommendations. It helps identify where students struggle by highlighting what is and isn’t working, such as problematic quiz questions.
- Lumi Feedback: Instructors automate grading by generating text and rubric feedback based on their own notes.
These Lumi modules are available separately as add-ons, with additional costs often around a third of the base price of the LMS.

John Baker, Founder, President, and CEO of Canadian LMS vendor D2L, said, “By putting humans in the driver’s seat, we’re designing and harnessing AI-native capabilities in our learning platform.”
D2L also introduced enhancements to D2L Link, with automated workflows and improved data accuracy, to help institutions create a more connected learning ecosystem, unlocking a more holistic view of learner progress.
As part of the core product, D2L unveiled Createspace, described as the future of authoring and sharing. The first components are available now. Instructors can now create, version, reuse, template, and share content in a separate tool, rather than creating content directly within a course.

Finally, D2L announced a much stronger push into the corporate market, where it counts 480 clients today.
• Glenda Morgan: D2L Fusion Conference Notes 2025
• D2L Roadmap

Blackboard LMS Adds a New Set of AI Capabilities Within its ‘Anthology Virtual Assistant (AVA)’
IBL News | New York
Anthology, maker of Blackboard LMS, announced last month a new set of AI capabilities within its Anthology Virtual Assistant (AVA), complementing the existing AI Design Assistant to accelerate content creation.
- AVA Automations: Instructors can set performance or time-based rules to automatically send personalized messages and nudges to keep students engaged and on track, such as celebrating a high grade or reminding them to log in. These messages are instructor-written, fully customizable, and logged for complete transparency.
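Performance- or time-based nudge rules of this kind can be pictured as simple predicates over student activity paired with instructor-written message templates. A hypothetical sketch in Python; the field names, thresholds, and messages are invented for illustration and are not Anthology's actual rule format:

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    days_since_login: int
    last_grade: float  # percentage on the most recent assignment

# Hypothetical instructor-written rules: (trigger condition, message template).
# These illustrate "performance or time-based" triggers only.
RULES = [
    (lambda s: s.days_since_login >= 7,
     "Hi {name}, it's been a week since your last login — your course misses you!"),
    (lambda s: s.last_grade >= 90.0,
     "Great work on that last assignment, {name}!"),
]

def nudges_for(student: Student) -> list[str]:
    """Return every instructor-written message whose rule fires for a student."""
    return [msg.format(name=student.name)
            for cond, msg in RULES if cond(student)]
```

Because the messages are instructor-authored templates rather than generated text, each send can be logged verbatim, matching the transparency requirement described above.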
- AVA Responses: Instant, AI-generated answers to common student questions based on course content and the syllabus, such as questions about deadlines or grading criteria. Instructors can review and confirm the answers as needed.
- AVA Feedback Assistant: Instructors can deliver high-quality, student-friendly feedback in less time.
- Summarize Feedback: It auto-generates a clear summary based on rubric selections and grading criteria.
- Rewrite Feedback: It turns informal notes or fragments into polished, constructive messages.
These two features enable instructors to save time on grading tasks while still providing clear, personalized feedback to students.
Other new features in Blackboard include the AI Badge Creator and Outcomes, which enable the measurement, management, and showcasing of student learning.
> AI Product Video Demos
> Phil Hill: Anthology Together Conference Notes 2025

Canvas LMS Adds an Agentic Solution, ‘IgniteAI’, to Its Platform
IBL News | New York
Instructure, the maker of Canvas LMS, announced last month the launch of its native AI solution called IgniteAI.
Powered by AWS Bedrock, IgniteAI is embedded within Canvas and Mastery to conduct tasks such as creating quizzes, generating rubrics, summarizing discussions, and aligning content to outcomes.
It also leverages the Model Context Protocol (MCP) standard and extends the LTI framework, so that Canvas LMS’ ecosystem of 1,100 edtech partners and LLM providers such as Anthropic and OpenAI can integrate their agentic AI solutions.
IgniteAI emphasizes data protection compliance with COPPA, FERPA, and GDPR. In terms of accessibility, Canvas LMS and other products are achieving WCAG 2.2 AA compliance as part of a recent Voluntary Product Accessibility Test (VPAT).
In addition, Instructure announced several updates to its product suite Canvas, featuring redesigned dashboards and modules, an improved mobile app, specialized STEM items, enhanced proctoring capabilities in New Quizzes, and new student portfolios that showcase diverse learning progress.

OpenAI Embeds Its Tool Into Canvas LMS, Allowing Instructors to Create Assignments With AI
IBL News | New York
OpenAI announced this week a partnership with Instructure’s Canvas LMS, under Instructure’s IgniteAI program, to allow teachers to create AI-powered assignments and other instructional activities.
Meanwhile, students can engage with the AI assistant, and as they interact, learning evidence is captured and returned to the Gradebook.
Steve Daly, CEO of Instructure, said, “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education.”
The first tool integrated into Canvas LMS is a new type of assignment called the LLM-Enabled Assignment, which allows teachers to define, through text prompts, how AI interacts with students, set specific learning goals and objectives, and determine what evidence of learning it should track.
Through this tool, students submit their assignments and create visible learning evidence that teachers can use, as it’s mapped to the learning objectives, rubrics, and skills.
Shiren Vijiasingam, Chief Product Officer at Instructure, said that “teachers will gain a high-level view of overall progress, key learning indicators, and potential gaps, each supported by clear evidence.” “They can then dive into specific indicators to see exactly where and how a student demonstrated the required understanding in the conversation.”
“What’s powerful about this tool is that it enables educators to assess the student’s learning process — not just the final outcome,” said Vijiasingam. “This is only the first in a set of tools we will develop with OpenAI over the coming quarters.”
• Instructure announces the launch of IgniteAI agent at InstructureCon 25.
• r/Professors: I watched Instructure’s Canvas AI demo last week. I have thoughts (Reddit, July 31, 2025)
“I’ve seen this topic discussed a few times now in relation to Instructure’s recent press release about partnering with OpenAI on a new integration. I attended the InstructureCon conference last week, where among other things Instructure gave a tech demo of this integration to a crowd of about 2,500 people. I don’t think they’ve released video of this demo publicly yet, but it’s not like they made us sign an NDA or anything, so I figured I’d write up my notes. I’m recreating this based on hastily-written notes, so they may not be perfectly accurate recreations of what we were shown.
During the demonstrations they made it clear that these were very much still in development, were not finished products, and were likely to change before being released. It was also a carefully controlled, partially pre-programmed tech demo. They did disclose which parts were happening live and which parts were pre-recorded or simulated.
In the tech demo they showed off three major examples.
1. Course Admin Assistant. This demo had a chat interface similar to any LLM’s, but its function was specifically limited to Canvas functions. The example they showed was typing in a prompt like, “Emily Smith has an accommodation for a two-day extension on all assignments, please adjust her access accordingly,” and the AI was able to understand the request, access the “Assign To” function of every assignment in the class, and give the student, Emily, extended access.
In the demo it never took any action without explicitly asking the instructor to approve the action. So it gave a summary of what it proposed to do, something like “I see twenty-five published assignments in this class that have end dates. Would you like me to give Emily separate “Assign to” Until Dates with two extra days of access in each of these assignments?” It’s not clear what other functions the AI would have access to in a Canvas course, but I liked the workflow, and I liked that it kept the instructor in the loop at every stage of the process.
The old “AI Sandwich” principle: every interaction with an AI tool should begin with a human and end with a human. I also liked that it was not engaging with student intellectual property at any point in this process; it was targeted solely at course administration settings.
My analysis: I think this feature could be genuinely cool and useful, and a great use case for AI agents in Canvas. Streamline the administrative busywork so that the instructor can spend more time on instruction and feedback. Interesting. Promising. Want to see more.
2. AI Assignment Assistant. Another function was a little more iffy, and again a tightly controlled demo that didn’t provide many details. The demo tech guy created a new blank Assignment in Canvas, and opened an AI assistant interface within that assignment. He prompted it with something like, “here is a PDF document of my lesson. turn it into an assignment that focuses on the Analysis level of Bloom’s Taxonomy,” and then he uploaded his document.
We were not shown what the contents of the document looked like, so this is very vague, but it generated what looked like a competent-enough analysis paper assignment. One thing that I did like about this is that whenever the AI assistant generates any student-facing content, it surrounds it with a purple box that denotes AI-generated content, and that purple box doesn’t go away unless and until the instructor actually interacts with that content and modifies or approves it. So AI Sandwich again, you can’t just give it a prompt and walk away.
The demo also showed the user asking for a grading rubric for the assignment, which the AI also populated directly into the Rubric tool, and again every level, criteria, etc. was highlighted in purple until the user interacted with that item.
My analysis: This MIGHT be useful in some circumstances, with the right guardrails. Plenty of instructors are already doing things like this anyway, in LLMs that have little to no privacy or intellectual property protections, so this could be better, or at least less harmful. But there’s a very big, very scary devil in the details here, and we don’t have any details yet. My unanswered questions about this part surround data and IP. What was the AI trained on in order to be able to analyze and take action on a lesson document? What did it do with that document as it created an assignment? Did that document then become part of its training data, or not? All unknown at this point.
3. AI Conversation Assignment. They showed the user creating an “AI Conversation” assignment, in which the instructor set up a prompt, something like “You are to take on the role of the famous 20th century economist John Keynes, and have a conversation with the student about Supply and Demand.” Presumably you could give it a LOT of specific guidance on how the AI is to guide and respond to the conversation, but they didn’t show much detail.
Then they showed a sequence of a student interacting with the AI Keynes inside of an LLM chat interface within a Canvas assignment. It showed the student trying to just game the AI and ask for the answer to the fundamental question, and the AI told it that the goal was learning, not getting the answer, or something like that. Of course, there’s nothing here that would stop a student from just copying and pasting the Canvas AI conversation into a different AI tool, and pasting the response back into Canvas. Then it’s just AI talking to AI, and nothing worthwhile is being accomplished.
Then the part that I disliked the most was that it showed the instructor SpeedGrader view of this Conversation assignment, which showed a weird speedometer interface showing “how engaged” the student was in the conversation. It did allow the instructor to view the entire conversation transcript, but that was hidden underneath another button. Grossest of all, it gave the instructor the option of asking for the AI’s suggested grade and written feedback for the assignment. Again, AI output was purple and wanted instructor refinement, but… gross.
My analysis: This example, I think, was pure fluff and hype. The worst impulses of AI boosterism. It wasn’t doing anything that you can’t already do in copilot or ChatGPT with a sufficient starting prompt. It paid lip service to academic integrity but didn’t show any actual integrity guardrails. The amount of AI agency being used was gross. The faith it put in the AI’s ability to actually generate accurate information without oversight is negligent. I think there’s a good chance that this particular function is either going to never see the light of day, or is going to be VERY different after it goes through some refinement and feedback processes.”

Web Search, Built on Links, Begins to Shift Toward LLM Platforms
Mikel Amigot, IBL News | New York
Web search, built on links, started to shift away from traditional browsers toward LLM platforms in 2025, according to a report by Andreessen Horowitz.
The foundation of the $80 billion+ SEO market just cracked with Apple’s announcement that AI-native search engines like Perplexity and Claude will be built into Safari, said the VC firm. This put Google’s distribution chokehold in question.
“A new paradigm is emerging, one driven not by page rank, but by language models. We’re entering Act II of search: Generative Engine Optimization (GEO),” stated the report.
Page ranks are determined by indexing sites based on keyword matching, content depth and breadth, backlinks, and user experience engagement.
However, today, it’s not about ranking high on the results page. LLMs are the new interface for how people find information. Visibility is obtained by showing up directly in the answers of LLMs like GPT-4, Gemini, and Claude.
Users’ queries are longer (averaging 23 words vs. 4), sessions are deeper (averaging 6 minutes), and responses provide personalized, multi-source synthesis, remembering and showing reasoning, rather than just relying on keywords.
Additionally, the business model and incentives have changed. Google monetizes user traffic through ads; users pay with their data and attention. In contrast, most LLMs are paywalled, subscription-driven services.
An ad market may eventually emerge on top of LLM interfaces, but the rules, incentives, and participants would likely look very different from those of traditional search.
New monitoring platforms, such as Profound, Goodie, and Daydream, enable brands to analyze how they appear in AI-generated responses.
Tools like Ahrefs’ Brand Radar track brand mentions in AI Overviews, enabling companies to understand how they’re framed and remembered by generative engines. Semrush has a dedicated AI toolkit designed to help brands track perception across generative platforms, optimize content for AI visibility, and respond quickly to emerging mentions in LLM outputs.
referral traffic from the LLMs is still low overall <5%, but growing and perhaps better targeted? marketing and ad tech is and will evolve to fit this with the large LLM platforms building their own products + the AI software companies to cover the long tail (as we’re seeing) https://t.co/itqzMBu40S
— Seema Amble (@seema_amble) May 8, 2025
ChatGPT now refers 10% of new @vercel signups, which have also accelerated https://t.co/LzatDz8n8u
— Guillermo Rauch (@rauchg) April 9, 2025

Syracuse University Introduced Its New AI Platform for Teaching and Learning
IBL News | New York
This month, during the “AI at Work” forum, Syracuse University presented its AI platform, developed in collaboration with ibl.ai, the parent company of this news service.
At the center is MentorAI, a platform run entirely inside Syracuse’s cloud tenancies.
Andrew Joncas, Leader, Architect, and Technology Evangelist at Syracuse University, explained, “Creating an AI tutor no longer requires prompt-engineering expertise. Instructors upload a syllabus, slide deck, or even an MP4 lecture; MentorAI generates an agent that can answer student questions, surface key points, or embed directly in Blackboard.”
Syracuse University owns the data and code and pays per API call rather than a per-seat license; administrators can mix and match models — from OpenAI’s GPT-4o to Google Gemini or open-source Llama — and adopt newer models as they mature.
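Mixing and matching models per call usually comes down to a thin routing layer between a friendly name and a provider configuration. A minimal sketch, assuming a hypothetical registry; the provider names are real, but the function and mapping are illustrative, not MentorAI's actual code:

```python
# Hypothetical model registry: maps a friendly name to the provider and
# API model identifier an institution might route calls through.
MODEL_REGISTRY = {
    "gpt-4o": {"provider": "openai",    "model_id": "gpt-4o"},
    "gemini": {"provider": "google",    "model_id": "gemini-1.5-pro"},
    "llama":  {"provider": "self-host", "model_id": "llama-3-70b"},
}

def route(model_name: str) -> dict:
    """Pick the provider configuration for a call; unknown names fail fast."""
    try:
        return MODEL_REGISTRY[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name!r}") from None
```

Because the registry is just data, adding a newly released model is a one-line change rather than a licensing negotiation, which is the flexibility the per-API-call approach is meant to buy.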
The same event highlighted the Blackboard AI Design Assistant, in which AI suggests quiz items, assignments, and rubrics; as Michael Morrison stressed, the instructor remains in charge.

Time Released the Ranking of the 2025 World’s Top EdTech Companies and Rising Stars
IBL News | New York
TIME Magazine and Statista released their annual ranking of the 350 top edtech companies worldwide after reviewing data from over 7,000 companies. These firms were evaluated using a formula that combined financial strength and industry impact.
According to this list, the U.S. had the most high-scoring edtech companies, with 138 making up 39.4% of the list. India came in second with 33 companies, making up 9.4% of providers. China placed third with 23 companies, which made up 6.6% of the list.
TIME Magazine and Statista also created a list of the rising stars.
In both cases, AI remains the focus of the edtech industry.
Rising Stars
| Rank | Company | Country |
|---|---|---|
| 1 | AASOKA | India |
| 2 | Copyleaks | United States |
| 3 | uLesson | Nigeria |
| 4 | UNIVO | India |
| 5 | Scaler | India |
| 6 | Workera | United States |
| 7 | Quizizz | United States |
| 8 | Promova | Cyprus |
| 9 | SATs Companion | United Kingdom |
| 10 | GoMyCode | Tunisia |
| 11 | myFirst | Singapore |
| 12 | EPICODE | Italy |
| 13 | TryHackMe | United Kingdom |
| 14 | Vivi | Australia |
| 15 | Academy Xi | Australia |
Top 350 (ranks 304–350 shown)
| Rank | Company | Country | Score |
|---|---|---|---|
| 304 | SpeakingPal | Israel | 71.1 |
| 305 | Cybrary | United States | 71.1 |
| 306 | Panorama Education | United States | 71.1 |
| 307 | Degreed | United States | 70.9 |
| 308 | Blackbird | United States | 70.9 |
| 309 | YESNYOU | France | 70.9 |
| 310 | Lingopanda | India | 70.9 |
| 311 | Aulalivre | Brazil | 70.9 |
| 312 | Cloverleaf | United States | 70.8 |
| 313 | Qkids | China | 70.8 |
| 314 | Noon | United Kingdom | 70.7 |
| 315 | Lessonbee | United States | 70.7 |
| 316 | Alchemie | United States | 70.7 |
| 317 | PaGamO | Taiwan | 70.7 |
| 318 | Stukent | United States | 70.7 |
| 319 | Gong | United States | 70.6 |
| 320 | Digital House | Uruguay | 70.6 |
| 321 | Darwinbox | India | 70.6 |
| 322 | Next Education | India | 70.6 |
| 323 | Makar | Taiwan | 70.5 |
| 324 | Loora | Israel | 70.5 |
| 325 | SimConverse | Australia | 70.4 |
| 326 | BoomWriter | United States | 70.4 |
| 327 | GoodHabitz | The Netherlands | 70.2 |
| 328 | OMS Education | China | 70.2 |
| 329 | Vocal Image | Estonia | 70.2 |
| 330 | Vocareum | United States | 70.0 |
| 331 | Vedantu | India | 69.8 |
| 332 | Seesaw | United States | 69.8 |
| 333 | Knowledge Platform | Singapore | 69.6 |
| 334 | Upswing | United States | 69.6 |
| 335 | Veduca | Brazil | 69.5 |
| 336 | Fast Campus | South Korea | 69.5 |
| 337 | Ravenna | United States | 69.4 |
| 338 | zick learn | Ireland | 69.4 |
| 339 | Mrs Wordsmith | United Kingdom | 69.3 |
| 340 | CourseStorm | United States | 69.3 |
| 341 | EdLight | United States | 69.3 |
| 342 | Educational Initiatives | India | 69.2 |
| 343 | Wheebox | India | 69.2 |
| 344 | VIPKid | United States | 69.2 |
| 345 | DeansList | United States | 69.2 |
| 346 | BibliU | United Kingdom | 69.2 |
| 347 | sofatutor | Germany | 69.1 |
| 348 | One on One | Jamaica | 69.0 |
| 349 | Beekast | France | 69.0 |
| 350 | Treehouse | United States | 68.9 |
