Author: IBL News

  • International Baccalaureate Assessment System Allows Students to Use ChatGPT

    IBL News | New York

    International Baccalaureate (IB), which offers an alternative qualification to A-levels and Highers, said that its students may use ChatGPT in their essays, but they must make clear when they are quoting its responses.

    “Content created by this chatbot must be treated like any other source and attributed when used,” said IB.

    The IB is taken by thousands of students every year at more than 120 schools in the UK, as well as across Europe.

    ChatGPT has become a sensation since its public release in November 2022, due to its ability to produce plausible responses to text prompts, including requests to write essays.

    However, ChatGPT’s capacity to facilitate cheating has alarmed teachers and academics.

    Matt Glanville, the IB’s Head of Assessment Principles and Practice, said the chatbot should be embraced as “an extraordinary opportunity”.

    “The clear line between using ChatGPT and providing original work is exactly the same as using ideas taken from other people or the internet. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography,” he added.

    “When AI can essentially write an essay at the touch of a button, we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed context, has used biased data or if it is lacking in creativity. These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.”

  • Bloomberg Introduces a 50-Billion Parameter LLM Built For Finance

    IBL News | New York

    Bloomberg this week released a research paper introducing BloombergGPT, a new large language model (LLM) with 50 billion parameters built from scratch for finance.

    The company said that BloombergGPT, which has been specifically trained on a wide range of financial data, outperforms similarly sized models by significant margins.

    “It represents the first step in the development and application of this new technology for the financial industry,” said the company.

    “This model will assist Bloomberg in improving existing financial NLP tasks, such as sentiment analysis, named entity recognition, news classification, and question answering, among others. Furthermore, BloombergGPT will unlock new opportunities for marshalling the vast quantities of data available on the Bloomberg Terminal.”

    Bloomberg researchers pioneered a mixed approach that combines financial data with general-purpose datasets to train a model that achieves best-in-class results on financial benchmarks while maintaining competitive performance on general-purpose LLM benchmarks.

    Bloomberg’s data analysts have collected financial-language documents over the span of forty years; the team pulled from this extensive archive to create a comprehensive 363-billion-token dataset of English financial documents.

    This data was augmented with a 345-billion-token public dataset to create a large training corpus with over 700 billion tokens. Using a portion of this training corpus, the team trained a 50-billion-parameter decoder-only causal language model.
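
    To make the mixing arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The token counts come from the figures above; the proportional sampling strategy is an illustrative assumption, not Bloomberg’s published pipeline.

    ```python
    import random

    # Token counts as reported for BloombergGPT's training corpus
    FINANCIAL_TOKENS = 363e9  # English financial documents from Bloomberg's archive
    GENERAL_TOKENS = 345e9    # public, general-purpose datasets

    total = FINANCIAL_TOKENS + GENERAL_TOKENS
    print(f"Total corpus: {total / 1e9:.0f}B tokens")          # 708B, i.e. "over 700 billion"
    print(f"Financial share: {FINANCIAL_TOKENS / total:.1%}")  # ~51.3%

    def sample_source(rng, p_financial=FINANCIAL_TOKENS / total):
        """Pick which corpus the next training document comes from,
        in proportion to each corpus's token share (an illustrative
        assumption; the paper's exact sampling strategy may differ)."""
        return "financial" if rng.random() < p_financial else "general"

    rng = random.Random(0)
    print([sample_source(rng) for _ in range(10)])
    ```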


  • 2U Sues U.S. Department of Education Over “New Regulation that Overreaches Its Authority”

    IBL News | New York

    OPM (Online Program Management) provider 2U Inc., the owner of edX.org, sued the U.S. Department of Education in federal court on Tuesday over guidance the agency issued in February governing the relationships between colleges and third-party contractors that perform key services for them.

    Hundreds of U.S. colleges use OPM services to start and run online programs, often trading upfront capital from the companies for a portion of their programs’ revenue.

    In the suit, filed against the Department of Education and its Secretary, Miguel Cardona, 2U says that the agency has overreached its authority.

    Under the Education Department’s new definition, OPMs that provide colleges with recruiting and retention services, as well as educational content, like 2U, will broadly be considered third-party servicers.

    2U’s lawsuit alleges the department overstepped its power by independently rewriting the Higher Education Act’s definition of a third-party servicer. The suit was filed in U.S. District Court for the District of Columbia.

    The agency is particularly focused on entities that receive a share of tuition revenue in exchange for their services, arguing that “it can drive up the price of higher education and draw students to low-value academic programs at subpar institutions.”

    “2U cares deeply about our partnerships with leading non-profit colleges and universities across the nation,” Matthew Norden, chief legal officer at 2U, said in a statement. “We believe this recent action by the Department of Education will not only impinge on our ability to serve their students but also ultimately hurt their quality of education.”

    According to the lawsuit, 2U would face substantial and irreparable harm if it is classified as a third-party servicer in the eyes of the department, being forced to amend current contracts, undergo “burdensome and intrusive” audits, and pay nonrefundable compliance costs.

    The company would also be forced to cut off its South African subsidiary due to the guidance’s ban on foreign-owned and foreign-based subcontractors.


  • OpenAI’s CEO Envisions a Universal Income Society to Compensate for Jobs Replaced by AI

    IBL News | New York

    Sam Altman, the CEO of OpenAI, an organization that, with Microsoft’s help, has moved at record speed from a small research nonprofit to a multibillion-dollar company, revealed his contradictions in an interview with The Wall Street Journal last week.

    He is portrayed as an entrepreneur who made a fortune investing in young startups, the owner of three mansions in California, and the principal of a family office that now employs dozens of people to manage those properties along with investments in companies such as Worldcoin, Helion Energy, and Retro.

    Sam Altman said he fears what could happen if AI is rolled out into society recklessly, and he argues that it is uniquely dangerous to have profits be the main driver of developing powerful AI models.

    Meanwhile, he says that his ultimate mission is to build AGI (artificial general intelligence), while stating a goal of forging a new world order in which machines free people to pursue more creative work. In his vision, universal basic income will help compensate for jobs replaced by AI, and humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”

    In the long run, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology.

    “Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life,” writes The Wall Street Journal.

    “OpenAI’s headquarters — with 400 employees — in San Francisco’s Mission District evoke an affluent New Age utopia more than a nonprofit trying to save the world. Stone fountains are nestled amid succulents and ferns in nearly all of the sun-soaked rooms.”

    Elon Musk, one of OpenAI’s critics who co-founded the nonprofit in 2015 but parted ways in 2018 after a dispute over its control and direction, said that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.” 

    Billionaire venture capitalist Peter Thiel, a close friend of Mr. Altman’s and an early donor to the nonprofit, has long been a proponent of the idea that humans and machines will one day merge.

    Behind OpenAI is a for-profit arm, OpenAI LP, which reports to the nonprofit parent.

    According to some employees, the partnership Sam Altman struck with Microsoft CEO Satya Nadella in 2019 contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world. They saw the deal as a Faustian bargain.

    Microsoft initially invested $1 billion in OpenAI; in exchange, OpenAI trains its AI models exclusively on Microsoft’s giant computer servers, via the Azure cloud service, and the tech giant holds the sole right to license OpenAI’s technology for future products.

    Altman’s other projects include Worldcoin, a company he co-founded that seeks to give cryptocurrency to every person on earth.

    In recent years, he has put almost all of his liquid wealth into two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.

    He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts.


  • Artificial Intelligence Enters a New Phase of Corporate Dominance

    IBL News | New York

    The 2023 AI Index [read in full here] — compiled by researchers from Stanford University as well as companies including Google, Anthropic, McKinsey, LinkedIn, and Hugging Face — suggests that AI is entering an era of corporate control, with industry players dominating academia and government in deploying and safeguarding AI applications.

    Decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players, as we have seen in recent years with AI tools like ChatGPT, Bing, and the image-generating software Midjourney going mainstream.

    The report, released today, states: “Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.”

    Many experts in the AI world, mentioned by The Verge, worry that the incentives of the business world will also lead to dangerous outcomes as companies rush out products and sideline safety concerns.

    As AI tools become more widespread, the number of errors and malicious use cases is increasing. Recent incidents include fatalities involving Tesla’s self-driving software; the use of audio deepfakes in corporate scams; the creation of nonconsensual deepfake nudes; and numerous mistaken arrests caused by faulty facial recognition software.

  • The Open edX Platform Reaches 4.5K Deployments, with 70K Courses and 77M Users

    IBL News | Cambridge, Massachusetts

    The Open edX Platform has reached 4,500 deployments, hosts 70,000 courses, and has 77 million registered users worldwide, including 45 million at 2U’s edx.org.

    The organization behind Open edX, now renamed Axim Collaborative, presented the state of this international community during its annual conference at MIT’s Stata Center in Cambridge, Massachusetts last week.

    Ed Zarecor, Vice President of Engineering at Axim Collaborative, explained that there are currently 24 contributing organizations and 333 individuals providing code.

    Jenna Makowski, Senior Product Manager at Axim Collaborative, presented current and near-future priorities, including the Open edX platform roadmap.

    She highlighted the Learner Analytics (OARS) project, which aligns with open data standards and provides near real-time statistics.

    The goal of Axim Collaborative is to leverage open-source technology to democratize education and drive advancements in learning.


    During the opening talk at the 2023 Open edX Conference, Anant Agarwal, Chief Platform Officer at 2U, Founder of edX, and MIT Professor, presented his ideas on “Reimagining the ‘3 Rs’ for Higher Ed in 2023.”

    Agarwal highlighted the top priorities that should be emphasized to help adult learners thrive in today’s world.

    “We should offer programs that teach today’s most in-demand tech skills, such as coding and data science, or develop human skills for the digital age, like resilience, storytelling and negotiation. Use a mix of live instruction, rich multimedia, and asynchronous learning. And give them options like part-time/full-time and online/in-person—or both.”

    He also said that “the world of education has been completely changed by AI.”

  • Udacity Incorporates an OpenAI-Powered Chatbot

    IBL News | New York

    Udacity became the first MOOC learning platform to incorporate an AI chatbot. Powered by OpenAI’s GPT-3.5 Turbo model, it is intended to provide a real-time complement to the platform’s human mentors.

    Udacity says it seeks to enhance personalized support and guidance for learners.

    “We created an intelligent virtual tutor that can handle thousands of interactions at once,” added the company. “We’re thrilled to be at the forefront of this change in education.”

    Udacity’s chatbot is able to:

    • Summarize concepts, helping learners understand complex material and retain information more effectively.

    • Pose deeper questions, asking the bot for definitions, examples, and alternative explanations to deepen understanding of a given topic.

    • Translate specific words, phrases, exercises, and quizzes into another language. This can be a game-changer for non-native English speakers who feel limited by a language barrier.

    • Fix errors in code, asking the bot to help debug, suggest improvements, or correct mistakes in coding exercises, so learners can code more efficiently and effectively.

    The Udacity chat icon appears in the lower-right corner of the screen. Udacity warned that learners should review the output and advice provided by the chatbot.
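
    As a rough illustration of how such a tutor can be wired up, here is a minimal sketch using OpenAI’s gpt-3.5-turbo chat API via the openai Python client. The system prompt, function name, and settings are assumptions for illustration; Udacity has not published its implementation.

    ```python
    import openai  # pip install openai (0.x-style client)

    openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

    def ask_tutor(question: str) -> str:
        """Send a learner's question to a GPT-3.5 Turbo 'virtual tutor'."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                # Hypothetical system prompt; Udacity's real prompt is not public.
                {"role": "system",
                 "content": "You are a patient tutor. Summarize concepts, give "
                            "definitions and examples, translate text, and help "
                            "debug code when asked."},
                {"role": "user", "content": question},
            ],
            temperature=0.2,  # keep answers focused for tutoring
        )
        return response["choices"][0]["message"]["content"]

    print(ask_tutor("Why does `for i in range(10): print(i)` stop at 9?"))
    ```

    As with Udacity’s own warning above, output from such a bot should be reviewed before it is trusted.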

  • TCRIL Changes Its Name to Axim Collaborative and Names a CEO

    IBL News | Cambridge, Massachusetts

    The MIT and Harvard non-profit organization that stewards the Open edX platform — the Center for Reimagining Learning (or “tCRIL”) — named its first CEO: Stephanie Khurana. She assumed her role on April 3.

    In parallel, the organization, which was started by the two universities with the $800 million in proceeds from the sale of edX Inc. to 2U, changed its name to Axim Collaborative.

    Axim Collaborative’s mission is to make learning more accessible, more relevant, and more effective.

    The name Axim (a hybrid of the two ideas) was selected to underscore the centrality of access and impact.

    Khurana brings two decades of experience in social venture philanthropy and in the technology innovation space. Most recently, she served as managing partner and chief operating officer of the Draper Richards Kaplan Foundation, a global venture philanthropy that identifies and supports innovative social ventures tackling complex societal problems.

    Earlier in her career, Khurana was on the founding teams of two technology start-ups: Cambridge Technology Partners (CTP) and Surebridge, both of which went on to be sold.

    Khurana also served in numerous roles at Harvard University, working on initiatives to support academic progress and build communities of belonging with undergraduates.

    Stephanie Khurana introduced herself to Open edX community members in a town-hall-style session that took place last Friday, March 31, at the end of the annual developers conference.

    The gathering, held at MIT’s Stata Center in Cambridge, Massachusetts, last week, attracted over 250 attendees, a similar number to past editions.

    One of the stories of the event was the acquisition of the France-based company Overhang.IO, creator of the distribution tool Tutor. The Pakistani-American company Edly purchased it for an undisclosed amount.

    Régis Behmo, the founder and sole developer of Overhang.IO, assumed the role of VP of Engineering at Edly.

    “Edly understands how contributing to open source creates value both for the company and for the whole edTech community. This partnership will help us drive this movement forward to serve learners and educators worldwide,” Behmo said.

    “Régis’s experience and leadership will be invaluable as we increase our impact on educational technology. In the coming weeks and months, we’ll be making further announcements around our expanded roadmap for open source contributions to Open edX,” said Yasser Bashir, the founder and CEO of Arbisoft LLC, which operates Edly as its edTech brand.

  • Italy Bans ChatGPT While Elon Musk and 1,100 Signatories Call for a Pause on AI [Open Letter]

    IBL News | New York

    Italy’s data protection authority said on Friday it would immediately block OpenAI from processing the data of Italian users and open an investigation. The order is temporary until the company complies with the European Union’s landmark privacy law, the General Data Protection Regulation (GDPR).

    Italy’s ban on ChatGPT comes amid calls to block OpenAI’s releases over a range of privacy, cybersecurity, and disinformation risks in both Europe and the U.S.

    The Italian authority also noted that ChatGPT suffered a data breach last week, exposing users’ conversations and payment information.

    Moreover, ChatGPT has been shown to produce completely false information about named individuals, apparently making up details its training data lacks.

    Consumer advocacy groups say that OpenAI is engaged in the “mass collection and storage of personal data to train the algorithms of ChatGPT” and is “processing data inaccurately.”

    This week, Elon Musk and dozens of AI experts called for a six-month pause on training systems more powerful than GPT-4.

    Over 1,100 signatories — including Steve Wozniak, Tristan Harris of the Center for Humane Technology, engineers from Meta and Google, and Stability AI CEO Emad Mostaque — signed an open letter, posted online, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

    • “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

    • “AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

    • “The pause should be public and verifiable, and include all key actors. If it cannot be enacted quickly, governments should step in and institute a moratorium.”

    • “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

    • “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

    No one from OpenAI or Anthropic signed the letter.

    On Wednesday, OpenAI CEO Sam Altman told the WSJ that OpenAI has not started training GPT-5.

    Pause Giant AI Experiments: An Open Letter:

    AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

    Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

    Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

    AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

    AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

    In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

    Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.


  • Generative AI Will Impact the Labor Market and Have Notable Economic, Social, and Policy Implications

    IBL News | New York

    Generative AI or GPT (Generative Pre-trained Transformer) models will have notable economic, social, and policy implications.

    They will impact 80% of the U.S. workforce, with at least 10% of their work tasks affected. Around 19% of workers will see at least 50% of their tasks impacted.

    This is the main conclusion of a research paper posted online on arXiv, authored by four researchers — Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock.

    According to the research, the influence spans all wage levels, with higher-income jobs potentially facing greater exposure.

    Large language models (LLMs) — via ChatGPT or the OpenAI Playground — can process and produce various forms of sequential data, including assembly language, protein sequences, and chess games, extending beyond natural language applications alone.
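
    As a small illustration of that claim, here is a minimal, hypothetical sketch (again assuming the 0.x openai Python client and a completion-style model) that asks a model to continue a chess opening written in standard algebraic notation, treating the moves as just another token sequence.

    ```python
    import openai  # 0.x-style client, as in the sketch above

    openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

    # A Playground-style completion call: the model treats a chess opening
    # (the Ruy Lopez, in standard algebraic notation) as ordinary text.
    response = openai.Completion.create(
        model="text-davinci-003",  # assumption: any capable completion model
        prompt="1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4.",
        max_tokens=8,
        temperature=0,
    )
    print(response["choices"][0]["text"])  # typically a plausible move such as " Ba4"
    ```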