Author: IBL News

  • 2U Warns of “Substantial Doubt of Ability to Continue as a Going Concern”

    IBL News | New York

    Online learning platform company 2U / edX warned yesterday of substantial doubt about its ability to continue as a going concern.

    Referring to its liquidity and cash flow, the Lanham, Md.-based company said:

    “The company expects that if it does not amend or refinance its term loan, or raise capital to reduce its debt in the short term, and in the event the obligations under its term loan accelerate or come due within twelve months from the date of its financial statement issuance in accordance with its current terms, there is substantial doubt about its ability to continue as a going concern.”

    2U Inc., now under the leadership of a new CEO, presented its results for the fourth quarter and the full year of 2023. “We are resetting and enhancing our operations with renewed financial discipline,” said Paul Lalljie, Chief Executive Officer of 2U.

    “Looking ahead, we believe this renewed focus, along with our market-proven offerings, robust partner network, and scalable technology and services, will allow us to take advantage of increasing demand for high-quality online education and continue to deliver on our mission.”

    “Our immediate focus in 2024 is to strengthen the fundamentals of our business in order to extend our debt maturities and restore a healthy balance sheet,” added Matthew Norden, Chief Financial Officer of 2U.

    For 2023 compared to 2022, revenue decreased 2% to $946.0 million, and the net loss was $317.6 million. Costs and expenses for the year totaled $1.17 billion, a 4% decrease from $1.22 billion in 2022.

    Results for the fourth quarter of 2023, compared to the fourth quarter of 2022, showed revenue increasing 8% to $255.7 million; degree program segment revenue increased 19% to $163.5 million, while alternative credential segment revenue decreased 7% to $92.2 million.
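
    These Q4 figures are internally consistent: the two segments sum to the reported total, and the 8% growth implies fourth-quarter 2022 revenue of roughly $236.8 million. A quick arithmetic check (figures from the release):

```python
# Consistency check of 2U's reported Q4 2023 figures (in millions of USD).
degree_rev = 163.5    # degree program segment revenue
alt_cred_rev = 92.2   # alternative credential segment revenue
total_rev = 255.7     # total Q4 2023 revenue
growth = 0.08         # reported year-over-year revenue growth

# The two segments should sum to the reported total.
assert abs(degree_rev + alt_cred_rev - total_rev) < 0.05

# Implied Q4 2022 revenue from the reported 8% increase.
q4_2022 = total_rev / (1 + growth)
print(f"Implied Q4 2022 revenue: ${q4_2022:.1f}M")  # Implied Q4 2022 revenue: $236.8M
```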

    Looking forward, the company expects first-quarter 2024 revenue of $195 million to $198 million, a net loss of $55 million to $60 million, and adjusted EBITDA of $10 million to $12 million.

    For the full year of 2024, it expects revenue of $805 million to $815 million, a net loss of $85 million to $90 million, and adjusted EBITDA of $120 million to $125 million.

  • Apple Released ‘MGIE’, an Open Source AI Multimodal Model for Image Editing

    IBL News | New York

    Apple last week released MGIE (MLLM-Guided Image Editing), a new open-source AI model that edits images based on natural language instructions. It leverages multimodal large language models (MLLMs) to interpret user commands and perform pixel-level manipulations.

    Experts agreed that MGIE represents a major breakthrough, highlighting that progress in multimodal AI systems is accelerating.

    The model can handle a wide range of editing scenarios, such as simple color and brightness adjustments, photo optimization, object manipulation, and Photoshop-style modifications such as cropping, resizing, rotating, flipping, and adding filters.

    For example, given the brief command “make the sky more blue,” MGIE derives a more expressive instruction, such as “increase the saturation of the sky region by 20%,” and applies the corresponding edit.
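
    The approach is a two-stage flow: the MLLM first rewrites the terse user command into an expressive instruction, which then conditions the image editor. A toy sketch of that flow; every function here is a hypothetical stand-in for illustration, not MGIE's actual API (see the GitHub repo for the real interface):

```python
# Toy sketch of MGIE's two-stage flow. These functions are hypothetical
# stand-ins, NOT the real MGIE API.

def derive_expressive_instruction(command: str) -> str:
    # In MGIE, an MLLM expands a terse command into a concrete edit plan.
    # Here we fake that step with a lookup table, purely for illustration.
    plans = {
        "make the sky more blue":
            "increase the saturation of the sky region by 20%",
    }
    return plans.get(command, command)

def apply_edit(image: dict, instruction: str) -> dict:
    # In MGIE, a diffusion model performs the pixel-level edit.
    # Here we just record the instruction on a stand-in "image".
    return {**image, "edits": image.get("edits", []) + [instruction]}

image = {"pixels": "..."}
plan = derive_expressive_instruction("make the sky more blue")
edited = apply_edit(image, plan)
print(edited["edits"])  # ['increase the saturation of the sky region by 20%']
```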

    MGIE — which was presented in a paper accepted at the International Conference on Learning Representations (ICLR) 2024 — is the result of a collaboration between Apple and researchers from the University of California, Santa Barbara.

    MGIE is available as an open-source project on GitHub. The project also provides a demo notebook that shows how to use MGIE for various editing tasks. Users can also try out MGIE online through a web demo hosted on Hugging Face Spaces.

  • Brave’s AI Assistant Integrates the Open-Source Mixtral 8x7B as the default LLM

    IBL News | New York

    Brave announced that its AI browser assistant ‘Leo’ integrated the open-source LLM Mixtral 8x7B as the default model. The free version is rate-limited, and subscribers to Leo Premium ($15/month) get higher rate limits.

    In addition, the privacy-focused Brave made improvements to the Leo user experience, adding clearer onboarding, context controls, input and response formatting, and a general UI polish.

    Mixtral 8x7B, an open-source LLM released by the French startup Mistral AI, has gained popularity among developers since its December release. It currently outperforms GPT-3.5, Claude Instant, Llama 2, and many others, according to the LMSYS Chatbot Arena Leaderboard. Mixtral also shows improvements in reducing hallucinations and biases, according to the BBQ benchmark.

    Among other benefits, Mixtral generates code, handles larger contexts, and interacts in English, French, German, Italian, and Spanish.
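
    The “8x7B” in the model's name refers to its sparse mixture-of-experts design: a learned router sends each token to two of eight expert networks and blends their outputs. A toy, single-token sketch of top-2 routing (illustrative only, not Mistral's implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top2_route(router_logits, expert_outputs):
    # Pick the two highest-scoring experts for this token and combine
    # their outputs, weighted by renormalized router probabilities.
    probs = softmax(router_logits)
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    weights = [probs[i] for i in top2]
    norm = sum(weights)
    weights = [w / norm for w in weights]
    return sum(w * expert_outputs[i] for w, i in zip(weights, top2))

# One token, 8 experts each producing a scalar "output"; the router
# strongly prefers experts 3 and 4, so only those two run.
logits = [0, 0, 0, 10, 9, 0, 0, 0]
outputs = [float(i) for i in range(8)]
print(round(top2_route(logits, outputs), 3))  # 3.269
```

    Because only two experts run per token, inference costs far less than a dense model with the same total parameter count.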

    Brave is already using Mixtral for its newly released Code LLM feature for programming-related queries in Brave Search.

    Brave Leo also offers Claude Instant from Anthropic as well as Llama 2 13B model from Meta in the free version (with rate limits) and for Premium.

    | Feature | Free Leo | Leo Premium |
    | --- | --- | --- |
    | Models | Mixtral 8x7B (strict rate limits), Claude Instant (strict rate limits), Llama 2 13B (higher rate limits) | Mixtral 8x7B, Claude Instant, Llama 2 13B |
    | Rate limits | Various rate limits | Higher rate limits |
    | Quality of conversations | Very high, dependent on models (upgraded with release 1.62) | Very high |
    | Privacy | Inputs are always submitted anonymously through a reverse-proxy and are not retained. | Inputs are always submitted anonymously through a reverse-proxy and are not retained. |
    | Subscription | Free | $15 monthly |

    Leo helps users with tasks in the context of the page they are on by creating real-time summaries of web pages or videos. It can also answer questions about content, generate new content, translate pages, analyze them, and rewrite them. “Whether you’re looking for information, trying to solve a problem, writing code, or creating content, Leo is integrated in the browser for enhanced productivity,” said the company.

    To access Leo, Brave desktop users can simply ask a question in the address bar and click “Ask Leo”, or click the Brave Leo sidebar icon.

  • Microsoft Issued a Redesigned Copilot with Image Creation Capabilities

    IBL News | New York

    Microsoft this week issued an update to its Copilot chatbot with expanded image creation capabilities and a new GPT-4-based model, Deucalion. It also released new apps on iOS and Android.

    The launch coincided with a Super Bowl ad (see below) and marked one year since Microsoft’s entry into the consumer AI sphere with Bing Chat.

    Powered by OpenAI’s DALL-E 3, the new Copilot comes with a cleaner, sleeker UI featuring more white space, less text, and a visual carousel of cards.

    In addition, it includes Microsoft Designer, which allows users to customize the generated images right inside Copilot without leaving the chat.

    Images can be regenerated, switched between square and landscape formats, resized, or enhanced with color, blurred backgrounds, and effects like pixel art, all without leaving the chat.

    Microsoft announced that it will soon roll out a Designer GPT inside Copilot.

    Here are images of the old Bing Chat and the new Microsoft Copilot design, one after another.

    https://youtu.be/SaCVSUbYpVc?si=YJKEC9EGEVKJXens

     

  • Google Rebranded ‘Bard’ Chatbot as ‘Gemini’, and Rolled Out a Paid Subscription Model

    IBL News | New York

    Google rebranded its Bard chatbot as Gemini, the name of its foundation model family; launched Gemini Ultra 1.0 in the U.S., priced at $20 per month; and issued a new Gemini app on iOS and Android, Sundar Pichai, CEO of Google and Alphabet, announced today.

    The API access to the Ultra model will be available in the coming weeks.

    The paid monthly subscription, the same price as ChatGPT Plus, will be available through a new bundle known as the Google One AI Premium Plan, which includes two terabytes of cloud storage (typically $9.99 monthly) and access to the Google Workspace apps like Docs, Slides, Sheets, and Meet. For now, users can get a two-month subscription trial at no cost.

    With that, Google sunset the Duet AI brand, which became Gemini for Workspace, responding to the offerings of Microsoft and its partner OpenAI.

    “Gemini Ultra 1.0 is a model that sets the state of the art across a wide range of benchmarks across text, image, audio, and video,” Google’s Sissie Hsiao said in a press conference today.

    “The largest model Ultra 1.0 is the first to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine, and ethics — to test knowledge and problem-solving abilities,” Sundar Pichai stated.

    “Gemini Advanced can be a personal tutor, tailored to your learning style, or it can be a creative partner, helping you plan a content strategy or build a business plan, as explained in this post,” he added.

    Many users said Bard provided middling results, making a rebrand almost a necessity, TechCrunch commented today.

    Video explaining two new experiences — Gemini Advanced and a mobile app — to help you easily collaborate with the best of Google AI.

  • Tech and EdTech Companies Continue Firing Employees In 2024

    IBL News | New York

    With 353,000 jobs added in January, the U.S. economy is booming, yet tech and edtech companies are firing tens of thousands of workers despite stabilizing interest rates and strong hiring in other industries. Most of these employees were hired to meet the pandemic boom in consumer tech spending.

    Even workers with years of experience or deep technical expertise are having trouble getting hired again.

    In January, Google, Amazon, Microsoft, Discord, Salesforce, and eBay all made significant cuts. On Tuesday, PayPal said in a letter to workers that it would cut another 2,500 employees, or about 9 percent of its workforce.

    In 2023, tech companies laid off over 260,000 people, according to the layoff tracker Layoffs.fyi.

    Last year, the job reductions were mostly due to over-hiring during the pandemic and the high-interest-rate environment, which makes it harder to invest in new business ventures, according to a report in The Washington Post.

    Experts say that companies are under pressure from investors to improve their bottom lines and focus on increasing profits.

    “That is the way the American capitalist system works,” said Mark Zandi, Chief Economist at Moody’s Analytics. “It’s ruthless when it gets down to striving for profitability and creating wealth. It redirects resources very rapidly from one place to another.”

    It seems to be working. In 2022, the Nasdaq Composite, a stock index dominated by tech companies, lost a full third of its value. In 2023, it grew by 43 percent. It rose another 3 percent in January.

    “The tech sector may be able to produce a lot and innovate a lot without as many people going forward,” Zandi, the Moody’s economist, said. “That is a lesson of AI.”

  • Hugging Face Launched ‘Chat Assistants’ As An Open Rival to OpenAI’s GPT Store

    IBL News | New York

    Hugging Face announced last week third-party, customizable Chat Assistants, a free, open-source alternative to OpenAI’s custom GPTs, which require a $20-per-month subscription.

    This offering allows users of Hugging Chat to easily create their own customized AI chatbots with specific capabilities. They can choose which of several open source LLMs they wish to use, including Mistral’s Mixtral and Meta’s Llama 2.

    Like OpenAI with its GPT Store, Hugging Face, the New York City-based AI startup, has also created an aggregator and central repository of third-party customized Hugging Chat Assistants from which users can choose.

    The assistants aggregator page bears a visual resemblance to the GPT Store page, with custom Assistants displayed in baseball-card-style boxes with circular logos inside.

    OpenAI’s GPTs still outperform them by supporting web search, retrieval-augmented generation (RAG), and logo generation (through DALL-E 3).

  • BMW Plans to Add Alexa Voice Assistant in Their Cars

    IBL News | New York

    BMW showcased Amazon’s Alexa LLM-powered voice assistant in cars during the CES conference in Las Vegas.

    The new capabilities give users a natural way of getting to know their cars instead of digging through the car manual.

    For example, users can ask the assistant for things like recommendations on different drive modes and activate their chosen mode.

    They can also ask for instructions on how car features work—like the parking assistance system—and hear explanations in easy-to-understand terms through the BMW assistant’s customized voice.

    The demo followed Amazon’s previous announcement that BMW’s next-generation Intelligent Personal Assistant will be supported through Amazon’s Alexa Custom Assistant (ACA) technology.

    BMW and Amazon said that voice technology can strip away complexity and minimize distractions in the car.

    Amazon also reported that Character.ai conversational chatbots work with Alexa.

  • Google Sets AI as Its Main Corporate Goal for 2024

    IBL News | New York

    Google’s main goal for 2024 is to “deliver the world’s most advanced, safe, and responsible AI,” according to an internal document leaked to Alex Heath in The Verge.

    The main corporate goals for this year are:

    1. Deliver the world’s most advanced, safe, and responsible AI.

    2. Improve knowledge, learning, creativity, and productivity.

    3. Build the most helpful personal computing platforms and devices.

    4. Enable organizations and developers to innovate on Google Cloud.

    5. Provide the world’s most trusted products and platforms.

    6. Build a Google that’s extraordinary for Googlers and the world.

    7. Improve company velocity, efficiency, and productivity, and deliver durable cost savings.

    This last goal on the list points to more layoffs. Additionally, CEO Sundar Pichai has warned employees to expect more “specific resource allocation decisions” (translation: layoffs). Google laid off about 12,000 employees in January 2023 and has made further cuts in various areas since the beginning of this January. Upcoming layoffs have many employees on edge.

    Google currently lags behind OpenAI in AI technology and deployment. The Gemini models unveiled last year trail GPT-4, and OpenAI is reportedly already working on its next major upgrade.

  • Meta Released a Specialized version of Llama 2 for Code Generation

    IBL News | New York

    Meta this month released an updated version of its code model, Code Llama 70B. The improved model can write code more accurately in various programming languages, such as Python, C++, Java, and PHP, from natural language prompts or existing code snippets.

    Based on the open-source Llama 2, Code Llama is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

    The 70B version, which is available on Hugging Face, is designed for general code synthesis and understanding, while Llama 2 is a general-purpose LLM that can generate text in any domain and style, from poetry to news articles.

    Code Llama 70B has been fine-tuned for code generation; like other transformer models, it relies on self-attention, which allows it to learn the relationships and dependencies between different parts of the code.

    Code generation has been a long-standing goal of computer scientists, as it promises to make software development more efficient, accessible, and creative.

    However, unlike natural language, which is often ambiguous and flexible, code is precise and rigid. It has to follow strict rules and syntax, and it has to produce the desired output and behavior.
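
    That rigidity cuts both ways: it makes generation hard, but it also means a model's output can be checked mechanically before it ever runs. A minimal sketch of such a syntax check, using only Python's standard library:

```python
import ast

def is_valid_python(source: str) -> bool:
    """Return True if `source` parses as Python, False otherwise."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# A well-formed snippet passes; a truncated one fails.
print(is_valid_python("def add(a, b):\n    return a + b\n"))  # True
print(is_valid_python("def add(a, b:\n    return a + b\n"))   # False
```

    Pipelines that serve generated code often gate it behind checks like this (and behind test execution) precisely because, unlike prose, code has a mechanical notion of correctness.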

    Code generation models need to have a lot of data, computing power, and intelligence.

    Code Llama 70B has been trained on 500 billion tokens of code and code-related data, making it more capable and robust than its predecessors, according to Meta.

    It also has a larger context window of 100,000 tokens, which enables it to process and generate longer and more complex code.

    Code Llama 70B also includes CodeLlama-70B-Python, a variant that has been optimized for Python. This variant has been trained on an additional 100 billion tokens of Python code, making it more fluent and accurate in generating Python code. CodeLlama-70B-Python can also handle a range of tasks, such as web scraping, data analysis, machine learning (ML), and web development.

    Code Llama 70B is available for free download under the same license as Llama 2 and previous Code Llama models, which allows both researchers and commercial users to use and modify it.

    The model can be accessed and used through various platforms and frameworks, such as Hugging Face, PyTorch, TensorFlow, and Jupyter Notebook. More information and documentation can be found on GitHub and Hugging Face.

    Meta AI also provides documentation and tutorials on how to use and fine-tune the model for different purposes and languages.

    Mark Zuckerberg, the CEO of Meta, said in a statement posted to his Facebook account: “We’re open-sourcing a new and improved Code Llama, including a larger 70B parameter model. Writing and editing code has emerged as one of the most important uses of AI models today. The ability to code has also proven to be important for AI models to process information in other domains more rigorously and logically. I’m proud of the progress here, and looking forward to including these advances in Llama 3 and future models as well.”

    Code Llama 70B is expected to have a significant impact on the field of code generation and the software development industry, as it offers a powerful and accessible tool for creating and improving code. It can also lower the barrier to entry for people who want to learn coding, as it can provide guidance and feedback based on natural language instructions. Moreover, Code Llama 70B can potentially enable new applications and use cases, such as code translation, code summarization, code documentation, code analysis, and code debugging.