Welcome to "Age of the Agents"

Dear readers,

Last week, we discussed whether GenAI is a bubble. This week, we will explore how large language models (LLMs) and small language models (SLMs) can work together in a multi-agent system to increase efficiency, minimise errors, and boost confidence in generated outputs.

Firstly, let us understand what these terms mean. A language model is essentially a machine learning model that predicts the likelihood of a given sequence of words appearing in natural-language text. An LLM is a powerful language model trained on vast amounts of data, enabling it to generate coherent paragraphs or even entire articles. An SLM, on the other hand, is a smaller model, often fine-tuned for specific tasks such as summarisation or translation, or for acting as a reviewer, validator, or critic.
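To make the "predicts the likelihood of a word sequence" idea concrete, here is a minimal Python sketch. It uses the open-source Hugging Face transformers library (plus PyTorch) and GPT-2, which are our choices for illustration and are not mentioned in this article; any causal language model could be scored the same way.

# Minimal sketch: scoring how likely a sentence is under a language model.
# GPT-2 via Hugging Face transformers is used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentence = "The cat sat on the mat."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # With labels equal to the input ids, the model returns the average
    # next-token negative log-likelihood for the sequence.
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"Average per-token negative log-likelihood: {outputs.loss.item():.2f}")
print(f"Perplexity: {torch.exp(outputs.loss).item():.2f}")

A lower perplexity means the model finds the sentence more probable; feeding it a garbled word order should produce a noticeably higher score.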

Now, imagine a scenario where multiple LLMs and SLMs collaborate within a single framework, each playing their unique roles based on their strengths. The result is a more robust and efficient AI system capable of handling complex tasks while minimising potential drawbacks associated with individual models. 

Let's look at some benefits of this approach:

  1. Efficiency: By leveraging the computational power of LLMs and the task-specific expertise of SLMs, a multi-agent framework allows for faster processing times without compromising output quality. For instance, a team of SLMs could perform initial filtering or preprocessing before passing the refined input to the primary LLM, thereby reducing overall computation time (see the sketch after this list).

  2. Error reduction: One common issue with LLMs is overconfidence in generating incorrect responses due to limited contextual understanding. In contrast, SLMs excel in focused tasks but lack broader comprehension capabilities. Combining both types of models enables them to cross-verify and correct each other's mistakes, ultimately leading to fewer erroneous outputs.

  3. Confidence improvement: When faced with ambiguous inputs or open-ended questions, LLMs might produce inconsistent results owing to inherent randomness. However, by incorporating feedback from several SLMs, the final response becomes more reliable and consistent, thus improving user trust in the AI system.

  4. Versatility: With a diverse set of models working together, a multi-agent framework gains versatility in tackling different use cases. Each component specialises in certain aspects, allowing the system to adapt seamlessly according to varying requirements.
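Here is a hypothetical Python sketch of the pattern behind points 1 and 2: small models preprocess the input and act as critics, while a large model drafts the answer. Every class name, method, and heuristic below is invented for illustration only; in a real system each call would wrap an actual LLM or SLM.

# Hypothetical LLM + SLM pipeline: SLMs preprocess and review, the LLM drafts.
from dataclasses import dataclass

@dataclass
class SmallModel:
    """Stand-in for a task-specific SLM (e.g. a summariser or critic)."""
    role: str

    def preprocess(self, text: str) -> str:
        # Placeholder: a real SLM might summarise, translate, or clean the input.
        return " ".join(text.split())

    def review(self, draft: str) -> tuple[bool, str]:
        # Placeholder check: a real SLM critic would validate facts, format, or tone.
        ok = len(draft.strip()) > 0 and not draft.endswith("...")
        return ok, "looks complete" if ok else "draft appears truncated"

@dataclass
class LargeModel:
    """Stand-in for the general-purpose LLM that drafts the answer."""
    name: str

    def generate(self, prompt: str) -> str:
        # Placeholder: a real LLM call (e.g. an API request) would go here.
        return f"[{self.name} draft answer for]: {prompt}"

def answer(query: str, llm: LargeModel, slms: list[SmallModel], max_rounds: int = 2) -> str:
    """SLMs preprocess the input, the LLM drafts, and SLM critics verify the draft."""
    cleaned = query
    for slm in slms:
        cleaned = slm.preprocess(cleaned)   # point 1: cheap preprocessing

    draft = llm.generate(cleaned)
    for _ in range(max_rounds):             # point 2: cross-verification loop
        verdicts = [slm.review(draft) for slm in slms]
        if all(ok for ok, _ in verdicts):
            return draft
        feedback = "; ".join(note for ok, note in verdicts if not ok)
        draft = llm.generate(f"{cleaned}\n\nReviewer feedback: {feedback}")
    return draft

if __name__ == "__main__":
    print(answer("  What is a small language model?  ",
                 LargeModel("primary-llm"),
                 [SmallModel("preprocessor"), SmallModel("critic")]))

The design choice worth noting is that the expensive model is only called once per review round, while the cheap models handle the repetitive filtering and checking.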

Google DeepMind recently released SIMA (Scalable Instructable Multiworld Agent), a system that learned to follow text instructions in seven commercial video games and four research environments. This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds and follow natural-language instructions to carry out tasks within them, much as a human might.

Utilising cutting-edge reinforcement learning techniques, SIMA continually learns from its surroundings and makes informed choices based on designated goals, demonstrating remarkable aptitude in managing complex tasks and adjusting to dynamic environmental changes.

SIMA pairs two essential components: large language models (LLMs) and compact, specialised reinforcement learning (RL) agents, forming a harmonious and potent alliance (a generic sketch of this pattern follows the list below). Among SIMA’s distinct advantages are:

  1. Adaptive problem solving: Drawing upon LLMs' extensive world knowledge combined with nimble RL agents, SIMA efficiently handles multifaceted challenges and adapts dynamically in ever-evolving settings.

  2. Abstract reasoning and strategic planning: Equipped with sophisticated decision-making abilities, SIMA visualises desired outcomes, strategises accordingly, and executes elaborate plans to accomplish long-term goals.

  3. Versatile, customisable design: Constructed around a flexible, modular structure, SIMA grants researchers the freedom to tweak and augment its capabilities, opening possibilities across numerous sectors such as robotics, gaming, and simulations.
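For readers who want a feel for how such a pairing might be wired together, the following is a heavily simplified, generic Python sketch of a "language model plans, a lightweight agent acts" loop. It is emphatically not SIMA's actual implementation; every function name, sub-goal, and action below is a placeholder we invented for illustration.

# Generic sketch of an instruction-following agent loop (not SIMA's real code).
import random

def plan_with_language_model(instruction: str) -> list[str]:
    # Placeholder planner: a real system would ask an LLM to turn the
    # natural-language instruction into a sequence of sub-goals.
    if "wood" in instruction.lower():
        return ["find a tree", "chop the tree", "collect the wood"]
    return ["explore the environment"]

def act_with_policy(sub_goal: str, observation: str) -> str:
    # Placeholder low-level agent: a real one would map (sub-goal, observation)
    # to keyboard-and-mouse actions inside the game.
    return random.choice(["move_forward", "turn_left", "interact"])

def run_agent(instruction: str, steps_per_goal: int = 3) -> None:
    observation = "initial game frame"
    for sub_goal in plan_with_language_model(instruction):
        print(f"Sub-goal: {sub_goal}")
        for _ in range(steps_per_goal):
            action = act_with_policy(sub_goal, observation)
            print(f"  action -> {action}")
            observation = f"frame after {action}"  # stand-in for the game's response

if __name__ == "__main__":
    run_agent("Collect some wood and bring it back to camp")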

In conclusion, harnessing the collective prowess of multiple LLMs and SLMs within an agentic framework promises substantial improvements in efficiency, error reduction, and confidence levels compared to relying solely on one type of language model. As businesses continue to integrate advanced AI solutions into their operations, embracing such innovative approaches will become increasingly crucial for staying competitive and maximising ROI.

Agent-ic startup of the week

tomoro.ai - Based in London and founded by a team of seasoned entrepreneurs (Sandi Chanda, Ed Broussard, Chris Spencer, Ash Garner, Rishabh Sagar, Albert Phelps, Chloe Kelleher, Sam Netherwood), Tomoro develops customised, self-improving AI agents that can be seamlessly integrated into the workforce to realise competitive advantage. Tomoro’s alliances with OpenAI and NVIDIA (among others) enable it to lead the industry in building valuable, scalable, enterprise-ready GenAI solutions for businesses.

Kind regards,

Shen Pandi

New on Learning

Introducing the 10x Consultant Course for professionals and consultants

The 10xConsultant.AI course is designed to empower you with the knowledge, skills, and insights to harness the full potential of Generative AI, positioning you at the forefront of this transformative wave. Whether your ambition lies in consulting, AI research, data science, creative technology, or entrepreneurship, this course is your gateway to not just participating in the AI-driven future but leading it.

Empower yourself with Generative AI mastery and skyrocket your career to unprecedented heights with 10xConsultant.AI.

TheGen.AI News

Google's Game-Changer! A Potential Fee for AI-Powered Searches

Google is considering introducing charges for its AI-enhanced search capabilities, marking a significant transformation in its business model, triggered by the high costs of delivering such services. This change, initially reported by the Financial Times, suggests that Google plans to make these advanced search features available only to its premium subscribers, who are already required to subscribe for AI features in other Google applications like Gmail and its office suite.

The move to monetize AI search is due to the higher computational costs associated with it compared to traditional search mechanisms. This was highlighted by Heather Dawe, the chief data scientist at UST, who noted that this strategy aims to offset the increased expenses.

The enormous expense in computing power needed for training sophisticated generative AI models is a significant part of these costs. For example, Amazon recently spent $65 million on a single AI training session, with projections of future costs potentially reaching $1 billion. Furthermore, OpenAI and Microsoft are planning a $100 billion investment in an AI data center, and Mark Zuckerberg has expressed an intention to invest at least $9 billion in Nvidia GPUs for AI development.

According to Brent Thill, an analyst at Jefferies, training AI models accounts for only a fraction of the total costs in the sector. He pointed out that the bulk of expenses now goes towards running AI models, particularly inferencing, which is seeing rapid growth as more AI tools are deployed.

AI search competitors are also adopting subscription models to manage costs. For instance, Perplexity, an AI-driven search engine, charges users $20 per month for a premium tier that offers access to stronger AI models and unlimited usage. Conversely, some companies continue to offer AI features for free, absorbing the costs. Microsoft's Bing allows free access to its AI search features through the Edge browser, and startup Arc provides its search services for free, planning to generate revenue by charging businesses for advanced features in the future.

The Alarming Intersection of AI and Conflict: Reports from Gaza

The Israeli military’s reported use of an untested and undisclosed artificial intelligence-powered database to identify targets for its bombing campaign in Gaza has alarmed human rights and technology experts who said it could amount to “war crimes”.

The Israeli-Palestinian publication +972 Magazine and Hebrew-language media outlet Local Call reported recently that the Israeli army was isolating and identifying thousands of Palestinians as potential bombing targets using an AI-assisted targeting system called Lavender. “That database is responsible for drawing up kill lists of as many as 37,000 targets,” Al Jazeera’s Rory Challands, reporting from occupied East Jerusalem, said on Thursday.

The unnamed Israeli intelligence officials who spoke to the media outlets said Lavender had an error rate of about 10 percent. “But that didn’t stop the Israelis from using it to fast-track the identification of often low-level Hamas operatives in Gaza and bombing them,” Challands said.

It is becoming clear the Israeli army is “deploying untested AI systems … to help make decisions about the life and death of civilians”, Marc Owen Jones, an assistant professor in Middle East Studies and digital humanities at Hamid Bin Khalifa University, told Al Jazeera.

“Let’s be clear: This is an AI-assisted genocide, and going forward, there needs to be a call for a moratorium on the use of AI in the war,” he added.

The Israeli publications reported that this method led to many of the thousands of civilian deaths in Gaza.

On Thursday, Gaza’s Ministry of Health said at least 33,037 Palestinians have been killed and 75,668 wounded in Israeli attacks since October 7.

“The humans that were interacting with the AI database were often just a rubber stamp. They would scrutinise this kill list for perhaps 20 seconds before deciding whether or not to give the go-ahead for an air strike,” Challands reported.

In response to widening criticism, the Israeli military said its analysts must conduct “independent examinations” to verify that the identified targets meet the relevant definitions in accordance with international law and additional restrictions stipulated by its forces.

It refuted the notion that the technology is a “system”, but instead “simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organisations”.

But the fact that there were “five to 10 acceptable civilian deaths” for every single Palestinian fighter who was an intended target shows why there are so many civilian deaths in Gaza, according to Challands.

Bixby 2.0: Samsung's Answer to the Future of AI Assistance

Samsung is in the process of revitalizing its Bixby AI assistant by incorporating generative AI, aiming to significantly enhance its capabilities. In a discussion with CNBC, a senior executive from Samsung disclosed the company's intensive efforts to blend generative AI technology into Bixby, signaling a major upgrade since its initial introduction with the Galaxy S8 series in 2017. Bixby, which has been integrated across Samsung's range of products including smart TVs, accessories, and smart home devices, is set to undergo this transformation to keep pace with the advancements seen in popular AI chatbots like ChatGPT, Google's Gemini, and Microsoft's Copilot.

Won-joon Choi, the executive vice president of Samsung’s mobile division, emphasized the necessity to reimagine Bixby's role in the era of generative AI and large language models (LLMs). The objective is to equip Bixby with generative AI to foster more natural interactions and create a user interface that enhances the connectivity within the Samsung ecosystem.

Although a specific launch date for the AI-enhanced Bixby was not provided, the anticipation is that it could be introduced by late 2024 or early 2025, coinciding with the release of OneUI 7 based on the upcoming Android 15 OS. This update is poised to position Bixby as a more formidable competitor in the AI-powered chatbot space.

This announcement comes as Apple prepares for its WWDC 2024 event on June 10, where it is expected to unveil iOS 18, reportedly loaded with AI-driven enhancements, including a generative AI-enhanced Siri. This move by Samsung underscores the tech industry's broader shift towards integrating more sophisticated AI technologies into consumer products, reflecting an ongoing race to deliver more intelligent, responsive, and user-friendly digital assistants.

Investors Rally Behind Samsung's Entry into the AI Chip Race Against Nvidia

Samsung is quickly becoming a key player in the artificial intelligence (AI) chip market, gaining the attention of investors and analysts, which is reflected in the recent uptick in its stock prices. The company's strategic positioning and developments in the AI sector have led to a reassessment of its market potential, particularly in the NAND and high-bandwidth memory (HBM) segments, areas where Samsung is showing strong competition against industry rivals like SK Hynix.

Investors are increasingly recognizing the significance of AI technology in driving demand for memory chips, with Samsung holding a leading position in the high-bandwidth memory market. This realization is prompting a shift in investment strategies, with funds moving towards Samsung in anticipation of its growth prospects. This adjustment in investment focus is based on the potential for AI advancements to significantly impact the demand for Samsung's products.

Christine Phillpotts from Ariel Investments highlighted a reallocation of investments from SK Hynix to Samsung, citing the untapped potential in Samsung's segment of the memory chip market as a key factor. This shift comes as Samsung's stock performance begins to outshine other AI-related stocks, fueled by speculations of a potential HBM supply deal with Nvidia, which would enhance Samsung's competitive stance against SK Hynix.

Jae Lee of Timefolio Asset Management SG pointed out Samsung's promising position since March, emphasizing the importance of Samsung's role in the HBM market to establish itself as a prominent AI-focused stock. Lee also noted a significant market transition to enterprise SSDs from HDDs, particularly in top-tier US tech firms, which presents a lucrative opportunity for Samsung, given its strength in NAND technology.

The shift towards solid-state drives (SSD) over traditional hard-disk drives (HDD) is poised to play a crucial role in AI applications, with SSDs offering significantly faster speeds essential for AI training. Peter Lee from Citigroup highlighted this trend, suggesting a potential "replacement cycle" where SSDs, which depend on NAND memory, could become more prevalent in AI uses. This shift supports the notion of Samsung's increasing importance in the AI chip landscape.

Analysts have recently raised their price targets for Samsung, expecting a 17% increase in its value over the next year, which outpaces projections for SK Hynix and Nvidia. This optimism is grounded in Samsung's performance in the HBM segment and its broader semiconductor business, where the company anticipates a return to 2022 levels of operation as the market downturn eases.

Samsung's focus on advancing its semiconductor and packaging business, along with strategic endorsements and potential partnerships, places it in a strong position to lead in the global chip market, especially as the demand for AI-driven technologies continues to grow.

OnePlus Unveils AI Eraser Feature: List of Compatible Devices Revealed

OnePlus is stepping up its game in the smartphone industry by integrating AI capabilities into its devices. The company announced its new AI Eraser feature for OnePlus smartphones on Wednesday. This tool, powered by OnePlus's own Large Language Model (LLM), offers users the ability to effortlessly remove unwanted elements from their photos, akin to Google's Magic Eraser feature. 

The rollout of the AI Eraser will commence in April for its flagship models, including the OnePlus 12, 12R, 11, Open, and Nord CE 4. The feature utilizes artificial intelligence alongside a set of sophisticated algorithms to enable users to edit photos directly from the Photo Gallery by selecting and eliminating undesired objects. Whether it's an unwanted pedestrian, trash, or any imperfections, the AI Eraser can analyze and remove these elements, replacing them with a background that seamlessly integrates with the rest of the image.

OnePlus has emphasized the extensive research and development that went into making this feature reliable and accurate. The company's proprietary LLM, which powers the AI Eraser, has been trained on a vast dataset to ensure it can adeptly manage complex editing tasks. This ensures that removed objects are replaced with contextually appropriate elements, thereby maintaining or enhancing the photo's visual appeal.

Interestingly, Oppo, OnePlus's sister company, has also developed a similar feature known as the AI Eraser, which is based on its own LLM called AndesGPT. Oppo's version of the AI Eraser also focuses on removing unwanted photo elements through generative AI technology and was showcased at the ODC 2023 event. It is slated to be available on the Reno 11 series by the second quarter of 2024.

Kinder Liu, President and COO of OnePlus, expressed enthusiasm about the company's venture into AI-driven features with the AI Eraser. He highlighted it as a significant step towards embracing generative AI technology to enhance user creativity and transform photo editing. Liu also teased the introduction of more AI features in the future, indicating OnePlus's commitment to leveraging AI to empower users to produce exceptional photographs with ease.

Adecco Survey Highlights AI's Impact on Workforce Reductions

A recent survey conducted by the Adecco Group, a staffing provider, has shed light on the potential impact of artificial intelligence (AI) on the global workforce over the next five years. According to the report, which surveyed executives from 2,000 large companies worldwide, approximately 41% of senior executives anticipate a reduction in their workforce due to advancements in AI technology. This survey, one of the most extensive on the subject, underscores the mixed feelings towards generative AI, which has the capability to produce text, photos, and videos from open-ended prompts. While there's optimism about AI's potential to eliminate mundane tasks, there's also concern about its ability to render some jobs redundant.

This trend is reflected in the recent actions of tech giants like Google and Microsoft, which have initiated significant layoffs while pivoting towards AI-driven systems such as OpenAI's ChatGPT and Google's chatbot Gemini. These developments align with findings from a 2023 World Economic Forum study, which indicated that while 25% of companies foresee AI leading to job losses, half of them expect the technology to facilitate the creation of new roles.

Despite recognizing AI as a transformative force, a majority of executives surveyed by Adecco reported that their companies are lagging in AI adoption. Adecco's CEO, Denis Machuel, highlighted the dual nature of AI as both a potential job destroyer and creator. He pointed out the parallels between current fears about AI and past concerns regarding the digital revolution, which ultimately led to the creation of numerous jobs. Machuel advocated for companies to proactively prepare for AI's disruptive impact by focusing on training current employees to work alongside AI, rather than solely recruiting external AI specialists.

The survey included responses from businesses in various countries and sectors, such as defense, pharmaceuticals, healthcare, industry, and logistics. Adecco, which already utilizes AI in services like resume creation, views AI as presenting significant opportunities for enhancing its offerings to clients. The company is actively involved in training and upskilling employees for its clients, a practice that has seen a noticeable increase in demand.

This survey underscores the need for businesses to adapt to the evolving landscape shaped by AI, balancing between leveraging its potential to improve efficiency and creativity, and mitigating its disruptive effects on employment.

ChatGPT's Meteoric Rise and the Shift Towards Closed AI

ChatGPT, launched on November 30, 2022, by OpenAI, quickly became a global phenomenon, amassing 100 million active users within just two months. This platform, offering interactions with an advanced AI for the first time for many, showcased the capability to rapidly produce various types of content, from recipes to essays, demonstrating the transformative potential of such technology.

Following its immediate success, OpenAI received a significant $10 billion investment from Microsoft. However, a shift in the AI landscape is anticipated, moving from freely accessible platforms like ChatGPT to proprietary, or "Closed AI," systems. OpenAI, traditionally known for an open approach allowing widespread access and contribution, began transitioning towards a closed model to foster innovation without the risk of competitors replicating its advancements.

This change reflects a broader move within the industry towards Closed AI, characterized by restricted access to underlying technologies to maintain a competitive edge. One notable company, working in secrecy for over two decades with ties to the CIA, has been at the forefront of Closed AI, focusing initially on national security objectives. This firm has secured over $1 billion in government contracts and is involved in hundreds of AI projects, including significant contracts aimed at enhancing decision-making for the U.S. Army, monitoring health for the State Department, and developing AI systems for critical sensor data analysis.

The necessity for Closed AI in these sensitive applications underscores the importance of confidentiality in handling threats and intelligence. As this company extends its Closed AI capabilities to commercial ventures, it is poised to leverage its advanced technology across various sectors, from healthcare to cybersecurity, potentially becoming a dominant force in the AI industry, akin to the next Google. This potential makes it a compelling opportunity for investors, especially before a key date, May 5, hinting at significant developments or announcements.

Triangle Coding Bootcamp Shuts Down: AI's Role in the Decision

A Triangle-based coding bootcamp, known for successfully placing over 400 of its graduates into tech roles across 120 companies, has recently announced its closure after six years of operation. The bootcamp's decision to shut down highlights the growing impact of artificial intelligence on the job market, particularly for entry-level coding positions. Reports from GovTech and the Wall Street Journal have indicated that generative AI technologies, capable of performing novice-level coding tasks, are beginning to alter the landscape for entry-level tech employment and have contributed to a reduction in certain tech job vacancies.

Momentum Learning's co-founder, Jessica Mitsch Homes, attributed the closure to decreased demand for workers and the significant role generative AI plays in diminishing opportunities for entry-level coders. This challenge is compounded by broader tech industry trends, which have seen a significant reduction in jobs over the past year, continuing into the present. This shift marks a stark contrast to the job growth experienced during the early pandemic period, driven by substantial investments in workforce expansion by many companies.

Homes explained that while Momentum Learning had braced for potential downturns in tech hiring, the unexpected acceleration in AI's capabilities to automate entry-level tasks led to companies halting new hires for such roles. This rapid advancement in AI technology and its implications for the future of entry-level jobs in tech prompted the decision to close the bootcamp, despite efforts to anticipate and adapt to industry trends.

Raiinmaker Secures $7.5M in Seed Funding, Led by Industry Giants

Raiinmaker, a Web3 platform specializing in decentralized AI training tools accessible through iOS and Android devices, has successfully completed a $7.5 million funding round. This recent investment, co-led by Jump Capital and Cypher Capital, adds to the company's total fundraising amount, bringing it to $12.5 million. The funding round saw contributions from a variety of investors, including Gate.io Labs, MEXC Global, Krypital Group, Launchpool, and New Tribe Capital, among others.

The platform's innovative approach allows users to train AI agents directly from their mobile devices and earn rewards based on their contributions to the development of AI models. With this model, Raiinmaker aims to democratize AI training by making it accessible and rewarding for a broader audience.

Raiinmaker is gearing up for significant milestones, including the launch of its mainnet this April, followed by the release of its native token, COIIN. This development underscores the company's readiness to capitalize on the current synergy between decentralized AI, advanced blockchain infrastructure, and the positive trends in the crypto market.

J.D. Seraphine, the founder and CEO of Raiinmaker, expressed enthusiasm about the convergence of decentralized AI with blockchain and the crypto market's growth, highlighting years of preparation leading to this pivotal moment.

The fusion of artificial intelligence with blockchain technology is becoming a burgeoning field, attracting significant investment and interest. Another AI training firm, FLock, also secured a $6 million investment in a funding round co-led by Lightspeed Faction, emphasizing the sector's potential for growth and innovation.

Raiinmaker's technology holds promise for various industries, including sports, gaming, and entertainment, aligning with perspectives from leading venture capital firms like Bitkraft Ventures. Bitkraft, with a strong focus on the gaming industry, recently announced a $275 million fund and supports the view that AI can streamline game production, reduce costs, and hasten time-to-market, marking a transformative phase for digital entertainment.

Join the movement - If you wish to contribute thought leadership, please fill in the form below. One of our team members will be in touch with you shortly.

Form link
