
Sovereign AI: A Strategic Vision for the UK's AI Competitive Advantage

A perspective on Sovereign AI. (I originally published this article on LinkedIn almost a year ago, but it has only become more relevant since.)

As AI enthusiasts, researchers, developers, and industry leaders across the globe have expressed the perils of AI, here is my view on the current framework surrounding artificial intelligence. I believe that AI is at a crucial juncture, and the decisions made today will have profound, long-lasting implications for our society, economy, and global standing. And yes, there is GenAI hype, but the applications in many areas have matured beyond it.

The foundation of the industry's concerns lies in the dichotomy between open-sourced and closed-sourced AI. Open-sourced AI, as the name suggests, is built on transparency, collaboration, and accessibility, enabling researchers and developers from around the world to contribute to and benefit from the collective knowledge base. In contrast, closed-sourced AI thrives on proprietary software, limiting access to a select few, with models that are either black boxes and/or in the hands of a few individuals (who are already big technology players with the firepower to crush any competition).

As it stands, the AI landscape is dominated by a handful of closed-sourced platforms, with a few exceptions such as Mistral and Llama (which, though it claims to be open source, has not revealed the data used to train the model). This concentration can lead to monopolistic practices, stifle innovation, and exclude the majority of our nation's talent. What could be an alternative that will have a positive impact on public services? The answer is Sovereign AI: the establishment of a Large Language Model (LLM) that is created, owned, and managed jointly by a government (and co-owned by its citizens). In the case of the UK, by investing in a democratised LLM, the country can reap numerous benefits:

1. Job Opportunities: A government-backed LLM will create an ecosystem of AI-driven public sector startups and companies, generating new job opportunities and strengthening the economy.

2. Leading in Science and Technology: A collaborative, open-sourced LLM will enable UK researchers and scientists to participate in global conversations and make breakthroughs in AI, securing our position as a leader in science and technology.

3. Democratising AI: A jointly-owned LLM ensures that the benefits of AI are equitably distributed among all citizens, fostering a fairer society and avoiding an AI-driven wealth gap.

4. Ethical Considerations: By involving the government and citizens in the development and management of an LLM, we can ensure that ethical considerations are at the forefront of AI advancements, avoiding the pitfalls of data privacy violations and algorithmic biases.

5. National Security: A state-owned LLM will serve as a platform for homegrown AI solutions that can be leveraged for cybersecurity and defence purposes, ensuring the protection and resilience of our nation in the digital age.

In conclusion, the creation of a Sovereign AI can benefit the entire nation (and its allies). We are at a critical crossroads, and it is our responsibility to seize this opportunity to secure a brighter, safer, and more prosperous future for the United Kingdom.

Kind regards,

Shen Pandi

TheGen.AI News

AI Innovations Surge as OpenAI, Google, and Mistral Launch New Models

In recent developments, OpenAI, Google, and the French AI startup Mistral each launched new AI models within a span of 12 hours, marking a significant escalation in activity as the industry anticipates further innovations over the summer. This series of launches began shortly after Nick Clegg confirmed upcoming updates to Meta's AI model, Llama, during an event in London.

Google introduced Gemini 1.5 Pro, an advanced model with a free usage limit, followed shortly by OpenAI's release of GPT-4 Turbo, a multimodal system capable of processing images. Soon after, Mistral unveiled its Mixtral 8x22B model, notable for its open-source distribution, allowing widespread access for modifications and enhancements.

The open-source approach, while promoting broader development, has faced criticism for potential risks, including misuse and difficulty in addressing system flaws. In contrast, Meta defends this strategy for its broader benefits over proprietary models held by large corporations.

Meta plans to release less powerful versions of Llama 3 leading up to its most advanced model later in the summer, competing with OpenAI's anticipated GPT-5. Meanwhile, Yann LeCun of Meta has criticized the current focus on large language models, suggesting they lack the capability to perform practical physical tasks. He advocates for the development of "objective-driven" AI, which can reason and plan, predicting rapid progress toward more advanced AI capabilities.

Apple Unveils M4 Chip Overhaul for Mac, Aiming to Revive Sales and Strengthen AI Capabilities

Apple Inc. is gearing up to revitalize its Mac lineup with a new series of in-house processors, focusing on artificial intelligence enhancements, according to a Bloomberg report. Following a 27% decline in Mac sales in the last fiscal year ending in September and stagnant holiday sales, the introduction of these new Macs comes at a crucial juncture for the company. Despite releasing Macs equipped with M3 chips five months ago, the performance improvements were minimal over the previous year's M2 chips. 

As Apple's stock saw a notable 4.3% increase to $175.04 on April 11 in New York, marking its largest single-day rise in nearly a year, the company is planning an aggressive update schedule. The next-generation M4 processors, which will feature at least three variants including entry-level Donan, mid-range Brava, and high-end Hidra models, are aimed at refreshing every Mac model. 

This AI-driven overhaul is seen as an attempt to catch up with rivals like Microsoft and Google in AI technology. Apple expects to roll out these new M4-equipped Macs starting late 2024 and into early 2025, with plans to continue updates throughout the year, including new versions of the MacBook Air, Mac Studio, and Mac Pro.

Meta Launches AI Chatbot Testing on Instagram and WhatsApp in India

Meta, the parent company of Facebook, is piloting its artificial intelligence chatbot, Meta AI, with a select group of users in India, spanning Instagram, Messenger, and WhatsApp, according to Moneycontrol. On Instagram, users can engage with Meta AI through the direct messaging feature by clicking the Meta AI icon in the search bar, enabling one-on-one interactions akin to those offered by OpenAI's ChatGPT and Google's Gemini. The chatbot serves as a multipurpose assistant, capable of providing answers complete with Google search links and source citations.

The assistant can generate text and images, summarize texts, assist with writing tasks such as editing and proofreading, and even translate text or create poems and stories. It can also be integrated into personal and group chats on Instagram for advice and answers to questions, like suggesting dinner party recipes or travel recommendations, which appear directly in the chat.

Additionally, Meta has begun testing this chatbot in WhatsApp, with users sharing experiences and screenshots on X, formerly known as Twitter. Meta, recognizing India as a key market with over a billion monthly users across its apps, launched Meta AI in September 2023 as part of its strategic push into generative AI. The chatbot, powered by Meta's custom AI model built on its open-source Llama 2 LLM, is part of the company's initiative to integrate advanced AI capabilities into its widely used applications, as it competes in the AI landscape with other major tech companies and AI-focused startups.

Amazon CEO Andy Jassy Declares AI Future Will Be Powered by AWS

Less than two weeks after discontinuing a high-profile AI initiative — cashierless checkout technology Just Walk Out — Amazon's CEO Andy Jassy remains positive about the company's future in AI, particularly generative AI. In a recent shareholder letter, Jassy expressed his belief that transformative AI technologies will be developed using Amazon Web Services (AWS), which is pivotal in the digital operations of many global companies.

Jassy highlighted Amazon's focus on foundational AI models over consumer-facing applications, contrasting with tools like OpenAI's ChatGPT. He noted that companies such as Delta Air Lines, Siemens, and Pfizer are among those leveraging Amazon’s AI technologies. Despite the rapid development and investment in AI across the tech industry, integrating these technologies into products that consumers are willing to pay for remains a challenge.

Amazon has invested heavily in generative AI, including a recent $2.75 billion investment in AI start-up Anthropic, raising its total commitment to $4 billion and securing a minority stake. This partnership will see Anthropic’s AI model Claude being hosted on AWS, enhancing the offerings available to Amazon's enterprise customers.

The company also welcomed AI expert Andrew Ng to its board and is investing substantially in data center development to support AI technology growth. However, Amazon has faced challenges in delivering successful consumer AI products, such as its shopping assistant Rufus and a more advanced Alexa, which have not yet achieved significant consumer impact.

Despite these product development challenges, Amazon's stock has risen 25 percent this year, though the company is still recovering from pandemic-era overspending. Following substantial layoffs, Amazon continues to downsize, particularly in areas related to its discontinued cashierless technology.

In his letter, Jassy also mentioned ongoing efforts to reduce costs and improve efficiency in Amazon's fulfillment and logistics operations, emphasizing a thorough reevaluation of these systems to achieve further cost reductions while enhancing delivery speed.

Neysa Secures $20 Million Seed Funding To Advance Gen AI Services

Neysa, a startup specializing in AI cloud and platform-as-a-service, has recently raised $20 million in seed funding. The investment round was led by Matrix Partners India, Nexus Venture Partners, and NTTVC. This funding will support the development of Neysa’s Generative AI cloud platform and observability services aimed at both the Indian and international markets, according to a company press release.

Founded by Sharad Sanghi and Anindya Das, Neysa aims to provide cost-effective and secure solutions for clients to implement and manage their Generative AI projects in the cloud and at the edge, using a consumption-based model. The company plans to launch its services in the third quarter of 2024 and is headquartered in Mumbai.

Neysa’s approach integrates industry-specific solutions to facilitate the adoption of Generative AI in enterprises worldwide. CEO Sharad Sanghi expressed that the new funding would be used to enhance their end-to-end Generative AI Platform as a Service (PaaS) ecosystem and their AI-engineered Observability Platform, aiming to deliver significant and measurable business benefits to clients.

CTO Anindya Das emphasized their vision of making AI integration effortless and transformative in its impact on technology usage.

The surge in venture capital interest in generative AI platforms, like ChatGPT, has been notable, reflecting a broader trend of increasing investments in early-stage AI firms. Recent funding rounds in this sector include Sarvam AI’s $41 million Series A, led by Lightspeed, and Ema’s $25 million round, marking significant early-stage investments. Additionally, Vodex recently secured $2 million in seed funding from Unicorn India Ventures and Pentathlon Ventures.

Earlier this year, Bhavish Aggarwal’s AI venture, Krutrim SI Designs, raised $50 million, quickly achieving a unicorn valuation as reported by Entrackr last August.

Google's Stock Reaches Record High, Nearing $2 Trillion Valuation

On Tuesday, Alphabet, the parent company of Google, saw its stock price reach an unprecedented high, propelled by the burgeoning interest in artificial intelligence. The stock peaked at an all-time high of $159.89 during the day and closed at a record $158.14, climbing 1.3% despite general market downturns.

This surge in stock value has brought Alphabet’s market capitalization close to $1.95 trillion, nearly reaching the $2 trillion mark—a level previously touched briefly in late 2021 and a milestone only achieved by a select few such as Apple, Microsoft, Nvidia, and Saudi Aramco.

The recent 13% increase in Alphabet’s stock price since the start of the year, coupled with a 77% rise since late 2022, has been fueled by robust advertising revenues and growing investor enthusiasm over Alphabet’s AI prospects. This has pushed the company's price to future earnings ratio to the highest it's been in two years, signaling strong market expectations for further profit growth.

Wall Street analysts anticipate Alphabet surpassing the $2 trillion valuation soon, with projections suggesting it could reach $2.1 trillion based on the average target stock price of $166, as reported by FactSet.

Optimism is particularly high among analysts like Oppenheimer's Jason Helfstein, who has set a target price of $185 for Alphabet shares ($2.3 trillion implied market cap), buoyed by the potential of Google’s generative AI applications in search. Over the past decade, Alphabet shares have provided a 19% annualized return, surpassing the S&P 500's 13%, though still below the 20% returns of other trillion-dollar American companies.

Despite previous market hesitations about its AI products, sentiment towards Alphabet has improved as Google positions itself as a key player in generative AI and considers charging for AI-enhanced search capabilities. After a nearly 40% drop in 2022, Alphabet’s financial performance has rebounded with stable advertising revenue, contributing to over three-quarters of Google's total revenue and helping the company weather potential economic downturns.

Knostic Secures $3.3 Million in Funding to Enhance GenAI Access Control for Enterprises

Knostic, a company specializing in access control for LLMs, has recently secured $3.3 million in pre-seed funding from Shield Capital, Pitango First, DNX Ventures, Seedcamp, and several angel investors. As organizations increasingly utilize LLMs to develop systems similar to ChatGPT, enhanced with localized knowledge, the risk of inadvertently exposing sensitive information such as financial details or strategic plans becomes significant. Knostic addresses this challenge by ensuring employees have access to the information necessary for their roles, without unnecessary exposure, aligning with organizational policies.

Gadi Evron, co-founder and CEO of Knostic, highlighted the risks associated with LLMs, such as the accidental exposure of confidential data through routine queries. Co-founder and CTO Sounil Yu compared the role of their technology in LLMs to brakes in a race car, essential for managing the powerful capabilities responsibly.

Founded in 2023 by Evron and Yu, both of whom have backgrounds at major corporations like Citibank, PwC, and Bank of America, Knostic was born out of a realization that traditional data security measures are inadequate for the challenges AI poses. They envisioned a new approach focusing on knowledge-level controls based on business context, which led to the creation of Knostic.

The necessity for Knostic’s product stems from the expansion of the knowledge ecosystem prompted by technologies like ChatGPT, especially in enterprise search applications where internal knowledge is queried. Without proper access controls, sensitive information could be accessible to unauthorized users, such as interns, on the same level as CEOs.

Knostic is shifting the AI security market from traditional data- or file-level security to a knowledge-based approach that considers the business context. This innovation allows for the scalable implementation of AI technologies in enterprises.
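The article describes Knostic's idea at a high level but not its implementation. Purely as an illustration of knowledge-level access control, the hypothetical sketch below filters retrieved documents by role before they ever reach an LLM prompt; all roles, labels, and documents are invented, and this is not Knostic's actual design:

```python
# Hypothetical sketch of knowledge-level access control for an
# enterprise LLM pipeline. All roles, labels, and documents are
# invented for illustration; this is not Knostic's actual product.

# Map each role to the knowledge categories it may see.
ROLE_POLICY = {
    "intern": {"public"},
    "analyst": {"public", "financial"},
    "ceo": {"public", "financial", "strategy"},
}

def filter_context(documents, role):
    """Keep only documents whose label the given role is allowed to see."""
    allowed = ROLE_POLICY.get(role, set())
    return [d for d in documents if d["label"] in allowed]

docs = [
    {"text": "Office hours are 9 to 5.", "label": "public"},
    {"text": "Q3 revenue fell 4%.", "label": "financial"},
    {"text": "Planned acquisition of a rival.", "label": "strategy"},
]

# An intern's query is answered only from public knowledge, while a
# CEO's query may draw on everything.
print([d["text"] for d in filter_context(docs, "intern")])
print([d["text"] for d in filter_context(docs, "ceo")])
```

The key design point, as the article notes, is that the filter operates on business context (what a role should know) rather than on file permissions alone.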

TheOpen.AI News

Meta's AI Chief Claims LLMs Fall Short of Human-Level Intelligence

The buzz surrounding artificial general intelligence (AGI) is inescapable, with daily news and bold predictions from tech leaders. Recently, Nvidia's Jensen Huang predicted AGI's arrival within five years, while Ben Goertzel gave it three years, and Elon Musk forecasted its emergence by the end of 2025.

However, skepticism exists among experts like Yann LeCun, Meta's chief AI scientist and Turing Award winner, who challenges the concept of AGI altogether. He believes that true human intelligence is not general and is better described as "human-level AI," a goal he sees as still far off.

Speaking in London at Meta’s key engineering center, LeCun identified major hurdles for AI: reasoning, planning, persistent memory, and understanding the physical world. He argued that current AI systems, particularly large language models (LLMs) like Meta's LLaMA, OpenAI's GPT-3, and Google's Bard, lack these capabilities and are confined by their reliance on textual data, misleading us about their true intelligence due to their linguistic prowess.

LeCun critiqued LLMs for not grasping the broader reality beyond text, suggesting that they divert from the path to achieving human-like intelligence. He explained that humans acquire knowledge predominantly through sensory experiences rather than just text, pointing out that a four-year-old has been exposed to vastly more data than even the most advanced LLMs.

Proposing a shift in AI development, LeCun introduced the concept of "objective-driven AI," which relies on learning from the physical world through sensors and video data to build a model that predicts the consequences of different actions. This approach could enable AI to plan and execute tasks more effectively.
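LeCun's proposal is architectural, but its core loop (use a learned world model to predict the consequence of each candidate action, then pick the action whose predicted outcome best satisfies the objective) can be caricatured in a few lines. The toy dynamics and cost function below are invented stand-ins, not anything Meta has published:

```python
# Toy caricature of objective-driven planning: a "world model"
# predicts the next state for each candidate action, and the agent
# picks the action whose predicted state minimises a cost (distance
# to a goal). All dynamics here are invented for illustration.

GOAL = 10

def world_model(state, action):
    """Predict the consequence of an action (toy 1-D dynamics)."""
    return state + action

def cost(state):
    """How far a predicted state is from the objective."""
    return abs(GOAL - state)

def plan(state, actions):
    """Choose the action whose predicted outcome has the lowest cost."""
    return min(actions, key=lambda a: cost(world_model(state, a)))

# From state 3, the action 5 is chosen because its predicted outcome
# (8) lies closest to the goal of 10.
best = plan(3, actions=[-1, 0, 2, 5])
print(best)
```

A real objective-driven system would learn the world model from sensor and video data, as LeCun advocates; the point of the sketch is only that planning becomes a search over predicted consequences rather than next-token prediction.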

Despite the challenges, LeCun remains optimistic about the future potential of AI to surpass human intelligence, though he believes it will take much longer than some of his contemporaries suggest.

TheClosed.AI News

Baidu CEO Predicts Continued Dominance of Closed-Source Models

A recently uncovered internal speech by Baidu's Robin Li shed light on the company's decision-making process regarding the open sourcing of their ERNIE model a year ago. Li revealed that Baidu chose to keep ERNIE closed-source because they anticipated that the market would have multiple open-source models from various companies, and adding another wouldn't significantly impact the market.

Li observed that major open-source models like Llama and Mistral, along with domestic ones from BAAI, Baichuan AI, and Alibaba's Tongyi, are influential yet scattered and used on a small scale for various validations. This scattershot approach, according to Li, fails to benefit from collective improvements typically seen in traditional open-source software projects like Linux and Android, where community contributions drive progress.

Moreover, Li argued that closed-source models tend to be more capable and efficient, stating that they lead not just temporarily but will continue to do so because they are developed under controlled, focused conditions that allow for optimized performance and cost benefits. He emphasized that closed-source models, which are often derived from large-scale models through dimensionality reduction, are generally more powerful and cost-effective.

Additionally, Li critiqued the strategy of 'dual-wheel drive'—simultaneously focusing on modeling and applications—as inefficient for startups due to limited resources. He stressed the importance of focusing on one area to maximize the chances of success.

Finally, Li pointed out that the strength of AI startups should not necessarily come from developing new models, which is resource-intensive and time-consuming, but rather from leveraging specific domain knowledge and data to address unique needs, such as finding niche products on e-commerce platforms. This approach, he suggested, allows startups to create distinct value through specialized large models.

New on learning

Introducing the 10x Consultant Course for professionals and consultants

The 10xConsultant.AI Course is designed to empower you with the knowledge, skills, and insights to harness the full potential of Generative AI, positioning you at the forefront of this transformative wave. Whether your ambition lies in consulting or in professional development within your current organisation, this course is your gateway to not just participating in the AI-driven future but leading it.

Unlock the power of Generative AI and elevate your professional journey with 10xConsultant.AI. Join and lead the charge into a future shaped by advanced technology. Don’t just adapt to the AI revolution—define it!

Join the movement - If you wish to contribute thought leadership, then please fill in the form below. One of our team members will be in touch with you shortly.

Form link
