AI and The Future of Work: A Utopian Vision

This post is based on the book “Machines of Tomorrow: from AI Origins to Superintelligence & Posthumanity”, by Pedro Uria-Recio and Randy McGraw.

INTRODUCTION

The end-state humankind hurtles toward will be utopian, dystopian, or a mix of the two, and the outcome is contingent on how AI is developed, deployed, and regulated out of the gate, all of which is presently within our control. Join us on a journey into the heart of this transformative technology as we explore the utopian possibilities it offers for the future of work. In this post, inspired by the book “Machines of Tomorrow” by Pedro Uria-Recio, we delve into a world where AI enhances productivity, fosters innovation, and creates new opportunities for human growth. But stay tuned: next week, we will confront the darker side of AI with a dystopian vision that challenges our assumptions about the future.

AI will transform work as we know it today. In one dimension, the rapid deployment of AI in economic life forces us to ask, “What happens to jobs?” When we look at the trajectory of AI and cyborgization, another dimension is added wherein we must also ask, “What happens to work?” 

The complexity of the topic touches on prevailing economic models under scarcity, population transition, and the development of human capabilities to be productive and fulfilled in an increasingly AI-dominated world. Most immediately, a significant concern about AI revolves around the potential displacement of jobs, particularly in industries heavily reliant on repetitive or routine tasks susceptible to automation.

But this is a known and thus addressable issue. The World Economic Forum’s “Future of Jobs Report” estimated that 85 million jobs would be displaced by AI-powered machines by 2025, while also forecasting the creation of approximately 97 million new jobs attributable to AI over the same period. As AI progresses and integrates further into various industries, numerous job opportunities and benefits will emerge. While mundane tasks will increasingly be delegated to AI, humans remain essential for refining work done by AI, performing quality checks, carrying out the more creative aspects of jobs, and, of course, interacting with other humans. Moreover, specialized roles such as AI engineer, data scientist, and AI legal and ethics practitioner, which previously did not exist, will be in high demand as companies pursue the development and implementation of AI-driven solutions, at least in the short run.

With AI first taking over routine tasks, there will be an increased emphasis on acquiring or deepening the skills that remain uniquely human, which we characterize as critical thinking and emotional intelligence. This shift in focus aligns with the pursuit of passions, as individuals can invest time in developing skills that not only match their interests but also contribute to their personal and professional growth. Similarly, a new wave of AI engineers, technicians, cloud specialists, machinists, and even specialized maintenance professionals will be needed, with ample training ground being supplied entrepreneurially by companies like Coursera, which since 2015 has grown its thousands of courses and students at a CAGR of 12%.

AI has an impact not only on jobs but also on “work” itself. While individuals will need to adjust consciously to navigate the changing landscape, AI has the innate potential to contribute to a healthier work-life balance for everyone in the near term. With fewer hours spent on monotonous work, people will have the opportunity to invest more time in leisure, personal growth, and quality moments with family and friends. An initial movement in this direction can be seen in America’s unionized auto workers seeking 4-day workweeks, bolstered by shop-floor productivity from robots and AI, and winning concessions; the automakers can remain equally or more profitable while restructuring work, since robots do not draw a salary. The positive impact on mental well-being could be considerable, as individuals experience a reduction in the stress associated with tedious work, leading to improved overall life satisfaction.

It is certainly possible for the advantages to outweigh the dislocation in the working life of the individual in the age of AI and robotics. We can all agree that improving work-life balance in meaningful ways makes us all better off. But this will not happen without specific action on the part of societies to channel the activity and craft new, perhaps even untried responses to the challenges of migrating workforces. Societies will require a specific, documented workforce strategy that takes a long-term planning view and enables AI to integrate into economic life and corporations while preserving jobs as long as possible through role modification. A holistic approach involving new job creation, employment law, reskilling programs, and collaborative efforts between government and corporations, as well as between humans and AI, would ensure a dynamic workforce, maximize productivity, and foster innovation.

While concerns about job displacement loom large, solutions have already been proposed, one notable idea being Universal Basic Income (UBI). UBI entails providing a guaranteed minimum income to every person in a society, irrespective of employment status. Advocates argue that it would function as a safety net for workers displaced by automation and AI, enabling them to meet basic needs while seeking new employment or pursuing education and training, and as permanent support for those who cannot be redeployed economically. Funding would come from the wealth created by AI-driven productivity, redistributed according to a formula that reflects society’s values, guaranteeing an elevated quality of life for everyone and ameliorating the social displacement that AI would cause in the absence of any policy.

There are several notable advantages to this approach. Beyond providing a safety net, UBI would sustain the broad-based consumption of goods and services an economy needs to expand in the wake of job displacement, and the economic security it entails could encourage risk-taking and entrepreneurial endeavors. Individuals could pursue projects and ventures they are passionate about without the fear of financial instability. This diversification of economic activities would not only stimulate innovation but also contribute to a more vibrant and resilient economy. The pursuit of education would be another outcome of UBI; the resulting accumulation of human capital could generate positive externalities for society.

Critics of UBI liken it conceptually to current welfare programs, citing the creation of disincentives to work. Yet unlike in economic revolutions of the past, e.g., the Industrial Revolution and the digitalization of the economy beginning around 2000, the deployment of AI and robotics will render large swathes of the population irrevocably unviable economically, necessitating some form of subsidization. Work/no-work decisions are made primarily based on the marginal income obtainable from work, and AI is set to refactor this equation unfavorably for many people. Avoiding disincentives at the margin is, ironically, a function of rates and rules so intricate that AI-level insight may be needed to set them accurately.
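The marginal-income point can be made concrete with a toy comparison, using purely hypothetical numbers, between a means-tested benefit (withdrawn as earnings rise) and an unconditional UBI:

```python
# Toy model of take-home income under two transfer schemes.
# All dollar figures and the withdrawal rate are illustrative assumptions.

def means_tested(earnings, benefit=12_000, withdrawal_rate=0.8):
    """Benefit is clawed back at 80 cents per dollar earned."""
    residual = max(0.0, benefit - withdrawal_rate * earnings)
    return earnings + residual

def ubi(earnings, grant=12_000):
    """Unconditional grant: every extra dollar earned is kept in full."""
    return earnings + grant

# Marginal gain from earning $1,000 more, starting from $10,000:
delta_mt = means_tested(11_000) - means_tested(10_000)  # only ~$200 of it kept
delta_ubi = ubi(11_000) - ubi(10_000)                   # the full $1,000 kept
print(delta_mt, delta_ubi)
```

Under the means-tested scheme the worker keeps only a fifth of the extra earnings, which is the disincentive critics point to; an unconditional grant leaves the marginal return to work untouched.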

Another criticism of UBI is its potential inflationary impact. UBI could produce an increase in spending and demand, and if that is not accompanied by an increase in the supply of goods and services, prices would simply rise. There are ample historical examples to ground this concern. Argentina, for example, implemented mass wealth redistribution programs without a matching increase in productivity and went from a GDP per capita on par with Western Europe at the beginning of the 20th century to merely 27% of the European Union’s GDP per capita in 2021, after seven decades of such programs.

The only way to avoid inflation would be to achieve an increase in the supply of goods and services that matches the increase in demand. We believe that AI could provide the basis for that increase, since AI will make existing resources more efficient while expanding the array of products and services available to people. However, we also acknowledge that the actual outcome would depend on the price elasticity and margins of each product and service. Keeping production high might or might not optimize the ROI on AI: for some products, lower production, which leads to higher prices, would provide a greater return than producing more and selling at lower prices. Even if it were theoretically possible to match supply and demand to the benefit of everyone, human incentives might not be aligned to achieve that synchronization.
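The elasticity point can be sketched numerically. With a constant-elasticity demand curve (hypothetical parameters), revenue rises with output when demand is elastic but falls when it is inelastic:

```python
# Revenue vs. output under constant-elasticity demand: P = scale * Q**(-1/e),
# where e is the price elasticity of demand. Parameters are illustrative.

def revenue(quantity, elasticity, scale=100.0):
    price = scale * quantity ** (-1.0 / elasticity)
    return price * quantity

# Elastic demand (e = 2): producing more raises revenue.
print(revenue(200, 2.0) > revenue(100, 2.0))   # True

# Inelastic demand (e = 0.5): producing more lowers revenue,
# so restricting supply is the revenue-maximizing choice.
print(revenue(200, 0.5) < revenue(100, 0.5))   # True
```

This is why producers of inelastic goods have no private incentive to expand supply, even when society as a whole would benefit from lower prices.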

UBI presents other elements of complexity. For example, financing UBI would essentially require an “AI tax,” with rates, calculation methodologies, and collection mechanisms shaped by the political process. This tax would fall on the work done by AI in the same way current taxes fall on human work. A progressive tax structure would punish the companies that most successfully deploy AI, while a regressive structure might leave funding inadequate; a regressive structure might also be counterproductive to the goals of AI deployment itself.
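To give a sense of the rate-setting problem, here is a back-of-the-envelope sketch, with purely hypothetical figures, of the flat AI-tax rate needed on AI-generated value added to fund a given UBI:

```python
# Back-of-the-envelope UBI funding. All figures are hypothetical,
# chosen only to illustrate how quickly the required rate escalates.

def required_ai_tax_rate(population, annual_grant, ai_value_added):
    """Flat tax rate on AI-generated value added needed to fund the UBI."""
    return (population * annual_grant) / ai_value_added

# e.g. 50M recipients at $12,000/year against $1T of AI-generated value added:
rate = required_ai_tax_rate(50e6, 12_000, 1e12)
print(f"{rate:.0%}")  # 60%
```

Even under generous assumptions about AI-driven output, the implied rates are high enough that small errors in the formula would either starve the fund or confiscate most of the gains, which is the rate-setting dilemma described above.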

Another element of complexity is the necessity to rethink how UBI fits with other social welfare programs that societies have, in particular, whether UBI will replace them. 

UBI remains a debated concept, and there is increasing government and even industry interest in exploring it. While UBI has never been implemented at a truly large scale, some limited examples are useful to explore for guidance:

  • Between 1795 and 1834, the Speenhamland System, the first guaranteed income program in history, prevented starvation for a large number of rural English families.

  • The “BIG” (Basic Income Grant) program implemented in Namibia is credited with almost halving the country's poverty rate.

  • The “Bolsa Familia” program in Brazil (2003-2015) reduced that country’s poverty rate by more than 75%. 

  • According to a 2016 University of Alaska study, the Alaska Permanent Fund (APF), which provides all state residents with a small annual cash payment of about $1,000, keeps 15,000–23,000 Alaskans out of poverty.

In summary, the gains in productivity and output that AI would bring should theoretically make UBI or similar interventionist programs economically feasible, with AI models ironically helping to solve thorny problems in rate setting and structure. For UBI to work in practice, the increase in demand the scheme creates must be matched by an increase in the supply of goods and services created by AI.

UBI is a complex scheme that might have other unwanted consequences. We will introduce those in next week’s post, which will focus on dystopian elements introduced by AI. Stay tuned.

About Authors

Pedro Uria-Recio is a Chief Data & AI Officer, an ex-McKinsey consultant, and a Chicago Booth MBA. He has lived in more than ten countries and is passionate about artificial intelligence and relentlessly curious about its implications for business and society.

Randy McGraw, a US business executive in Japan, specializes in digital strategy and development. He has held key roles at Hughes Electronics/DIRECTV, Singtel's Group Digital Life, Altus Capital, Amazon Japan, and True Digital Group. McGraw is also active as an angel investor and consultant.

TheGen.AI News

Projected $26 Billion Boom in Generative AI Spending by 2027

IDC's new Worldwide AI and Generative AI Spending Guide highlights a significant increase in the adoption of Generative AI (GenAI) across the Asia/Pacific region, with spending expected to reach $26 billion by 2027, an impressive compound annual growth rate (CAGR) of 95.4% from 2022 to 2027. This upsurge demonstrates the region's critical influence in spearheading the next era of AI innovation and technological progress.
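As a quick sanity check on these figures, spending of $26 billion in 2027 growing at a 95.4% CAGR from 2022 implies a 2022 base of roughly $0.9 billion; a minimal sketch of the arithmetic:

```python
# Implied base-year spending from an end value and a CAGR:
# base = end / (1 + cagr) ** years

def implied_base(end_value, cagr, years):
    return end_value / (1 + cagr) ** years

# 2022 -> 2027 spans 5 compounding periods at 95.4% per year.
base_2022 = implied_base(26e9, 0.954, 5)
print(f"${base_2022 / 1e9:.2f}B")  # roughly $0.91B
```

The near-doubling every year explains why a sub-$1B market in 2022 can plausibly reach $26B by 2027.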

Generative AI involves the use of unsupervised and semi-supervised machine learning algorithms that allow computers to create novel content from existing text, audio, video, images, and code in response to brief prompts. IDC regards GenAI as a pivotal technology that will facilitate a new phase in the journey towards automation, impacting a range of activities from general productivity and business-specific functions to industry-focused tasks.

Deepika Giri, IDC APJ's Head of Research for Big Data & AI, projects that the Asia/Pacific region will see rapid growth in GenAI adoption, mirroring the expansion seen in North America. This growth is largely driven by substantial enterprise investments in data and infrastructure platforms designed specifically for GenAI applications. Giri predicts that these investments will peak within the next two years before entering a stabilization phase. China is expected to continue as the leading market for GenAI, with Japan and India emerging as the fastest-growing markets in the coming years. The region is poised for a transformative journey across various sectors, propelled by robust digital infrastructure and increasing technological investments.

According to IDC, IT spending on GenAI technology evolves through three stages. Initially, the GenAI Foundation Build phase focuses on enhancing core infrastructure, including investments in IaaS and security software. This is followed by the Broad Adoption phase, where there is a shift towards widespread use of open-source AI platforms offered as services, essential for digital business operations. Lastly, the Unified AI Services phase sees a spike in spending as organizations quickly integrate GenAI to secure a competitive advantage, contrasting with the usually slower growth seen in new technology markets.

Vinayaka Venkatesh, IDC Asia/Pacific's Senior Market Analyst for IT Spending Guides, emphasizes that GenAI is not just a passing trend. Its ability to generate completely new content across various mediums promises significant efficiency improvements and innovative creative possibilities, offering organizations a competitive edge. Many organizations are either already using Generative AI or are exploring its potential.

The financial services sector in Asia is witnessing a rapid increase in GenAI adoption, with spending expected to reach $4.3 billion by 2027, growing at a CAGR of 96.7%. GenAI is being used internally to improve operational efficiency, automate repetitive tasks, and optimize processes like fraud detection and document creation. GenAI-powered solutions offer tailored financial services, adapting dynamically to changing customer needs, thereby enhancing profitability through cost reductions and revenue generation.

The software and information services industry is the second-largest sector adopting GenAI. It leverages the technology's versatility in areas such as marketing, data analytics, and software development. GenAI enhances content creation for digital platforms, optimizes marketing strategies, and enriches data sets, improving the resilience and performance of models. Additionally, it assists developers by automating coding, generating prototypes, and accelerating the software development lifecycle.

SaaS Security in the Age of Generative AI: What Teams Need to Know

The launch of OpenAI's ChatGPT in November 2022 marked a significant milestone in the software industry, initiating a race to integrate Generative AI (GenAI) into various SaaS products. These upgrades aim to boost productivity across multiple roles, from helping developers craft software to assisting marketers in creating unique content at a lower cost.

Notable GenAI integrations include Microsoft 365 Copilot, GitHub Copilot, and Salesforce Einstein GPT. These tools, provided by leading SaaS companies, indicate a trend towards monetizing GenAI capabilities. Google is also set to introduce its "Search Generative Experience" platform, offering AI-generated summaries over traditional website lists.

This rapid integration suggests that AI features could soon become standard in SaaS applications. However, this progress does not come without risks. The broad implementation of GenAI applications in workplaces is increasing exposure to new cybersecurity threats. GenAI models, including ChatGPT, often process user-provided data, raising concerns about data privacy and security. Potential risks include intellectual property theft, exposure of sensitive customer information, and the misuse of deepfakes for cybercrime.

These vulnerabilities have prompted a backlash against GenAI applications, particularly in sectors handling confidential data. A Cisco study found that over a quarter of organizations have restricted GenAI use due to these risks. The financial sector, for instance, while recognizing AI's potential for efficiency, has seen 30% of leaders prohibit GenAI tools at work due to security concerns.

Moreover, recent actions by the U.S. Congress, including a ban on Microsoft's Copilot on government PCs, underscore the growing caution towards GenAI tools in sensitive environments.

Despite the benefits, the adoption of GenAI tools often occurs without proper oversight or employer knowledge, complicating governance. A Salesforce study revealed that over half of GenAI users at work utilize unapproved tools, highlighting the need for clear usage policies.

However, the U.S. government is moving towards better AI governance. Vice President Kamala Harris recently mandated federal agencies to appoint a Chief AI Officer to ensure responsible AI usage. This initiative could lead to improved management and mitigation of GenAI risks.

As organizations grapple with these challenges, advanced security measures like SaaS Security Posture Management (SSPM) are becoming essential. These tools help organizations monitor and manage the security risks associated with AI-enabled applications, ensuring a safer integration of GenAI capabilities into the workplace. This shift towards enhanced security protocols is crucial as organizations strive to balance innovation with safety in the evolving landscape of SaaS applications.

Thomson Reuters Unveils Expanded GenAI Assistant Vision for All Professionals

Thomson Reuters, a leading global content and technology firm, recently unveiled an expanded strategy for its advanced GenAI assistant, CoCounsel. This initiative aims to integrate CoCounsel across Thomson Reuters' full range of products, providing users with a unified platform that enhances access to the company's diverse offerings in Legal, Tax, Risk & Fraud, and Media sectors.

CoCounsel functions as an AI team member, adept at processing complex tasks through natural language understanding and completing them with exceptional speed. It not only delivers timely, high-quality information but also manages multiple workstreams while maintaining context and continuity across daily tasks and products. By enhancing professional tasks with GenAI capabilities, CoCounsel enables faster, higher-quality outputs while ensuring the security of customer data.

In the future, CoCounsel will link Thomson Reuters' entire product portfolio, allowing users to combine various skills and workflows into a singular, enhanced user experience.

David Wong, Chief Product Officer at Thomson Reuters, shared insights from a recent global AI study conducted by the company, revealing that 81% of professionals see potential AI applications in their work. He emphasized Thomson Reuters' commitment to adapting to customer needs by harnessing the best GenAI applications for professional tasks. Wong highlighted the evolving role of CoCounsel as a central, human-centric point of access to the Thomson Reuters product suite, which will continue to learn and add new capabilities.

Leveraging state-of-the-art generative AI technology, including GPT-4 through a partnership with OpenAI, CoCounsel is set to transform professional workflows. Brad Lightcap, COO at OpenAI, noted the significant benefits of this collaboration, emphasizing AI's role in automating routine tasks and fostering innovative problem-solving among knowledge workers.

The announcement also covered new GenAI-driven functionalities specifically for Tax and Legal professionals. These include Checkpoint Edge with CoCounsel for tax professionals, providing quicker, more accurate responses to complex tax queries, and Westlaw Edge UK with CoCounsel for legal research in the UK, enabling intuitive question-answering based on trusted Westlaw content.

Furthermore, upcoming integrations with Microsoft 365 will link CoCounsel to common professional tools used in daily operations, enhancing tasks like research, drafting, and review across applications such as Teams, Word, Outlook, and SharePoint.

Darth Vaughn, Managing Director at Ford Motor Company, praised CoCounsel's impact on operational efficiency within Ford, highlighting its role in enhancing the work of legal professionals and facilitating strategic, cross-functional initiatives.

Thomson Reuters plans to continue expanding its GenAI platform under the CoCounsel brand, developing new skills that will be incorporated across all its business segments. With over 30 years of experience and a wealth of proprietary content, the company is well-equipped to advance professional-grade GenAI solutions, supported by a unique GenAI engineering platform that enables rapid, secure development and deployment of new GenAI features.

TheOpen.AI News

Elon Musk Voices Concerns Over AI's Potential Threat to Civilization

Elon Musk has once again voiced his concerns about the leading AI chatbots, Gemini and ChatGPT, suggesting that the programming by Google and OpenAI could potentially threaten civilization. His comments came in response to a post on X (formerly Twitter) by Canadian professor Gad Saad, who criticized a statement by NPR CEO Katherine Maher regarding the subjectivity of truth. Saad linked Maher's view to postmodernism, which he described as a breeding ground for detrimental ideologies.

Musk linked this debate to the AI technologies developed by OpenAI and Google, stating, "Now imagine if this is programmed, explicitly or implicitly, into super powerful AI – it could end civilization. Now, no need to imagine. It is already programmed into Google Gemini and OpenAI ChatGPT."

Musk's relationship with OpenAI has been strained. Despite being one of the co-founders, he left the organization in 2018. He has recently taken legal action against OpenAI, accusing it of prioritizing profit over its founding ethical guidelines. Musk's legal team argues that OpenAI has essentially become a closed-source entity under Microsoft’s influence, focusing on refining AI for profit rather than for the greater good. This lawsuit underscores his ongoing concerns with the direction the company has taken under its new leadership.

Nothing Leads the Way with First-Ever Smartphone Integration of ChatGPT

Nothing has unveiled plans to incorporate ChatGPT into its smartphones and earbuds, making advanced AI interaction accessible directly from Nothing devices. This feature is currently exclusive to Nothing earbuds when paired with Nothing smartphones, starting with Phone (2) and expected soon on Phone (1) and Phone (2a).

The company has also introduced two new earbud models, Ear (a) and Ear. The Ear (a) stands out with its bold yellow color, a first for Nothing's audio products, which were traditionally available only in black and white. This model introduces a "pinch-to-speak" feature, enabling users to interact with ChatGPT easily. Integrated into Nothing OS, the feature complements other system enhancements such as screenshot sharing and custom Nothing widgets for an optimized user experience.

The integration allows users to query ChatGPT, receive spoken responses, and learn while on the move, pushing forward AI integration in consumer electronics. The Ear (a) model is designed with 45 dB active noise cancellation, an 11 mm driver for high-quality sound, LDAC support for superior audio streaming, and a Bass Enhance algorithm. It offers up to 42.5 hours of listening time and features intelligent noise cancellation that adapts based on the fit in the ear canal, providing consistently effective noise reduction.

Voice control for ChatGPT will be extended to other Nothing earbud models, and bespoke Nothing widgets on Nothing OS provide easy access to ChatGPT for text, voice, or image queries right from the home screen.

The Ear is priced at Rs 11,999, and the more affordable Ear (a) will sell for Rs 7,999. Both models will be available in Indian markets by the end of April, initially sold exclusively through Flipkart with promotional launch discounts.

Meta Launches Llama 3 and Innovative Real-Time Image Generator

Meta Platforms, previously known as Facebook, has launched its new large language model, Llama 3, and a real-time image generator, marking a significant advancement in generative AI. This development is part of Meta's effort to rival OpenAI and enhance its presence in the AI industry. The latest models will be integrated into Meta's virtual assistant, Meta AI, which the company claims is the most sophisticated among its freely available competitors. Meta AI is expected to excel in areas such as reasoning, coding, and creative writing, positioning it as a direct competitor to products from companies like Google and new entrants like Mistral AI.

Meta is planning to increase the visibility of Meta AI across its family of apps, including Facebook, Instagram, WhatsApp, and Messenger, and will also feature it on a dedicated website. This move is intended to set Meta AI as a strong competitor to Microsoft-supported ChatGPT.

The new Meta AI website offers interactive features that allow users to engage in various activities, such as creating vacation packing lists, participating in music trivia, getting homework help, and generating artwork of famous city skylines. Meta's push into generative AI reflects a significant investment in computing infrastructure and the merging of research and product teams. By making its Llama models available to developers, Meta seeks to shake up the market and challenge competitors who rely on proprietary technologies.

Llama 3 introduces improved coding abilities and the capacity to handle both text and image data, although it currently only produces text outputs. Meta plans to add features that enable the simultaneous generation of text and images in future versions.

The integration of image data into the training of Llama 3 is particularly relevant for Meta's forthcoming Ray-Ban Meta smart glasses, which will use the AI to identify objects and provide information to the wearer. Moreover, Meta has partnered with Google to incorporate real-time search results into the Meta AI assistant, enhancing its search capabilities alongside its existing partnership with Microsoft's Bing.

Meta CEO Mark Zuckerberg has praised Meta AI as "the most intelligent AI assistant available for free." Early testing indicates that Llama 3 performs well, even in its smaller versions, compared to other free models. However, there remain concerns about the models' ability to understand nuanced language, an issue that previous versions faced. Meta claims that Llama 3 improves on these aspects by using higher-quality data and a larger volume of training data.

TheClosed.AI News

EU May Investigate Microsoft's OpenAI Partnership for Antitrust Concerns

Microsoft's $13 billion investment in OpenAI is potentially on the brink of an EU antitrust probe, according to sources familiar with the situation. This investigation, reflecting concerns shared by antitrust authorities in both Europe and the United States, stems from worries about how collaborations such as Microsoft’s with OpenAI, as well as those involving companies like Alphabet, Amazon, and Anthropic, might impact competitive dynamics.

The European Union's antitrust regulator has opted not to pursue an investigation under EU merger guidelines. However, the possibility of a separate antitrust inquiry remains, focusing on whether Microsoft’s partnership with OpenAI might limit competition within the EU’s internal market or if Microsoft’s market influence could unfairly skew the market landscape.

The European Commission is particularly interested in the latter possibility, though it has not yet committed to launching an investigation as it continues to gather evidence. A decision to proceed has not been made.

Despite Microsoft holding a non-voting seat on OpenAI's board, it maintains that it does not own a stake in the company known for developing ChatGPT.

The Commission has not made a broad statement on the antitrust implications but noted it is evaluating whether Microsoft’s investment could fall under the EU Merger Regulation, emphasizing that a determination of lasting change in control is crucial for addressing potential competition concerns. Microsoft has chosen not to comment on these developments.

Ex-OpenAI Board Member Calls for AI Incident Reporting and Auditing

During a TED conference talk reported by Bloomberg on Tuesday, Helen Toner, a former OpenAI board member and current director at Georgetown University’s Center for Security and Emerging Technology, advocated for the implementation of a formal reporting system for AI-related incidents akin to those used in aviation. Toner, who resigned from OpenAI last year after a dispute involving the company’s CEO Sam Altman, stressed the importance of transparency in AI development. She proposed that AI companies should be required to disclose details about their technologies, capabilities, and risk management strategies, with these disclosures being subject to independent audits.

Toner highlighted potential dangers such as AI-enabled cyberattacks as examples of risks that could arise from the technology. With eight years of experience in AI policy and governance, she has gained significant insights into how both the government and private sectors handle AI oversight. In a June 2023 CNBC interview, Toner discussed the ongoing debate within the industry about whether AI regulation should be managed by existing authorities or through a new, centralized agency specifically for AI. 

Moreover, a recent collaboration between the U.S. and the U.K. was established to create safety tests for advanced AI technologies. This partnership aims to unify scientific methods between the two nations and speed up the creation of effective evaluation techniques for AI models, systems, and agents.

OpenAI Enhances Assistants API with Advanced File Search and More

OpenAI has recently enhanced its Assistants API, introducing a series of new features aimed at improving functionality and efficiency. Announced on April 17, the update includes a more sophisticated file search tool capable of handling up to 10,000 files per assistant, designed to integrate models with developers’ data to better support application development tailored to specific organizational needs or use cases. 

The update introduces vector store objects that automate file parsing, chunking, and embedding processes. Additionally, new adjustments in token controls, the ability to select tools, and expanded support for model configuration parameters now provide developers with more customized control over their applications.

Further refinements include the capability for developers to fine-tune models within the Assistants API, which now also supports streaming. OpenAI has also enhanced its Node and Python SDKs with various streaming and polling helpers.

The Assistants API itself is crafted to facilitate the development of software assistants that can leverage OpenAI models to execute specific commands, access multiple tools simultaneously, maintain persistent threads, and handle files in diverse formats. For those transitioning to this updated API, OpenAI has provided a detailed migration guide to simplify the process of updating existing tool implementations to the latest version.
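As a rough sketch of how the announced pieces fit together, a create-assistant request body wiring the file search tool to a vector store might look like the following; the model name and vector store ID are placeholders, and this is an illustrative sketch rather than authoritative API documentation:

```python
# Illustrative create-assistant payload for the updated Assistants API,
# attaching the file search tool to a vector store of parsed, chunked,
# and embedded files. IDs and the model name are placeholders.

create_assistant_payload = {
    "model": "gpt-4-turbo",
    "instructions": "Answer questions using the attached organizational documents.",
    "tools": [{"type": "file_search"}],  # the upgraded file search tool
    "tool_resources": {
        "file_search": {
            # The referenced vector store can index up to 10,000 files.
            "vector_store_ids": ["vs_placeholder123"],
        }
    },
}

print(create_assistant_payload["tools"][0]["type"])
```

The vector store object handles parsing, chunking, and embedding automatically, so the developer only references its ID here rather than managing individual files.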

New on learning

Introducing the 10x Consultant Course for professionals and consultants

The 10xConsultant.AI Course is designed to empower you with the knowledge, skills, and insights to harness the full potential of Generative AI, positioning you at the forefront of this transformative wave. Whether your ambition lies in consulting or in professional development within your current organisation, this course is your gateway to not just participating in the AI-driven future but leading it.

Unlock the power of Generative AI and elevate your professional journey with 10xConsultant.AI. Join and lead the charge into a future shaped by advanced technology. Don’t just adapt to the AI revolution—define it!

Join the movement - If you wish to contribute thought leadership, please fill out the form below. One of our team members will be in touch with you shortly.

Form link

Keep reading