• "Towards AGI"
  • Posts
  • Wipro Teams Up with HPE to Unveil Advanced On-Premise GenAI Solution

Wipro Teams Up with HPE to Unveil Advanced On-Premise GenAI Solution

Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.

Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path towards Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.

Wipro Teams Up with HPE to Unveil Advanced On-Premise GenAI Solution

Wipro Limited announced a strategic collaboration with Hewlett Packard Enterprise (HPE) on June 13 to launch a GenAI solution aimed at leveraging artificial intelligence (AI) to improve operational efficiency and enhance customer experience globally. Wipro stated that the platform, powered by its Smart Operations platform and the HPE Machine Learning Development Environment, has shown promising results in field testing. These include a potential 50% reduction in Mean Time to Resolution (MTTR) for GenAI applications, a 30% decrease in incident inflow, improved Overall Equipment Effectiveness (OEE), and reduced process cycle time, all contributing to ongoing operational excellence.
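
The release cites operational metrics with standard definitions: MTTR is the average resolution time per incident, and OEE is the product of availability, performance, and quality. A minimal Python sketch with hypothetical numbers (not Wipro or HPE field-test data) shows how the cited improvements translate arithmetically:

```python
from datetime import timedelta

# Hypothetical incident durations (detection to resolution);
# illustrative values only, not Wipro/HPE field-test data.
incident_durations = [timedelta(hours=4), timedelta(hours=2), timedelta(minutes=90)]

# Mean Time to Resolution (MTTR): average resolution time per incident.
mttr = sum(incident_durations, timedelta()) / len(incident_durations)

# Overall Equipment Effectiveness (OEE) = Availability x Performance x Quality.
availability = 0.92   # run time / planned production time
performance = 0.88    # actual output rate / ideal output rate
quality = 0.97        # good units / total units
oee = availability * performance * quality

print(f"MTTR: {mttr}")        # 2:30:00 for the sample data
print(f"OEE: {oee:.1%}")      # 78.5% for the sample factors

# A 50% MTTR reduction, as cited in the article, would halve the average:
print(f"MTTR after a 50% reduction: {mttr / 2}")
```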

Industries such as financial services, healthcare, and manufacturing, which rely heavily on customer service, IT support, and operations, are expected to benefit significantly from this collaboration. Wipro further mentioned that clients would have access to various Large Language Models (LLMs) tailored to meet diverse business needs and enhance decision-making capabilities.

Jo Debecker, Managing Partner and Global Head of Wipro FullStride Cloud, remarked, “The creation of this GenAI platform underscores our long-term strategic partnership with HPE and our dedication to delivering advanced AI solutions through our new Customer Experience Center. This center will highlight the potential of the HPE Machine Learning Development Environment alongside Wipro's innovative solutions. Together, we will continue to push the boundaries of innovation to help our clients achieve their business goals.”

Marc Waters, Senior Vice President of Global Sales at HPE, highlighted the synergy between the two companies, stating, “The combination of Wipro’s deep technical expertise and HPE’s AI technology will expedite value realization for our customers. By integrating the HPE Machine Learning Development Environment into Wipro’s GenAI customer experience, we will enable customers to develop and deploy AI models more quickly by integrating with popular machine learning frameworks and streamlining data preparation.”

Mizuho Survey Highlights Surge in GenAI Adoption and Increased IT Spending

Mizuho's recent survey of CIOs presents an optimistic outlook for the IT services industry, especially with the increasing adoption of Generative AI (GenAI) services. According to Mizuho, which surveyed over 50 CIOs from various mid-sized companies across different sectors, there are several positive trends in IT services.

One major finding is the significant influence of GenAI. Mizuho discovered that over 80% of respondents have already engaged IT services firms for GenAI implementation, with an impressive 90% planning to do so within the next year.

Encouragingly, the survey indicates that this surge in GenAI adoption is leading to increased IT services spending. About 65% of CIOs reported that their GenAI expenditures are new investments rather than reallocations from existing budgets.

Further enhancing the positive outlook, nearly 60% of respondents expect to increase their IT services spending over the next year.

Mizuho notes that these findings might seem contradictory to the broader caution observed in the IT services sector. However, they attribute this difference to the survey's focus on mid-sized companies, suggesting that spending in these companies may not have been as severely impacted as that of larger corporations.

Overall, Mizuho's survey highlights a significant wave of GenAI adoption and a corresponding increase in IT services spending, particularly within the mid-sized company segment.

Robin AI Launches Advanced GenAI Reports for Efficient Due Diligence

The legal world has long faced the challenge of handling M&A due diligence, which involves sifting through extensive documents. Robin AI has now addressed this issue by launching Robin AI Reports.

This development is significant because M&A due diligence was where many early legal AI companies focused their efforts, given its labor-intensive nature and the suitability of AI as a tool for assistance. Despite the widespread use of GenAI by lawyers today, its application to major transactional review work has not been fully realized. Robin AI Reports could be a game-changer in this area.

Through the Robin AI platform, lawyers can generate a report with a simple click, which is then emailed to them upon completion. Users can compile a list of ‘red-flag’ issues and deviations from preferred positions they want summarized for each contract.

Robin AI Reports delivers ‘an accurate summary of the issues for each contract, with clear citations, allowing for easy human verification. The reports are user-friendly and can be generated for various purposes including M&A due diligence, NDA compliance, supplier agreements, and audits,’ the company explained.

Additionally, Robin AI offers users three free reports to familiarize themselves with the platform. The development of Robin AI Reports involved collaboration with partners such as the University of Cambridge’s Investment Management group, which has been using the product since April.

The group reported that ‘a task that previously took three hours for Cambridge Investment Management’s senior in-house lawyers now takes 30 minutes on average – an impressive 85% time saving.’

Moreover, Robin AI utilizes Amazon Bedrock to leverage Anthropic’s Claude 3 generative AI model along with Robin’s own models, ensuring that customer data remains secure and compliant with local data regulations. A copy of Claude 3 is housed within Robin AI’s cloud environment, ensuring that user data does not leave Robin’s cloud.
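
For context, invoking Claude 3 through Amazon Bedrock generally looks like the sketch below. This is a generic illustration rather than Robin AI's actual integration; the region, model ID, and prompt are assumptions.

```python
import json
import boto3

# Bedrock runtime client; region and credentials are assumptions,
# not details of Robin AI's deployment.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Ask the model to flag a deviation from a preferred position in a clause,
# loosely mirroring the "red-flag" review described in the article.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": "Summarise any deviation from a 30-day termination "
                       "notice period in this clause: 'Either party may "
                       "terminate on 90 days written notice.'",
        }
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example Claude 3 model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```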

Databricks Unveils Enterprise AI Innovations with Open-Source Focus

At the annual Data + AI Summit, data and AI giant Databricks unveiled a range of new generative AI capabilities and emphasized its open-source strategy. The new offerings include Mosaic AI Model Training, Mosaic AI for RAG, and Mosaic AI Gateway, along with the open-sourcing of their Unity Catalog. These initiatives are designed to assist enterprises in developing high-quality, domain-specific AI applications.

In an exclusive interview with AIM, CTO and co-founder Matei Zaharia said, “We aim to help people achieve the best possible quality in their domain for their GenAI application. We observe that many companies are creating what we refer to as compound AI systems.”

Compound AI systems involve integrating multiple components, such as invoking different models, retrieving relevant data, utilizing external APIs and databases, and breaking down problems into smaller steps. Simultaneously, Databricks is placing a strong emphasis on open-source models.
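
As a rough sketch of the pattern Zaharia describes, the following Python outline chains retrieval, an external API call, and two model calls into one pipeline. Every helper here is a hypothetical stub, not a Databricks or Mosaic AI API:

```python
# Minimal sketch of a compound AI system: several components chained together.
# All helpers are hypothetical placeholders, not Databricks/Mosaic AI APIs.

def retrieve_documents(query: str, k: int = 5) -> list[str]:
    """Fetch the k most relevant passages from a vector store (stubbed)."""
    return [f"passage about {query}"] * k

def call_llm(prompt: str, model: str = "generic-llm") -> str:
    """Invoke some hosted LLM endpoint (stubbed)."""
    return f"[{model}] answer based on: {prompt[:60]}..."

def lookup_external_api(entity: str) -> dict:
    """Call an external system of record, e.g. a CRM or pricing API (stubbed)."""
    return {"entity": entity, "status": "active"}

def answer(question: str) -> str:
    # Step 1: break the problem down - decide which entity the question concerns.
    entity = call_llm(f"Which entity does this question concern? {question}")
    # Step 2: retrieve relevant unstructured context.
    passages = retrieve_documents(question)
    # Step 3: pull structured facts from an external API / database.
    record = lookup_external_api(entity)
    # Step 4: compose the final answer from all components.
    prompt = f"Question: {question}\nContext: {passages}\nRecord: {record}"
    return call_llm(prompt, model="domain-tuned-llm")

print(answer("Is the Acme contract still active?"))
```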

Zaharia noted that despite the rapid progress in closed-source models, Databricks is heavily investing in an open-source strategy. He believes the performance gap between closed-source and open-source models is quickly closing.

Recent open-source models like DBRX, Mixtral 8x22B, and Llama 3 are nearing the quality of top closed-source models. Zaharia explained, “These models are all quite good and are coming very close to the best closed models, which haven’t improved significantly.”

While acknowledging that substantial investments could potentially lead to superior closed models, Zaharia believes open-source development will continue to prosper as companies collaborate to share development costs.

Zaharia predicts that the most exciting advancements in generative AI will emerge from open, customizable models applied to complex industry use cases in the B2B sector, rather than in consumer AI applications.

“I actually think the most exciting advancements in GenAI will be in the B2B world with custom AI for challenging, mission-critical domains,” Zaharia said.

“That’s another reason we’re betting on open models,” he added.

He compared this to how open-source big data technologies initially powered consumer applications but eventually had a transformative impact on enterprises.

Zaharia elaborated, “For example, if you build a model for chemistry that is highly specialized, it may not be as versatile as GPT-4 in general conversations, but it remains extremely valuable.”

French Open Source AI Startup Mistral AI Raises $640M, Hits $6B Valuation

Mistral AI, a Paris-based artificial intelligence startup, announced today that it has raised €600 million ($640 million) in new funding, reaching a valuation of $6 billion. This Series B funding round was led by General Catalyst.

This funding follows reports from last month indicating that the company was aiming to raise €600 million at a valuation of approximately $5 billion. The new funding round comes after the company's Series A round, led by a16z and Lightspeed Ventures, six months ago at a $2 billion valuation.

Existing investors in this round included Andreessen Horowitz, Lightspeed, Bpifrance, and BNP Paribas, along with corporate backers such as Nvidia Corp., Samsung Venture Investment Corp., and Salesforce Ventures LLC.

Mistral AI has gained significant attention due to the expertise of its founders, who have backgrounds as researchers at other prominent AI companies. Timothée Lacroix and Guillaume Lample previously worked at Meta Platforms Inc.’s Paris AI lab, while Arthur Mensch comes from DeepMind, Google’s AI research lab.

Despite being a new entrant in the AI market, having been founded in April 2023, Mistral AI is known for its powerful multilingual AI. Its latest model, Mistral Large, is fluent in five languages: French, English, German, Spanish, and Italian. The company claims that Mistral Large is only 10% behind OpenAI’s GPT-4 in reasoning benchmarks. Mistral AI also offers two other models, Small and Medium, to handle different workloads.

The company faces competition from major players like Meta Platforms Inc. and OpenAI, which have well-established large language models. Still, the strong investor interest prompted Mistral's Chief Executive Arthur Mensch to comment that it sends a strong message.

“When we started, we were told that this market would never be disrupted,” Mensch told the Financial Times. “We demonstrated that this wasn’t true and effectively disrupted OpenAI’s business model.”

In February, Mistral formed a distribution partnership with Microsoft Corp., giving its engineers access to Azure’s supercomputing cloud architecture for training and deploying AI models. Microsoft also invested €15 million ($16.3 million) in Mistral, which will convert into equity in this funding round.

Additionally, at the end of May, the company launched an open-source coding language model called Codestral. Capable of understanding over 80 programming languages, Codestral is a 22 billion-parameter model that stands up against some of the most powerful language models on the market, including Meta Platforms Inc.’s Llama 3 70B, which has three times the parameters. Codestral can process prompts up to 32,000 tokens, more than twice the capacity of Llama 3 70B.
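
For readers who want to try Codestral, a request along the following lines should work against Mistral's chat completions API; the endpoint, model name, and parameters follow Mistral's published conventions but should be checked against current documentation:

```python
import os
import requests

# Illustrative call to Codestral via Mistral's chat completions API.
# Endpoint and model name are assumptions based on Mistral's public API docs.
API_KEY = os.environ["MISTRAL_API_KEY"]

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "codestral-latest",
        "messages": [
            {
                "role": "user",
                "content": "Write a Python function that checks whether "
                           "a string is a palindrome.",
            }
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```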

OpenAI Gains Exposure from Apple Deal, No Money Exchange

At this year's WWDC, Apple announced a partnership with OpenAI, integrating ChatGPT, powered by the GPT-4o model, into various features of iOS 18, iPadOS 18, and macOS Sequoia. However, according to Bloomberg, no money will change hands in this deal.

Instead, OpenAI benefits from significant exposure through Apple's devices, which is arguably more valuable than monetary compensation. Additionally, Apple is in discussions with Anthropic and Google to incorporate their respective chatbots into its operating systems. The goal is to provide users with a broader range of AI services, similar to the options available for web browsers and search engines. This aligns with previous reports indicating that Apple is diversifying its AI partnerships.

There are concerns that users relying on AI instead of performing simple Google searches could impact Apple's financial results. Google pays a substantial amount to be Apple's default search provider, so if this relationship ends, Apple would need to find alternative revenue streams.

One potential approach is to share revenue with the AI companies involved. For example, if OpenAI monetizes results from its integrated chatbot on iOS, iPadOS, and macOS devices, Apple could take a share of the profits.

OpenAI Welcomes Retired US Army General Paul Nakasone to Its Board

US artificial intelligence giant OpenAI announced on Thursday that retired US Army General Paul M. Nakasone has joined its board of directors. The company aims to leverage Nakasone's cybersecurity expertise as it addresses challenges in the AI field.

Nakasone will join the board’s Safety and Security Committee, which is responsible for making recommendations on crucial safety and security decisions for all OpenAI projects and operations, the company said in a blog post.

OpenAI stated that Nakasone's cybersecurity knowledge will be instrumental in detecting and addressing cybersecurity threats, thereby enhancing the field's security.

India has recently experienced a surge in cybersecurity threats, including bomb-threat emails sent to various institutions and frequent hacking attacks on companies. According to an April report by Check Point Software, Indian companies faced an average of 2,444 hacker attacks per week over the past six months.

As AI continues to advance and integrate into new technologies, experts hope its capabilities can be used to respond quickly to such incidents and strengthen existing infrastructure.

“Artificial intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed,” said Bret Taylor, chair of OpenAI’s board, commenting on Nakasone’s induction.

Who is Paul Nakasone?

“OpenAI’s dedication to its mission aligns closely with my own values and experience in public service. I look forward to contributing to OpenAI’s efforts to ensure artificial general intelligence is safe and beneficial to people around the world,” Nakasone said.

Nakasone is a renowned cybersecurity expert who was instrumental in developing US cyber defense capabilities. He played a key role in the creation of US Cyber Command (USCYBERCOM) and became its longest-serving leader. He has also led the National Security Agency and held various command and staff positions. Additionally, he has worked with elite cyber units in countries such as South Korea, Iraq, and Afghanistan, solidifying his reputation as a global cyber defense expert.

OpenAI and Microsoft Seek Dismissal of Newspaper Publishers' Copyright Lawsuit

OpenAI and Microsoft have filed motions to partially dismiss allegations from a coalition of eight newspapers accusing the two companies of using the publishers' articles without authorization or payment to develop their AI products, including ChatGPT.

The publishers, owned by MediaNews Group and Tribune Publishing, sued OpenAI and Microsoft in April, claiming the AI developers used large portions of copyrighted articles to train the language models that power ChatGPT and Copilot.

The newspapers involved in the lawsuit include Tribune Publishing's Chicago Tribune, Orlando Sentinel, South Florida Sun Sentinel, and New York Daily News, as well as MediaNews Group's Mercury News, Denver Post, Orange County Register, and St. Paul Pioneer Press.

In motions filed in the U.S. District Court for the Southern District of New York late Tuesday night, OpenAI and Microsoft argue that the publishers have not provided concrete evidence of copyright violations.

"Microsoft and OpenAI’s tools do not exploit the protected expression in the plaintiffs' digital content nor replace it. Instead, they extract and share elements of language, culture, ideas, and knowledge that are common to all," Microsoft stated in its motion.

The AI developers also dispute the publishers' assertion that "given the right prompt," the AI products can "repeat large portions" of newspaper articles used to train their language models.

Microsoft and OpenAI claim the publishers have not demonstrated that the AI developers encouraged "end-user copyright infringement" or prompted users to generate content similar to the publishers' articles.

They further argue that the potential for users to prompt the GPT-based products to produce infringing content does not constitute a valid copyright claim.

"The mere theoretical possibility that someone might engage in the same acrobatics plaintiffs did here is not enough to plausibly allege direct infringement," Microsoft contends.

Similarly, OpenAI asserts that the publishers' complaint fails to address whether using copyrighted content to train a generative AI model falls under fair use in copyright law.

"Ultimately, the truth will come out, showing that ChatGPT is not a highly inefficient way to access, through rare impermissible attempts, snippets of old newspaper articles that are freely available online," OpenAI argues.

Kristelia Garcia, a law professor at Georgetown University, suggests that the potential for GPT-product users to produce the publishers' articles might be sufficient to demonstrate copyright infringement.

"It proves that it could happen, it’s just a matter of time and they shouldn’t have to wait until they suffer some sort of loss," Garcia told Courthouse News.

Garcia added that the outcome of the case hinges on whether the court finds the publishers' claims substantial enough to support a copyright argument.

Keep reading

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI and ClosedAI: Dive into the latest projects and innovations from the leading organisations behind some of the most advanced open-source and closed-source AI models.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realisation of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!