
VMware Private AI Foundation with NVIDIA Looks To Enable Enterprises to Embrace Generative AI

VMware Inc. and NVIDIA (NASDAQ: NVDA) are expanding their strategic partnership. Their mission? To ready the multitude of enterprises dependent on VMware’s cloud infrastructure for the imminent generative AI era.

Generative AI, the driving force behind intelligent chatbots, assistants, search engines, and summarization tools, is revolutionizing industries. VMware Private AI Foundation with NVIDIA is designed to democratize this transformation. It offers an integrated solution, seamlessly combining generative AI software with NVIDIA’s advanced accelerated computing, all within VMware Cloud Foundation, optimized for AI applications.


The synergy between generative AI and multi-cloud environments is profound. Enterprise data resides in various locations, including data centres, edge devices, and diverse cloud platforms. VMware and NVIDIA aim to empower enterprises to harness generative AI while preserving data privacy, ensuring security, and retaining control.

Enterprises are in a race to implement generative AI, with the potential to contribute up to a staggering $4.4 trillion annually to the global economy. VMware Private AI Foundation with NVIDIA is stepping in to empower them to expedite this journey. It enables enterprises to customize large language models (LLMs), construct secure and private models for internal use, offer generative AI as a service, and scale inference workloads securely.

With growing concerns around data privacy and security when deploying generative AI tools like ChatGPT at an organisational level, VMware Private AI Foundation with NVIDIA empowers organisations to use the full capabilities of generative AI without the worry of data leaks. Enterprises can deploy AI services close to their data, safeguarding data privacy and ensuring secure access.


It also gives them a range of options for building and running models, from leading OEM hardware configurations to potential integration with public clouds. That choice doesn’t come at the expense of performance: with NVIDIA’s accelerated infrastructure, the platform promises performance equal to, or even surpassing, bare-metal solutions.

When enterprises are ready to scale, they can do so with little friction. GPU scaling optimizations in virtualized environments allow AI workloads to scale efficiently across multiple nodes. Scaling and implementation costs are also kept down through the resource optimization and shared resource environment the platform fosters. The platform even supports fast prototyping, with pre-installed frameworks and libraries that let enterprises fail fast and reach development milestones at an accelerated rate.
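The announcement does not include code, and the platform handles GPU scaling at the infrastructure layer. Purely as an illustration of what scaling inference across multiple GPUs or nodes can look like from the application side, here is a minimal data-parallel sketch using PyTorch and Hugging Face Transformers; the model name, prompts, and launch commands are placeholders, not part of the VMware/NVIDIA stack.

```python
# Minimal sketch: data-parallel LLM inference across the GPUs (or vGPUs) that
# one or more VMs expose, launched with torchrun. Model and prompts are placeholders.
import os
import torch
import torch.distributed as dist
from transformers import AutoModelForCausalLM, AutoTokenizer

def main():
    dist.init_process_group(backend="nccl")            # one process per GPU / vGPU
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

    model_id = "meta-llama/Llama-2-7b-hf"               # placeholder model
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).cuda().eval()

    prompts = [f"Summarise support ticket {i}" for i in range(64)]
    shard = prompts[rank::world]                         # each rank takes its own slice

    with torch.no_grad():
        for p in shard:
            ids = tok(p, return_tensors="pt").to("cuda")
            out = model.generate(**ids, max_new_tokens=64)
            print(f"[rank {rank}] {tok.decode(out[0], skip_special_tokens=True)}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nnodes=2 --nproc_per_node=4 infer_shard.py`, each GPU processes its own slice of the prompt list, which is the simplest way inference work spreads across the multi-node capacity described above.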

Aside from this, the platform will be deployed on performance-optimized NVMe storage, with GPUDirect® Storage over RDMA enabling data to move directly between storage and GPU memory. Networking performance is also sustained through deep integration between vSphere and NVIDIA NVSwitch™ technology, ensuring efficient multi-GPU execution.
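For a sense of what GPUDirect Storage looks like to an application, here is a minimal sketch using kvikio, the RAPIDS Python bindings for cuFile. kvikio is not mentioned in the announcement and the file name and sizes are assumptions; the library falls back to a regular POSIX read when GPUDirect Storage is unavailable, so the same code runs either way.

```python
# Minimal sketch: read a binary tensor file straight into GPU memory with kvikio.
import cupy as cp
import kvikio

# Destination buffer lives in GPU memory; one million float32 values as a placeholder size.
buf = cp.empty(1_000_000, dtype=cp.float32)

# "embeddings.bin" is a hypothetical file of raw float32 values on NVMe storage.
f = kvikio.CuFile("embeddings.bin", "r")
nbytes = f.read(buf)   # DMA directly into GPU memory when GPUDirect Storage is available
f.close()

print(f"read {nbytes} bytes into a GPU array of shape {buf.shape}")
```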

The platform integrates NVIDIA NeMo, a cloud-native framework simplifying the creation, customization, and deployment of generative AI models. NeMo offers customization frameworks, guardrail toolkits, data curation tools, and pre-trained models. It provides enterprises with an efficient, cost-effective, and expeditious path to adopting generative AI. For production deployment, NeMo leverages TensorRT for Large Language Models (TRT-LLM), optimizing inference performance on the latest LLMs on NVIDIA GPUs.
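As a rough illustration of the inference side, the sketch below assumes the high-level Python `LLM` API that recent TensorRT-LLM releases provide; the model identifier and prompts are placeholders, and exact class names and availability vary by version.

```python
# Minimal sketch: generate text with TensorRT-LLM's high-level Python API.
from tensorrt_llm import LLM, SamplingParams

# The LLM class builds or loads a TensorRT engine for the model and serves
# generation requests on NVIDIA GPUs. Model name and prompts are placeholders.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
sampling = SamplingParams(temperature=0.2, top_p=0.95)

prompts = ["Summarise our internal VPN setup guide in three bullet points."]
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```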


VMware Private AI Foundation with NVIDIA receives robust support from Dell Technologies, Hewlett Packard Enterprise, and Lenovo. These partners will offer systems equipped with NVIDIA L40S GPUs, NVIDIA BlueField®-3 DPUs, and NVIDIA ConnectX®-7 SmartNICs. These components will supercharge enterprise LLM customization and inference workloads.

VMware aims to release VMware Private AI Foundation with NVIDIA in early 2024, marking the continuation of a decade-long partnership that has optimized VMware’s cloud infrastructure to run NVIDIA AI Enterprise with the performance of bare metal.
