Google Cloud and NVIDIA expand partnership to scale AI development

Google Cloud and NVIDIA announced a deepened partnership to equip the machine learning (ML) community with technology that accelerates its efforts to build, scale, and manage generative AI applications.

To continue bringing AI breakthroughs to its products and developers, Google announced its adoption of the new NVIDIA Grace Blackwell AI computing platform, as well as the NVIDIA DGX Cloud service on Google Cloud. Additionally, the NVIDIA H100-powered DGX Cloud platform is now generally available on Google Cloud.

Building on their recent collaboration to optimize the Gemma family of open models, Google will also adopt NVIDIA NIM inference microservices to give developers an open, flexible platform for training and deploying models with their preferred tools and frameworks. The companies also announced support for JAX on NVIDIA GPUs and Vertex AI instances powered by NVIDIA H100 and L4 Tensor Core GPUs.

“The strength of our long-lasting partnership with NVIDIA begins at the hardware level and extends across our portfolio – from state-of-the-art GPU accelerators, to the software ecosystem, to our managed Vertex AI platform,” said Google Cloud CEO Thomas Kurian. “Together with NVIDIA, our team is committed to providing a highly accessible, open and comprehensive AI platform for ML developers.”

The new integrations between NVIDIA and Google Cloud build on the companies’ longstanding commitment to providing the AI community with leading capabilities at every layer of the AI stack. Key components of the partnership expansion include:

  • Adoption of NVIDIA Grace Blackwell: The new Grace Blackwell platform enables organizations to build and run real-time inference on trillion-parameter large language models. Google is adopting the platform for various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances.
  • Grace Blackwell-powered DGX Cloud coming to Google Cloud: Google will bring NVIDIA GB200 NVL72 systems, which combine 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink®, to its highly scalable and performant cloud infrastructure. Designed for energy-efficient training and inference in an era of trillion-parameter LLMs, NVIDIA GB200 NVL72 systems will be available via DGX Cloud, an AI platform offering a serverless experience for enterprise developers building and serving LLMs. DGX Cloud is now generally available on Google Cloud A3 VM instances powered by NVIDIA H100 Tensor Core GPUs.
  • Support for JAX on GPUs: Google Cloud and NVIDIA collaborated to bring the advantages of JAX to NVIDIA GPUs, widening access to large-scale LLM training for the broader ML community. JAX is a compiler-oriented, Python-native framework for high-performance machine learning, making it one of the easiest to use and most performant frameworks for LLM training. AI practitioners can now use JAX with NVIDIA H100 GPUs on Google Cloud through MaxText and Accelerated Processing Kit (XPK); a minimal sketch of JAX running on a GPU follows this list.
  • NVIDIA NIM on Google Kubernetes Engine (GKE): NVIDIA NIM inference microservices, a part of the NVIDIA AI Enterprise software platform, will be integrated into GKE. Built on inference engines including TensorRT-LLM, NIM helps speed up generative AI deployment in enterprises, supports a wide range of leading AI models and ensures seamless, scalable AI inferencing.
  • Support for NVIDIA NeMo: Google Cloud has made it easier to deploy the NVIDIA NeMo framework across its platform via Google Kubernetes Engine (GKE) and Google Cloud HPC Toolkit. This enables developers to automate and scale the training and serving of generative AI models, and it allows them to rapidly deploy turnkey environments through customizable blueprints that jump-start the development process. NVIDIA NeMo, part of NVIDIA AI Enterprise, is also available in the Google Cloud Marketplace, providing customers with another way to easily access NeMo and other frameworks to accelerate AI development.
  • Vertex AI and Dataflow expand support for NVIDIA GPUs: To advance data science and analytics, Vertex AI now supports Google Cloud A3 VMs powered by NVIDIA H100 GPUs and G2 VMs powered by NVIDIA L4 Tensor Core GPUs. This provides MLOps teams with scalable infrastructure and tooling to confidently manage and deploy AI applications. Dataflow has also expanded support for accelerated data processing on NVIDIA GPUs.
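
The JAX support described above can be illustrated with a brief, hypothetical sketch. It is not taken from the announcement: it only uses standard JAX APIs (jax.devices, jax.jit, jax.numpy, jax.nn.softmax) to compile a small computation through XLA, which is what executes on NVIDIA H100 GPUs when JAX runs on Google Cloud A3 instances; the MaxText and XPK tooling mentioned above orchestrates this kind of workload at training scale. It assumes a CUDA-enabled JAX installation (for example, pip install "jax[cuda12]").

    # Minimal, hypothetical sketch of JAX on an NVIDIA GPU; not from the announcement.
    import jax
    import jax.numpy as jnp

    print(jax.devices())  # expect CUDA devices to be listed when a GPU is visible

    @jax.jit  # XLA-compiles the function for the default backend (the GPU here)
    def attention_scores(q, k):
        # Scaled dot-product attention scores, the core operation in LLM training.
        return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

    key = jax.random.PRNGKey(0)
    q = jax.random.normal(key, (128, 64))
    k = jax.random.normal(key, (128, 64))
    print(attention_scores(q, k).shape)  # (128, 128), computed on the accelerator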

Google Cloud has long offered GPU VM instances powered by NVIDIA’s cutting-edge hardware coupled with leading Google innovations. NVIDIA GPUs are a core component of the Google Cloud AI Hypercomputer – a supercomputing architecture that unifies performance-optimized hardware, open software and flexible consumption models. The holistic partnership enables AI researchers, scientists and developers to train, fine-tune and serve the largest and most sophisticated AI models – now with even more of their favorite tools and frameworks jointly optimized and available on Google Cloud.
