Nvidia and Dell look to breathe new life into AI on premises with Project Helix

Dell and Nvidia are extending their long-standing partnership with the new Project Helix initiative, bringing the power of generative AI to on-premises enterprise deployments.

Project Helix is an effort to combine hardware, software and services from the two vendors to help enterprises benefit from the emerging capabilities of large language models (LLMs) and generative AI. The initiative will include validated blueprints and reference deployments to help organizations deploy generative AI workloads.

On the hardware side, Dell PowerEdge servers, including the PowerEdge XE9680 and R760xa, will be equipped with Nvidia H100 Tensor Core GPUs. The hardware stack ties into Dell PowerScale and Dell ECS enterprise object storage. The software stack includes Nvidia AI Enterprise as well as capabilities from the Nvidia NeMo framework for generative AI.

To date, much of the work in generative AI has involved the cloud, but that’s not necessarily where all enterprises want to run workloads.

“We’re bringing security and privacy to enterprise customers,” Kari Briski, VP of software product management at Nvidia, told VentureBeat. “Every enterprise needs an LLM for their business, so it just makes sense to do this on premises.”

Project Helix looks to enable LLMops for enterprises

The reality for many enterprises is that there is no need to build a new LLM from scratch. Rather, most enterprises will customize a pre-trained foundation model to understand the organization’s data.
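
For illustration only (this is not part of the Dell-Nvidia announcement), such customization often takes the shape of parameter-efficient fine-tuning: attaching small trainable adapters to a frozen pre-trained model so only a fraction of the weights is updated on the organization’s data. The sketch below uses the Hugging Face transformers and peft libraries; the base checkpoint, adapter settings and training data are placeholder assumptions.

# Minimal sketch: parameter-efficient customization of a pre-trained
# causal language model with LoRA adapters. "gpt2" is a stand-in for
# whatever foundation model an enterprise starts from.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters; only these small matrices are trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# ... train on the organization's domain text with a standard
# causal-language-modeling objective, then serve the adapted model.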

Briski acknowledged that “generative AI” is a much-hyped buzzword. The combination of Dell hardware with Nvidia hardware and software is also about enabling what Briski referred to as LLMops: being able to operationalize an LLM for enterprise use cases.

Nvidia and Dell are hardly strangers: The two vendors have been partnering on hardware solutions for years. Briski emphasized, however, that Project Helix is different from what the two companies have been collaborating on to date.

“What we haven’t been doing is providing these pre-trained foundational models in a way that’s easily replicable,” she said.

Benefiting from AI, no matter the deployment

Briski explained that Project Helix blueprints will provide guidance to help enterprises deploy generative AI workloads customized for an organization’s specific use case. She noted that it can be daunting for an organization to optimize a model for latency and throughput in real time.
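
As a rough illustration (not drawn from the blueprints themselves), the snippet below measures the two quantities in question: per-request latency and aggregate token throughput. The generate() callable is a hypothetical wrapper around whatever model endpoint an organization deploys.

# Sketch: benchmark median latency and token throughput for a deployed
# LLM, given a generate(prompt, max_new_tokens) callable (assumed here).
import time

def benchmark(generate, prompts, max_new_tokens=128):
    latencies, tokens = [], 0
    start = time.perf_counter()
    for prompt in prompts:
        t0 = time.perf_counter()
        output = generate(prompt, max_new_tokens=max_new_tokens)
        latencies.append(time.perf_counter() - t0)
        tokens += len(output.split())  # crude proxy for generated tokens
    wall = time.perf_counter() - start
    return {
        "p50_latency_s": sorted(latencies)[len(latencies) // 2],
        "throughput_tok_per_s": tokens / wall,
    }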

Varun Chhabra, SVP of product marketing for the infrastructure solutions group and telecom at Dell, told VentureBeat that it is critical to understand how compute, storage and networking work together to enable a real-time generative AI workload. Determining the right mix of compute resources is important, and the best practices for doing so are encapsulated within the Project Helix initiative.

By running generative AI on Dell hardware, Chhabra expects that organizations will also be able to benefit from AI wherever they want to deploy it, be it on-premises, at the edge or in the cloud.

Chhabra is particularly optimistic about the potential for Project Helix. The name Helix is a nod to the double-helix structure of DNA, the molecule that carries the genetic blueprint of life on Earth.

“If you think about the double helix and what it means to life, we felt it was a very appropriate metaphor for what we think is happening with generative AI, in terms of not just transforming people’s lives, but more specifically what will happen within enterprises and what this will unlock for our customers,” said Chhabra.
