Dell Technologies Launches Next-Gen PowerEdge Servers for AI Workloads

Posted On May 12, 2023

Dell Technologies has announced the release of its next-generation PowerEdge rack configuration servers, designed to support artificial intelligence (AI) workloads from edge to core. The new servers come with a range of GPU-optimized models that can handle a variety of AI use cases, including AI at the edge and telecom.

The latest PowerEdge servers from Dell Technologies are designed to meet the growing need for infrastructure that supports AI-related applications. With AI rapidly transforming industries, the right infrastructure is essential to sustain the technology’s continued growth.

The new PowerEdge servers can be installed inside or outside the data center and come with a range of cooling options, including eight-way NVLink peer-to-peer air cooling, four-way NVLink peer-to-peer liquid-assisted air cooling, and direct liquid cooling.

These choices let customers match cooling to their needs as they plan infrastructure growth to support AI-related applications.

One of the most significant improvements in the new PowerEdge servers is performance. In MLPerf Inference 3.0 machine learning (ML) performance testing, the XE9680 with eight Nvidia H100 GPUs and NVLink demonstrated up to an eightfold improvement over the previous generation on metrics spanning demanding AI training, generative AI model training and fine-tuning, and AI inferencing.
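
The headline gains on the XE9680 come from keeping all eight NVLink-connected GPUs busy at once. As a rough illustration of that kind of workload (not Dell’s or MLCommons’ benchmark code), the sketch below runs data-parallel training across the eight GPUs in a single node with PyTorch’s DistributedDataParallel; the toy model, learning rate, and launch command are illustrative assumptions.

```python
# Minimal sketch: single-node, eight-GPU data-parallel training.
# Launch with: torchrun --nproc_per_node=8 train_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets LOCAL_RANK for each of the eight per-GPU processes.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")  # NCCL uses NVLink peer-to-peer where available

    # Toy model stands in for a real training or fine-tuning workload.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(64, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()          # gradients are all-reduced across GPUs over NCCL/NVLink
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```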

The R760xa with Nvidia H100 and the XR5610 with Nvidia L4 also showed exceptional results for data center and edge inferencing, with high performance per watt for edge applications. These advancements in GPU-optimized servers and Intel Xeon processors lay the foundation for new AI training and inferencing software, generative AI models, AI DevOps tools, and AI applications.
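
Performance per watt is simply throughput divided by average board power while the workload runs. The sketch below shows one way such a figure might be estimated on a single GPU (not Dell’s methodology); the toy model, batch size, and NVML sampling approach are illustrative assumptions.

```python
# Minimal sketch: estimate inferences/second/watt on one GPU by timing
# forward passes while sampling board power through NVML.
import time

import pynvml  # pip install nvidia-ml-py
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Placeholder model and input for a real inference workload.
model = torch.nn.Sequential(
    torch.nn.Linear(2048, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 1000)
).cuda().eval()
x = torch.randn(256, 2048, device="cuda")

samples_mw, n_batches = [], 200
start = time.time()
with torch.inference_mode():
    for _ in range(n_batches):
        model(x)
        torch.cuda.synchronize()                                   # wait for the batch to finish
        samples_mw.append(pynvml.nvmlDeviceGetPowerUsage(handle))  # board power in milliwatts
elapsed = time.time() - start

throughput = n_batches * x.shape[0] / elapsed              # inferences per second
avg_watts = sum(samples_mw) / len(samples_mw) / 1000.0     # average board power in watts
print(f"{throughput:.0f} inf/s at {avg_watts:.0f} W -> {throughput / avg_watts:.1f} inf/s/W")

pynvml.nvmlShutdown()
```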

Intel’s 4th Generation Xeon Scalable processors also offer significant improvements for AI workloads. The R760 with 4th Generation Intel Xeon Scalable processors uses Intel Advanced Matrix Extensions (AMX) to deliver up to an 8x improvement in inference throughput. With AMX, developers can boost the performance of AI workloads while continuing to use the standard x86 instruction set for non-AI workloads.
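
In practice, developers usually reach AMX through a framework rather than hand-written intrinsics. The hedged sketch below runs CPU inference in PyTorch under bfloat16 autocast; on a 4th Gen Xeon, the underlying oneDNN matrix-multiply kernels can dispatch to AMX tiles, while the rest of the program keeps executing ordinary x86 code. The model and shapes are placeholders, not anything from Dell’s or Intel’s testing.

```python
# Minimal sketch: CPU inference with bfloat16 autocast, which lets
# oneDNN use Intel AMX on supported Xeon processors when available.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

x = torch.randn(32, 1024)

with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)   # matmuls run through oneDNN; non-AI code is unaffected

print(out.shape)
```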

These technological advancements in infrastructure are necessary to keep up with the growing demand for AI and the increased complexity of the models and workloads being developed. Dell Technologies’ latest release of AI- and ML-enabled platforms provides the flexibility that end-users need to create AI applications that span from core to edge.

In conclusion, Dell Technologies’ latest PowerEdge servers are paving the way for a more efficient and powerful AI future both inside and outside the data center, giving organizations infrastructure that can keep pace as AI models and workloads grow more complex.
