The global transformation is rapidly scaling the demand for flexible compute, networking, and storage. Future workloads will necessitate infrastructures that can seamlessly scale to support immediate responsiveness and widely diverse performance requirements. The exponential growth of data generation and consumption, the rapid expansion of cloud-scale computing and 5G networks, and the convergence of high-performance computing (HPC) and artificial intelligence (AI) into new usages require that today's data centers and networks evolve now, or risk being left behind in a highly competitive environment. The ROC300-TA45, built on the 64-bit Intel Xeon Scalable (Ice Lake) processor, is capable of accelerating deep-learning AI workloads while also running and consolidating real-time and mission-critical workloads with near-zero latency.
7Starlake’s ROC300-TA45 is a 3U GPGPU server featuring an Intel Xeon Silver 4310 Scalable processor (12 cores), 2x NVIDIA RTX A4500 GPUs, 256 GB of DDR4 memory (up to 2 TB), and 24 TB of NVMe + SATA III storage, providing a seamless performance foundation for the data-centric era, from the multi-cloud to the intelligent edge and back. The Intel Xeon Scalable platform provides the foundation for an evolutionary leap forward in data center agility and scalability. Disruptive by design, this innovative processor sets a new level of platform convergence and capabilities across compute, storage, memory, network, and security.
The ROC300-TA45 enables a new level of consistent, pervasive, and breakthrough AI inference performance for machine learning and deep learning. In addition to the NVIDIA RTX A4500 GPUs, the ROC300-TA45 provides 4x M.2 NVMe slots for fast storage access. Combining outstanding inference performance, a powerful CPU, and rich expansion capability, it is an ideal platform for versatile edge AI applications.
The ROC300-TA45 3U GPGPU server is an AI inference platform designed for advanced inference acceleration applications such as voice, video, image, and recommendation services. It supports 2x NVIDIA® RTX A4500 GPUs, each delivering 17.66 TFLOPS of FP32 performance with 184 Tensor Cores and 5888 CUDA cores for real-time inference.
• NVIDIA embedded GPU based on the Ampere architecture
• Standard MXM 3.1 Type A/B form factor
• PCIe Gen 4 up to x16 interface
• Up to 5888 CUDA® cores, 46 RT Cores, and 184 Tensor Cores
• Up to 17.66 TFLOPS peak FP32 performance
• Up to 16GB GDDR6 memory, 256-bit
• Up to 512 GB/s peak memory bandwidth
• Supports up to 4x DisplayPort 1.4a displays; 115 W TGP
• 5-year availability
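The headline figures above are internally consistent: 17.66 TFLOPS of FP32 follows from 5888 CUDA cores at an implied ~1.5 GHz boost clock (2 FLOPs per core per cycle via fused multiply-add), and 512 GB/s follows from a 256-bit bus at an implied 16 Gbps per pin. A minimal sanity-check sketch; the boost clock and per-pin data rate are inferred from the stated totals, not quoted from NVIDIA's datasheet:

```python
# Sanity-check the RTX A4500 headline figures from the spec list above.
# BOOST_CLOCK_HZ and DATA_RATE_GBPS are *inferred* from the stated
# totals (assumptions), not taken from an official datasheet.

CUDA_CORES = 5888
FLOPS_PER_CORE_PER_CYCLE = 2     # one FMA counts as 2 FLOPs
BOOST_CLOCK_HZ = 1.5e9           # implied by 17.66 TFLOPS / (5888 * 2)

peak_fp32_tflops = CUDA_CORES * FLOPS_PER_CORE_PER_CYCLE * BOOST_CLOCK_HZ / 1e12

BUS_WIDTH_BITS = 256
DATA_RATE_GBPS = 16              # GDDR6, implied by 512 GB/s / (256 / 8)

peak_bw_gbs = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS

print(f"Peak FP32: {peak_fp32_tflops:.2f} TFLOPS")       # 17.66
print(f"Peak memory bandwidth: {peak_bw_gbs:.0f} GB/s")  # 512
```

The same two formulas (cores x 2 x clock, and bus width x data rate) apply to any GPU in this class, which makes them a quick way to cross-check vendor spec sheets.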