
NVIDIA Tesla V100 32GB Graphics Card


  • 7 TFLOPS Double-Precision Performance
  • 14 TFLOPS Single-Precision Performance
  • NVIDIA Volta Architecture
  • 5120 CUDA Cores
  • 640 Tensor Cores
  • 32GB of HBM2 VRAM
  • PCIe 3.0 x16 Interface
  • Passive Heatsink

Sale price: $8,500.00 (15% off the regular price of $9,990.00)

Add extra compute power to your PC with the Tesla V100 Graphics Card from NVIDIA. The Tesla V100 is a powerful accelerator card designed for deep learning, machine learning, high-performance computing (HPC), and of course, graphics. It suits a wide range of users: whether you are rendering demanding graphics or developing research technology, the Tesla V100 has the capabilities to increase productivity and overall system performance.

Powered by NVIDIA’s Volta architecture, a single V100 GPU can offer the performance of almost 32 CPUs. Volta pairs 5120 CUDA cores with 640 Tensor cores, allowing a single server of V100 GPUs to replace hundreds of commodity CPU servers for traditional HPC and deep learning. The V100 delivers 7 TFLOPS of double-precision and 14 TFLOPS of single-precision performance.
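
As a rough sanity check on those figures (the ~1.38 GHz boost clock used here is an assumption taken from the PCIe variant's published specs, not stated in this listing): 5120 CUDA cores × 2 FLOPs per clock (one fused multiply-add) × 1.38 GHz ≈ 14 TFLOPS single-precision, and the double-precision units run at half that rate, giving roughly 7 TFLOPS.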

Volta Architecture

By pairing CUDA and Tensor Cores within a unified architecture, a single server with V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning.

Tensor Cores

Equipped with 640 Tensor Cores, V100 delivers 130 TFLOPS of deep learning performance. That’s 12X Tensor FLOPS for deep learning training and 6X Tensor FLOPS for deep learning inference when compared to NVIDIA Pascal GPUs.
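
To show what that mixed-precision capability looks like in software, here is a minimal sketch (not part of NVIDIA's listing) that asks cuBLAS to run an FP16 matrix multiply with FP32 accumulation, the operation the Tensor Cores accelerate. The matrix size, the omitted error handling, and the choice of cublasGemmEx with CUBLAS_GEMM_DEFAULT_TENSOR_OP are illustrative assumptions.

```cuda
// Minimal sketch: FP16 GEMM with FP32 accumulation via cuBLAS,
// the mixed-precision pattern the V100 Tensor Cores accelerate.
// Matrix size (1024 x 1024) and lack of error checks are illustrative only.
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 1024;                       // square matrices for simplicity
    __half *A, *B;                            // FP16 inputs
    float  *C;                                // FP32 output
    cudaMalloc(&A, n * n * sizeof(__half));
    cudaMalloc(&B, n * n * sizeof(__half));
    cudaMalloc(&C, n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Allow cuBLAS to dispatch to Tensor Core kernels where available.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, FP16 inputs, FP32 compute.
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 n, n, n,
                 &alpha,
                 A, CUDA_R_16F, n,
                 B, CUDA_R_16F, n,
                 &beta,
                 C, CUDA_R_32F, n,
                 CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);

    cudaDeviceSynchronize();
    printf("GEMM issued\n");
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```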

Next-Gen NVLink

NVIDIA NVLink in the V100 delivers 2X higher throughput compared to the previous generation. Up to eight V100 accelerators can be interconnected at up to 300 GB/s to maximize application performance on a single server.
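
In a multi-GPU server, software typically exploits that interconnect through CUDA peer-to-peer access. The sketch below (not from NVIDIA's listing) enables P2P between two GPUs and performs a device-to-device copy; the device indices and the 256 MB buffer size are assumptions, and the traffic only travels over NVLink when the GPUs are actually linked.

```cuda
// Minimal sketch: enable peer-to-peer access between two GPUs so that
// cudaMemcpyPeer and direct loads/stores can use NVLink where present.
// Device indices 0 and 1 and the 256 MB buffer are assumptions.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    if (!canAccess01 || !canAccess10) {
        printf("Peer access not supported between GPU 0 and GPU 1\n");
        return 1;
    }

    // Enable P2P in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    const size_t bytes = 256u << 20;          // 256 MB test buffer
    void *buf0, *buf1;
    cudaSetDevice(0); cudaMalloc(&buf0, bytes);
    cudaSetDevice(1); cudaMalloc(&buf1, bytes);

    // GPU-to-GPU copy; routed over NVLink when the GPUs are linked.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();
    printf("Copied %zu bytes from GPU 0 to GPU 1\n", bytes);

    cudaFree(buf1);
    cudaSetDevice(0); cudaFree(buf0);
    return 0;
}
```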

Maximum Efficiency Mode

The maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget. In this mode, V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.
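
In practice this comes down to running the board under a power cap. As a hedged illustration (not NVIDIA's listing), the host-only sketch below uses the NVML C API to query the allowed power-limit range and apply a cap; the 150 W value is an assumption, and setting a limit requires administrative privileges.

```cuda
// Minimal sketch (host-only, links against NVML): query a GPU's allowed
// power-limit range and apply a cap, the mechanism behind running the
// V100 in a power-limited, efficiency-oriented configuration.
// The 150,000 mW cap is an assumption; setting it requires root privileges.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);      // first GPU in the system

    unsigned int minLimit = 0, maxLimit = 0;  // milliwatts
    nvmlDeviceGetPowerManagementLimitConstraints(dev, &minLimit, &maxLimit);
    printf("Allowed power limit: %u..%u mW\n", minLimit, maxLimit);

    // Cap the board at 150 W (assumed value within the allowed range).
    unsigned int capMilliwatts = 150000;
    nvmlReturn_t rc = nvmlDeviceSetPowerManagementLimit(dev, capMilliwatts);
    printf("Set power cap: %s\n", nvmlErrorString(rc));

    nvmlShutdown();
    return 0;
}
```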

HBM2

With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, V100 delivers 1.5X higher memory bandwidth over Pascal GPUs as measured on STREAM. V100 comes in a 32GB VRAM configuration.
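
Since the listing cites STREAM, here is a minimal STREAM-style triad sketch (a[i] = b[i] + s * c[i]) of the kind used to estimate achievable memory bandwidth on the card; the 256M-element arrays, launch configuration, and event-based timing are illustrative assumptions, not NVIDIA's benchmark.

```cuda
// Minimal sketch of a STREAM-style triad used to estimate HBM2 bandwidth.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void triad(float *a, const float *b, const float *c, float s, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) a[i] = b[i] + s * c[i];
}

int main() {
    const size_t n = 256u << 20;              // 256M floats per array (1 GiB each)
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));
    cudaMemset(c, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int threads = 256;
    const int blocks = (int)((n + threads - 1) / threads);
    cudaEventRecord(start);
    triad<<<blocks, threads>>>(a, b, c, 3.0f, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Triad touches three arrays: two reads plus one write per element.
    double gbMoved = 3.0 * n * sizeof(float) / 1e9;
    printf("Triad bandwidth: %.1f GB/s\n", gbMoved / (ms / 1000.0));

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```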

Programmability

V100 is architected from the ground up to simplify programmability. Its independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.
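
To make that concrete, the short kernel below (an illustrative sketch, not NVIDIA's example) relies on the explicit warp-level synchronization that Volta's independent thread scheduling supports: threads in a warp take divergent branches, publish data to shared memory, and reconverge at a __syncwarp() before reading each other's values. The kernel name and data layout are assumptions.

```cuda
// Minimal sketch of fine-grain, warp-level synchronization on Volta:
// divergent lanes write to shared memory, then sync with __syncwarp()
// before reading the neighbouring lane's value.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void pairExchange(int *out) {
    __shared__ int buf[32];
    int lane = threadIdx.x & 31;

    if (lane % 2 == 0) {
        buf[lane] = lane;          // even lanes publish their lane index...
    } else {
        buf[lane] = lane * 100;    // ...odd lanes publish a scaled value
    }
    // Explicit warp-level sync point: Volta tracks each thread's program
    // counter independently, so reconvergence must be requested, not assumed.
    __syncwarp();

    out[lane] = buf[lane ^ 1];     // read the neighbouring lane's value
}

int main() {
    int *d_out, h_out[32];
    cudaMalloc(&d_out, 32 * sizeof(int));
    pairExchange<<<1, 32>>>(d_out);
    cudaMemcpy(h_out, d_out, 32 * sizeof(int), cudaMemcpyDeviceToHost);
    printf("lane 0 received %d, lane 1 received %d\n", h_out[0], h_out[1]);
    cudaFree(d_out);
    return 0;
}
```
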

Specifications

GPU
  GPU Model: NVIDIA Tesla V100
  CUDA Cores: 5120
  Tensor Cores: 640
  Interface: PCI Express 3.0 x16
  Supported APIs: OpenCL, CUDA, DirectCompute, OpenACC

Memory
  Memory Type: HBM2
  Memory Configuration: 32 GB
  ECC Memory: Yes
  Memory Bandwidth: 900 GB/s

Power Requirements
  Max Power Consumption: 250 W
  PCI Power Connectors: 1 x 8-Pin

Dimensions
  Height: 4.38″ / 111.15 mm
  Length: 10.5″ / 266.7 mm
  Width: Dual-Slot

General
  Cooler Type: Passive Heatsink
  Weight: 2.6 lb / 1196 g

Packaging Info
  Package Weight: 3.12 lb
  Box Dimensions (L x W x H): 11.7 x 6.9 x 3.1″
