NVIDIA H100 - GPU computing processor - NVIDIA H100 Tensor Core - 80 GB HBM2e - PCIe 5.0 x16 - for NVIDIA DGX H100
- GPU Family: NVIDIA
- GPU: H100
- Video Memory: 80GB
- Video Memory Type: HBM2e
H100 Tensor Core GPU PCIe
Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With the NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, and a dedicated Transformer Engine accelerates trillion-parameter language models. H100's combined technology innovations can speed up large language models by 30X over the previous generation, delivering industry-leading conversational AI.
Specifications:
- Form factor: H100 PCIe, dual-slot, air-cooled
- FP64: 26 teraFLOPS
- FP64 Tensor Core: 51 teraFLOPS
- FP32: 51 teraFLOPS
- TF32 Tensor Core: 756 teraFLOPS
- BFLOAT16 Tensor Core: 1,513 teraFLOPS
- FP16 Tensor Core: 1,513 teraFLOPS
- FP8 Tensor Core: 3,026 teraFLOPS
- INT8 Tensor Core: 3,026 TOPS
- GPU memory: 80GB
- GPU memory bandwidth: 2TB/s
- Decoders: 7 NVDEC, 7 JPEG
- Max thermal design power (TDP): 300-350W (configurable)
- Multi-Instance GPU: up to 7 MIGs @ 10GB each
- Interconnect: NVLink 600GB/s; PCIe Gen5 128GB/s
- Server options: Partner and NVIDIA-Certified Systems with 1-8 GPUs
- NVIDIA AI Enterprise: included
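As a quick sanity check on the interconnect figure, the PCIe Gen5 number follows from the per-lane signaling rate: PCIe 5.0 runs at 32 GT/s per lane, so a x16 link carries about 64 GB/s per direction, or roughly 128 GB/s bidirectional as quoted above. A minimal sketch of that arithmetic (the parameters here are general PCIe 5.0 figures, not vendor data; the quoted 128 GB/s ignores the small 128b/130b encoding overhead):

```python
# Back-of-the-envelope PCIe 5.0 x16 bandwidth, compared with the 128 GB/s spec line.
GT_PER_LANE = 32          # PCIe 5.0 raw signaling rate: 32 GT/s per lane
ENCODING = 128 / 130      # 128b/130b line-encoding efficiency
LANES = 16                # the H100 PCIe card uses a x16 link

# Effective bandwidth (1 GT/s carries ~1 Gb/s of raw symbols)
per_direction_gbps = GT_PER_LANE * ENCODING * LANES   # gigabits per second, one way
per_direction_GBs = per_direction_gbps / 8            # gigabytes per second, one way
bidirectional_GBs = per_direction_GBs * 2

print(round(per_direction_GBs, 1))   # ~63.0 GB/s each way
print(round(bidirectional_GBs, 1))   # ~126.0 GB/s total, marketed as 128 GB/s
```

The quoted 128 GB/s is the raw bidirectional rate (32 GT/s x 16 lanes / 8 bits x 2 directions) before encoding overhead.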
Please note that although care has been taken to ensure relevance, pictures are for display purposes only, and product appearance may differ from what you see. If there are any discrepancies between the product headline, description, and picture, the correct information is in the product headline (e.g. computer systems may not come with monitors even if pictured that way). If anything is unclear, please email support before placing your order.