GPU

NVIDIA H100 SXM

Integrated Memory (VRAM)
Capacity
80 GB (HBM3, 5120-bit)

Bandwidth
3361 GB/s
480 Token/s
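
The 480 Token/s figure is not defined elsewhere on this page; a plausible reading is a memory-bandwidth-bound decode estimate, since 3361 GB/s divided by roughly 7 GB of weights gives about 480 tokens per second. Below is a minimal back-of-the-envelope sketch of that reading; the 7-billion-parameter model size and the 1-byte-per-weight storage are assumptions, not values from this page.

```python
# Memory-bandwidth-bound decode estimate (hypothetical workload).
# Assumption: each generated token streams all model weights from HBM
# once; the 7B-parameter, 1-byte-per-weight model is an assumed
# example, not a figure taken from this spec page.
BANDWIDTH_GB_S = 3361      # H100 SXM HBM3 bandwidth from this page
PARAMS_BILLIONS = 7        # assumed model size
BYTES_PER_WEIGHT = 1       # assumed INT8/FP8 weight storage

weights_gb = PARAMS_BILLIONS * BYTES_PER_WEIGHT
tokens_per_second = BANDWIDTH_GB_S / weights_gb
print(f"~{tokens_per_second:.0f} tokens/s")  # -> ~480 tokens/s
```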

Vector Compute
FP64: 33.45 TFLOPS
FP32: 66.91 TFLOPS
FP16: 133.80 TFLOPS
BF16: 133.80 TFLOPS
INT32: 33.45 TOPS
INT8: Not supported

NVIDIA H100 SXM General-Purpose Floating-Point Performance (Vector Performance / Scalar Performance)

FP64: 33.45 TFLOPS

FP32: 66.91 TFLOPS

FP16: 133.80 TFLOPS

BF16: 133.80 TFLOPS

INT32: 33.45 TOPS
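
These vector figures follow directly from the core count and boost clock listed under Hardware Specs below: each CUDA core retires one fused multiply-add (2 FLOPs) per cycle at FP32, with FP16/BF16 at twice and FP64 at half the FP32 rate on this part. A minimal sketch of that arithmetic (the 2-FLOPs-per-cycle FMA convention is standard practice, not a value stated on this page):

```python
# Peak vector throughput from core count and boost clock.
# Convention: 1 FMA = 2 FLOPs per CUDA core per cycle at FP32;
# FP16/BF16 run at 2x and FP64 at 1/2 the FP32 rate on H100 SXM.
CUDA_CORES = 16896
BOOST_CLOCK_GHZ = 1.980

fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1e3
fp16_tflops = 2 * fp32_tflops
fp64_tflops = fp32_tflops / 2
# -> FP32 ~66.91, FP16 ~133.82, FP64 ~33.45 TFLOPS (small differences
#    from the listed values come from clock rounding)
print(f"FP32 {fp32_tflops:.2f} | FP16 {fp16_tflops:.2f} | FP64 {fp64_tflops:.2f} TFLOPS")
```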

Matrix Compute (dense / with sparsity)
FP64: 66.91 TFLOPS / 133.82 TFLOPS
FP32: Not supported
FP16: 989.40 TFLOPS / 1978.80 TFLOPS
FP8: 1979 TFLOPS / 3958 TFLOPS
TF32: 494.70 TFLOPS / 989.40 TFLOPS
BF16: 989.40 TFLOPS / 1978.80 TFLOPS
INT16: Not supported
INT8: 1979 TOPS / 3958 TOPS
INT4: Not supported

NVIDIA H100 SXM AI performance (Tensor Performance / Matrix Performance)

FP64: 66.91 TFLOPS, with sparsity: 133.82 TFLOPS

FP16: 989.40 TFLOPS, with sparsity: 1978.80 TFLOPS

FP8: 1979 TFLOPS, with sparsity: 3958 TFLOPS

TF32: 494.70 TFLOPS, with sparsity: 989.40 TFLOPS

BF16: 989.40 TFLOPS, with sparsity: 1978.80 TFLOPS

INT8: 1979 TOPS, with sparsity: 3958 TOPS
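
Every "with sparsity" figure above is exactly double its dense counterpart: the listed sparse rates correspond to the Tensor Cores' 2:4 structured-sparsity mode, which skips the zeroed half of each four-element weight group and doubles peak throughput. A minimal sketch of that relationship, using the dense values from the table:

```python
# Dense Tensor Core peak rates from the table above (TFLOPS / TOPS).
DENSE = {"FP64": 66.91, "TF32": 494.70, "FP16": 989.40,
         "BF16": 989.40, "FP8": 1979.0, "INT8": 1979.0}

# 2:4 structured sparsity doubles peak throughput for every listed format.
WITH_SPARSITY = {fmt: 2 * rate for fmt, rate in DENSE.items()}
print(WITH_SPARSITY["FP16"])  # 1978.8, matching the listed figure
```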

Hardware Specs
NVIDIA H100 SXM is a 5 nm chip with 80 billion transistors, launched by NVIDIA in 2023. It has 80 GB of built-in (on-board) HBM3 memory with bandwidth up to 3361 GB/s, along with 16896 general-purpose ALUs (CUDA cores/shader cores) and 528 matrix cores (Tensor cores).
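
One way to read the compute and bandwidth numbers together is as a roofline ridge point: dividing peak dense FP16 Tensor throughput by memory bandwidth gives the arithmetic intensity (FLOPs per byte of HBM traffic) a kernel needs before it becomes compute-bound rather than bandwidth-bound. The roofline framing is an interpretation of the figures on this page, not something the page itself states; a minimal sketch:

```python
# Roofline ridge point for H100 SXM, using figures from this page.
PEAK_FP16_TENSOR_TFLOPS = 989.40   # dense Tensor Core rate
HBM_BANDWIDTH_GB_S = 3361          # HBM3 bandwidth

ridge_flops_per_byte = (PEAK_FP16_TENSOR_TFLOPS * 1e12) / (HBM_BANDWIDTH_GB_S * 1e9)
print(f"~{ridge_flops_per_byte:.0f} FLOPs per byte of HBM traffic")  # -> ~294
```
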
Process Node
5 nm
Launch Year
2023

Vector(CUDA) Cores
16896
Matrix(Tensor) Cores
528
Core Frequency
1590 MHz (base) ~ 1980 MHz (boost)
Cache (L2)
50 MB
