GeForce RTX 2080 Ti 12 GB vs H100 PCIe


Primary details

GPU architecture, market segment, value for money and other general parameters compared.

All values below are listed as H100 PCIe vs GeForce RTX 2080 Ti 12 GB.

Place in the ranking: not rated vs not rated
Place by popularity: not in top-100 vs not in top-100
Architecture: Hopper (2022−2023) vs Turing (2018−2022)
GPU code name: GH100 vs TU102
Market segment: Workstation vs Desktop
Release date: 22 March 2022 (2 years ago) vs 12 October 2022 (2 years ago)

Detailed specifications

General parameters such as the number of shaders, GPU base and boost clock speeds, manufacturing process, and texturing and calculation speed are compared here. Note that the power consumption of some graphics cards can well exceed their nominal TDP, especially when overclocked.

Pipelines / CUDA cores: 7296 vs 4608
Core clock speed: 1065 MHz vs 1350 MHz
Boost clock speed: 1650 MHz vs 1635 MHz
Number of transistors: 80,000 million vs 18,600 million
Manufacturing process technology: 4 nm vs 12 nm
Power consumption (TDP): 350 Watt vs 250 Watt
Texture fill rate: 752.4 GTexel/s vs 470.9 GTexel/s
Floating-point processing power: 24.08 TFLOPS vs 15.07 TFLOPS (the fill rate and TFLOPS derivation is sketched below the table)
ROPs: 24 vs 96
TMUs: 456 vs 288
Tensor Cores: 456 vs 576
Ray Tracing Cores: no data vs 72
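
The fill rate and TFLOPS figures above are not measured results; they follow from the shader and TMU counts and the boost clock. Below is a minimal sketch, assuming each CUDA core retires two FP32 operations per clock (one fused multiply-add) and each TMU samples one texel per clock; these are the usual rules of thumb rather than vendor-confirmed formulas.

def peak_fp32_tflops(cuda_cores, boost_clock_mhz):
    # Assumes 2 FP32 operations (one FMA) per CUDA core per clock.
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

def texture_fill_rate_gtexel_s(tmus, boost_clock_mhz):
    # Assumes 1 texel filtered per TMU per clock.
    return tmus * boost_clock_mhz * 1e6 / 1e9

# H100 PCIe: 7296 CUDA cores, 456 TMUs, 1650 MHz boost clock
print(peak_fp32_tflops(7296, 1650))           # ~24.08 TFLOPS
print(texture_fill_rate_gtexel_s(456, 1650))  # ~752.4 GTexel/s

# GeForce RTX 2080 Ti 12 GB: 4608 CUDA cores, 288 TMUs, 1635 MHz boost clock
print(peak_fp32_tflops(4608, 1635))           # ~15.07 TFLOPS
print(texture_fill_rate_gtexel_s(288, 1635))  # ~470.9 GTexel/s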

Form factor & compatibility

Information on compatibility with other computer components, useful when planning a future build or upgrading an existing one. For desktop graphics cards this covers the interface and bus (motherboard compatibility) and any additional power connectors (power supply compatibility).

Interface: PCIe 5.0 x16 vs PCIe 3.0 x16
Length: 267 mm vs 267 mm
Width: 2-slot vs 2-slot
Supplementary power connectors: 8-pin EPS vs 2x 8-pin

VRAM capacity and type

Parameters of VRAM installed: its type, size, bus, clock and resulting bandwidth. Integrated GPUs have no dedicated video RAM and use a shared part of system RAM.

Memory type: HBM2e vs GDDR6
Maximum RAM amount: 80 GB vs 12 GB
Memory bus width: 5120-bit vs 384-bit
Memory clock speed: 1000 MHz vs 2000 MHz
Memory bandwidth: 1,280 GB/s vs 768.0 GB/s (derivation sketched below the table)
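
The bandwidth figures follow directly from the bus width and the effective data rate. Below is a minimal sketch, assuming the listed memory clock maps to the effective transfer rate via a x2 factor for HBM2e and a x8 factor for GDDR6; databases report memory clocks in different ways, so treat these multipliers as illustrative rather than authoritative.

def memory_bandwidth_gb_s(bus_width_bits, memory_clock_mhz, rate_multiplier):
    effective_rate_gt_s = memory_clock_mhz * rate_multiplier / 1000  # effective transfer rate in GT/s
    return (bus_width_bits / 8) * effective_rate_gt_s                # bytes per transfer x transfer rate

print(memory_bandwidth_gb_s(5120, 1000, 2))  # H100 PCIe (HBM2e)          -> 1280.0 GB/s
print(memory_bandwidth_gb_s(384, 2000, 8))   # RTX 2080 Ti 12 GB (GDDR6)  ->  768.0 GB/s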

Connectivity and outputs

Types and number of video connectors present on the reviewed GPUs. As a rule, data in this section is precise only for desktop reference cards (so-called Founders Edition for NVIDIA chips). OEM manufacturers may change the number and type of output ports, while for notebook cards the availability of certain video output ports depends on the laptop model rather than on the card itself.

Display connectors: no outputs vs 1x HDMI 2.0, 3x DisplayPort 1.4a, 1x USB Type-C
HDMI: no vs yes

API compatibility

List of supported 3D and general-purpose computing APIs, including their specific versions.

DirectX: N/A vs 12 Ultimate (12_2)
Shader Model: N/A vs 6.8
OpenGL: N/A vs 4.6
OpenCL: 3.0 vs 3.0
Vulkan: N/A vs 1.3
CUDA (compute capability): 9.0 vs 7.5 (see the query sketch below the table)
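
The CUDA row lists each chip's CUDA compute capability (9.0 for Hopper, 7.5 for Turing), which determines the CUDA features and kernel binaries a card can run. Below is a minimal sketch for checking the value an installed card actually reports; it assumes a CUDA-enabled PyTorch build is available, though any CUDA runtime binding (for example cudaGetDeviceProperties in C/C++) exposes the same information.

import torch

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{name}: compute capability {major}.{minor}")
    # An H100 PCIe reports 9.0; a GeForce RTX 2080 Ti reports 7.5.
else:
    print("No CUDA-capable device visible")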

Pros & cons summary


Recency: 22 March 2022 vs 12 October 2022
Maximum RAM amount: 80 GB vs 12 GB
Chip lithography: 4 nm vs 12 nm
Power consumption (TDP): 350 Watt vs 250 Watt

The H100 PCIe offers 566.7% more maximum VRAM (80 GB vs 12 GB) and is built on a more advanced manufacturing node (4 nm vs 12 nm, a 200% difference).

The RTX 2080 Ti 12 GB, on the other hand, has an age advantage of about six months (released 12 October 2022 vs 22 March 2022) and draws 100 W less power (250 W vs 350 W, roughly 29% lower).
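
The percentages in this summary follow the usual relative-difference convention: (larger − smaller) / smaller × 100. Below is a minimal sketch reproducing them from the table values above.

def percent_difference(larger, smaller):
    # Relative difference, expressed against the smaller value.
    return (larger - smaller) / smaller * 100

print(percent_difference(80, 12))    # VRAM: 80 GB vs 12 GB       -> ~566.7% more
print(percent_difference(12, 4))     # lithography: 12 nm vs 4 nm -> 200% (a 3x coarser node)
print(percent_difference(350, 250))  # TDP: 350 W vs 250 W        -> 40% higher for the H100,
                                     # i.e. roughly 29% lower for the RTX 2080 Ti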

We cannot pick a winner between the H100 PCIe and the GeForce RTX 2080 Ti 12 GB, as we have no benchmark results to judge by.

Be aware that H100 PCIe is a workstation graphics card while GeForce RTX 2080 Ti 12 GB is a desktop one.


If you still have questions about choosing between the reviewed GPUs, ask them in the comments section and we will answer.



Community ratings

User ratings of the compared graphics cards:

H100 PCIe: 3.7 out of 5 (68 votes)
GeForce RTX 2080 Ti 12 GB: 4.0 out of 5 (121 votes)

Questions & comments

Here you can ask a question about this comparison, agree or disagree with our judgements, or report an error or mismatch.