ATI Radeon X1300 LE vs GeForce GTX 580

Primary details

GPU architecture, market segment, value for money and other general parameters compared. Throughout this page, paired values are listed as GeForce GTX 580 first, then Radeon X1300 LE.

Place in the ranking: 458 / not rated
Place by popularity: not in top-100 / not in top-100
Cost-effectiveness evaluation: 1.83 / no data
Power efficiency: 3.49 / no data
Architecture: Fermi 2.0 (2010–2014) / R500 (2005–2007)
GPU code name: GF110 / RV515
Market segment: Desktop / Desktop
Release date: 9 November 2010 (15 years ago) / 5 October 2005 (20 years ago)
Launch price (MSRP): $499 / no data

Cost-effectiveness evaluation

The higher the ratio, the better. We use the manufacturer's recommended prices.

GeForce GTX 580: 1.83 / Radeon X1300 LE: no data

Performance to price scatter graph
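
For illustration only, here is a minimal performance-per-price sketch using figures from this page (the GTX 580's peak FP32 throughput and launch MSRP). It is not the site's own cost-effectiveness formula, which appears to be benchmark-based; the Radeon X1300 LE has no listed MSRP, hence "no data" above.

    # Illustrative performance-per-price sketch (not the site's exact metric).
    # Uses the GTX 580's 1.581 TFLOPS and $499 MSRP listed on this page.
    def gflops_per_dollar(tflops: float, msrp_usd: float) -> float:
        return tflops * 1000.0 / msrp_usd

    print(round(gflops_per_dollar(1.581, 499), 2))  # GTX 580 -> ~3.17 GFLOPS per dollar
    # Radeon X1300 LE: no MSRP listed, so no ratio can be computed.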

Detailed specifications

General parameters such as number of shaders, GPU core base clock and boost clock speeds, manufacturing process, texturing and calculation speed. Note that power consumption of some graphics cards can well exceed their nominal TDP, especially when overclocked.

Pipelines / CUDA cores: 512 / no data
Core clock speed: 772 MHz / 450 MHz
Number of transistors: 3,000 million / 107 million
Manufacturing process technology: 40 nm / 90 nm
Power consumption (TDP): 244 W / no data
Maximum GPU temperature: 97 °C / no data
Texture fill rate: 49.4 GTexel/s / 1.800 GTexel/s
Floating-point processing power: 1.581 TFLOPS / no data
ROPs: 48 / 4
TMUs: 64 / 4
L1 cache: 1 MB / no data
L2 cache: 768 KB / no data
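
The fill-rate and FLOPS figures above follow from the clock speeds and unit counts in the table; a minimal sketch of that arithmetic, assuming the usual formulas and Fermi's shader domain running at twice the 772 MHz core clock (1544 MHz):

    # Derivation of the fill-rate and FP32 throughput figures listed above.
    def texture_fill_rate_gtexels(core_clock_mhz: float, tmus: int) -> float:
        # GTexel/s = core clock (MHz) * texture units / 1000
        return core_clock_mhz * tmus / 1000.0

    def fp32_tflops(shader_cores: int, shader_clock_mhz: float) -> float:
        # TFLOPS, counting 2 FP32 ops (one FMA) per core per clock
        return shader_cores * shader_clock_mhz * 2 / 1e6

    print(texture_fill_rate_gtexels(772, 64))   # GTX 580  -> ~49.4 GTexel/s
    print(texture_fill_rate_gtexels(450, 4))    # X1300 LE -> 1.8 GTexel/s
    print(fp32_tflops(512, 1544))               # GTX 580  -> ~1.581 TFLOPS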

Form factor & compatibility

Information on compatibility with other computer components, useful when choosing a future computer configuration or upgrading an existing one. For desktop graphics cards this covers the interface and bus (motherboard compatibility) and any additional power connectors (power supply compatibility).

Bus support: PCI-E 2.0 x16 / no data
Interface: PCIe 2.0 x16 / PCIe 1.0 x16
Length: 267 mm / no data
Height: 4.376" (111 mm) / no data
Width: 2-slot / 1-slot
Supplementary power connectors: 1x 6-pin + 1x 8-pin / None
SLI options: + / -

VRAM capacity and type

Parameters of VRAM installed: its type, size, bus, clock and resulting bandwidth. Integrated GPUs have no dedicated video RAM and use a shared part of system RAM.

Memory type: GDDR5 / DDR
Maximum RAM amount: 1536 MB / 64 MB
Memory bus width: 384 bit / 64 bit
Memory clock speed: 2004 MHz (4008 MT/s effective) / 250 MHz
Memory bandwidth: 192.4 GB/s / 4 GB/s
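
The bandwidth figures above follow directly from bus width and effective transfer rate; a quick check, assuming GDDR5 transfers four times per memory clock (4008 MT/s) and the X1300 LE's DDR transfers twice per 250 MHz clock (500 MT/s):

    # Memory bandwidth = bus width in bytes * effective transfer rate.
    def memory_bandwidth_gbs(bus_width_bits: int, effective_mtps: float) -> float:
        return (bus_width_bits / 8) * effective_mtps / 1000.0

    print(memory_bandwidth_gbs(384, 4008))  # GTX 580  -> ~192.4 GB/s
    print(memory_bandwidth_gbs(64, 500))    # X1300 LE -> 4.0 GB/s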

Connectivity and outputs

This section shows the types and number of video connectors on each GPU. The data applies specifically to desktop reference models (for example, NVIDIA's Founders Edition). OEM partners often modify both the number and types of ports. On notebook GPUs, video output options are determined by the laptop's design rather than the graphics chip itself.

Display connectors: 1x Mini HDMI + 2x Dual-Link DVI / 1x VGA
Multi-monitor support: + / no data
HDMI: + / -
Maximum VGA resolution: 2048x1536 / no data
Audio input for HDMI: Internal / no data

API and SDK support

List of supported 3D and general-purpose computing APIs, including their specific versions.

DirectX: 12 (11_0) / 9.0c (9_3)
Shader Model: 5.1 / 3.0
OpenGL: 4.2 / 2.0
OpenCL: 1.1 / N/A
Vulkan: + / N/A
CUDA: + / -

Pros & cons summary


Recency: 9 November 2010 / 5 October 2005
Maximum RAM amount: 1536 MB / 64 MB
Chip lithography: 40 nm / 90 nm

The GeForce GTX 580 has an age advantage of 5 years, a 2300% higher maximum VRAM amount (1536 MB vs 64 MB), and a 125% more advanced lithography process (40 nm vs 90 nm).
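
The percentages above are plain ratios expressed as "how much higher" figures, for example:

    # How the summary percentages are derived from the table values.
    def percent_higher(a: float, b: float) -> float:
        return (a / b - 1) * 100

    print(percent_higher(1536, 64))  # VRAM: 1536 MB vs 64 MB -> 2300.0 (% higher)
    print(percent_higher(90, 40))    # lithography: 90 nm vs 40 nm -> 125.0 (% finer)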

We couldn't pick a winner between the GeForce GTX 580 and the Radeon X1300 LE: we have no benchmark results for this pair to judge by.

Vote for your favorite

Do you think we are right or mistaken in our choice? Vote by clicking the "Like" button next to your favorite graphics card.


NVIDIA GeForce GTX 580
ATI Radeon X1300 LE

Other comparisons

We selected several comparisons of graphics cards with performance close to those reviewed, providing you with more options to consider.

Community ratings

Here you can see the user ratings of the compared graphics cards, as well as rate them yourself.


4 502 votes

Rate GeForce GTX 580 on a scale of 1 to 5:

  • 1
  • 2
  • 3
  • 4
  • 5
1 1 vote

Rate Radeon X1300 LE on a scale of 1 to 5:

  • 1
  • 2
  • 3
  • 4
  • 5

Comments

Here you can give us your opinion about GeForce GTX 580 or Radeon X1300 LE, agree or disagree with our ratings, or report errors or inaccuracies on the site.