You save $4,900

NVIDIA A100 SXM4 80GB GPU with Heatsink (699-2G506-0210-320)


SKU: 965-2G506-0030-200
Price: $4,999.99 (list price $9,899.99)
FREE SHIPPING

Net 30 terms available, or as low as $472/mo with Credit Key

Request a Quote
  • 1-Year Premium Warranty
  • 30-Day Money-Back Guarantee
  • Free Technical Support

Technical Specifications

Manufacturer NVIDIA
MFG Part Numbers 965-2G506-0030-200, 699-2G506-0210-300, 699-2G506-0210-332
GPU Memory 80GB HBM2e
Memory Interface 5120-bit HBM2e (High Bandwidth Memory 2e)
Memory Bandwidth 2,039 GB/s
Power Consumption 400 W
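The quoted memory bandwidth follows directly from the interface width and the per-pin data rate. A minimal sketch of the arithmetic (the ~3.186 Gbps per-pin rate is an assumption of this sketch, not a figure stated on this page):

```python
# Estimate peak memory bandwidth from bus width and per-pin data rate.
# Assumption: the A100 80GB's HBM2e runs at roughly 3.186 Gbps per pin.
bus_width_bits = 5120        # memory interface width, from the spec table
pin_rate_gbps = 3.186        # assumed effective data rate per pin (Gbps)

bandwidth_gb_s = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(f"~{bandwidth_gb_s:.0f} GB/s")
```

This reproduces the 2,039 GB/s figure in the table above.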

Warranty & Support


1-Year Premium Warranty

1-year parts and labor warranty

1-year loaner coverage

1-year technical support


30-Day Money-Back Guarantee

Enjoy peace of mind with our 30-day money-back guarantee


Free Technical Support

1 (800) 801-8432


The NVIDIA A100 80GB SXM4 is a high-performance data-center GPU engineered to accelerate the most demanding AI, high-performance computing (HPC), and data analytics workloads. Built on the Ampere architecture, the A100 delivers massive computational throughput, advanced multi-instance GPU (MIG) capabilities, and industry-leading memory bandwidth to meet the needs of modern large-scale computing environments.

Designed for integration into NVIDIA HGX platforms via the SXM4 form factor, the A100 offers optimized power delivery and supports high-bandwidth interconnects such as NVLink and NVSwitch, making it a premier solution for enterprise AI training, inference, scientific simulation, and hyperscale deployment.

Features

80GB High-Bandwidth HBM2e Memory

  • Provides 80 GB of ECC-protected HBM2e, enabling training on large models and datasets.
  • Offers up to 2 TB/s memory bandwidth, among the fastest of any GPU.

NVIDIA Ampere Architecture

  • Built with 3rd-generation Tensor Cores and improved CUDA cores for accelerated AI, HPC, and data analytics.
  • Significantly improves training and inference performance compared with previous generations.

3rd-Generation Tensor Cores

  • Specialized hardware for accelerating AI workloads such as training, inference, matrix operations, and scientific computing.
  • Supports FP64 Tensor Core acceleration, BFLOAT16, TF32, FP16, INT8, and INT4 precision.
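The practical difference between these precision formats is how many mantissa bits each keeps, which sets the unit roundoff. A quick sketch comparing them (the bit counts come from the standard format definitions, not from this page):

```python
# Mantissa (fraction) bits of the floating-point formats accelerated by the
# A100's Tensor Cores; machine epsilon is 2^-mantissa_bits.
FORMATS = {
    "FP64": 52,
    "TF32": 10,   # TF32 keeps FP32's 8-bit exponent but only 10 mantissa bits
    "FP16": 10,
    "BFLOAT16": 7,
}

for name, bits in FORMATS.items():
    print(f"{name}: epsilon = 2^-{bits} = {2.0 ** -bits:.2e}")
```

The sketch shows why TF32 is often a drop-in for FP32 training: it preserves FP32's range while trading mantissa precision for Tensor Core throughput.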

Multi-Instance GPU (MIG)

  • Allows a single A100 GPU to be partitioned into up to seven isolated GPU instances.
  • Enables maximum utilization and secure multi-tenant operations in data-center environments.
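As an illustration of the seven-slice limit, here is a small sketch that checks whether a requested mix of MIG profiles fits on a single A100 80GB. The profile names and slice counts are the standard NVIDIA MIG profiles for this GPU; treat them as assumptions of the sketch, not data from this page:

```python
# MIG profiles on the A100 80GB, expressed as (compute slices, memory GB).
# Assumption: standard NVIDIA profile names; a full GPU has 7 compute slices.
PROFILES = {
    "1g.10gb": (1, 10),
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}

def fits(requested):
    """Return True if the requested profile mix fits within 7 compute slices."""
    total = sum(PROFILES[name][0] for name in requested)
    return total <= 7

print(fits(["1g.10gb"] * 7))           # seven fully isolated instances
print(fits(["3g.40gb", "4g.40gb"]))    # a 3-slice and a 4-slice instance
print(fits(["7g.80gb", "1g.10gb"]))    # exceeds the 7-slice budget
```

In practice the partitioning itself is done with `nvidia-smi mig`; this sketch only models the slice budget.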

High-Speed Interconnects

  • Supports bidirectional NVLink at 600 GB/s, enabling multi-GPU scaling with minimal latency.
  • Integrates into HGX A100 systems with NVSwitch for large-scale GPU clusters.
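To put the 600 GB/s figure in context, a one-line comparison against the host interconnect an SXM4 module bypasses (the PCIe Gen4 x16 figure is an assumption of this sketch, approximately 64 GB/s bidirectional):

```python
# Compare the A100's bidirectional NVLink bandwidth with PCIe Gen4 x16.
# Assumption: PCIe 4.0 x16 moves roughly 32 GB/s per direction (64 GB/s total).
nvlink_gb_s = 600        # third-gen NVLink, bidirectional (from this page)
pcie4_x16_gb_s = 64      # PCIe 4.0 x16, bidirectional, approximate

print(f"NVLink is ~{nvlink_gb_s / pcie4_x16_gb_s:.1f}x PCIe Gen4 x16")
```

This roughly 9x gap is why multi-GPU training scales so much better over NVLink/NVSwitch than over PCIe peer-to-peer transfers.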

SXM4 Form Factor

  • Optimized for server deployment with superior power delivery, thermals, and interconnect bandwidth.
  • Not a PCIe card; designed for OEM and hyperscale server platforms.

Unmatched Compute Performance

  • Excellent for AI model training, large-scale inference, simulation, computational science, and big data analytics.
  • Delivers multi-petaflop AI performance when deployed across multiple GPUs.

ECC Reliability & Secure Operation

  • Full error correction code (ECC) protection across memory and L2 cache.
  • Built for mission-critical enterprise workloads requiring accuracy and stability.

Docs & Drivers

Reviews
