Save $4,900

NVIDIA A100 SXM4 80GB GPU with Heatsink (699-2G506-0210-320)


SKU: 965-2G506-0030-200
Price: $4,999.99 (list price $9,899.99)
FREE SHIPPING

Net 30, or as low as $472/mo with Credit Key

Request a Quote
  • 1-Year Premium Warranty
  • 30-Day Money-Back Guarantee
  • Free Technical Support

Technical Specifications

Manufacturer NVIDIA
MFG Item # (965-2G506-0030-200) (699-2G506-0210-300) (699-2G506-0210-332)
GPU memory 80 GB HBM2e
Memory interface 5120-bit HBM2e (High Bandwidth Memory 2e)
Memory bandwidth 2,039 GB/s
Power consumption 400 W

Warranty & Support


1-Year Premium Warranty

1-year parts and labor warranty

1-year loaner coverage

1-year technical support


30-Day Money-Back Guarantee

Enjoy peace of mind with our 30-day money-back guarantee


Free Technical Support

1 (800) 801-8432


The NVIDIA A100 80GB SXM4 is a high-performance data-center GPU engineered to accelerate the most demanding AI, high-performance computing (HPC), and data analytics workloads. Built on the Ampere architecture, the A100 delivers massive computational throughput, advanced multi-instance GPU (MIG) capabilities, and industry-leading memory bandwidth to meet the needs of modern large-scale computing environments.

Designed for integration into NVIDIA HGX platforms via the SXM4 form factor, the A100 offers optimized power delivery and supports high-bandwidth interconnects such as NVLink and NVSwitch, making it a premier solution for enterprise AI training, inference, scientific simulation, and hyperscale deployment.

Features

80GB High-Bandwidth HBM2e Memory

  • Provides 80 GB of ECC-protected HBM2e, enabling training on large models and datasets.
  • Offers up to 2 TB/s memory bandwidth, among the fastest of any GPU.
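The "up to 2 TB/s" figure follows directly from the 5120-bit interface in the spec table. A minimal sketch of the arithmetic, assuming an effective per-pin data rate of about 3.186 Gbit/s (a value not stated on this page, but consistent with the listed 2,039 GB/s):

```python
# Theoretical peak bandwidth of the A100 80GB's HBM2e memory subsystem.
# The per-pin data rate below is an assumption chosen to match the
# 2,039 GB/s figure in the spec table above.
interface_width_bits = 5120   # from the spec table
data_rate_gbit_s = 3.186      # effective Gbit/s per pin (assumed)

bandwidth_gb_s = interface_width_bits * data_rate_gbit_s / 8
print(round(bandwidth_gb_s))  # → 2039
```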

NVIDIA Ampere Architecture

  • Built with 3rd-generation Tensor Cores and improved CUDA cores for accelerated AI, HPC, and data analytics.
  • Significantly improves training and inference performance compared with previous generations.

3rd-Generation Tensor Cores

  • Specialized hardware for accelerating AI workloads such as training, inference, matrix operations, and scientific computing.
  • Supports FP64 Tensor Core acceleration, BFLOAT16, TF32, FP16, INT8, and INT4 precision.
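The practical difference between these formats is how their bits are split between exponent (dynamic range) and mantissa (precision). A rough sketch, using the published bit layouts (BFLOAT16 keeps FP32's 8-bit exponent with a short 7-bit mantissa; TF32 is NVIDIA's 19-bit format combining FP32's exponent range with FP16's 10-bit mantissa):

```python
# (exponent bits, mantissa bits) for the reduced-precision formats
# the A100's Tensor Cores accelerate; layouts from public format specs.
formats = {
    "FP16":     (5, 10),
    "BFLOAT16": (8, 7),
    "TF32":     (8, 10),
}

for name, (exp_bits, man_bits) in formats.items():
    epsilon = 2.0 ** -man_bits            # relative precision (machine epsilon scale)
    max_exp = 2 ** (exp_bits - 1) - 1     # largest binary exponent (IEEE-style bias)
    print(f"{name}: precision ~2^-{man_bits} ({epsilon:.1e}), range up to ~2^{max_exp}")
```

This is why BFLOAT16 tolerates FP32-scale values while FP16 overflows sooner, and why TF32 can often stand in for FP32 in training.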

Multi-Instance GPU (MIG)

  • Allows a single A100 GPU to be partitioned into up to seven isolated GPU instances.
  • Enables maximum utilization and secure multi-tenant operations in data-center environments.
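As a sketch of what partitioning looks like in practice, an administrator would typically drive MIG through NVIDIA's `nvidia-smi` tool. The commands below are illustrative only; the profile ID for a 1g.10gb instance (19 here) should be verified against the `-lgip` output for your driver version:

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver and GPU support
sudo nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances (profile ID 19 on the A100 80GB;
# confirm against -lgip) and their default compute instances (-C)
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting MIG devices
nvidia-smi -L
```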

High-Speed Interconnects

  • Supports bidirectional NVLink at 600 GB/s, enabling multi-GPU scaling with minimal latency.
  • Integrates into HGX A100 systems with NVSwitch for large-scale GPU clusters.
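The 600 GB/s figure decomposes cleanly if one assumes NVIDIA's published link counts, which are not stated on this page: the SXM4 A100 exposes 12 third-generation NVLink links, each carrying 25 GB/s per direction.

```python
# Aggregate bidirectional NVLink bandwidth of an SXM4 A100.
# Link count and per-direction rate are taken from NVIDIA's public
# specs, not from this listing.
links = 12
gb_s_per_direction = 25

total_bidirectional = links * gb_s_per_direction * 2
print(total_bidirectional, "GB/s")  # → 600 GB/s
```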

SXM4 Form Factor

  • Optimized for server deployment with superior power delivery, thermals, and interconnect bandwidth.
  • Not a PCIe card — designed for OEM and hyperscale server platforms.

Unmatched Compute Performance

  • Excellent for AI model training, large-scale inference, simulation, computational science, and big data analytics.
  • Delivers multi-petaflop AI performance when deployed across multiple GPUs.
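To give "multi-petaflop" a rough scale: NVIDIA's datasheet (not this page) quotes 312 TFLOPS of dense FP16 Tensor Core throughput per A100, so an 8-GPU HGX A100 board peaks at roughly:

```python
# Back-of-envelope aggregate peak for an 8-GPU HGX A100 system.
# Per-GPU figure is NVIDIA's dense FP16 Tensor Core number (an
# external datasheet value, assumed here for illustration).
per_gpu_tflops = 312
gpus = 8

print(per_gpu_tflops * gpus / 1000, "PFLOPS")  # → 2.496 PFLOPS
```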

ECC Reliability & Secure Operation

  • Full error correction code (ECC) protection across memory and L2 cache.
  • Built for mission-critical enterprise workloads requiring accuracy and stability.

Docs & Drivers

Reviews

Add your review

Ask a question