NVIDIA A100 SXM4 80GB GPU with Heatsink (699-2G506-0210-320)


SKU: 699-2G506-0210-320
Price:
$15,378.66 CAD
FREE SHIPPING
Request a Quote
  • Free Technical Support

Technical Specifications

Manufacturer NVIDIA
MFG Part Number 699-2G506-0210-320
GPU Memory 80GB HBM2e
Memory Interface 5120-bit HBM2e (High Bandwidth Memory 2e)
Memory Bandwidth 2,039 GB/s
Power Consumption 400 W
Output Ports None (compute accelerator; no display outputs)

Warranty & Support


- 1-Year Premium Warranty

- 1-year parts and labor warranty

- 1-year loaner coverage

- 1-year technical support


Free Technical Support

1 (800) 801-8432


96 units in stock. Lot quantity discounts are available; please request a quote if you are interested in a quantity purchase (discounts apply in batches of 8, 16, 24, etc.).


⚠️
Export Restriction Notice
This product is subject to U.S. Export Administration Regulations (EAR). The NVIDIA A100 SXM4 80GB may not be sold, shipped, exported, re-exported, transferred, redistributed, or incorporated into any system or device intended for China, including Hong Kong and Macau. 


The NVIDIA A100 80GB SXM4 is a high-performance data-center GPU engineered to accelerate the most demanding AI, high-performance computing (HPC), and data analytics workloads. Built on the Ampere architecture, the A100 delivers massive computational throughput, advanced multi-instance GPU (MIG) capabilities, and industry-leading memory bandwidth to meet the needs of modern large-scale computing environments.

Designed for integration into NVIDIA HGX platforms via the SXM4 form factor, the A100 offers optimized power delivery and supports high-bandwidth interconnects such as NVLink and NVSwitch, making it a premier solution for enterprise AI training, inference, scientific simulation, and hyperscale deployment.

Features

80GB High-Bandwidth HBM2e Memory

  • Provides 80 GB of ECC-protected HBM2e, with capacity for very large models and datasets (see the sizing sketch below).
  • Delivers up to 2 TB/s (2,039 GB/s) of memory bandwidth, among the highest of any GPU.
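
As a rough, hypothetical illustration of what 80 GB of on-package memory makes possible, the Python sketch below estimates weight-only footprints for a few assumed model sizes. The model sizes, bytes-per-parameter figures, and the 1 GB = 1e9 bytes convention are assumptions for the example, not vendor data; real workloads also need headroom for activations, optimizer state, and framework overhead.

# Back-of-envelope check: do a model's weights alone fit in 80 GB of HBM2e?
# Illustrative assumptions only; real jobs also hold activations,
# optimizer state, and framework overhead in GPU memory.

A100_MEMORY_GB = 80  # total HBM2e on this card

def weights_gb(num_params: float, bytes_per_param: int) -> float:
    # Memory needed just to store the weights, in GB (1 GB = 1e9 bytes).
    return num_params * bytes_per_param / 1e9

for name, params in [("13B params", 13e9), ("30B params", 30e9), ("70B params", 70e9)]:
    for dtype, nbytes in [("FP16/BF16", 2), ("INT8", 1)]:
        need = weights_gb(params, nbytes)
        verdict = "fits" if need < A100_MEMORY_GB else "needs multiple GPUs"
        print(f"{name} in {dtype}: ~{need:.0f} GB of weights -> {verdict}")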

NVIDIA Ampere Architecture

  • Built with 3rd-generation Tensor Cores and improved CUDA cores for accelerated AI, HPC, and data analytics.
  • Significantly boosts training and inference performance over previous generations.

3rd-Generation Tensor Cores

  • Specialized hardware for accelerating AI workloads such as training, inference, matrix operations, and scientific computing.
  • Supports FP64 Tensor Core acceleration, BFLOAT16, TF32, FP16, INT8, and INT4 precision.
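
As one way to exercise these precisions from software, the hedged sketch below uses PyTorch on a CUDA device (an assumed environment, not part of this product) to route float32 matmuls through TF32 Tensor Cores and to run a bfloat16 autocast region.

import torch

# Assumed environment: PyTorch with CUDA on an Ampere-class GPU.
# Route ordinary float32 matmuls through TF32 Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

c_tf32 = a @ b  # float32 result, computed with TF32 Tensor Core math

# BF16 autocast region: eligible ops run in bfloat16 on Tensor Cores.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    c_bf16 = a @ b

print(c_tf32.dtype, c_bf16.dtype)  # torch.float32 torch.bfloat16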

Multi-Instance GPU (MIG)

  • Allows a single A100 GPU to be partitioned into up to seven isolated GPU instances.
  • Enables maximum utilization and secure multi-tenant operations in data-center environments.
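
As a minimal sketch of inspecting MIG from software, the example below uses the nvidia-ml-py (pynvml) bindings and assumes an administrator has already enabled MIG mode and created instances (for example with nvidia-smi); the GPU index 0 is an assumption.

import pynvml

# Enumerate MIG instances on GPU 0 via NVML (pip install nvidia-ml-py).
# Assumes MIG mode was already enabled and instances created by an admin.
pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

    # Walk the possible MIG slots (up to seven instances on an A100).
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue  # slot not populated
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()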

High-Speed Interconnects

  • Supports bidirectional NVLink at 600 GB/s, enabling multi-GPU scaling with minimal latency.
  • Integrates into HGX A100 systems with NVSwitch for large-scale GPU clusters.
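
For completeness, a small hedged sketch of checking NVLink link status through the same pynvml bindings; the device index is an assumption, and the actual link count and topology depend on the host system (e.g. an HGX baseboard).

import pynvml

# Count active NVLink links on GPU 0 via NVML (pip install nvidia-ml-py).
pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    active = 0
    for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
        try:
            if pynvml.nvmlDeviceGetNvLinkState(gpu, link):
                active += 1
        except pynvml.NVMLError:
            break  # past the last physical link on this device
    print(f"Active NVLink links: {active}")
finally:
    pynvml.nvmlShutdown()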

SXM4 Form Factor

  • Optimized for server deployment with superior power delivery, thermals, and interconnect bandwidth.
  • Not a PCIe card — designed for OEM and hyperscale server platforms.

Unmatched Compute Performance

  • Excellent for AI model training, large-scale inference, simulation, computational science, and big data analytics.
  • Delivers multi-petaflop AI performance when deployed across multiple GPUs.

ECC Reliability & Secure Operation

  • Full error correction code (ECC) protection across memory and L2 cache.
  • Built for mission-critical enterprise workloads requiring accuracy and stability.
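
As a hedged sketch of verifying ECC from software, the example below queries ECC mode and the volatile corrected-error counter via the nvidia-ml-py (pynvml) bindings; the device index is an assumption.

import pynvml

# Check ECC mode and corrected-error count on GPU 0 via NVML
# (pip install nvidia-ml-py).
pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    current, pending = pynvml.nvmlDeviceGetEccMode(gpu)
    print("ECC enabled:", bool(current))

    # Volatile counter: corrected errors since the last driver reload.
    corrected = pynvml.nvmlDeviceGetTotalEccErrors(
        gpu,
        pynvml.NVML_MEMORY_ERROR_TYPE_CORRECTED,
        pynvml.NVML_VOLATILE_ECC,
    )
    print("Corrected ECC errors:", corrected)
finally:
    pynvml.nvmlShutdown()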

Docs & Drivers

Reviews

Add Your Review

Ask a Question