NVIDIA Tesla M2050 3GB 384-bit GDDR5 448 Processing Cores Fermi GPU Computing Module
Product code: AOC-GPU-NVM2050

    • Number of Processor Cores: 448
    • Frequency of Processor Cores: 1.15 GHz
    • Double Precision floating point performance (peak): 515 Gflops
    • Single Precision floating point performance (peak): 1.03 Tflops
    • Total Dedicated Memory: 3GB GDDR5
    • Memory Speed: 1.5 GHz
    • Memory Interface: 384-bit
    • Memory Bandwidth: 144 GB/sec
    • Power Consumption: 247W
    • Form Factor: Module
    • System Interface: PCIe x16 Gen2
    • Thermal Solution: Passive Heatsink
    • Software Development Tools: CUDA C/C++/Fortran, OpenCL, and DirectCompute toolkits; NVIDIA Parallel Nsight™ for Visual Studio
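
Several of these specifications can be read back at run time. A minimal CUDA C sketch, assuming a CUDA 5.0 or newer toolkit and that the M2050 is device 0 (note that cudaDeviceProp reports clocks in kHz):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);                  // query device 0

        // Fermi SMs contain 32 CUDA cores, so 14 SMs x 32 = 448 cores.
        printf("Name:             %s\n",       prop.name);
        printf("Multiprocessors:  %d\n",       prop.multiProcessorCount);
        printf("Core clock:       %.2f GHz\n", prop.clockRate / 1e6);
        printf("Global memory:    %.1f GB\n",  prop.totalGlobalMem / 1e9);
        printf("Memory bus:       %d-bit\n",   prop.memoryBusWidth);
        printf("Memory clock:     %.2f GHz\n", prop.memoryClockRate / 1e6);
        printf("ECC enabled:      %d\n",       prop.ECCEnabled);
        return 0;
    }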

FEATURES

- GPUs powered by the Fermi generation of the CUDA architecture
Delivers cluster performance at 1/10th the cost and 1/20th the power of CPU-only systems based on the latest quad-core CPUs.

- 448 CUDA Cores
Delivers up to 515 Gigaflops of double-precision peak performance in each GPU, enabling a single workstation to deliver a Teraflop or more of performance. Single precision peak performance is over a Teraflop per GPU.
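
These peaks follow directly from the listed core count and clock: each CUDA core can issue one fused multiply-add (two floating-point operations) per cycle in single precision, and Tesla-class Fermi GPUs execute double precision at half the single-precision rate:

    SP peak: 448 cores x 2 FLOP/cycle x 1.15 GHz = 1030.4 Gflops ≈ 1.03 Tflops
    DP peak: 1030.4 Gflops / 2                   =  515.2 Gflops ≈  515 Gflops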

- ECC Memory
Meets a critical requirement for computing accuracy and reliability for workstations. Offers protection of data in memory to enhance data integrity and reliability for applications. Register files, L1/L2 caches, shared memory, and DRAM are all ECC protected.
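
For jobs that rely on this protection, the ECC state can be verified before any work is launched. A minimal sketch; the refuse-to-run policy here is illustrative only, not part of the product:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        if (!prop.ECCEnabled) {
            // ECC mode can be changed with the nvidia-smi utility; a driver
            // reload/reboot is required for the new mode to take effect.
            fprintf(stderr, "ECC is disabled on %s; refusing to run.\n", prop.name);
            return EXIT_FAILURE;
        }
        printf("ECC is enabled on %s.\n", prop.name);
        return 0;
    }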

- Desktop Cluster Performance
Solves large-scale problems faster than a small server cluster on a single workstation with multiple GPUs.

- 3GB of GDDR5 memory per GPU
Maximizes performance and reduces data transfers by keeping larger data sets in local memory that is attached directly to the GPU.

- NVIDIA Parallel DataCache™
Accelerates algorithms such as physics solvers, ray-tracing, and sparse matrix multiplication where data addresses are not known beforehand. This includes a configurable L1 cache per Streaming Multiprocessor block and a unified L2 cache for all of the processor cores.
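
On Fermi, the 64 KB of per-SM on-chip memory can be split as 48 KB shared memory + 16 KB L1 or 16 KB shared memory + 48 KB L1. A sketch of requesting the larger L1 for a kernel with irregular, data-dependent loads; the sparse matrix-vector kernel is illustrative, not part of the product:

    #include <cuda_runtime.h>

    // Illustrative CSR sparse matrix-vector multiply; the indirect loads
    // through col_idx benefit from the L1/L2 cache hierarchy.
    __global__ void spmv_csr(const int *row_ptr, const int *col_idx,
                             const double *val, const double *x,
                             double *y, int n)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n) {
            double sum = 0.0;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                sum += val[j] * x[col_idx[j]];
            y[row] = sum;
        }
    }

    void configure_cache()
    {
        // Ask for the 48 KB L1 / 16 KB shared memory split for this kernel.
        cudaFuncSetCacheConfig(spmv_csr, cudaFuncCachePreferL1);
    }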

- NVIDIA GigaThread™ Engine
Maximizes throughput with context switching that is 10X faster than the previous architecture, concurrent kernel execution, and improved thread block scheduling.
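
A sketch of issuing two independent kernels into separate CUDA streams so that the scheduler may run them concurrently; the kernels themselves are placeholders:

    #include <cuda_runtime.h>

    __global__ void scale(float *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    __global__ void offset(float *y, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] += 1.0f;
    }

    void launch_concurrently(float *d_x, float *d_y, int n)
    {
        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);

        // Kernels issued to different streams may execute concurrently
        // on Fermi when resources allow (up to 16 concurrent kernels).
        scale<<<(n + 255) / 256, 256, 0, s1>>>(d_x, n);
        offset<<<(n + 255) / 256, 256, 0, s2>>>(d_y, n);

        cudaStreamSynchronize(s1);
        cudaStreamSynchronize(s2);
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
    }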

- Asynchronous Transfer
Turbocharges system performance by transferring data over the PCIe bus while the computing cores are crunching other data. Even applications with heavy data-transfer requirements, such as seismic processing, can maximize the computing efficiency by transferring data to local memory before it is needed.
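
A sketch of this copy/compute overlap using two CUDA streams and double-buffered device memory; it assumes h_data was allocated as page-locked (pinned) host memory with cudaMallocHost, which is required for truly asynchronous PCIe transfers:

    #include <cuda_runtime.h>

    __global__ void process(float *d, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;                 // stand-in for real work
    }

    // Process 'chunks' blocks of 'n' floats, overlapping H2D copies with compute.
    void pipeline(float *h_data /* pinned */, float *d_buf0, float *d_buf1,
                  int n, int chunks)
    {
        cudaStream_t s[2];
        cudaStreamCreate(&s[0]);
        cudaStreamCreate(&s[1]);
        float *d_buf[2] = { d_buf0, d_buf1 };

        for (int c = 0; c < chunks; ++c) {
            int k = c & 1;
            // The copy for chunk c moves over PCIe while the kernel for
            // chunk c-1 is still running in the other stream.
            cudaMemcpyAsync(d_buf[k], h_data + (size_t)c * n, n * sizeof(float),
                            cudaMemcpyHostToDevice, s[k]);
            process<<<(n + 255) / 256, 256, 0, s[k]>>>(d_buf[k], n);
        }
        cudaStreamSynchronize(s[0]);
        cudaStreamSynchronize(s[1]);
        cudaStreamDestroy(s[0]);
        cudaStreamDestroy(s[1]);
    }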

- CUDA programming environment with broad support of programming languages and APIs
Choose C, C++, OpenCL, DirectCompute, or Fortran to express application parallelism and take advantage of the “Fermi” GPU’s innovative architecture. The NVIDIA Parallel Nsight™ tool is available for Microsoft Visual Studio developers.
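
For example, a complete CUDA C program that offloads a vector addition to the GPU looks like this; with a Fermi-era toolkit it can be built with nvcc (e.g. nvcc -arch=sm_20 vadd.cu):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void vadd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes),
              *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

        float *d_a, *d_b, *d_c;
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        vadd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[42] = %f\n", h_c[42]);         // expect 126.0

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }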

- High-Speed PCIe Gen 2.0 Data Transfer
Maximizes bandwidth between the host system and the Tesla processors. Enables Tesla systems to work with virtually any PCIe-compliant host system with an open PCIe x16 slot.
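
Sustained PCIe throughput can be measured with a simple timed copy from page-locked host memory. A minimal sketch; PCIe 2.0 x16 provides a theoretical 8 GB/s per direction, so measured figures will come in somewhat below that:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        const size_t bytes = 256u << 20;         // 256 MB test buffer
        float *h_buf, *d_buf;
        cudaMallocHost((void **)&h_buf, bytes);  // pinned memory for peak PCIe rates
        cudaMalloc((void **)&d_buf, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Host-to-device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

        cudaEventDestroy(start); cudaEventDestroy(stop);
        cudaFree(d_buf); cudaFreeHost(h_buf);
        return 0;
    }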
