
GPU (Graphics Processing Unit)

Graphics processing technology has evolved to deliver unique benefits in the world of computing. The latest graphics processing units (GPUs) unlock new possibilities in gaming, content creation, machine learning, and more. A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing, and their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard; in certain CPUs, the GPU is integrated directly on the CPU die.
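To make the data-parallel point concrete, here is a minimal CUDA sketch that adds two arrays by giving each GPU thread exactly one element to process; the array size, kernel name, and launch configuration are illustrative choices, not tied to any particular product.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Each GPU thread handles one array element, so large blocks of data
// are processed in parallel instead of in a sequential CPU loop.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                      // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) buffers.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device (GPU) buffers and copy the inputs over.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);               // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}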

A GPU (Graphics Processing Unit) is a familiar term: it's the component that enables video and sophisticated graphics, such as video games, to run on a PC. GPU Cloud Computing is a fast, stable, and elastic computing service built on GPUs, ideal for scenarios such as deep learning training and inference, graphics processing, and scientific computing, and it can be managed just like a standard Cloud Virtual Machine instance with speed and ease. A GPU is used together with a CPU to accelerate deep learning, analytics, and engineering applications for platforms ranging from artificial intelligence to cars, drones, robots, search engines, interactive speech, and video recommendations.
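As a rough sketch of that CPU-plus-GPU pairing, the following CUDA program runs on the host CPU and simply enumerates whatever GPUs are attached, printing a few standard CUDA runtime device properties; it assumes only the CUDA runtime API and no specific card.

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);             // how many GPUs the host CPU can see
    printf("CUDA devices found: %d\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);  // capabilities reported by each GPU
        printf("Device %d: %s\n", d, prop.name);
        printf("  Multiprocessors : %d\n", prop.multiProcessorCount);
        printf("  Global memory   : %zu MB\n", prop.totalGlobalMem >> 20);
        printf("  Memory bus width: %d bits\n", prop.memoryBusWidth);
    }
    return 0;
}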

NVIDIA TESLA P4 2560 CUDA Cores Server

The NVIDIA Tesla P4 GPU features NVIDIA's Pascal architecture and has 8GB of GDDR5 memory plus 2560 NVIDIA CUDA cores, all in a single-slot form factor. Although it's small in size, about the length of a pencil, the P4 graphics processing unit is designed for efficiency on deep learning inference workloads. It improves scale-out server performance, runs deep learning workloads, and enables responsive, AI-based smart services.

₹39,999/MO

  • Intel Xeon E5-2650L
  • Octa-Core, 1.80 GHz
  • NVIDIA TESLA P4 2560 CUDA Cores
  • RAM: 32 GB
  • Storage: 2 x SAS-SSD 200 GB
  • OS: Ubuntu 20.04
  • Bandwidth: 300 Mbps Unmetered

Description

Based on a 16nm manufacturing process, this card is primarily used for professional graphics, deep learning inference, and virtualization. The NVIDIA Tesla P4 GPU also supports NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) software, maximizing GPU density per server.

Performance

Using a low-profile design, the P4 is perfect for smaller blade servers. A direct successor to its Maxwell-based counterpart, the NVIDIA Tesla M4, the Tesla P4 packs 7.2 billion transistors into a 314 mm² die. A server with a single Tesla P4 can replace up to 13 CPU-only video-inferencing servers, which translates into a lower total cost of ownership. The NVIDIA Tesla P4 features 2560 CUDA cores, 160 texture mapping units, and 64 ROPs.

Memory

Under the hood, the NVIDIA Tesla P4 has a memory clock running at 1502MHz (6 Gbps effective). Built around the GP104 GPU, the card uses 8GB of GDDR5 memory connected to a 256-bit memory interface, giving 192.3GB/s of memory bandwidth. The GPU itself operates at a base frequency of 886MHz and can boost up to a maximum of 1114MHz.
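As a back-of-envelope check on those figures (a sketch that assumes the 2560 CUDA cores quoted above and counts one fused multiply-add, i.e. two floating-point operations, per core per clock), the snippet below recomputes the theoretical memory bandwidth and peak single-precision throughput from the clocks and bus width listed here.

#include <cstdio>

int main() {
    // Figures quoted in the text above for the Tesla P4 (GP104).
    const double memClockMHz   = 1502.0;   // GDDR5 command clock
    const double gddr5Rate     = 4.0;      // GDDR5 moves 4 transfers per clock -> ~6 Gbps effective
    const double busWidthBits  = 256.0;
    const double boostClockMHz = 1114.0;
    const double cudaCores     = 2560.0;

    // Bandwidth: effective transfer rate times bus width in bytes.
    double bandwidthGBs = memClockMHz * 1e6 * gddr5Rate * (busWidthBits / 8.0) / 1e9;

    // Peak FP32: one fused multiply-add (2 FLOPs) per core per boost clock.
    double fp32TFLOPS = cudaCores * 2.0 * boostClockMHz * 1e6 / 1e12;

    printf("Theoretical memory bandwidth: %.1f GB/s\n", bandwidthGBs); // ~192.3
    printf("Peak FP32 throughput:         %.1f TFLOPS\n", fp32TFLOPS); // ~5.7
    return 0;
}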