Breaking

Wednesday, July 19, 2017

Amazon's new GPU-fueled instances are aimed at VR, 3D video, and remote workstations.

AWS expands its lineup of GPU instances with the new Nvidia Tesla M60-based G3 family.

Amazon Web Services (AWS) has launched another family of high-performance Nvidia-based GPU instances.

The new G3 instances are powered by Nvidia's Tesla M60 GPUs and succeed the earlier G2 instances, which had four Nvidia GRID GPUs with 1,536 CUDA cores each.

As with the G2, which launched in 2013, the new G3 instances target applications that need massive parallel processing power, such as 3D rendering and visualization, virtual reality, video encoding, and remote graphics workstation applications.

AWS is offering three sizes of the G3 instance, with one, two, or four GPUs. Each GPU has 8GB of GPU memory, 2,048 parallel processing cores, and a hardware encoder that supports up to 10 H.265 streams and 18 H.264 streams. AWS says the G3 instances support Nvidia's GRID Virtual Workstation and can drive four 4K monitors.
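
For readers who want to try one of the new sizes, a G3 instance can be requested through the standard EC2 API. Below is a minimal sketch using Python's boto3 library; the AMI ID and key pair name are placeholders rather than values from the article.

```python
import boto3

# Minimal sketch: request a single-GPU g3.4xlarge in a supported region.
# The AMI ID and key pair below are placeholders; substitute your own.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # a GPU-ready AMI of your choice
    InstanceType="g3.4xlarge",   # the single-GPU size in the G3 lineup
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```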

AWS claims the largest G3 instance, the g3.16xlarge, has twice the CPU power and eight times the host memory of its G2 instances. It has four GPUs, 64 vCPUs, and 488GB of RAM. The virtual CPUs use Intel's Xeon E5-2686 v4 (Broadwell) processors. The largest G2 instance offered 60GB of RAM.
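
Once an instance is running, the GPU configuration can be confirmed from inside the guest. The snippet below is a sketch that assumes the Nvidia driver and its standard nvidia-smi utility are installed; on a g3.16xlarge it should list four Tesla M60 entries with roughly 8GB each, matching the figures above.

```python
import subprocess

# Query GPU name and memory through nvidia-smi (assumes the Nvidia driver is installed).
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    text=True,
)

gpus = [line.strip() for line in out.splitlines() if line.strip()]
print(f"{len(gpus)} GPU(s) visible:")
for gpu in gpus:
    print(" ", gpu)
```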

On-demand pricing for the G3 instances is $1.14 per hour for the g3.4xlarge, $2.28 per hour for the g3.8xlarge, and $4.56 per hour for the g3.16xlarge. The instances are available only with AWS Elastic Block Storage, compared with the G2 instances, which come with SSD storage.
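
For rough budgeting, those hourly rates translate directly into monthly figures. The sketch below assumes an instance left running around the clock at roughly 730 hours per month; EBS volumes, data transfer, and reserved or spot pricing are not factored in.

```python
# Approximate monthly on-demand cost for each G3 size, using the hourly
# rates quoted above and assuming ~730 hours of continuous use per month.
HOURS_PER_MONTH = 730

on_demand_rates = {
    "g3.4xlarge": 1.14,
    "g3.8xlarge": 2.28,
    "g3.16xlarge": 4.56,
}

for instance_type, rate in on_demand_rates.items():
    monthly = rate * HOURS_PER_MONTH
    print(f"{instance_type}: ${rate:.2f}/hr -> ~${monthly:,.2f}/month")
```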

The G3 instances are available in US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), AWS GovCloud (US), and EU (Ireland). AWS plans to expand the offering to more regions in the coming months.

AWS has continued to expand its lineup of GPU instances over the years. In 2013 it was pitching the G2 family for machine learning and molecular modeling, but those applications are now catered for by its P2 instances, which it launched in September.

The largest P2 instance offers 16 GPUs with a combined 192GB of video memory. P2 instances also feature up to 732GB of host memory and up to 64 vCPUs using custom Intel Xeon E5-2686 v4 Broadwell processors.

"Today, AWS provides the broadest range of cloud instance types to support a wide variety of workloads. Customers have told us that being able to pick the right instance for the right workload enables them to work more efficiently and get to market faster, which is why we continue to innovate to better support any workload," said Matt Garman, Amazon EC2 VP.

Microsoft has likewise been expanding its GPU instances for Azure customers. The company launched its NC-Series compute-focused GPU instances last year, offering up to four Nvidia Tesla M60 GPUs and 244GB of RAM with 24 cores using Intel Xeon E5-2690 v3 (Haswell) processors.

In May, it announced the forthcoming ND-series, which uses Nvidia Pascal-based Tesla P40 GPUs, along with a refreshed lineup of NC-series instances. The largest ND-series instance features 24 vCPUs, four P40 GPUs, and 448GB of RAM. The largest NC-series instance, the NC24rs_v2, features 24 vCPUs, four Tesla P100 GPUs, and 448GB of RAM.
