A great, marketable top product for the mass market. Thanks to Nvidia CUDA-X for Arm, it is a welcome alternative to x86 from AMD/Intel, and now a third independent HPC 64-bit ARMv8-A platform pillar alongside the extremely powerful IBM POWER9/10 CPUs with fully integrated, cross-connected NVLink support.
Those are the kinds of things the small, insignificant competition, AMD & Intel (Xe in 2021), can so far only dream of .... and time flies. Nvidia will soon release the 7nm EUV 'Hopper' ....
>>>
Nvidia Arms Up Server OEMs And ODMs For Hybrid Compute | Nextplatform.com
Marvell ThunderX2 64-bit ARMv8 platform .... with Nvidia CUDA-X / and 8x / 16x NVLink Tesla V100 GPUs
NVIDIA CUDA-X AI and HPC Software Stack Now Available on Marvell ThunderX Platforms
ARM = HPC 64-bit ARMv8-A = Marvell's ThunderX2 platform
This includes GROMACS, LAMMPS, MILC, NAMD, Quantum Espresso, and Relion, just to name a few, and the testing of the Arm ports was done in conjunction with key hardware partners that have Arm processors (Marvell, Fujitsu, and Ampere are the ones that matter, with maybe HiSilicon in China, though not mentioned), that make Arm servers (such as Cray, Hewlett Packard Enterprise, and Fujitsu), or that make Linux-on-Arm distributions (with Red Hat, SUSE Linux, and Canonical being the important ones).
Although Nvidia did not say this, at some point this CUDA-X stack on Arm will probably be made available on those Cray Storm CS500 systems that some of the same HPC centers mentioned above are getting equipped with the Fujitsu A64FX Arm processor that Fujitsu has designed for RIKEN’s “Fugaku” exascale system. Cray, of course, announced that partnership with Fujitsu and RIKEN, Oak Ridge, and Bristol ahead of SC19, and said that it was not planning to make the integrated Tofu D interconnect available in the CS500 clusters with the A64FX iron. And that means that the single PCI-Express 4.0 slot on the A64FX processor is going to be in contention, or someone is going to have to create a Tofu D to InfiniBand or Ethernet bridge to accelerate this server chip. A Tofu D to NVLink bridge would be even better. . . . But perhaps this is just a perfect use case for PCI-Express switching with disaggregation of accelerators and network interfaces and dynamic composition with a fabric layer, such as what GigaIO is doing.
That’s not Nvidia’s concern today, though. What Nvidia does want to do is make it easier for any Arm processor plugged into any server design to plug into a complex of GPU accelerators, and this is being accomplished with a new reference design dubbed EBAC – short for Everything But A CPU.
As if Nvidia would ever voluntarily rest on its laurels >>>
NVIDIA Next Gen-GPU Hopper could be offered in chiplet design | Guru3d.com
Multicore AMD/Intel CPUs with 48 or 64 cores are not the future after all .... CES 2019: Moore's Law is dead, as everyone knows by now ... because "multi-TPU/GPU clusters" are the future of AI acceleration .... just ask Raja K!
>>>
CES 2019: Moore's Law is dead, says Nvidia's CEO - CNET