

These three vendors recognize the demand for GPUs in data centers as a growing opportunity. That's because GPUs are better suited than CPUs for handling many of the calculations required by AI and machine learning in enterprise data centers and hyperscaler networks. CPUs can handle the work; it just takes them longer.

Because GPUs are designed to solve complex mathematical problems in parallel by breaking them into separate tasks that they work on at the same time, they solve them more quickly. To accomplish this, they have multiple cores, many more than a general-purpose CPU. For example, Intel's Xeon server CPUs have up to 28 cores, while AMD's Epyc server CPUs have up to 64. By contrast, Nvidia's current GPU generation, Ampere, has 6,912 cores, all operating in parallel to do one thing: math processing, specifically floating-point math.

Performance of GPUs is measured in how many of these floating-point math operations they can perform per second, or FLOPS. This number sometimes specifies the standardized floating-point format in use when the measure is made, such as FP64.
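To make that division of labor concrete, here is a minimal CUDA sketch, assuming a hypothetical vector-addition workload rather than any vendor's benchmark: each of a million additions becomes its own thread, and the hardware spreads those threads across its thousands of cores, where a CPU would typically step through the same array a few elements at a time.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one element: the problem is broken into
// independent tasks that run across thousands of GPU cores at once.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];  // one floating-point operation per thread
    }
}

int main() {
    const int n = 1 << 20;  // 1,048,576 elements
    size_t bytes = n * sizeof(float);

    // Managed memory is accessible from both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Nothing in the kernel hard-codes the degree of parallelism; compiled with nvcc, the same source spreads across however many cores the GPU provides.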
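As for where a headline FLOPS number comes from, theoretical peak throughput is essentially core count times clock speed times operations per clock. The sketch below plugs in Nvidia's publicly listed A100 (Ampere) figures (3,456 dedicated FP64 units, a boost clock of roughly 1.41 GHz, and two operations per fused multiply-add); these are spec-sheet assumptions, not numbers taken from this article.

```cuda
#include <cstdio>

int main() {
    // Assumed from Nvidia's published A100 (Ampere) spec sheet, not
    // from this article: 3,456 dedicated FP64 units, a ~1.41 GHz
    // boost clock, and a fused multiply-add counted as 2 FLOPs.
    const double fp64_units      = 3456.0;
    const double boost_clock_hz  = 1.41e9;
    const double flops_per_cycle = 2.0;

    const double peak = fp64_units * boost_clock_hz * flops_per_cycle;
    printf("Theoretical peak FP64: %.2f TFLOPS\n", peak / 1e12);
    // Prints about 9.75 TFLOPS, in line with the 9.7 TFLOPS cited here.
    return 0;
}
```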
So what does the year hold for server GPUs? Quite a bit, as it turns out. Nvidia, AMD, and Intel have laid their cards on the table about their immediate plans, and it looks like this will be a stiff competition. Here's a look at what each has in store.

Nvidia laid out its GPU roadmap for the year in March with the announcement of its Hopper GPU architecture, claiming that, depending on use, it can deliver three to six times the performance of its previous architecture, Ampere, which weighs in at 9.7 TFLOPS of FP64.