Some applications need higher CPU clock speeds while others need more cores, so base your server purchases accordingly.
When checking CPU and server benchmarks, you’ve no doubt noticed that the testing covers both single-core and multi-core performance. Here is the difference.
In terms of raw performance, both matter, but single-core and multi-core performance each shine in different kinds of workloads. So when choosing a CPU, consider your specific workload and evaluate whether single-core or multi-core performance better suits your needs.
Single-core CPU
There are still plenty of applications that are limited to a single core, including many databases (although some, such as MySQL, can use multiple cores).
Performance is measured in several ways. Clock frequency is a big one: the higher the frequency, the faster a single-threaded application runs. The width of the execution pipeline also matters; the wider the pipeline, the more instructions the core can complete per clock cycle. So even if an application is single-threaded, a wider pipeline can improve its performance.
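As a rough illustration (not from the article), the sketch below times a CPU-bound, single-threaded Python loop. Its runtime is governed entirely by per-core speed, so a faster clock or a wider pipeline helps, while extra cores sit idle.

```python
# Minimal sketch (assumed example): a CPU-bound, single-threaded task.
# Its runtime depends on per-core speed (clock frequency and work done
# per cycle); adding more cores does nothing for it.
import time

def cpu_bound_work(n: int = 10_000_000) -> int:
    """Busy loop that keeps exactly one core occupied."""
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
cpu_bound_work()
elapsed = time.perf_counter() - start
print(f"Single-threaded run took {elapsed:.2f}s on one core")
```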
Multi-core CPU
Multi-core benchmarking often means running multiple applications in parallel, rather than throwing multiple cores at a single application. Each application runs on its own core without having to wait its turn, as it would on a single core.
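A minimal sketch of that idea, assuming independent CPU-bound jobs standing in for separate applications: run one job per core in parallel and wall-clock time stays close to the time of a single job, instead of growing with the number of jobs as it does when they run one after another.

```python
# Minimal sketch (assumed example): several independent CPU-bound jobs
# run serially, then in parallel with one process per core, the way a
# multi-core benchmark exercises a chip.
import time
from multiprocessing import Pool, cpu_count

def job(n: int) -> int:
    """Stand-in for one independent application/workload."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [5_000_000] * cpu_count()   # one job per available core

    start = time.perf_counter()
    for n in work:                     # serial: each job waits its turn
        job(n)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:               # parallel: each job gets its own core
        pool.map(job, work)
    parallel = time.perf_counter() - start

    print(f"serial {serial:.2f}s vs parallel {parallel:.2f}s on {cpu_count()} cores")
```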
Many chips aimed at cloud providers and large enterprises have anywhere from 96 cores (AMD Epyc "Genoa") to 128 cores (Ampere Altra Max). The more users and virtual machines a server hosts, the more cores it needs to handle the load.
Prices per core
These very large chips are typically used to run multi-user workloads, including containers and virtual machines, said Patrick Kennedy, president and editor of Serve The Home, an independent test site for SMB to enterprise server equipment.
Because much enterprise software is licensed per core, businesses should aim for the highest performance per core to minimize licensing fees, he said. A big part of the demand for single-core performance is about keeping those charges down.
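A back-of-envelope comparison shows why. The numbers below are hypothetical, not vendor pricing: two configurations deliver roughly the same aggregate throughput, but the one that gets there with fewer, faster cores carries a much smaller per-core licensing bill.

```python
# Hypothetical per-core licensing comparison (illustrative numbers only,
# not real vendor pricing). Both configurations deliver about the same
# aggregate throughput; the one with fewer, faster cores pays lower fees.
LICENSE_PER_CORE = 5_000  # hypothetical annual fee per licensed core

configs = {
    "16 fast cores":   {"cores": 16, "relative_perf_per_core": 1.5},
    "32 slower cores": {"cores": 32, "relative_perf_per_core": 0.75},
}

for name, cfg in configs.items():
    throughput = cfg["cores"] * cfg["relative_perf_per_core"]
    fees = cfg["cores"] * LICENSE_PER_CORE
    print(f"{name}: throughput {throughput:.0f} units, license cost ${fees:,}")
```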
Cores get help
After years of AMD trailing Intel in both single-core and multi-core performance, the two are now tied in both benchmarks, Kennedy said. "I would say that Intel and AMD are very interchangeable in most applications. But I think there's probably 10-15% of cases where they're totally different," he said.
For example, in any scenario where memory bandwidth is the constraint, he would choose AMD Epyc processors over Intel Xeon, because Epyc has huge caches, and going to cache is faster than going to main memory.
“For general purpose, enterprise workloads, realistically I think you could use either [Intel or AMD]. But in general, I would tell people that at this point it’s probably worth trying one of each and deciding based on your workload,” Kennedy said.
CPU performance alone is no longer the deciding factor. Servers are increasingly augmented with accelerators such as GPUs, FPGAs, and AI processors that offload tasks from the CPU to speed up the system as a whole.
For example, for anything involving VPN termination, Kennedy said he would "100 percent" use an Intel processor with a QuickAssist card to offload the crypto and compression work, because it takes a big share of the load off the CPU. For memory-bandwidth-bound work, as noted above, he would reach for AMD Epyc chips and their huge caches.