After many months of waiting, Amazon today has finally made available its new compute-oriented C5a AWS cloud instances, based on AMD's 2nd generation EPYC Rome processors with the new Zen 2 cores.

Amazon had previously signalled its intention to adopt AMD's newest silicon designs. The new C5a instances scale up to 96 vCPUs (48 physical cores with SMT), and were advertised to clock up to 3.3GHz.

The instance offerings scale from 2 vCPUs with 4GB of RAM up to 96 vCPUs, with varying Elastic Block Store (EBS) and network bandwidth.

The actual CPU being used here is an AMD EPYC 7R32, a custom SKU that's seemingly only available to Amazon and other cloud providers. Due to the nature of cloud instances, we don't actually know the exact core count of the part, or whether this is a 64- or 48-core chip.

We quickly fired up an instance to check the CPU topology, and we're seeing that two quadrants of the chip are populated with the full two CCDs (four CCXs per quadrant), while the other two quadrants seemingly have only a single CCD populated (two CCXs per quadrant).
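For anyone wanting to replicate that check, a minimal sketch of one way to do it from inside a Linux guest is below: it groups vCPUs by which L3 cache they share in sysfs, since on Zen 2 an L3 domain corresponds to a CCX. The paths and the C++ are our own illustration and assume the hypervisor exposes the L3 as cache index3, which isn't guaranteed on every instance type.

```cpp
// ccx_map.cpp - group vCPUs by the L3 cache they share, which on Zen 2
// corresponds to a CCX. Minimal sketch of the topology check described
// above; assumes the standard Linux sysfs layout and that the hypervisor
// exposes the L3 as cache index3.
#include <fstream>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Map "shared_cpu_list" strings (e.g. "0-3,48-51") to the vCPUs in them.
    std::map<std::string, std::vector<int>> l3_groups;
    for (int cpu = 0; ; ++cpu) {
        std::string base = "/sys/devices/system/cpu/cpu" + std::to_string(cpu);
        std::ifstream exists(base + "/topology/core_id");
        if (!exists.good()) break;  // ran past the last vCPU
        std::ifstream l3(base + "/cache/index3/shared_cpu_list");
        std::string shared;
        std::getline(l3, shared);
        l3_groups[shared].push_back(cpu);
    }
    std::cout << l3_groups.size() << " L3 domains (CCXs) visible to the guest:\n";
    for (const auto& [shared, members] : l3_groups)
        std::cout << "  " << members.size() << " vCPUs sharing an L3: "
                  << shared << "\n";
}
```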

I quickly ran some tests, and the CPUs idle at 1800MHz and boost up to a maximum of 3300MHz. All-core frequencies (96 threads) can be achieved at up to 3300MHz, but will throttle down to 3200MHz after a few minutes. Compute-heavy workloads such as 456.hmmer will run at around 3100MHz all-core.
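As a rough illustration of how such clocks can be sanity-checked from inside a guest, where cpufreq interfaces and MSRs are typically not exposed, the hypothetical probe below times a chain of dependent single-cycle integer adds on every vCPU; iterations per second then approximate the core clock. This is only a sketch under the assumption that the core retires one dependent add per cycle (true of Zen 2), and it is not the methodology behind the figures above.

```cpp
// freq_probe.cpp - rough all-core frequency estimate from inside a guest.
// A chain of dependent integer adds retires one add per cycle on a core
// like Zen 2, so iterations per second approximate the clock.
#include <pthread.h>
#include <sched.h>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

static double estimate_mhz(double seconds) {
    using clk = std::chrono::steady_clock;
    uint64_t x = 0, iters = 0;
    auto start = clk::now();
    while (std::chrono::duration<double>(clk::now() - start).count() < seconds) {
        for (int i = 0; i < 100000; ++i)
            asm volatile("add $1, %0" : "+r"(x));  // dependent 1-cycle adds
        iters += 100000;
    }
    double elapsed = std::chrono::duration<double>(clk::now() - start).count();
    return iters / elapsed / 1e6;  // ~cycles/second -> MHz
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    std::vector<std::thread> workers;
    std::vector<double> mhz(n);
    for (unsigned c = 0; c < n; ++c)
        workers.emplace_back([&mhz, c] {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(c, &set);            // pin the worker to vCPU c
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
            mhz[c] = estimate_mhz(2.0);  // load every vCPU for ~2 seconds
        });
    for (auto& t : workers) t.join();
    for (unsigned c = 0; c < n; ++c)
        std::printf("vCPU %3u: ~%.0f MHz\n", c, mhz[c]);
}
```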

While it is certainly possible that this is a 64-core chip, Amazon's offering of 96 vCPU metal instances points against that. On the other hand, the 96 vCPU configuration's 192GB of memory wouldn't immediately match up with the eight memory channels of a Rome chip (192GB maps more naturally onto six channels of 32GB each) unless the two lesser quadrants each also had one memory controller disabled. Either that, or there are simply two further CCDs that can't be allocated, which would make sense for the virtualised instances but would be odd for the metal instance offering.

The new C5a Rome-based instances are available now in eight sizes in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Singapore) regions.

20 Comments


  • ads295 - Friday, June 5, 2020

    Reading articles like these reminds me how little I know about server workloads and the custom chips that result from them. I understood nothing about this article!
  • ingwe - Friday, June 5, 2020

    You aren't alone! I feel the same way about server articles on AT.
  • imaheadcase - Friday, June 5, 2020

    Because after Anand left, the people now in charge are more interested in deep technical hardware coverage than in the stuff a regular person would use at home. Sure, they throw up some news/reviews of other hardware, but it's nowhere near as popular as when the site was geared towards more basic hardware.
  • brantron - Friday, June 5, 2020

    Hi, former regular person here, and I've been using cloud instances at home for 5 years. I guess all the bazillion people who were able to switch to working from home at the push of a button aren't regular anymore, either. The world has suddenly become a lonely place for the few remaining regular people...
  • aryonoco - Friday, June 5, 2020

    Speak for yourself. I find the server, HPC and mobile articles far more interesting than yet another PSU or case review. The PC market is boring; servers and mobile are where all the innovation is happening.
  • Foeketijn - Sunday, June 7, 2020

    I'm pushing that imaginary +1/thumbs up/heart button.
  • jospoortvliet - Sunday, June 7, 2020

    Same here, I've seen enough power supplies. The technical deep dives into CPUs, GPUs and other components are what make this site stand out from the gazillion others.
  • schujj07 - Friday, June 5, 2020

    AWS has been doing a disservice to the Epyc CPUs the entire time. More often than not, the AMD instances follow the same RAM allotment you would find with the Intel CPUs, despite the AMD chips having 8 RAM channels vs Intel's 6.
  • awesomeusername - Friday, June 5, 2020

    Andrei, can you share which tool provided the core-to-core latency results? There are open-source tools like ajakubek/core-latency that can be used to gather the data and then plot it with some combination of Python and Matlab, but the solution in the screenshot above already does all of that.
    Can you share some details?
  • Andrei Frumusanu - Friday, June 5, 2020

    It's a custom tool I wrote. It's a generic atomic compare-and-set ping-pong on a value between two threads on a single cache line. The table is just an Excel gradient of the CSV data.
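For readers curious what that looks like in practice, below is a minimal sketch of such a compare-and-swap ping-pong between two pinned threads bouncing a single cache line. The actual tool is Andrei's own custom code, so this is only an illustrative approximation of the technique he describes; the two core IDs are taken from the command line.

```cpp
// c2c_latency.cpp - sketch of an atomic compare-and-swap ping-pong between
// two pinned threads sharing one cache line, approximating the technique
// described in the comment above (not the author's actual tool).
#include <pthread.h>
#include <sched.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>

alignas(64) static std::atomic<int> flag{0};  // the shared cache line
constexpr int kIters = 1000000;

static void pin_to(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

// Each thread waits until the flag holds its own token, then hands it to the
// peer, so every round trip costs two cache-line transfers between the cores.
static void ping_pong(int cpu, int mine, int next) {
    pin_to(cpu);
    for (int i = 0; i < kIters; ++i) {
        int expected = mine;
        while (!flag.compare_exchange_weak(expected, next,
                                           std::memory_order_acq_rel))
            expected = mine;  // CAS failed: the line isn't ours yet, retry
    }
}

int main(int argc, char** argv) {
    int a = argc > 1 ? std::atoi(argv[1]) : 0;
    int b = argc > 2 ? std::atoi(argv[2]) : 1;
    auto start = std::chrono::steady_clock::now();
    std::thread t0(ping_pong, a, 0, 1);
    std::thread t1(ping_pong, b, 1, 0);
    t0.join();
    t1.join();
    double ns = std::chrono::duration<double, std::nano>(
                    std::chrono::steady_clock::now() - start).count();
    // Two hand-offs per iteration: report the cost of a single hand-off.
    std::printf("cores %d <-> %d: ~%.1f ns per hand-off\n",
                a, b, ns / (2.0 * kIters));
}
```

Sweeping the two core IDs over every pairing and colouring the resulting matrix, as with the Excel gradient mentioned above, yields the familiar core-to-core latency heatmap.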
