AMD’s Vega Graphics Cards – What To Expect?

We have learned that AMD plans to launch its high-end Vega GPUs in May. AMD Vega GPUs can be used for professional work, gaming, and even server-oriented tasks, but what exactly can we expect?

AMD’s high-end Vega 10 GPU will be available to consumers in the first half of 2017. Early estimates put the die size at over 500 mm², and the chip comes with two HBM2 stacks, integrating up to 16 GB of HBM2 memory.


The graphics chip will employ the latest 14nm GFX9 core architecture based on the NCU design. The graphics card will feature 64 Compute Units, or 4096 stream processors. AMD plans to increase the chip’s throughput through higher clock speeds, allowing it to outperform the 28nm, GCN 3.0-based Fiji GPU.
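To see why higher clocks matter at the same 4096-stream-processor count, here is a quick sketch of theoretical FP32 throughput (stream processors × 2 FLOPs per clock for a fused multiply-add × clock speed). The Fiji clock of ~1.05 GHz matches the Fury X; the 1.5 GHz Vega figure is purely an assumed clock for illustration, not a confirmed specification.

```python
def fp32_tflops(stream_processors: int, clock_ghz: float) -> float:
    """Theoretical single-precision throughput in TFLOPS.

    Each stream processor performs 2 FLOPs per clock (one FMA).
    """
    return stream_processors * 2 * clock_ghz / 1000.0

# Fiji (Fury X): 4096 stream processors at ~1.05 GHz
fiji = fp32_tflops(4096, 1.05)   # ~8.6 TFLOPS
# Hypothetical Vega 10 at an assumed 1.5 GHz (illustrative only)
vega = fp32_tflops(4096, 1.50)   # ~12.3 TFLOPS
print(f"Fiji: {fiji:.1f} TFLOPS, Vega @1.5 GHz (assumed): {vega:.1f} TFLOPS")
```

The point of the sketch: with an identical shader count, throughput scales linearly with clock speed, so the clock uplift alone would account for the expected gain over Fiji.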

Radeon Vega

The first-generation HBM graphics cards were limited to only 4 GB of VRAM with a bandwidth of 512 GB/s, using 4-layer stacks, and the same stack height is expected in the latest Vega GPUs. The pin speed, however, has increased with HBM2: the new memory standard boasts 2 Gb/s per pin compared to 1 Gb/s on HBM1. Thanks to the higher clock speed, two HBM2 stacks can deliver the same memory bandwidth as four HBM1 stacks.
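The bandwidth equivalence above follows directly from the numbers in the paragraph, assuming the standard 1024-bit interface per HBM stack. A minimal sketch of the arithmetic:

```python
def stack_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Bandwidth of one HBM stack in GB/s.

    pin_speed_gbps: per-pin data rate in Gb/s
    bus_width_bits: interface width per stack (1024 bits for HBM/HBM2)
    """
    return pin_speed_gbps * bus_width_bits / 8  # 8 bits per byte

hbm1_total = 4 * stack_bandwidth_gbs(1.0)  # four HBM1 stacks (Fiji): 512 GB/s
hbm2_total = 2 * stack_bandwidth_gbs(2.0)  # two HBM2 stacks (Vega): 512 GB/s
print(hbm1_total, hbm2_total)
```

Doubling the per-pin rate exactly halves the number of stacks needed for the same total bandwidth, which is why Vega can match Fiji’s 512 GB/s with only two stacks.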

Graphics performance should be faster than the GeForce GTX 1080, as has been demonstrated a few times. We expect this graphics card to offer strong power efficiency with higher clock speeds on the consumer variants. A 4096-stream-processor SKU with 16 GB of HBM2 is rated at 225W, which means we can expect higher-clocked variants for consumers. AMD will have AIBs offering a few custom variants of the card, and we are glad to see that Mini-ITX variants will be included as well. When it comes to pricing, we expect the Radeon Vega to tackle the GTX 1080, but we hope it will also be able to compete with the GTX 1080 Ti.

We are looking forward to learning more about Vega at the launch event. Until then, we have to be patient.
