News Posts matching #EPYC


AMD EPYC Scores New Supercomputing and High-Performance Cloud Computing System Wins

AMD today announced multiple new high-performance computing wins for AMD EPYC processors, including that the seventh fastest supercomputer in the world and four of the 50 highest-performance systems on the bi-annual TOP500 list are now powered by AMD. Momentum for AMD EPYC processors in advanced science and health research continues to grow with new installations at Indiana University, Purdue University and CERN as well as high-performance computing (HPC) cloud instances from Amazon Web Services, Google, and Oracle Cloud.

"The leading HPC institutions are increasingly leveraging the power of 2nd Gen AMD EPYC processors to enable cutting-edge research that addresses the world's greatest challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "Our AMD EPYC CPUs, Radeon Instinct accelerators and open software programming environment are helping to advance the industry towards exascale-class computing, and we are proud to strengthen the global HPC ecosystem through our support of the top supercomputing clusters and cloud computing environments."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced its G-series server validation plan. Following today's NVIDIA A100 PCIe GPU announcement, GIGABYTE has completed compatibility validation for the G481-HA0 and G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 offers the highest computing power for AI model training of any server on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Building on the first-generation G481 (Intel architecture) and G482 (AMD architecture) servers, it further refines the user-friendly design and scalability. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, its 32 DDR4 memory slots accommodate up to 8 TB of memory while maintaining data transmission at 3200 MHz. The G492 has built-in PCIe Gen4 switches, which provide additional PCIe Gen4 lanes. PCIe Gen4 offers twice the I/O performance of PCIe Gen3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU, or it can be applied to PCIe storage to provide a storage upgrade path that is native to the G492.

TYAN Brings the Latest Server Advancements to its 2020 Server Solutions Online Exhibition

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing its latest lineup of HPC, storage, cloud and embedded platforms powered by 2nd Gen AMD EPYC 7002 series processors and 2nd Gen Intel Xeon Scalable Processors at TYAN server solutions online exhibition.

"With over 30 years of experience offering state-of-the-art server platforms and server motherboards, TYAN has been recognized by large scale data center customers and server channels," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Combining the latest innovation from our partners, like Intel and AMD, TYAN customers enable to win the market opportunities precisely with TYAN's server building block offerings."

ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs

ASUSTeK, a leading IT company in server systems, server motherboards and workstations, today announced its new NVIDIA A100-powered server, the ESC4000A-E10, built to accelerate and optimize data centers for high utilization and low total cost of ownership with PCIe Gen 4 expansion, OCP 3.0 networking, faster compute and better GPU performance. ASUS continues to build on its strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.

The ASUS ESC4000A-E10 is a 2U server powered by the AMD EPYC 7002 series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC and VDI applications in data center or enterprise environments that require powerful CPU cores, support for more GPUs, and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four double-slot or eight single-slot GPUs, including the latest NVIDIA Ampere architecture as well as Tesla and Quadro cards. This also benefits virtualization by consolidating GPU resources into a shared pool that users can draw on more efficiently.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

NVIDIA DGX A100 leverages the high-performance capabilities of two AMD EPYC 7742 processors: 128 cores, DDR4-3200 memory, PCIe 4.0 support and boost speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4.0, providing leadership high-bandwidth I/O that is critical for high performance computing and for connections between the CPU and other devices such as GPUs.
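For context on the memory side of that configuration, here is a minimal back-of-envelope sketch (illustrative Python, not an AMD or NVIDIA figure) of the theoretical DDR4-3200 bandwidth two EPYC 7742 sockets provide, assuming the standard eight memory channels per socket and 8 bytes per transfer:

CHANNELS_PER_SOCKET = 8          # 2nd Gen EPYC ("Rome") memory channels per socket
TRANSFER_RATE_MTS = 3200         # DDR4-3200: 3200 mega-transfers per second
BYTES_PER_TRANSFER = 8           # 64-bit DDR4 channel width
SOCKETS = 2                      # two EPYC 7742 CPUs in the DGX A100

per_channel_gbs = TRANSFER_RATE_MTS * BYTES_PER_TRANSFER / 1000   # 25.6 GB/s
per_socket_gbs = per_channel_gbs * CHANNELS_PER_SOCKET            # 204.8 GB/s
total_gbs = per_socket_gbs * SOCKETS                              # 409.6 GB/s

print(f"{per_channel_gbs:.1f} GB/s per channel, {per_socket_gbs:.1f} GB/s per socket, "
      f"{total_gbs:.1f} GB/s across both sockets")

Peak figures of this kind overstate real-world throughput, but they illustrate the kind of host-side bandwidth the dual-EPYC complex brings to the system.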

2nd Gen AMD EPYC Processors Now Delivering More Computing Power to Amazon Web Services Customers

AMD today announced that 2nd Gen AMD EPYC processor powered Amazon Elastic Compute Cloud (EC2) C5a instances are now generally available in the AWS U.S. East, AWS U.S. West, AWS Europe and AWS Asia Pacific regions.

Powered by 2nd Gen AMD EPYC processors running at frequencies up to 3.3 GHz, the Amazon EC2 C5a instances are the sixth instance family at AWS powered by AMD EPYC processors. By using the 2nd Gen AMD EPYC processor, the C5a instances deliver leadership x86 price-performance for a broad set of compute-intensive workloads including batch processing, distributed analytics, data transformations, log analytics and web applications.

AMD CEO Lisa Su Ranks as the Highest-Paid CEO in the S&P 500

Lisa Su of Advanced Micro Devices has become the highest-paid CEO in the S&P 500, according to a recent Associated Press survey of CEO compensation. Lisa Su's pay package was valued at $58.5 million, following some extremely impressive company performance over her five years as CEO on the back of the wild success of EPYC, Ryzen, and Radeon. The package comprised a base salary of $1 million, a performance bonus of $1.2 million, and $56 million in stock awards. This makes Lisa Su the first woman to top the list as highest-paid CEO, and one of only 20 women on it, versus 309 men.

AMD COVID-19 HPC Fund Donates 7 Petaflops of Compute Power to Researchers

AMD and technology partner Penguin Computing Inc., a division of SMART Global Holdings, Inc., today announced that New York University (NYU), Massachusetts Institute of Technology (MIT) and Rice University are the first universities named to receive complete AMD-powered, high-performance computing systems from the AMD HPC Fund for COVID-19 research. AMD also announced it will contribute a cloud-based system powered by AMD EPYC processors and AMD Radeon Instinct accelerators, located on-site at Penguin Computing, providing remote supercomputing capabilities for selected researchers around the world. Combined, the donated systems will provide researchers with more than seven petaflops of compute power that can be applied to fight COVID-19.

"High performance computing technology plays a critical role in modern viral research, deepening our understanding of how specific viruses work and ultimately accelerating the development of potential therapeutics and vaccines," said Lisa Su, president and CEO, AMD. "AMD and our technology partners are proud to provide researchers around the world with these new systems that will increase the computing capability available to fight COVID-19 and support future medical research."

Distant Blips on the AMD Roadmap Surface: Rembrandt and Raphael

Several future AMD processor codenames across various computing segments surfaced courtesy of an Expreview leak that is largely aligned with information from Komachi Ensaka. It does not account for "Matisse Refresh," which is allegedly coming out in June-July as three gaming-focused Ryzen socket AM4 desktop processors; but the roadmap, spanning 2H-2020 through 2022, reveals many new codenames. To begin with, the second half of 2020 promises to be as action-packed as last year's 7/7 mega launch. Over in the graphics business, the company is expected to debut its DirectX 12 Ultimate-compliant RDNA2 client graphics and its first CDNA architecture-based compute accelerators. Much of the processor launch cycle is built around the new "Zen 3" microarchitecture.

The server platform debuting in the second half of 2020 is codenamed "Genesis SP3." This will be the final processor architecture for the SP3-class enterprise sockets, as it has DDR4 and PCI-Express gen 4.0 I/O. The EPYC server processor is codenamed "Milan," and combines "Zen 3" chiplets along with an sIOD. EPYC Embedded (FP6 package) processors are codenamed "Grey Hawk."

GIGABYTE Announces HPC Systems Powered by NVIDIA A100 Tensor Core GPUs

GIGABYTE, a supplier of high-performance computing (HPC) systems, today disclosed four NVIDIA HGX A100 platforms under development. These platforms will be available with NVIDIA A100 Tensor Core GPUs. NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics. The four products comprise G262 series servers, which hold four NVIDIA A100 GPUs, and G492 series servers, which hold eight. Each series also comes in two models, one supporting the 3rd generation Intel Xeon Scalable processor and one supporting the 2nd generation AMD EPYC processor. The NVIDIA HGX A100 platform is a key element in the NVIDIA accelerated data center concept that brings massive parallel computing power to customers, thereby helping them accelerate their digital transformation.

With GPU acceleration becoming the mainstream technology in today's data centers, scientists, researchers and engineers are committed to using GPU-accelerated HPC and artificial intelligence (AI) to meet the most important challenges of the current world. The NVIDIA accelerated data center concept, including GIGABYTE high-performance servers with NVIDIA NVSwitch, NVIDIA NVLink, and NVIDIA A100 GPUs, will provide the GPU computing power required at different computing scales. The NVIDIA accelerated data center also features NVIDIA Mellanox HDR InfiniBand high-speed networking and NVIDIA Magnum IO software that supports GPUDirect RDMA and GPUDirect Storage.

AMD 2nd Gen EPYC Processors Set to Power Oracle Cloud Infrastructure Compute E3 Platform

Today, AMD announced that 2nd Gen AMD EPYC processors are powering the Oracle Cloud Infrastructure Compute E3 platform, bringing a new level of high-performance computing to Oracle Cloud. Using the AMD EPYC 7742 processor, the Oracle Cloud "E3 standard" and bare metal compute instances are available today and leverage key features of the 2nd Gen AMD EPYC processors, including class-leading memory bandwidth and the highest core count for an x86 data center processor. These features make the Oracle Cloud E3 platform well suited for both general-purpose and high-bandwidth workloads such as big data analytics, memory-intensive workloads and Oracle business applications.

AMD Reports First Quarter 2020 Financial Results

AMD today announced revenue for the first quarter of 2020 of $1.79 billion, operating income of $177 million, net income of $162 million and diluted earnings per share of $0.14. On a non-GAAP* basis, operating income was $236 million, net income was $222 million and diluted earnings per share was $0.18.

"We executed well in the first quarter, navigating the challenging environment to deliver 40 percent year-over-year revenue growth and significant gross margin expansion driven by our Ryzen and EPYC processors," said Dr. Lisa Su, AMD president and CEO. "While we expect some uncertainty in the near-term demand environment, our financial foundation is solid and our strong product portfolio positions us well across a diverse set of resilient end markets. We remain focused on strong business execution while ensuring the safety of our employees and supporting our customers, partners and communities. Our strategy and long-term growth plans are unchanged."

AMD "Matisse" and "Rome" IO Controller Dies Mapped Out

Here are the first detailed die maps of the I/O controller dies of AMD's "Matisse" and "Rome" multi-chip modules, which make up the company's 3rd generation Ryzen and 2nd generation EPYC processor families, respectively, created by PC enthusiast and VLSI engineer "Nemez" aka @GPUsAreMagic on Twitter, with underlying die-shots by Fritzchens Fritz. The die maps of the "Matisse" cIOD in particular give us fascinating insights into how AMD designed the die to serve both as a cIOD and as an external FCH (the AMD X570 and TRX40 chipsets). At the heart of both chips' design is a set of highly configurable SerDes (serializer/deserializers) that can work as PCIe, SATA, USB 3, or other high-bandwidth serial interfaces, using a network of fabric switches and PHYs. This is how motherboard designers are able to configure the chipsets for the I/O they want for their specific board designs.

The "Matisse" cIOD has two x16 SerDes controllers and an I/O root hub, along with two configurable x16 SerDes PHYs, while the "Rome" sIOD has four times as many SerDes controllers, along with eight times as many PHYs. The "Castle Peak" cIOD (3rd gen Ryzen Threadripper) disables half the SerDes resources on the "Rome" sIOD, along with half as many memory controllers and PHYs, limiting it to 4-channel DDR4. The "Matisse" cIOD features two IFOP (Infinity Fabric over Package) links, wiring out to the two "Zen 2" CCDs (chiplets) on the MCM, while the "Rome" sIOD features eight such IFOP interfaces for up to eight CCDs, along with IFIS (Infinity Fabric Inter-Socket) links for 2P motherboards. Infinity Fabric internally connects all components on both IOD dies. Both dies are built on the 12 nm FinFET (12LP) silicon fabrication node at GlobalFoundries.
(Annotated die shots: "Matisse" cIOD and "Rome" sIOD.)
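To keep the resource counts straight, here is a small illustrative Python summary of what the die analysis above describes; the "Castle Peak" IFOP figure is an assumption (eight CCDs on the 64-core Threadripper 3990X) rather than something stated in the analysis, and the DDR4 channel counts for "Matisse" and "Rome" are well-known platform specs rather than values read off the die shots:

# Resource counts per I/O die, as described in the article text.
# "Castle Peak" is the "Rome" sIOD with half the SerDes and memory
# resources disabled.
io_dies = {
    "Matisse cIOD": {
        "x16 SerDes controllers": 2,
        "x16 SerDes PHYs": 2,
        "IFOP links (CCDs)": 2,
        "DDR4 channels": 2,
    },
    "Rome sIOD": {
        "x16 SerDes controllers": 8,   # 4x Matisse
        "x16 SerDes PHYs": 16,         # 8x Matisse
        "IFOP links (CCDs)": 8,
        "DDR4 channels": 8,
    },
    "Castle Peak (Rome sIOD, cut down)": {
        "x16 SerDes controllers": 4,
        "x16 SerDes PHYs": 8,
        "IFOP links (CCDs)": 8,        # assumption, see note above
        "DDR4 channels": 4,
    },
}

for die, resources in io_dies.items():
    print(die)
    for name, count in resources.items():
        print(f"  {name}: {count}")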

TYAN Updates Transport HX Barebones with New AMD EPYC 7002 Series Processors

TYAN, an industry-leading server platform design manufacturer and MiTAC Computing Technology Corporation subsidiary, today announced server motherboards and server systems based on the high-frequency AMD EPYC 7F32, AMD EPYC 7F52 and AMD EPYC 7F72 processors. TYAN's HPC and storage server platforms continue to offer exceptional performance to datacenter customers.

"Leveraging AMD's innovation in 7 nm process technology, PCIe 4.0 I/O, and an embedded security architecture, TYAN's 2nd Gen AMD EPYC processor-based platforms are designed to address the most demanding challenges facing the datacenter", said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Adding the new AMD EPYC 7002 Series processors with TYAN server platforms enable us to provide new capabilities to our customers and partners."

x86 Lacks Innovation, Arm is Catching up. Enough to Replace the Giant?

Intel's x86 processor architecture has been the dominant CPU instruction set for many decades, ever since IBM decided to put the Intel 8086 microprocessor into its first Personal Computer. Later, in 2006, Apple decided to replace the PowerPC-based processors in its Macintosh computers with Intel chips, too. This was the point at which x86 became the only realistic option for the masses to use and develop all their software on. While mobile phones and embedded devices are mostly Arm today, x86 clearly remains the dominant ISA (Instruction Set Architecture) for desktop computers, with both Intel and AMD producing processors for it, and those processors going into millions of PCs used every day. Today I would like to share my thoughts on the potential demise of the x86 platform and how it might give way to the RISC-based Arm architecture.

Both AMD and Intel as producers, and millions of companies as consumers, have invested heavily in the x86 architecture, so why would x86 ever go extinct if "it just works"? The answer is that it doesn't just work.

AMD Financial Analyst Day 2020 Live Blog

AMD Financial Analyst Day presents an opportunity for AMD to talk straight with the finance industry about the company's current financial health, and a taste of what's to come. Guidance and product teasers made during this time are usually very accurate due to the nature of the audience. In this live blog, we will post information from the Financial Analyst Day 2020 as it unfolds.
20:59 UTC: The event has started as of 1 PM PST. CEO Dr. Lisa Su takes the stage.

AMD Scores Another EPYC Win in Exascale Computing With DOE's "El Capitan" Two-Exaflop Supercomputer

AMD has been on a roll in consumer, professional, and exascale computing environments, and it has just snagged itself another hugely important contract. The US Department of Energy (DOE) has just announced the winners for its next-gen exascale supercomputer that aims to be the world's fastest. Dubbed "El Capitan," the new supercomputer will be powered by AMD's next-gen EPYC "Genoa" processors (Zen 4 architecture) and Radeon GPUs. This is the first exascale contract where AMD is the sole purveyor of both CPUs and GPUs; AMD's other EPYC design win, in the Cray Shasta, pairs its CPUs with NVIDIA graphics cards.

El Capitan will be a $600 million investment, to be deployed in late 2022 and operational in 2023. Undoubtedly, next-gen proposals from AMD, Intel and NVIDIA were presented, with AMD winning the shootout in a big way. While the DOE initially projected El Capitan to provide some 1.5 exaflops of computing power, it has now revised the performance goal to a full 2-exaflop machine. El Capitan will thus be ten times faster than the current leader of the supercomputing world, Summit.

Cloudflare Deploys AMD EPYC Processors Across its Latest Gen X Servers

The ubiquitous DDoS-mitigation and CDN provider Cloudflare announced that its latest Gen X servers use AMD EPYC processors, ditching the Intel Xeons found in its older Gen 9 servers. Cloudflare uses multi-functional servers (much like Google), in which each server is capable of handling any of the company's workloads (DDoS mitigation, content delivery, DNS, web security, etc.). The company keeps server hardware configurations to a minimum so they are easier to maintain and carry a lower TCO. The hardware specs of its servers are periodically updated and classified by "generations."

Cloudflare's Gen X server is configured with a single-socket 2nd gen AMD EPYC 7642 processor (48-core/96-thread, 256 MB L3 cache), and 256 GB of octa-channel DDR4-2933 memory, along with NVMe flash-based primary storage. "We selected the AMD EPYC 7642 processor in a single-socket configuration for Gen X. This CPU has 48-cores (96 threads), a base clock speed of 2.4 GHz, and an L3 cache of 256 MB. While the rated power (225 W) may seem high, it is lower than the combined TDP in our Gen 9 servers and we preferred the performance of this CPU over lower power variants. Despite AMD offering a higher core count option with 64-cores, the performance gains for our software stack and usage weren't compelling enough," Cloudflare writes in its blog post announcing Gen X. The new servers will go online in the coming weeks.
Many Thanks to biffzinker for the tip.

KIOXIA First to Deliver Enterprise and Data Center PCIe 4.0 U.3 SSDs

The PCI Express 4.0 specification was designed to double the performance of server and storage systems, pushing speeds up to 16.0 gigatransfers per second (GT/s), or roughly 2 gigabytes per second (GB/s) of throughput per lane, and driving new performance levels for cloud and enterprise applications. Today, KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.) announced that its lineup of CM6 and CD6 Series PCIe 4.0 NVM Express (NVMe) enterprise and data center solid state drives (SSDs) is now shipping to customers.
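The "roughly 2 GB/s per lane" figure follows directly from the 16 GT/s signaling rate and PCIe's 128b/130b line encoding; a quick illustrative calculation (Python, assumptions as commented) is shown below:

# Usable PCIe bandwidth per lane: signaling rate x 128/130 encoding
# efficiency, converted from gigabits to gigabytes.
def pcie_lane_gbps(gt_per_s: float) -> float:
    return gt_per_s * (128 / 130) / 8

for gen, rate in {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}.items():
    per_lane = pcie_lane_gbps(rate)
    print(f"{gen}: {per_lane:.2f} GB/s per lane, {per_lane * 4:.2f} GB/s over an x4 SSD link")

The same math shows why a PCIe 4.0 x4 drive tops out at roughly twice the raw bandwidth of its PCIe 3.0 counterpart.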

An established leader in developing PCIe and NVMe SSDs, KIOXIA delivers never-before-seen performance: the company was the first to publicly demonstrate PCIe 4.0 SSDs and is now the first to ship these next-generation drives. The CM6 and CD6 Series SSDs are compliant with the latest NVMe specification and include key features such as in-band NVMe-MI, persistent event log, namespace granularity, and shared stream writes. Additionally, both drives are SFF-TA-1001 conformant (also known as U.3), which allows them to be used in tri-mode enabled backplanes that can accept SAS, SATA or NVMe SSDs.

AMD Gets Design Win in Cray Shasta Supercomputer for US Navy DSRC With 290,304 EPYC Cores

AMD has scored yet another design win for its high-performance EPYC processors, this time in a Cray Shasta supercomputer. The Cray Shasta will be deployed in the US Navy's Department of Defense Supercomputing Resource Center (DSRC) as part of the High Performance Computing Modernization Program. The supercomputer, with a peak theoretical computing capability of 12.8 petaFLOPS (12.8 quadrillion floating-point operations per second), will be built with 290,304 AMD EPYC "Rome" processor cores and 112 NVIDIA Volta V100 general-purpose graphics processing units (GPGPUs). The system will also feature 590 terabytes (TB) of total memory and 14 petabytes (PB) of usable storage, including 1 PB of NVMe-based solid state storage. Cray's Slingshot network will make sure all those components talk to each other at a rate of 200 gigabits per second.
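As a rough sanity check of those figures, the core count divides cleanly into 64-core EPYC "Rome" parts; assuming dual-socket compute nodes (an assumption, since the node topology isn't spelled out here), the arithmetic works out as follows:

TOTAL_CORES = 290_304
CORES_PER_SOCKET = 64        # assumption: 64-core EPYC "Rome" SKUs
SOCKETS_PER_NODE = 2         # assumption: dual-socket compute nodes
TOTAL_MEMORY_TB = 590

sockets = TOTAL_CORES // CORES_PER_SOCKET            # 4,536 sockets
nodes = sockets // SOCKETS_PER_NODE                  # 2,268 nodes
memory_per_node_gb = TOTAL_MEMORY_TB * 1000 / nodes  # ~260 GB per node

print(f"{sockets} sockets in {nodes} nodes, about {memory_per_node_gb:.0f} GB of memory per node")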

Navy DSRC supercomputers support climate, weather, and ocean modeling by NMOC, which assists U.S. Navy meteorologists and oceanographers in predicting environmental conditions that may affect the Navy fleet. Among other scientific endeavors, the new supercomputer will be used to enhance weather forecasting models; ultimately, this improves the accuracy of hurricane intensity and track forecasts. The system is expected to be online by early fiscal year 2021.

VMware Updates Licensing Model, Setting 32-Core Limit per License

VMware, one of the most popular commercial virtualization solutions for businesses and the industry in general, has announced changes to its licensing model. From now on, licensees will have to acquire one license per 32 CPU cores, instead of the former "per socket" model. This effectively means that users who migrated to AMD's 64-core EPYC CPUs, and who saved on both price per core and VMware licensing fees compared to Intel customers (who would need two sockets, and thus two licenses, to reach the same core count), are now charged for two licenses for a single 64-core AMD-populated socket. This was a selling point for AMD: the company stated that its high-end EPYC processors could replace a dual-socket setup with a single processor, thanks to EPYC's I/O capabilities and core counts. VMware claims this change is in line with industry-standard pricing models.
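In practical terms, the new rule amounts to one license per 32 cores per CPU, rounded up; a hypothetical helper, sketched below in Python, shows how a single 64-core EPYC socket now costs as much to license as a dual-socket 32-core Intel setup:

from math import ceil

def licenses_required(cores_per_cpu: int, sockets: int) -> int:
    # One license covers up to 32 cores of a single CPU under the new model (illustrative).
    return ceil(cores_per_cpu / 32) * sockets

print(licenses_required(cores_per_cpu=64, sockets=1))   # single 64-core EPYC socket -> 2 licenses
print(licenses_required(cores_per_cpu=32, sockets=2))   # dual 32-core Intel setup   -> 2 licenses
print(licenses_required(cores_per_cpu=48, sockets=1))   # single 48-core EPYC socket -> 2 licenses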

Of course, this decision from VMware hits AMD the hardest, and it comes at a time when 48- and 64-core CPUs are already available on the market. If this licensing change had to happen, perhaps it should have been aligned with the current state of the industry rather than a quasi-random core count (it definitely isn't random, though, and I'll leave it at that). From VMware's perspective, AMD's humongous CPU core counts do affect its bottom line. The official release's argument that customers license software based on CPU core counts may be valid, and VMware does allow for free licenses for servers with more than 32 cores until April 30, 2020. Of course, VMware is also preparing itself for future industry changes: Intel will obviously increase its core counts in response to AMD's EPYC onslaught and the growing core-count expectations of professional applications.

NVIDIA's Next-Generation "Ampere" GPUs Could Have 18 TeraFLOPs of Compute Performance

NVIDIA will soon launch its next-generation lineup of graphics cards based on the new and improved "Ampere" architecture. With the first Tesla server cards of the Ampere lineup going inside Indiana University's Big Red 200 supercomputer, we now have some potential specifications and information about their compute performance. Thanks to Twitter user @dylan522p, who did some math on the potential compute performance of the Ampere GPUs based on NextPlatform's report, we can infer that Ampere is potentially going to feature up to 18 TeraFLOPs of FP64 compute performance.

With the Big Red 200 supercomputer being based on Cray's Shasta building block, it is being deployed in two phases. The first phase is the deployment of 672 dual-socket nodes powered by AMD's EPYC 7742 "Rome" processors, which provide 3.15 PetaFLOPs of combined FP64 performance. With a total of 8 PetaFLOPs planned for Big Red 200, that leaves just under 5 PetaFLOPs to come from the GPU+CPU portion of the system. Consider a node configuration that pairs one next-generation AMD "Milan" 64-core CPU with four NVIDIA "Ampere" GPUs. If we assume that "Milan" boosts FP64 performance by 25% compared to "Rome," the math shows that the 256 GPUs to be delivered in the second phase of the Big Red 200 deployment would each offer up to 18 TeraFLOPs of FP64 compute performance. Even if "Milan" doubles the FP64 compute power of "Rome," that still leaves around 17.6 TeraFLOPs of FP64 performance per GPU.
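For readers who want to retrace that estimate, here is a short illustrative Python sketch of the same back-of-envelope math; small differences from the figures quoted above come down to rounding of the published numbers:

TARGET_PFLOPS = 8.0              # stated goal for Big Red 200
PHASE1_CPU_PFLOPS = 3.15         # 672 dual-socket EPYC 7742 nodes
PHASE1_SOCKETS = 672 * 2
GPUS = 256
GPU_NODES = GPUS // 4            # one "Milan" CPU plus four GPUs per node

rome_fp64_per_socket = PHASE1_CPU_PFLOPS / PHASE1_SOCKETS   # ~2.34 TFLOPs per socket

for label, milan_scale in {"Milan at +25% over Rome": 1.25, "Milan at 2x Rome": 2.0}.items():
    milan_cpu_pflops = rome_fp64_per_socket * milan_scale * GPU_NODES
    gpu_budget_pflops = TARGET_PFLOPS - PHASE1_CPU_PFLOPS - milan_cpu_pflops
    per_gpu_tflops = gpu_budget_pflops / GPUS * 1000
    print(f"{label}: ~{per_gpu_tflops:.1f} TFLOPs FP64 per GPU")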

AMD Strengthens Senior Leadership Team

AMD (NASDAQ: AMD) today announced several promotions and a new hire to strengthen its senior leadership team to further enable the company's continued growth.

AMD announced four senior vice president promotions:
  • Nazar Zaidi to senior vice president of Cores, Server SoC and Systems IP Engineering with continued responsibility for leading the development of leadership CPU cores, server SoCs and system IP.
  • Andrej Zdravkovic to senior vice president of Software Development, leading the teams responsible for all aspects of AMD software strategy and development across AMD graphics, client and data center products.
  • Spencer Pan to senior vice president of Greater China Sales and president of AMD Greater China, with responsibility for leading all sales and go-to-market activities for AMD in Greater China and expansion of strategic partner and customer relationships in the region.
  • Jane Roney to senior vice president of Business Operations, responsible for aligning and scaling critical business processes across the company to support growth and help ensure consistent execution.

AMD CEO To Unveil "Zen 3" Microarchitecture at CES 2020

A prominent Taiwanese newspaper reported that AMD will formally unveil its next-generation "Zen 3" CPU microarchitecture at the 2020 International CES. Company CEO Dr. Lisa Su will deliver an address revealing three key client-segment products under the new 4th generation Ryzen processor family, as well as the company's 3rd generation EPYC enterprise processor family based on the "Milan" MCM that succeeds "Rome." AMD is also keen on developing an HEDT version of "Milan" for the 4th generation Ryzen Threadripper family, codenamed "Genesis Peak."

The bulk of the client-segment will be addressed by two distinct developments, "Vermeer" and "Renoir." The "Vermeer" processor is a client-desktop MCM that succeeds "Matisse," and will implement "Zen 3" chiplets. "Renoir," on the other hand, is expected to be a monolithic APU that combines "Zen 2" CPU cores with an iGPU based on the "Vega" graphics architecture, with updated display- and multimedia-engines from "Navi." The common thread between "Milan," "Genesis Peak," and "Vermeer" is the "Zen 3" chiplet, which AMD will build on the new 7 nm EUV silicon fabrication process at TSMC. AMD stated that "Zen 3" will have IPC increases in line with a new microarchitecture.

AMD Reports Third Quarter 2019 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the third quarter of 2019 of $1.80 billion, operating income of $186 million, net income of $120 million and diluted earnings per share of $0.11. On a non-GAAP(*) basis, operating income was $240 million, net income was $219 million and diluted earnings per share was $0.18.

"Our first full quarter of 7 nm Ryzen, Radeon and EPYC processor sales drove our highest quarterly revenue since 2005, our highest quarterly gross margin since 2012 and a significant increase in net income year-over-year," said Dr. Lisa Su, AMD president and CEO. "I am extremely pleased with our progress as we have the strongest product portfolio in our history, significant customer momentum and a leadership product roadmap for 2020 and beyond."