Thursday, April 25th 2024

AMD "Strix Point" Mobile Processor Confirmed 12-core/24-thread, But Misses Out on PCIe Gen 5

AMD's next-generation Ryzen 9000 "Strix Point" mobile processor, which succeeds the current Ryzen 8040 "Hawk Point" and Ryzen 7040 "Phoenix," is confirmed to feature a 12-core/24-thread CPU configuration, according to a specs leak by HKEPC citing sources among notebook OEMs. It appears that Computex 2024 will be big for AMD, with the company preparing next-generation processor announcements across its desktop and notebook lines. Both the "Strix Point" mobile processor and the "Granite Ridge" desktop processor debut the company's next "Zen 5" microarchitecture.

Perhaps the biggest takeaway from "Zen 5" is that AMD has increased the number of CPU cores per CCX from 8 in "Zen 3" and "Zen 4" to 12 in "Zen 5." While this doesn't affect the core counts of its CCD chiplets (which are still expected to be 8-core), the "Strix Point" processor appears to use one large CCX with 12 cores. Each of the "Zen 5" cores has a 1 MB dedicated L2 cache, while the 12 cores share a 24 MB L3 cache. Besides the generational IPC gains introduced by "Zen 5," the 12-core/24-thread CPU marks a 50% increase in core count over "Hawk Point." It's not just the CPU complex that grows; the iGPU also gets a hardware update.
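For context, here is a quick back-of-the-envelope restatement of those CPU figures (a sketch based on the leaked numbers cited above, not additional information from the leak):

```python
# Back-of-the-envelope check of the leaked CPU figures (illustrative only).
hawk_point_cores = 8      # Ryzen 8040 "Hawk Point": 8 cores / 16 threads
strix_point_cores = 12    # leaked "Strix Point": 12 cores / 24 threads
l2_per_core_mb = 1        # dedicated L2 per "Zen 5" core
shared_l3_mb = 24         # L3 shared across the 12-core CCX

core_uplift = strix_point_cores / hawk_point_cores - 1
total_l2_mb = strix_point_cores * l2_per_core_mb

print(f"Core-count uplift: {core_uplift:.0%}")   # 50%
print(f"Total L2: {total_l2_mb} MB, shared L3: {shared_l3_mb} MB")
```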
Apparently, AMD is increasing the workgroup processor (WGP) count of the iGPU from 6 on the current "Hawk Point" processor to 8 on "Strix Point." This works out to 16 compute units, or 1,024 stream processors, a 33% increase in shader count. The new iGPU is based on the updated RDNA 3+ graphics architecture. "Strix Point" also debuts AMD's 2nd Generation Ryzen AI NPU based on the XDNA 2 architecture. The NPU offers 50 AI TOPS of performance, more than three times the 16 TOPS offered by the NPU on "Hawk Point."
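The iGPU and NPU arithmetic works out the same way; the sketch below simply re-derives the figures quoted above (the 2 CU per WGP and 64 stream processors per CU ratios are standard RDNA bookkeeping, not part of the leak):

```python
# Illustrative restatement of the iGPU and NPU figures cited above.
CU_PER_WGP = 2    # one RDNA workgroup processor = 2 compute units
SP_PER_CU = 64    # stream processors per compute unit

hawk_point_wgp, strix_point_wgp = 6, 8
compute_units = strix_point_wgp * CU_PER_WGP            # 16 CUs
stream_processors = compute_units * SP_PER_CU           # 1,024 SPs
shader_uplift = strix_point_wgp / hawk_point_wgp - 1    # ~33% more shaders

hawk_point_tops, strix_point_tops = 16, 50
npu_uplift = strix_point_tops / hawk_point_tops         # ~3.1x

print(f"{compute_units} CUs / {stream_processors} SPs (+{shader_uplift:.0%} shaders)")
print(f"NPU: {strix_point_tops} TOPS, {npu_uplift:.1f}x over Hawk Point")
```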

In terms of I/O, one can expect an updated display engine with DisplayPort 2.1 UHBR10, DSC, and support for 8K @ 60 Hz over a single cable. The memory interfaces are expected to remain unchanged save for increased reference speeds, with support for DDR5 and LPDDR5(x) memory types. One area of disappointment is the PCIe interface. We had expected "Strix Point" to feature PCIe Gen 5, but it seems AMD had other plans. The PCIe interface of "Strix Point" will be similar to that of "Phoenix" and "Hawk Point": it sticks to PCIe Gen 4. While this might not mean much for discrete GPUs, it would mean that you can't use the latest Gen 5 NVMe SSDs at their advertised speeds.
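To put rough numbers on that last point, the following sketch compares the nominal per-direction ceilings of a Gen 4 versus Gen 5 x4 M.2 link (standard PCIe rates, not figures from the leak):

```python
# Nominal per-direction bandwidth of an M.2 x4 link (PCIe spec rates;
# real-world SSD throughput lands somewhat below these ceilings).
GEN4_LANE_GBPS = 1.969    # PCIe 4.0: 16 GT/s with 128b/130b encoding
GEN5_LANE_GBPS = 3.938    # PCIe 5.0: 32 GT/s with 128b/130b encoding

print(f"Gen 4 x4 ceiling: ~{4 * GEN4_LANE_GBPS:.1f} GB/s")   # ~7.9 GB/s
print(f"Gen 5 x4 ceiling: ~{4 * GEN5_LANE_GBPS:.1f} GB/s")   # ~15.8 GB/s
# A Gen 5 SSD advertised at 12-14 GB/s would therefore be capped near
# ~7.9 GB/s when attached to a Gen 4 host such as "Strix Point".
```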

The same specs sheet also confirms several specs of the larger "Strix Halo" chiplet-based mobile flagship processor; you can read all about it in our older article.
Source: HKEPC

42 Comments on AMD "Strix Point" Mobile Processor Confirmed 12-core/24-thread, But Misses Out on PCIe Gen 5

#1
P4-630
btarunr: it would mean that you can't use the latest Gen 5 NVMe SSDs at their advertised speeds.
Has a sad
Posted on Reply
#2
bug
Quickly, who can tell the difference between CCD and CCX otoh?
Posted on Reply
#3
Denver
PCIe 5 SSDs have so many advantages that I can't even list them all... but among them being more expensive, hot is the highlight.

Ultra Important features that no one can live without, for sure
Posted on Reply
#4
maxfly
Going to have to put off the new laptop once again it seems. These definitely look worth the wait.
Posted on Reply
#5
bug
Denver: PCIe 5 SSDs have so many advantages that I can't even list them all... but among them being more expensive, hot is the highlight.

Ultra Important features that no one can live without, for sure
There are important advantages for servers. But that doesn't mean they're also useful on a desktop or, as is the case here, in a laptop.
The only thing that uses PCIe 5 in the consumer space right now is expensive SSDs that you have to figure out how to keep from overheating.
Posted on Reply
#6
tommo1982
bug: Quickly, who can tell the difference between CCD and CCX otoh?
I keep forgetting, CCX houses CPU cores with cache and CCD has two of CCX's?
I simply omit the part stating the number of CCD'S and CCX's and focus on the number of cores and how L3 cache was implemented.

Edit:
So many errors typing on a smartphone
Posted on Reply
#7
persondb
bug: Quickly, who can tell the difference between CCD and CCX otoh?
CCD is the chiplet.
CCX is the core complex, i.e. the cluster of cores.

There isn't much of a difference nowadays because AMD has gone to 1 CCD = 1 CCX since Zen 3. But it was an important consideration in the Zen 2 days, as each CCD had up to 2 CCXs, each with 4 cores. There were differences in performance between Zen 2 parts that were both four-core, but one had them in a single CCX while the other had two cores enabled per CCX.

The same is true for monolithic Zen 2 APUs which had up to two CCXs.
Posted on Reply
#9
Daven
AMD is doubling down on APUs. Maybe this is why they are possibly having second thoughts on the high end discrete client GPU market.
Posted on Reply
#10
Denver
Daven: AMD is doubling down on APUs. Maybe this is why they are possibly having second thoughts on the high end discrete client GPU market.
What's particularly intriguing is AMD's incorporation of a "GPU block" into their chiplet strategy. Creating a GPU die solely for use in a single product contradicts the principles of modularity/chiplet economics.
It makes me wonder if they might embrace a design that pairs two mid-end chips to make a high-end solution(replicating the strategy adopted by Ryzen).

Plus, it's interesting to note that 40CU matches the specifications of (numerous) CDNA3 dies utilized in the Instinct design. :cool:
Posted on Reply
#11
bug
Daven: AMD is doubling down on APUs. Maybe this is why they are possibly having second thoughts on the high end discrete client GPU market.
Being unable to compete at the top might have something to do with it.
But when "the top" goes for $1k+, I really don't care what happens there.
Posted on Reply
#12
ncrs
persondb: There isn't much of a difference nowadays because AMD has gone to 1 CCD = 1 CCX since Zen 3. But it was an important consideration in the Zen 2 days, as each CCD had up to 2 CCXs, each with 4 cores. There were differences in performance between Zen 2 parts that were both four-core, but one had them in a single CCX while the other had two cores enabled per CCX.

The same is true for monolithic Zen 2 APUs which had up to two CCXs.
Zen 4c chiplets retain the split to 2 CCX in a single CCD. Each CCX has up to 8 cores with 16MB L3 cache.
Posted on Reply
#13
Daven
Denver: What's particularly intriguing is AMD's incorporation of a "GPU block" into their chiplet strategy. Creating a GPU die solely for use in a single product contradicts the principles of modularity/chiplet economics.
It makes me wonder if they might embrace a design that pairs two mid-end chips to make a high-end solution(replicating the strategy adopted by Ryzen).

Plus, it's interesting to note that 40CU matches the specifications of (numerous) CDNA3 dies utilized in the Instinct design. :cool:
I was thinking along the same lines.
bug: Being unable to compete at the top might have something to do with it.
But when "the top" goes for $1k+, I really don't care what happens there.
The 7900XTX is the second fastest GPU ever made. It's hard to compete against emotion. There are a large number of Nvidia users who fall into three categories:

1. Cult followers - few but they actually exist.
2. Belief in anything negative even if untrue to always justify buying Nvidia.
3. And the worst Nvidia user - those who want AMD to compete in order to bring down prices of Nvidia cards so they can afford one. That’s a special kind of irrationality.

So AMD might go the Apple route and almost always bundle the GPU and CPU together. Maybe make a discrete card for the most popular price bracket (mid range) and save the rest of capacity space for Instincts.
Posted on Reply
#14
Carillon
Denver: What's particularly intriguing is AMD's incorporation of a "GPU block" into their chiplet strategy. Creating a GPU die solely for use in a single product contradicts the principles of modularity/chiplet economics.
It makes me wonder if they might embrace a design that pairs two mid-end chips to make a high-end solution(replicating the strategy adopted by Ryzen).

Plus, it's interesting to note that 40CU matches the specifications of (numerous) CDNA3 dies utilized in the Instinct design. :cool:
The SoC die looks much like a discrete GPU chip with 2 extra Infinity Fabric links; they might be able to link together 2 of those SoC dies or just reuse them singly for an entry-level GPU SKU.
Though I don't think IF links are fast enough to link 2 GPUs.
Posted on Reply
#15
SL2
Daven: Maybe this is why they are possibly having second thoughts on the high end discrete client GPU market.
I thought part of it was that they're not selling enough of them; I could be wrong tho.

Strix Point and Strix Halo can't compete with the high end anyway.
Posted on Reply
#16
Hakker
Oh, you can't use Gen 5 SSDs at advertised speeds. Funny. It would be a really smart move to add Gen 5 SSDs, which basically use as much power as half the rest of the laptop. Come on. I can already hear some people cry 'but but but I want to fill my 1 TB SSD in 90 seconds', even though they will NEVER do this!
You're better off with a Gen 3 SSD. It's more than fast enough for any laptop and never uses a lot of power.
Posted on Reply
#17
TheinsanegamerN
Hakker: Oh, you can't use Gen 5 SSDs at advertised speeds. Funny. It would be a really smart move to add Gen 5 SSDs, which basically use as much power as half the rest of the laptop. Come on. I can already hear some people cry 'but but but I want to fill my 1 TB SSD in 90 seconds', even though they will NEVER do this!
You're better off with a Gen 3 SSD. It's more than fast enough for any laptop and never uses a lot of power.
I still use P31 SSDs in my laptops since they are so energy efficient.
Daven: I was thinking along the same lines.


The 7900XTX is the second fastest GPU ever made. It's hard to compete against emotion. There are a large number of Nvidia users who fall into three categories:

1. Cult followers - few but they actually exist.
2. Belief in anything negative even if untrue to always justify buying Nvidia.
3. And the worst Nvidia user - those who want AMD to compete in order to bring down prices of Nvidia cards so they can afford one. That’s a special kind of irrationality.

So AMD might go the Apple route and almost always bundle the GPU and CPU together. Maybe make a discrete card for the most popular price bracket (mid range) and save the rest of capacity space for Instincts.
Option 4: AMD fans talk a big game but never put their money where their mouth is. If they DO buy a GPU, it's a 6500xt or used mid range GPU.

Besides, what is the 7900xtx's market? How many people are there who want to buy a $1000 GPU that pulls more power and is significantly slower at RT than the competition? There IS a market there (the 7900xt/xtx were hard to find for months after launch), but that market does have a limit.

Many who had pent-up demand for a faster GPU jumped on a 6800/6900 series and were not willing to dump a grand for a 30% uptick. Nvidia has a rotation of people who seem to wait 2-3 gens then upgrade (many Pascal and Turing owners are buying Ampere, if Steam results indicate anything); for AMD, 2-3 gens earlier there was no high end, and few if any consumers had something considered good enough to wait for a 7000. IMO this is the reason their mid range does a lot better; there's actually a rotation of customers there.
Posted on Reply
#18
Wirko
Denver: What's particularly intriguing is AMD's incorporation of a "GPU block" into their chiplet strategy. Creating a GPU die solely for use in a single product contradicts the principles of modularity/chiplet economics.
I speculate that the "GPU block" is actually two chiplets, GPU and IOD. That seems better for achieving modularity, which AMD is the master of. The IOD could also be made on an older process.
Denver: It makes me wonder if they might embrace a design that pairs two mid-end chips to make a high-end solution(replicating the strategy adopted by Ryzen).
That would probably require a 512-bit memory bus and 8 LPDDR packages very close to the processor (meaning, on the substrate) for adequate bandwidth. Not impossible but it would be a giant BGA package.
Posted on Reply
#19
Denver
Carillon: The SoC die looks much like a discrete GPU chip with 2 extra Infinity Fabric links; they might be able to link together 2 of those SoC dies or just reuse them singly for an entry-level GPU SKU.
Though I don't think IF links are fast enough to link 2 GPUs.
If it weren't for the RAM bandwidth limitation, it would be an excellent deal for AMD to sell strong APUs instead of low-end GPUs. They would in fact be selling CPU and GPU in a combo with numerous advantages.
TheinsanegamerN: I still use P31 SSDs in my laptops since they are so energy efficient.


Option 4: AMD fans talk a big game but never put their money where their mouth is. If they DO buy a GPU, it's a 6500xt or used mid range GPU.

Besides, what is the 7900xtx's market? How many people are there who want to buy a $1000 GPU that pulls more power and is significantly slower at RT than the competition? There IS a market there (the 7900xt/xtx were hard to find for months after launch), but that market does have a limit.

Many who had pent-up demand for a faster GPU jumped on a 6800/6900 series and were not willing to dump a grand for a 30% uptick. Nvidia has a rotation of people who seem to wait 2-3 gens then upgrade (many Pascal and Turing owners are buying Ampere, if Steam results indicate anything); for AMD, 2-3 gens earlier there was no high end, and few if any consumers had something considered good enough to wait for a 7000. IMO this is the reason their mid range does a lot better; there's actually a rotation of customers there.
The 7900XTX is among AMD's best-selling GPUs. Please, stop the RT cultism. It has already been proven to be a waste of resources that does not run acceptably on GPUs below the 4090, at least in the two games where the technology makes any real difference.
Posted on Reply
#20
SL2
Daven: 3. And the worst Nvidia user - those who want AMD to compete in order to bring down prices of Nvidia cards so they can afford one. That’s a special kind of irrationality.
What's irrational about wanting lower prices? Does anyone here hate competition? :confused:

How many actually think "I just wish AMD were gone, even if it meant I had to pay twice the price for my next RTX!"?
Posted on Reply
#21
Alan Smithee
Since the 12C SKU uses the same FP8 package as the single-CCD 7040/8040 series, the chart implies that there will be a 12C CCD using presumably 1-2 Zen5 plus 10-11 Zen5c
Posted on Reply
#22
SL2
Alan Smithee: Since the 12C SKU uses the same FP8 package as the single-CCD 7040/8040 series, the chart implies that there will be a 12C CCD using presumably 1-2 Zen5 plus 10-11 Zen5c
How?
Posted on Reply
#23
bug
Daven: The 7900XTX is the second fastest GPU ever made. It's hard to compete against emotion. There are a large number of Nvidia users who fall into three categories:

1. Cult followers - few but they actually exist.
2. Belief in anything negative even if untrue to always justify buying Nvidia.
3. And the worst Nvidia user - those who want AMD to compete in order to bring down prices of Nvidia cards so they can afford one. That’s a special kind of irrationality.

So AMD might go the Apple route and almost always bundle the GPU and CPU together. Maybe make a discrete card for the most popular price bracket (mid range) and save the rest of capacity space for Instincts.
This "second fastest" is within the margin of error of the 4080. And only if you disregard RT.
Also, RDNA3 is almost a freak in recent AMD GPU history in that it doesn't throw efficiency under the bus.
Posted on Reply
#24
Wirko
Alan Smithee: Since the 12C SKU uses the same FP8 package as the single-CCD 7040/8040 series, the chart implies that there will be a 12C CCD using presumably 1-2 Zen5 plus 10-11 Zen5c
Even if it turns out to be hybrid, why do you expect only 1-2 big cores, not 4?
Posted on Reply
#25
Minus Infinity
Daven: AMD is doubling down on APUs. Maybe this is why they are possibly having second thoughts on the high end discrete client GPU market.
I don't see that. High-end dGPU is not dead; it just has to wait for RDNA5. RDNA4's complex chiplet design is why the RDNA4 high end was killed: it was taking too much time and effort to get working, and it was much more complex than RDNA3. They decided not to spend the money and man-hours getting it right, as that would push RDNA5 back even further when it's already 12 months behind Blackwell. Also, the RDNA4 mid-range is said to be as strong as current RDNA3 flagships but a lot cheaper. If true, N48 will sell up a storm at $500 max.
Posted on Reply