Monday, October 23rd 2023

Qualcomm Snapdragon Elite X SoC for Laptop Leaks: 12 Cores, LPDDR5X Memory, and WiFi7

Thanks to information from Windows Report, we have numerous details about Qualcomm's upcoming Snapdragon Elite X chip for laptops. The Snapdragon Elite X SoC is built on Nuvia-derived Oryon cores, twelve of which Qualcomm has placed in the SoC. While we don't know the base frequencies, the all-core boost reaches 3.8 GHz, and the SoC can reach up to 4.3 GHz with single- and dual-core boosting. The slide notes that this is a pure "big"-core configuration, with no big.LITTLE design. The GPU of the Snapdragon Elite X is still based on Qualcomm's Adreno IP; however, the performance figures are up significantly, reaching 4.6 TeraFLOPS of what is presumably FP32 single-precision compute. Accompanying the CPU and GPU are dedicated AI and image-processing accelerators, such as the Hexagon Neural Processing Unit (NPU), which can process 45 trillion operations per second (TOPS). For the camera, the Spectra Image Signal Processor (ISP) supports up to 4K HDR video capture on a dual 36 MP or a single 64 MP camera setup.

The SoC supports LPDDR5X memory running at 8533 MT/s with a maximum capacity of 64 GB. The memory controller is reportedly an 8-channel design with a 16-bit channel width, for a maximum bandwidth of 136 GB/s. The Snapdragon Elite X has PCIe 4.0 and supports UFS 4.0 for storage. All of this is packed on a die manufactured by TSMC on a 4 nm node. In addition to touting excellent performance compared to x86 solutions, Qualcomm also advertises the SoC as power-efficient: the slide claims it uses one-third the power of x86 offerings at the same peak PC performance. The package will also support WiFi7 and Bluetooth 5.4. Officially coming in 2024, the Snapdragon Elite X will have to compete with Intel's Meteor Lake and/or Arrow Lake, in addition to AMD's Strix Point.
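The leaked bandwidth figure checks out against the controller layout: eight 16-bit channels form a 128-bit bus, which at 8533 MT/s works out to roughly 136 GB/s. A quick back-of-envelope calculation (all numbers from the leaked slide):

```python
# Sanity-check the leaked LPDDR5X bandwidth figure:
# 8 channels x 16-bit width = 128-bit bus, at 8533 MT/s.
channels = 8
width_bits = 16
transfer_rate_mts = 8533  # mega-transfers per second

bus_bytes = channels * width_bits / 8                      # 16 bytes per transfer
bandwidth_gbs = transfer_rate_mts * 1e6 * bus_bytes / 1e9  # decimal GB/s

print(f"{bandwidth_gbs:.1f} GB/s")  # -> 136.5 GB/s
```

The result (136.5 GB/s) matches the 136 GB/s quoted on the slide, so the "8-channel, 16-bit" description is consistent with the headline bandwidth.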
Additionally, we have previously reported that Qualcomm is insisting on integrating its own PMICs (Power Management Integrated Circuits), which were originally designed for cell phones, causing significant compatibility and efficiency issues in the deployment of the new Snapdragon Elite X processor. The company also advertises the SoC as capable of running 13-billion-parameter models, as well as 7B models at 70 tokens per second, suggesting that local LLM inference should be quite usable. To learn more, we will have to wait for official reviews next year. Below, you can see the complete specification table, courtesy of Windows Report.
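The 13B claim is plausible from a memory standpoint. The slide does not state the precision used, but assuming common 4-bit quantization for on-device inference, the weight footprints are small relative to the 64 GB memory ceiling:

```python
# Rough weight-memory footprint for the quoted model sizes.
# Assumption (not from the slide): 4-bit quantized weights, a common
# choice for on-device LLM inference; activations/KV cache excluded.
def weights_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Decimal GB needed just to hold the quantized weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for p in (7, 13):
    print(f"{p}B model ~ {weights_gb(p):.1f} GB of weights")
# 7B  -> ~3.5 GB
# 13B -> ~6.5 GB, comfortably inside the 64 GB maximum capacity
```

Since token generation is typically memory-bandwidth-bound, the 136 GB/s figure is the more relevant limit for the 70 tokens-per-second claim than raw TOPS.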
Source: Windows Report

32 Comments on Qualcomm Snapdragon Elite X SoC for Laptop Leaks: 12 Cores, LPDDR5X Memory, and WiFi7

#26
lexluthermiester
trsttte: Why do you say that?
Seriously? Because it's ARM... and Windows On ARM is a bit of a joke, currently. Android is excellent! Sure as hell don't want iOS/MacOS.
trsttte: Well, why not? ARM can be quite efficient in small everyday tasks, and there are more players to help keep prices down and continue advancing performance, contrary to what happens with x86, where if either AMD or Intel starts lagging behind, the other one just waits for them to catch up (i.e., what's been happening the past couple of years with Intel while AMD was down on its luck)
You may have misunderstood my comment. WTHK=Who The Hell Knows

Let's be clear: ARM is more efficient, NOT more powerful, than x86/x64. Watt for watt, cycle for cycle, x86 is a MUCH better performer, full stop, end of discussion. Additionally, x86/x64 CPUs can do A LOT more in hardware than ARM CPUs, as they have more instruction sets. That's the point. ARM is RISC (Reduced Instruction Set Computing). x86/x64 is CISC (Complex Instruction Set Computing).

Using a RISC CPU for a general computing device is doable as long as the tasks are not too complex and the code is optimized and compiled properly. However, there are many functions that are always better done on CISC, as they run hardware-accelerated, whereas on a RISC CPU those same functions have to be emulated in software, which is always less efficient.

EDIT: A perfect example of some of these points is shown in the following video.
And Qualcomm's new Snapdragon SoC will make the RPi5 look slow by comparison. With Android or a version of Linux, the laptop discussed in this article would be excellent!
Posted on Reply
#27
alwayssts
lexluthermiester: Let's be clear: ARM is more efficient, NOT more powerful, than x86/x64... Using a RISC CPU for a general computing device is doable as long as the tasks are not too complex and the code is optimized & compiled properly. [snip]
I think you're over-complicating things. For general use, ARM has been a reasonable arch since the launch of OoO designs, which was 10 years ago. Since 8-wide instruction (A76/A13), five years ago, it has been a viable alternative wrt cost and efficiency. We are now at the precipice where it can make inroads into good-enough ST and fairly amazing MT versus x86, and that's taking into account translation layers. I don't think anyone should be making any bets on where we'll be five years from now.

You have to remember the architecture can also evolve, and there is a decent likelihood fixed-function hardware will be added as time goes on. Give WoA a little time to get its ducks in a row, especially as new players enter the field. If that doesn't work, it's always possible nVIDIA could make a GeForOS, complete with low-level integration and tiles pointing you to GeForce Now, and maybe licensed Nintendo emulation. It could get real weird, real fast. It would've been weirder faster had nVIDIA been able to acquire ARM (good call there, regulatory bodies), but don't be surprised if it still happens.

I find it amazing that people think it's such a big leap when they lived through Apple's transition not only from Intel, but from PowerPC (or at least I did, anyway). These things don't happen overnight, but it can happen (not just a competitive product, but ARM-specific programming), and likely will.

Most people, including most gamers, simply do not currently need something faster than, say, a 12600 for single-threaded performance. They truly do not (I imagine even the PS5 Pro will have a similar-caliber ST CPU, probably ~3.9-4.2 GHz), and this is being compared to a 13800H; right on the money. I struggle to see, even at the high end of the consumer spectrum, how almost anyone is going to need (in the next 10-15 years) something more than 10-15% faster than a 14900K; maybe 25-40% for extreme, extreme enthusiasts. That's a gap of only ~50% or so, and it can arguably be made up with more performance cores (which is exactly what Qualcomm is doing), assuming apps/games take advantage of the higher core count. Eventually they most likely will, assuming Intel/AMD don't stay on an 8 P-core design forever. It's a good design; efficient and forward-thinking.

ARM is now capable of good-enough performance done extremely efficiently, and is focused on going wider... as are the x86 folks (working backwards). You can take the over on (slightly) faster ST performance, but at some point (a couple of generations from now) it will probably all just blur together. This is the first step.
Posted on Reply
#29
alwayssts
TumbleGeorge: @Anandtech liveblog
I don't know what they are filming the performance with. The photo is of poor quality.

1:22:00
Posted on Reply
#30
lexluthermiester
alwayssts: I think you're over-complicating things.
Not really, but you do you.
alwayssts: ARM is now capable of good-enough performance done extremely efficiently
You just restated what I said earlier... If you're going to respond with a small wall of text, could you at least try to make it lucid?
Posted on Reply
#31
Dr. Dro
stimpy88: Can't see the point in this, other than for coding and debugging purposes.
Battery life on ARM laptops is insane. Great for office work and video playback.
Posted on Reply
#32
lexluthermiester
Dr. Dro: Battery life on ARM laptops is insane. Great for office work and video playback.
Good point! ARM hardware with a standard 6 or 9 cell laptop battery pack? Hell yes!
Posted on Reply