Apple doesn't matter beyond its 10% market share; they don't target servers any more.
Ampere is a step away from being fully owned by Oracle; I bet most of the HN ARM-cheering crowd is blissfully unaware of it.
Graviton is only relevant for AWS customers.
Is it? I presume that a large chunk of AMD's $3.5B is MI3XX chips, and very little of Intel's $3.5B is AI, so doesn't that mean that Xeon likely still substantially outsells EPYC?
This may be in the cards.
Maybe Pat has lit the much needed fire under them.
Unfortunately for Intel, X Elite was a bad CPU, and it has since been fixed with the Snapdragon 8 Elite. The new core uses a tiny fraction of the power of X Elite (far more than the N3 node shrink alone would offer). The core also got a bigger frontend and a few other changes which seem to have improved IPC.
Qualcomm said they are leading in performance per area and I believe it is true. Lunar Lake's P-core is over 2x as large (4.5 mm² vs 2.2 mm² for the 8 Elite core) and Zen5 is nearly 2x as large too at 4.2 mm² (even Zen5c is massively bigger at 3.1 mm²).
X Elite 2 will either be launching with 8 Elite's core or an even better variant and it'll be launching quite a while before Panther Lake.
> Future Intel generations of chips, including Panther Lake and Nova Lake, won’t have baked-on memory. “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,” said Gelsinger on Intel’s Q3 2024 earnings call, as spotted by VideoCardz.[0]
[0]: https://www.theverge.com/2024/11/1/24285513/intel-ceo-lunar-...
When you prioritize yourself ("way to run the business") over delivering what customers want, you're finished. Some companies can get that wrong for a long time, but Intel has a competitor giving customers much more of what they want. I want a great chip and honestly don't know, care, or give a fuck what's best for Intel.
Unless “way to run the business” means “delivering what the customer wants.”
And also, they compete in the same price bracket as Zen 5, which is more performant with not much worse battery life.
LNL is too little too late.
[0] https://www.bestbuy.com/site/asus-vivobook-s-14-14-oled-lapt...
We will see whatever they come out with for 17th gen onwards, but for now Intel needs to fucking pay back their CHIPS money.
TSMC Washington is making 160nm silicon [0], and TSMC Arizona is still under construction.
[0] https://www.tsmcwashington.com/en/foundry/technology.html
There's 4-nm "engineering wafer" production happening at TSMC Arizona already, and apparently the yields are decent:
https://finance.yahoo.com/news/tsmc-arizona-chip-plant-yield...
No idea when/what/how/etc that'll translate to actual production.
---
Doing a bit more poking around the net, it looks like "first half 2025" is when actual production is pencilled in for TSMC Arizona. Hopefully that works out.
I'm not saying that TSMC is never going to build anything in the US, but rather that the current Lunar / Arrow Lake chips on the market are not being fabbed in the US because that capacity is simply not online yet.
2025H1 seems much more promising for TSMC Arizona compared to the mess that is Samsung's Taylor, TX plant (also nominally under construction).
AMD is IME more finicky with RAM, chipset / UEFI / builtin peripheral controller quality and so on. Not prohibitively so, but it's more work to get an AMD build to run great.
No trouble with any AMD or Intel Thinkpad T models, Lenovo has taken care of that.
A dying platform and as relevant as VAX/VMS going forward.
Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.
But having worked with Intel on some of those SoCs, it's everything else that fell down. They were late, they were the "disfavored" teams in the eyes of the execs, they were the engineers' last priority, and they had stupid hardware bugs Intel refused to fix and respin: everything you could do to set up a project to fail.
This was the main thing, as by that point, all native code was being compiled to Arm and not x86. Using x86 meant that some apps, libraries, etc just didn't work.
Shortly after, though, ARM launched the A15 and the game was over. The A15 was faster per clock while using less power too. Intel's subsequent Atom generations never even came close after that.
The BoM was pretty much identical to other devices.
The use cases for FPGAs in consumer devices are ... close to zero, unless you're talking about implementing copy protection. Reverse engineering FPGA bitstreams is pretty much impossible unless you're the NSA, MI6, or Mossad, with infinite brains to throw at the problem (and, more likely than not, insider knowledge from the vendors).
Qualcomm made a 216-page proposal for their Znew[0] "extension".
It was basically "completely change RISC-V to do what Arm is doing". The only reason for this was that it would allow a super-fast transition from ARM to RISC-V. It was rejected HARD by all the other members.
Qualcomm is still making large investments in RISC-V. I saw an article estimating that the real reason for the Qualcomm v Arm lawsuit is that Qualcomm's old royalties were 2.5-3% while the new royalties would be 4-5.5%. We're talking about billions of dollars, and that's plenty of incentive for Qualcomm to switch ISAs. Why should they pay billions for the privilege of designing their own CPUs?
[0] https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...
First I've heard of this. Is this actually a possibility?
To me it seems they just wanted to keep their lock-in monopoly because they own x86. Very rational albeit stupid; but of course the people who made those decisions are long gone from the company, many probably retired on their short-term-focused bonuses.
Even if the success of the iPhone was hard to foresee, he surely had the Core Duo in his hands when this happened, even if it hadn't launched yet, so the company had just found its footing again and should have attempted this moonshot: if the volume is low, the losses are low; if the volume is high, economies of scale make it a win. This is not hindsight 20/20; it holds even if no one could have foreseen just how high the volume would turn out to be.
The public clouds help a lot here by trivializing testing and locking in enough volume to get all of the basic stuff supported, and I think that’s why AMD was more successful now than they were in the Opteron era.
Maybe recent EPYC has caught up? I haven't been following too closely since it hasn't mattered to me. But both companies were suggesting that AMD would pass Intel by.
Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.
No, it's not even close. AMD is miles ahead.
This is a Phoronix review for Turin (current generation): https://www.phoronix.com/review/amd-epyc-9965-9755-benchmark...
You can similarly search for Phoronix reviews of the Genoa, Bergamo, and Milan generations (the previous two generations).
AMD is still going to win a lot of the time, but Intel is better than it seems.
Their acceleration primitives work with many TLS implementations, nginx, and SSH, among many others.
Possibly AMD is doing something similar, but I'm not aware of it.
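For what it's worth, wiring an application up to QAT is mostly an OpenSSL exercise. A minimal sketch, assuming OpenSSL's ENGINE interface (deprecated in 3.0 but still shipped) and that Intel's QAT_Engine is installed under the id "qatengine"; the id and the fallback behaviour are assumptions, adjust for your build:

    /* Opportunistically route crypto through a QAT offload engine,
     * falling back to plain software crypto if it is absent.
     * Build: cc qat_check.c -lcrypto */
    #include <stdio.h>
    #include <openssl/engine.h>

    int main(void)
    {
        ENGINE_load_builtin_engines();
        ENGINE *e = ENGINE_by_id("qatengine");   /* assumed engine id */

        if (e != NULL && ENGINE_init(e)) {
            /* Use the accelerator for every operation it supports
             * (RSA, ciphers, ...); everything else stays in software. */
            ENGINE_set_default(e, ENGINE_METHOD_ALL);
            printf("QAT engine active\n");
            /* ... TLS / crypto work would go here ... */
            ENGINE_finish(e);
        } else {
            printf("no QAT engine found, using software crypto\n");
        }
        if (e != NULL)
            ENGINE_free(e);
        return 0;
    }

nginx and friends do essentially the same thing through their ssl_engine / OpenSSL config hooks rather than in application code.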
In the AI world they have OpenVINO, Intel Neural Compressor, and a slew of other implementations that typically offer dramatic performance improvements.
As we see with AMD trying to compete with Nvidia, software matters - a lot.
And things like the MI300A mean that isn't really a requirement now either.
QAT is an integrated offering by Intel, but there are competing products delivered as add-in cards for most of the things it does, and they have more market presence than QAT. As such, QAT provides much less advantage to Intel than Intel's marketing makes it seem. Because yes, Xeon (including QAT) is better than bare EPYC, but EPYC + a third-party accelerator beats it handily, especially on cost; the appearance of QAT seems to have spooked the accelerator vendors and their prices came down a lot.
For most users it is as if the accelerators do not exist, even though they increase the area and cost of all Intel Xeon CPUs.
This market segmentation policy is exactly as stupid as the removal of AVX-512 from the Intel consumer CPUs.
All users hate market segmentation, and it is an important reason for preferring AMD CPUs, which are differentiated only on quantitative features, like number of cores, clock frequency, or cache size. The Intel CPUs are differentiated on qualitative features, so you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features required to run the program.
Intel marketing has always hoped that by offering nice features only in expensive SKUs they will trick customers into spending more for the top models. However, any wise customer has preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but overpriced SKUs.
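To make the "different program variants" complaint concrete: if a feature like AVX-512 may or may not exist on the SKU you land on, you end up writing runtime dispatch along these lines (a minimal sketch using the GCC/Clang builtin on x86; the code-path names are illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* __builtin_cpu_supports() queries the CPU the program is
         * actually running on, not the one it was compiled for. */
        if (__builtin_cpu_supports("avx512f"))
            printf("AVX-512F present: take the wide-vector code path\n");
        else
            printf("AVX-512F absent: fall back to the SSE/AVX2 baseline\n");
        return 0;
    }

With AMD the check mostly comes back the same across the line-up; with Intel, as the comment above says, the answer can depend on which SKU tier was purchased.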
Market segmentation sucks, but people buying 10,000+ servers do not do it based on which vendor gives them better vibes. People seem to generally be buying a mix of vendors based on what they are good at.
On the other hand, for small businesses or individual users, who have no choice but to buy at list prices or above, the TCO of Intel server CPUs has become unacceptably bad. Before 2017, up to the Broadwell Xeons, the TCO of Intel server CPUs could be very good, even when bought at retail for a single server. Starting with the Skylake Server Xeons, however, the price of the non-crippled Xeon SKUs increased so much that they have no longer been a good choice, except for the very big customers who buy them for far less than the official prices.
The fact that Intel must discount their server CPUs so heavily for the big customers likely explains a good part of their huge financial losses over the last few quarters.
Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.
So often we just build for 1-2 of the most common, baseline versions of an ISA.
Probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they support. So it's not easy at program-load time to notice that you're going to have a problem with unsupported instructions.
(It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)
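If you do want one binary to cover several ISA levels without hand-rolled dispatch, GCC's function multi-versioning is about as low-effort as it gets. A minimal sketch (GCC 6+ on x86-64/glibc; the clone list and the dot-product are just examples):

    #include <stddef.h>
    #include <stdio.h>

    /* The compiler emits one clone per target; the dynamic loader picks
     * the best one for the running CPU via an ifunc resolver at startup. */
    __attribute__((target_clones("default", "avx2", "avx512f")))
    float dot(const float *a, const float *b, size_t n)
    {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
        printf("dot = %f\n", dot(a, b, 8));   /* build: gcc -O3 dot.c */
        return 0;
    }

It only helps per-function hot spots, though; it doesn't fix the loader-level "which variants does this whole binary need" problem mentioned above.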
this is one of those things where there's a lot of money on the line, and people are willing to do the math.
the fact that it took this long should tell you everything you need to know about the reality of the situation
AMD has had the power efficiency crown in data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.
The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that they sell much better to the hyperscalers than they sell to small and medium businesses.
> idk go look at the xeon versus amd equivalent benchmarks.
They all show AMD with a strong lead in power efficiency for the past 5 years.
Edit: awww no trash talking it yet, unlike the 7800x3d :)
Intel has just been removed from the Dow index. They are underperforming on multiple levels.
https://apnews.com/article/dow-intel-nvidia-sherwinwilliams-...
On the other hand AMD has been very conservative with their EPYC sales and forecast.
The good news is that AMD has, finally, mostly achieved that, and in some ways they are now superior. But that has taken time: far longer than it took AMD to beat Intel at benchmarks.
https://www.ebay.com/itm/235469964291
Best start believin' in the crazy cyberpunk stories. You're in one!
edit: legally that is, assuming there's even enough demand for these tools for anyone to bother cracking them
Where are they pulling those PCB chunks from that have cheap Ultrascale+ chips?
When the Intel 80386-33 came out we thought it was the pinnacle of CPUs, running our Novell servers! We now had a justification to switch from arcnet to token ring. Our servers could push things way faster!
Then, in the middle of 1991, the AMD 80386-40 CPU came out. Mind completely blown! We ordered some (I think) Twinhead motherboards. They were so fast we could only use Hercules mono cards in them; all other video cards were fried. 16Mb token ring was out, so some of my clients moved to it with the fantastic CPU.
I have seen some closet servers running Novell NetWare 3.14 (?) with that AMD CPU in the late '90s. There was a QIC tape & tape drive in the machine; the tape was never changed for maybe a decade? The machine never went down (or was properly backed up).
> While the AM386 CPU was essentially ready to be released prior to 1991, Intel kept it tied up in court.[2] Intel learned of the Am386 when both companies hired employees with the same name who coincidentally stayed at the same hotel, which accidentally forwarded a package for AMD to Intel's employee.[3]
That's amazing!
After all, it sounds like they directly caused a "billion dollar" type of problem for AMD through their mistake.
Don't look too closely at the collision avoidance mechanism in 10base-T1S, standardized in 2020. Sure looks like a virtual token ring passing mechanism if you squint...
I once had a bloke writing a patch for eDirectory in real time in his basement whilst running our data on his home lab gear, on a weekend. I'm in the UK and he was in Utah. He'd upload an effort and I'd ftp it down, put it in place, reboot the cluster and test. Two iterations and job done. That was quite impressive support for a customer with roughly 5,000 users.
For me the CPU wasn't that important, per se. NWFS ate RAM: when the volumes were mounted, the system generated all sorts of funky caches which meant that you could apply and use trustee assignments (ACLs) really fast. The RAID controller and the discs were the important thing for file serving and ideally you had wires, switches and NICs to dole the data out at a reasonable rate.
Good luck doing that on a load balanced rack of 96 core AMD servers today.
SIP hasn't gotten much heavier, nor CGI scripts, and tiny distros like OpenWRT can do a lot with little iron.
Heard lots of rough ADSL era VoIP stories, hopefully you weren't struggling with that back then.
That's not how it works. You need to pump money into fabs to get them working, and Intel doesn't have money. If AMD had fabs burning through their money, they would also have a much lower valuation.
The market is completely irrational on AMD. Their 52-week high is ~$225 and their 52-week low is ~$90. $225 was hit when AMD was guiding ~$3.5B in datacenter GPU revenue. Now they're guiding to end the year at $5B+ in datacenter GPU revenue, but the stock is ~$140?
I think it's because of how early Nvidia announced Blackwell (it isn't shipping in any meaningful volume yet), and the market thinks AMD needs to compete with GB200 when they're actually competing with H200 this quarter. And for whatever reason the market thinks AMD will get zero AI growth next year? I don't know how to explain the stock price.
Anyway, they hit record quarterly revenue this Q3 and are guiding to beat that record by ~$1B next quarter. The price might move a lot based on how AMD guides for Q1 2025.
Being fabless does have an impact, because it caps AMD's margins and makes x86 their only moat. They can only extract value if they remain competitive on price. Sure, that doesn't impact Nvidia, but Nvidia gets to have fat margins because they have virtually no competition.
> The market is completely irrational on AMD. Their 52-week high is ~$225 and their 52-week low is ~$90.
That's volatility, not irrationality. As I wrote, AMD's valuation is built on the assumption that they will keep executing in the DC space, that Intel will keep shitting the bed, and that their MI series will eventually be competitive with Nvidia. These assumptions make investors skittish, and any news about AMD moves the stock.
> the market thinks AMD needs to compete with GB200 while they're actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year?
The only hyperscaler that picked up the MI300X is Azure, and they GA'ed it 2 weeks ago; GCP and AWS are holding off. The uncertainty about when (if) it will catch on is a factor, but the growing competition from those same hyperscalers building their own chips means the opportunity window could be closing.
It's ok to be bullish on AMD the same way that I am bearish on it, but I would maintain that the swings have nothing to do with irrationality.
What does “GA” mean in this context?
I’m usually pretty good at deciphering acronyms, but in this case, I have no idea.
GA means Generally Available. To GA something is a shorthand for "to make X generally available".
Many "influencers" have been convinced that it is all about software - especially in AI. (I happen to agree, but my opinion doesn't matter.)
It doesn't matter how well a company is doing if they are targeting the wrong point - their future will be grim. And stock is all about the future.
Everyone, I think, knew AMD was catching up but thought this was still a year or two out.
If I were AMD CEO I would make the top priority to have a software stack on par with CUDA so that AMD GPUs have a chance in the data centers.