AMD outsells Intel in the datacenter space
459 points by baal80spam a day ago | 155 comments
  • jeffbee a day ago |
    Interpretation notes: this is the first time in the era during which these companies have broken out "datacenter" as a reporting category. The last time AMD was clearly on top in terms of product quality, it reported 2006 microprocessor revenue of $5.3 billion while Intel reported $9.2 billion in the same category. In those years the companies incompletely or inconsistently reported separate sales for "server" or "enterprise".
    • rini17 a day ago |
      Still, there were always clearly defined product lines, like Athlon vs. Opteron.
      • jeffbee 15 hours ago |
        Yes, but not differentiated in the earnings reports.
  • gautamcgoel a day ago |
    Damn, first Intel missed out on Mobile, then it fumbled AI, and now it's being seriously challenged on its home turf. Pat has his work cut out for him.
    • jsheard a day ago |
      Not to mention that ARM keeps closing in on their ISA moat via Apple, Ampere, Graviton and so on. Their last bastion is the fact that Microsoft keeps botching Windows for ARM every time they try to make it happen.
      • DanielHB 19 hours ago |
        Not so much with the latest ARM Windows laptops
      • pjmlp 18 hours ago |
        Not really Microsoft; rather the Windows developer ecosystem. Backwards compatibility is the name of the game in PC land, so there is very little value in taking on additional development costs to support ARM alongside x86 for so little additional software sales.

        Apple doesn't matter beyond its 10% market share, they don't target servers any more.

        Ampere is a step away from being fully owned by Oracle; I bet most of the HN ARM-cheering crowd is blissfully unaware of that.

        Graviton is only relevant for AWS customers.

    • bryanlarsen a day ago |
      > seriously challenged on its home turf.

      Is it? I presume that a large chunk of AMD's $3.5B is MI3XX chips and very little of Intel's $3.5B is AI, so doesn't that mean that Xeon likely still substantially outsells EPYC?

      • adgjlsfhk1 16 hours ago |
        Not necessarily. In the past 5 years, the x86 monopoly in the server world has broken; ARM chips like Graviton are a substantial fraction (20%?) of the server CPU market.
    • rafaelmn a day ago |
      His work now boils down to prepping Intel for an acquisition.
      • Wytwwww a day ago |
        By whom, though? I don't see how any company directly competing with Intel (or even an orthogonal one, e.g. Nvidia or ARM) could be allowed to buy Intel (they'd need approval in the US/EU and presumably a few other places) unless it's actually on the brink of bankruptcy?
        • shiroiushi a day ago |
          >unless it's actually on the brink of bankruptcy?

          This may be in the cards.

      • saywhanow a day ago |
        IIRC Intel and AMD have a patent sharing agreement that dissolves if either is purchased.
        • rafaelmn a day ago |
          That's just a thing that needs to be renegotiated - highly doubt these two are getting into a patent war given the state of x86. Unless Intel gets acquired by a litigious company to go after AMD :shrug:
        • tonyhart7 17 hours ago |
          The government would bail out Intel or AMD, lol. Losing either would negate American tech dominance, and it would try to prevent that.
    • cheema33 a day ago |
      Intel has come back recently with a new series of "Lunar Lake" CPUs for laptops. They are actually very good. For now, Intel has regained the crown for Windows laptops.

      Maybe Pat has lit the much needed fire under them.

      • pantalaimon a day ago |
        The only ugly (for Intel) detail being that they are fabbed by TSMC
      • hollandheese a day ago |
        Snapdragon X Plus/Elite is still faster and has better battery life. Lunar Lake does have a better GPU and of course better compatibility.
        • hajile a day ago |
          X Elite is faster, but not enough to offset the software incompatibility or dealing with the GPU absolutely sucking.

          Unfortunately for Intel, X Elite was a bad CPU that has been fixed with Snapdragon 8 Elite's update. The core uses a tiny fraction of the power of X Elite (way less than the N3 node shrink would offer). The core also got a bigger frontend and a few other changes which seem to have improved IPC.

          Qualcomm said they are leading in performance per area and I believe it is true. Lunar Lake's P-core is over 2x as large (2.2mm2 vs 4.5mm2) and Zen5 is nearly 2x as large too at 4.2mm2 (Even Zen5c is massively bigger at 3.1mm2).

          X Elite 2 will either be launching with 8 Elite's core or an even better variant and it'll be launching quite a while before Panther Lake.

      • pityJuke a day ago |
        Worth noting,

        > Future Intel generations of chips, including Panther Lake and Nova Lake, won’t have baked-on memory. “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,” said Gelsinger on Intel’s Q3 2024 earnings call, as spotted by VideoCardz.[0]

        [0]: https://www.theverge.com/2024/11/1/24285513/intel-ceo-lunar-...

        • phkahler a day ago |
          “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,”

          When you prioritize yourself ("way to run the business") over delivering what customers want, you're finished. Some companies can get that wrong for a long time, but Intel has a competitor giving customers much more of what they want. I want a great chip and honestly don't know, care, or give a fuck what's best for Intel.

          • nyokodo a day ago |
            > When you prioritize yourself

            Unless “way to run the business” means “delivering what the customer wants.”

            • wongogue a day ago |
              Customer being the OEMs.
              • coder543 a day ago |
                I thought the OEMs liked the idea of being able to demand high profit margins on RAM upgrades at checkout, which is especially easy to justify when the RAM is on-package with the CPU. That way no one can claim the OEM was the one choosing to be anti-consumer by soldering the RAM to the motherboard, and they can just blame Intel.
                • hnav a day ago |
                  Intel would definitely try to directly profit from stratified pricing rather than letting the OEM keep that extra margin (competition from AMD permitting).
                • unnah a day ago |
                  OEMs like it when they are the ones buying the cheap RAM chips and getting the juicy profits from huge mark-ups, not so much when they have to split the pie with Intel. As long as Intel cannot offer integrated RAM at a price equivalent to external RAM chips, their customers (the OEMs) are not interested.
      • eBombzor a day ago |
        LNL is a great paper launch, but I have yet to see a reasonably priced LNL laptop. Nowadays I can find 16GB Airs and X Elite laptops for 700-900 bucks, and once you get into $1400 territory, just pay a bit more for an M4 MBP, which is a far superior machine.

        Also, they compete in the same price bracket as Zen 5 parts, which are more performant with not much worse battery life.

        LNL is too little too late.

        • phonon a day ago |
          An M4 MacBook Pro 14 with 32 GB of RAM and 1 TB of storage is $2,199... a Lunar Lake laptop with the same specs is $1,199. [0]

          [0] https://www.bestbuy.com/site/asus-vivobook-s-14-14-oled-lapt...

          • stackghost a day ago |
            Yeah because it's an ASUS product. They make garbage.
          • bigfatkitten 10 hours ago |
            With a build quality planets apart.
            • phonon 9 hours ago |
              My point is it's not "just pay a bit more".
      • Dalewyn a day ago |
        Lunarrow Lake is a big L for Intel because it's all Made by TSMC. A big reason I buy Intel is because they're Made by Intel.

        We will see whatever they come out with for 17th gen onwards, but for now Intel needs to fucking pay back their CHIPS money.

        • justinclift a day ago |
          Are they being fabbed by TSMC in the US, or overseas?
          • vitus 17 hours ago |
            TSMC doesn't have any cutting-edge fabs in the US yet.

            TSMC Washington is making 160nm silicon [0], and TSMC Arizona is still under construction.

            [0] https://www.tsmcwashington.com/en/foundry/technology.html

            • justinclift 16 hours ago |
              That page doesn't really say much about what's currently being produced at TSMC Arizona vs the parts still under construction.

              There's 4-nm "engineering wafer" production happening at TSMC Arizona already, and apparently the yields are decent:

              https://finance.yahoo.com/news/tsmc-arizona-chip-plant-yield...

              No idea when/what/how/etc that'll translate to actual production.

              ---

              Doing a bit more poking around the net, it looks like "first half 2025" is when actual production is pencilled in for TSMC Arizona. Hopefully that works out.

              • vitus 5 hours ago |
                No disagreement here; the link I provided was specifically for TSMC Washington.

                I'm not saying that TSMC is never going to build anything in the US, but rather that the current Lunar / Arrow Lake chips on the market are not being fabbed in the US because that capacity is simply not online yet.

                2025H1 seems much more promising for TSMC Arizona compared to the mess that is Samsung's Taylor, TX plant (also nominally under construction).

      • hedora a day ago |
        Yeah, but can they run any modern OS well? The last N Intel laptops and desktops I've used were incapable of stably running Windows, macOS, or Linux. (As in, the Windows and Apple ones couldn't run their preloaded operating systems well, and loading Linux didn't fix it.)
        • ahartmetz a day ago |
          Very strange. Enough bad things can be said about Intel CPUs, but I have never had any doubts about their stability. Except for that one recent generation that could age to death in a couple of months (I didn't have any of these).

          AMD is IME more finicky with RAM, chipset / UEFI / builtin peripheral controller quality and so on. Not prohibitively so, but it's more work to get an AMD build to run great.

          No trouble with any AMD or Intel Thinkpad T models, Lenovo has taken care of that.

      • otabdeveloper4 21 hours ago |
        > Windows laptops

        A dying platform and as relevant as VAX/VMS going forward.

        • CoastalCoder 17 hours ago |
          You just made me nostalgic for amber screens, line printers, and all-nighters with fellow students.
    • kevin_thibedeau a day ago |
      They didn't miss out. They owned the most desirable mobile platform in StrongARM and cast it aside. They are the footgun masters.
      • hajile a day ago |
        They killed StrongARM because they believed the x86 Atom design could compete. Turns out that it couldn't and most of the phones with it weren't that great.

        Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.

        • kimixa a day ago |
          I'd argue that the Atom core itself could compete - it hit pretty much the same perf/watt targets as its performance-competitive ARM equivalents.

          But having worked with Intel on some of those SoCs, it's everything else that fell down. They were late, they were the "disfavored" teams in the eyes of execs, they were the engineers' last priority, they had stupid hw bugs that Intel refused to fix and respin - everything you could do to set up a project to fail.

          • RussianCow a day ago |
            > They were late

            This was the main thing, as by that point, all native code was being compiled to Arm and not x86. Using x86 meant that some apps, libraries, etc just didn't work.

          • hajile a day ago |
            Medfield was faster than A9 and Qualcomm Krait in performance, but not so much in power (see Motorola Razr i vs M where the dual-core ARM version got basically the same battery life as the single-core x86 version).

            Shortly after though, ARM launched A15 and the game was over. A15 was faster per clock while using less power too. Intel's future Atom generations never even came close after that.

          • raverbashing a day ago |
            Maybe the Atom core itself was performant, but I doubt they could take all the x86 crap around it and make it slim enough for a phone
            • kimixa a day ago |
              They were SoCs, fundamentally the same as any ARM-based phone SoC - some Atom SoCs even had integrated modems.

              The BoM was pretty much identical to other devices.

              • ksec 17 hours ago |
                Exactly. Most people still don't get it. What killed Atom on phones wasn't x86. It was partly software and mostly hardware and cost. It simply wasn't cost competitive, especially when Intel was used to a high-margin business.
        • Keyframe a day ago |
          Maybe I'm just spitting out random BS, but if I understood Keller correctly when he spoke about Zen, it's not really a problem (for it) to change the frontend ISA, as a large chunk of the work is in the backend anyway. If that's the case in general with modern processors, it would be cool to see a hybrid that can be switched from x86_64 to RISC-V and, to add even more avant-garde to it, a core or a few of FPGA on the same die. Intel, get on it!
          • vel0city a day ago |
            There were consumer devices with a processor designed to be flexible on its instruction set presented to the user.

            https://en.wikipedia.org/wiki/Transmeta_Crusoe

            https://youtu.be/xtuKqd-LWog?t=332

            • Keyframe a day ago |
              aka the company where Linus worked!
              • nineteen999 a day ago |
                That also kinda failed to reach its goals, unfortunately.
          • formerly_proven a day ago |
            "not really a problem to change" in the context and scope of a multi-billion dollar project employing thousands of people full time.
          • dwattttt a day ago |
            If you think about it, that's what Thumb mode on ARM is.
            • kevin_thibedeau a day ago |
              Plus the original Jazelle mode.
          • mschuster91 a day ago |
            > and, to add even more avangarde to it, associate a core or few of FPGA on the same die

            The use cases for FPGAs in consumer devices are ... close to zero, unless you're talking about implementing copy protection, since reverse engineering FPGA bitstreams is pretty much impossible if you're not the NSA, MI6, or Mossad, with infinite brains to throw at the problem (and, more likely than not, insider knowledge from the vendors).

          • mshockwave a day ago |
            Reminds me that this is also many people's speculation on how Qualcomm builds their RISC-V chips -- swap an ARM decoder for a RISC-V one.
            • hajile 10 hours ago |
              That's not speculation.

              Qualcomm made a 216-page proposal for their Znew[0] "extension".

              It was basically "completely change RISC-V to do what Arm is doing". The only reason for this was that it would allow a super-fast transition from ARM to RISC-V. It was rejected HARD by all the other members.

              Qualcomm is still making large investments into RISC-V. I saw an article estimating that the real reason for the Qualcomm v Arm lawsuit is that Qualcomm's old royalties were 2.5-3% while the new royalties would be 4-5.5%. We're talking about billions of dollars and that's plenty of incentive for Qualcomm to switch ISAs. Why should they pay billions for the privilege of designing their own CPUs?

              [0] https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...

          • neerajsi a day ago |
            From what I gathered the one time I got to speak with chip engineers, real estate is still at a premium. Not necessarily the total size of the chip, but certain things need to be packed close together to meet timing requirements. I think that means you'd be paying a serious penalty to have two parallel sets of decoders for different ISAs.
        • arcanemachiner a day ago |
          > Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.

          First I've heard of this. Is this actually a possibility?

          • mshockwave a day ago |
            The RP2350 is using a hybrid of ARM and RISC-V already. Also, it's not really hard to use RISC-V not as the main computing core but as a controller in the SoC. Because the area of a RISC-V core is so small, it's pretty common to put more than a dozen (16, to be specific) into a chip.
        • raverbashing a day ago |
          Sounds like Intel has a big boomer problem
        • deelowe a day ago |
          They killed strongarm because of nepotism. That's been the issue at Intel for decades. They are the epitome of ego over merit and x86 was king.
          • DanielHB 19 hours ago |
            Nepotism? Like execs from different divisions fighting each other?

            To me it seems they just wanted to keep their lock-in monopoly because they own x86. Very rational albeit stupid, but of course the people who made those decisions are long gone from the company; many are probably retired with their short-term-focused bonuses.

      • ThrowawayB7 a day ago |
        They had a second attempt with x86 smartphone chips and bungled that too: https://www.pcworld.com/article/414673/intel-is-on-the-verge...
      • chx a day ago |
        Yeah, Otellini disclosed that Jobs asked them for a CPU for the iPhone, and he turned the request down because Jobs was adamant on a certain price and Otellini just couldn't see it.

        Even if it was hard to foresee the success of the iPhone, he surely had the Core Duo in his hands when this happened, even if it hadn't launched yet, so the company had just found its footing again and should've attempted this moonshot: if the volume is low, the losses are low; if the volume is high, economies of scale make it a win. This is not hindsight 20/20; it's true even if no one could've foreseen just how high the volume would be.

    • nine_k a day ago |
      You forgot the 10 nm / 7 nm node troubles that continued for years and held back their CPU architectures (which honestly kept improving).
  • bloody-crow a day ago |
    Surprising it took so long given how dominant the EPYC CPUs were for years.
    • jsheard a day ago |
      Nobody ever got fired for buying Intel.
      • ginko a day ago |
        They should be.
      • speed_spread a day ago |
        But some caught on fire by standing too close.
      • browningstreet a day ago |
        That’s not a thing.
    • acdha a day ago |
      One thing to remember is that the enterprise space is very conservative: AMD needed to have server-grade CPUs with all of the security and management features on the market long enough for the vendors to certify them, promise support periods, etc., and it needed to get the enterprise software vendors to commit as well.

      The public clouds help a lot here by trivializing testing and locking in enough volume to get all of the basic stuff supported, and I think that's why AMD is more successful now than it was in the Opteron era.

    • parl_match a day ago |
      Complicated. Performance per watt was better for Intel, which matters way more when you're running a large fleet; it doesn't matter so much for workstations or gamers, where all that matters is performance. Also, certification, the enterprise management story, etc. were not there.

      Maybe recent EPYC has caught up? I haven't been following too closely since it hasn't mattered to me. But both companies' roadmaps were suggesting an AMD pass-by.

      Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.

      • dhruvdh a day ago |
        > Performance per watt was better for Intel

        No, it's not even close. AMD is miles ahead.

        This is a Phoronix review for Turin (current generation): https://www.phoronix.com/review/amd-epyc-9965-9755-benchmark...

        You can similarly search for Phoronix reviews of the Genoa, Bergamo, and Milan generations (the previous generations).

        • pclmulqdq a day ago |
          You're thinking strictly about core performance per watt. Intel has been offering a number of accelerators and other features that make perf/watt look a lot better when you can take advantage of them.

          AMD is still going to win a lot of the time, but Intel is better than it seems.

          • andyferris a day ago |
            Are generic web server workloads going to use these features? I would assume the bulk of e.g. EC2 spends its time doing boring non-accelerated "stuff".
            • everfrustrated a day ago |
              Intel does a lot of work developing SDKs to take advantage of its extra CPU features and works with the open source community to integrate them so they are actually used.

              Their acceleration primitives work with many TLS implementations, nginx, and SSH, amongst many others.

              Possibly AMD is doing similar but I'm not aware.
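
              To make the TLS point concrete: this is roughly how an application wires OpenSSL to such an accelerator through the legacy ENGINE API. A sketch only - the engine id "qatengine" and the setup details are assumptions for illustration, not a recipe from Intel's docs:

                #include <stdio.h>
                #include <openssl/engine.h>

                int main(void) {
                    /* Make built-in/dynamic engines visible to this process */
                    ENGINE_load_builtin_engines();
                    ENGINE *e = ENGINE_by_id("qatengine");  /* assumed engine id */
                    if (e == NULL || !ENGINE_init(e)) {
                        fprintf(stderr, "accelerator unavailable, staying on software crypto\n");
                        return 1;
                    }
                    /* Route every operation the engine supports (RSA, ciphers, ...) to it */
                    ENGINE_set_default(e, ENGINE_METHOD_ALL);
                    /* ... ordinary libcrypto/libssl calls now offload where supported ... */
                    ENGINE_finish(e);
                    ENGINE_free(e);
                    return 0;
                }

              I believe nginx exposes the same thing through configuration (its ssl_engine directive) rather than code.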

              • pclmulqdq a day ago |
                AMD is not doing similar stuff yet.
              • kkielhofner a day ago |
                ICC, IPP, QAT, etc are definitely an edge.

                In AI world they have OpenVINO, Intel Neural Compressor, and a slew of other implementations that typically offer dramatic performance improvements.

                Like we see with AMD trying to compete with Nvidia, software matters - a lot.

          • kimixa a day ago |
            But those accelerators are also available for AMD platforms - even if how they're provided is a bit different (often on add-in cards instead of a CPU "tile").

            And things like the MI300A mean that isn't really a requirement now either.

            • pclmulqdq a day ago |
              They are not, at the moment. Google "QAT" for one example - I'm not talking about GPUs or other add-in cards at all.
              • Tuna-Fish 18 hours ago |
                You might not be, but the parent poster is.

                QAT is an integrated offering by Intel, but there are competing products delivered as add-in cards for most of the things it does, and they have more market presence than QAT. As such, QAT provides much less advantage to Intel than Intel marketing makes it seem. Yes, Xeon (including QAT) is better than bare Epyc, but Epyc + a third-party accelerator beats it handily, especially in cost; the appearance of QAT seems to have spooked the vendors, and prices came down a lot.

                • tecleandor 17 hours ago |
                  I've only used a couple of QAT accelerators and I don't know that field much... What relatively easy-to-use and not-super-expensive accelerators are available?
          • adrian_b 21 hours ago |
            That is true, but the accelerators are disabled in all cheap SKUs and they are enabled only in very expensive Xeons.

            For most users it is as if the accelerators do not exist, even though they increase the area and the cost of all Intel Xeon CPUs.

            This market segmentation policy is exactly as stupid as the removal of AVX-512 from the Intel consumer CPUs.

            All users hate market segmentation, and it is an important reason for preferring AMD CPUs, which are differentiated only on quantitative features like number of cores, clock frequency, or cache size. Intel CPUs are differentiated on qualitative features, so you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features required for running the program.

            Intel marketing has always hoped that by showing nice features available only in expensive SKUs it would trick customers into spending more on the top models. However, any wise customer has preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but too expensive SKUs.

            • pclmulqdq 17 hours ago |
              Wise customers buy the thing that runs their workload with the lowest TCO, and for big customers on some specific workloads, Intel has the best TCO.

              Market segmentation sucks, but people buying 10,000+ servers do not do it based on which vendor gives them better vibes. People seem to generally be buying a mix of vendors based on what they are good at.

              • adrian_b 11 hours ago |
                Intel can offer a low TCO only to the big customers mentioned by you, who buy 10,000+ servers and have the leverage to negotiate big discounts from Intel, buying the CPUs at prices several times lower than their list prices.

                On the other hand, for small businesses or individual users, who have no choice but to buy at list prices or above, the TCO of Intel server CPUs has become unacceptably bad. Before 2017, through the Broadwell Xeons, the TCO of an Intel server CPU could be very good even when bought at retail for a single server. Starting with the Skylake Server Xeons, however, the prices of the non-crippled SKUs increased so much that they have no longer been a good choice, except for the very big customers who buy them far below the official prices.

                The fact that Intel must discount so much their server CPUs for the big customers is likely to explain a good part of their huge financial losses during the last quarters.

            • CoastalCoder 17 hours ago |
              I think Intel made a strategic mistake in recent years by segmenting its ISA variants. E.g., the many flavors of AVX-512.

              Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.

              So often we just build for 1-2 of the most common, baseline versions of an ISA.

              Probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they support. So it's not easy at program-loading time to notice if you're going to have a problem with unsupported instructions.

              (It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)
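
              For a concrete picture of the common workaround, here's a minimal C sketch using GCC/Clang function multi-versioning (the function and target list are illustrative assumptions, not anything from this thread). The compiler emits one clone per listed target plus a resolver that picks among them at load time, so a single binary can cover several ISA variants:

                #include <stdio.h>

                /* One clone is compiled per target below; an IFUNC resolver
                   selects the best supported one when the program is loaded. */
                __attribute__((target_clones("avx512f", "avx2", "sse4.2", "default")))
                void scale(float *x, int n, float a) {
                    for (int i = 0; i < n; i++)  /* auto-vectorized differently per clone */
                        x[i] *= a;
                }

                int main(void) {
                    float v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
                    scale(v, 8, 2.0f);
                    printf("%.1f\n", v[0]);  /* prints 2.0 */
                    return 0;
                }

              It helps, but it only dispatches the annotated functions; it doesn't solve the whole-binary marking problem described above.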

      • aryonoco a day ago |
        Performance per Watt was lost by Intel with the introduction of the original Epyc in 2017. AMD overtook in outright performance with Zen 2 in 2019 and hasn't looked back.
      • Hikikomori a day ago |
        Care to post any proof?
        • parl_match a day ago |
          Idk, go look at the Xeon versus AMD equivalent benchmarks. They've been converging, although AMD's datacenter offerings were always a little behind their consumer parts.

          This is one of those things where there's a lot of money on the line, and people are willing to do the math.

          The fact that it took this long should tell you everything you need to know about the reality of the situation.

          • Tuna-Fish a day ago |
            Sorry, but everything about this is wrong.

            AMD has had the power efficiency crown in data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.

            The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that they sell much better to the hyperscalers than they sell to small and medium businesses.

            > idk go look at the xeon versus amd equivalent benchmarks.

            They all show AMD with a strong lead in power efficiency for the past 5 years.

          • p1necone a day ago |
            Are you looking at userbenchmark? They are not even slightly reliable.
            • wongogue a day ago |
              The site is so biased against AMD that PC-building communities and even Intel forums have banned it.
            • ChoGGi a day ago |
              Oh thanks for the reminder! I gotta go read their 9800x3d review, I'm always up for a good laugh.

              Edit: awww no trash talking it yet, unlike the 7800x3d :)

          • Hikikomori a day ago |
            I know what the benchmarks are like; I wish you would go and update your knowledge. If we take cloud as a comparison, it's cheaper to use AMD - think they're doing some math?
      • xcv123 a day ago |
        Outdated info. AMD/TSMC have beaten Intel at efficiency for years. Intel has fallen behind; we need them to catch up and provide strong competition.

        Intel has just been removed from the Dow index. They are underperforming on multiple levels.

        https://apnews.com/article/dow-intel-nvidia-sherwinwilliams-...

    • ksec a day ago |
      Intel did an amazing job of holding on to what they had: enterprise sales connections, which AMD had very little of from 2017 to 2020; then bundling other items, essentially a discount without lowering the price; and finally some heavy discounting.

      On the other hand, AMD has been very conservative with their EPYC sales and forecasts.

    • topspin a day ago |
      I don't agree that this is surprising. To be "dominant" in this space means more than raw performance or value. One must also dominate the details. It has taken AMD a long time to iron out a large number of these details, including drivers, firmware, chipsets and other matters, to reach real parity with Intel.

      The good news is that AMD has, finally, mostly achieved that, and in some ways they are now superior. But that has taken time: far longer than it took AMD to beat Intel at benchmarks.

    • j_walter a day ago |
      Server companies have long term agreements in place...waiting for those to expire before moving to AMD is not unexpected. This was the final outcome expected by many.
    • heraldgeezer a day ago |
      Servers are used for a long time and then Dell/HP/Lenovo/Supermicro has to deliver them and then customers have to buy them. This is a space with very long lead times. Not surprising.
      • fulafel 21 hours ago |
        The metric was "who sells most now", so the long product life doesn't directly affect it. Lead times can be longer than for desktops, but not years.
    • elorant a day ago |
      Upgrade cycles at datacenters are really long.
      • wmf a day ago |
        AMD has been ahead for 5 years and upgrade cycles are 4-6 years so AMD should have ~80% market share by now.
        • adgjlsfhk1 16 hours ago |
          The first 2 gens of Epyc didn't sell that much compared to Intel because companies didn't want to make huge bets on AMD until there was more confidence that it would stick around near the top for a while. Also, server upgrade cycles are lengthening (probably more like 5-7 years now) since CPUs aren't gaining per-core performance as quickly.
  • iwontberude a day ago |
    I am sure AMD has been delivering more value for even longer. I bet the currently deployed AMD exaflops are significantly higher than Intel's. It was a huge consideration for me when shopping between the two: as much as 50% more compute per dollar.
  • SilverBirch a day ago |
    I'd still like a decent first FPGA. Guys? I'm still here guys! Please make me some FPGAs!
    • jsheard a day ago |
      Sorry, you can have a cheap-ish FPGA that came out 10 years ago, or a new FPGA that costs more than your car and requires a $3000 software license to even program. Those are the only options allowed.
      • Neywiny a day ago |
        The new COP FPGAs are in the $100-400 range. Not cheap but nothing compared to the high end parts.
        • RF_Savage a day ago |
          So Intel has abandoned the sub-100usd segment to AMD/Xilinx, Lattice, Efinix and Microchip?
          • bgnn a day ago |
            Luckily they are spinning off the FPGA business to be Altera again
          • Neywiny a day ago |
            The COP is AMD/Xilinx. I have no idea what the Agilex 3 and 5 costs are; I'm not an Altera user. I will note, though, having used Lattice, Microchip, and (admittedly at the start of Titanium) Efinix, that none of the tools come close to Vivado/Vitis. I'm on Lattice at the moment and I've lost countless hours to the tools not working, or working poorly on Linux relative to Xilinx. Hobbyist me doesn't care; I'll sink the hours in. Employee me does care, though.
          • namibj a day ago |
            There's also Cologne Chip.
      • schmidtleonard a day ago |
        Nah, the hobby strat is to buy a chunk o' circuit board and learn BGA soldering. "Chip Recovery," they call it.

        https://www.ebay.com/itm/235469964291

        Best start believin' in the crazy cyberpunk stories. You're in one!

        • jsheard a day ago |
          Virtex UltraScales require Vivado EE so you'd still need the $3000 license to do anything with it :(

          edit: legally that is, assuming there's even enough demand for these tools for anyone to bother cracking them

          • immibis a day ago |
            (legally)
            • NavinF a day ago |
              Is Vivado easy to pirate? Now I'm interested
              • 15155 a day ago |
                Yes, or you can just keep getting a trial license.
          • schmidtleonard a day ago |
            This is software written by hardware guys. Cracking it is the easy part. Then you have to make it work...
        • tecleandor 17 hours ago |
          OMG, the price for one new unit of those XCVU095 varies from 4k to 35k depending on the store. The variability!

          Where are they pulling those PCB chunks from that have cheap Ultrascale+ chips?

      • 15155 a day ago |
        Buy a Xilinx U50C or U55 (C1100) - neither require a Vivado license and both have HBM/many LUTs (VU35P chips.) Neither will exceed $1500.
    • snvzz a day ago |
      I'd look at whatever nextpnr supports.
  • WaitWaitWha a day ago |
    Oh my, allow me to reminisce.

    When the Intel 80386-33 came out, we thought it was the pinnacle of CPUs, running our Novell servers! We now had a justification to switch from ARCNET to Token Ring. Our servers could push things way faster!

    Then, in mid-1991, the AMD 80386-40 CPU came out. Mind completely blown! We ordered some (I think) Twinhead motherboards. They were so fast we could only use Hercules mono cards in them; all other video cards were fried. 16Mb Token Ring was out, so some of my clients moved to it with the fantastic CPU.

    I have seen some closet servers running Novell NetWare 3.14 (?) with that AMD CPU in the late '90s. There was a QIC tape & tape drive in the machine that was never changed for maybe a decade? The machine never went down (or got properly backed up).

    • taspeotis a day ago |
      Some AMD 80386DX-40 drama:

      > While the AM386 CPU was essentially ready to be released prior to 1991, Intel kept it tied up in court.[2] Intel learned of the Am386 when both companies hired employees with the same name who coincidentally stayed at the same hotel, which accidentally forwarded a package for AMD to Intel's employee.[3]

      • firecall a day ago |
        Far out LOL

        That's amazing!

      • justinclift a day ago |
        Wonder if the hotel had a liability problem from that?

        After all, it sounds like they directly caused a "billion dollar" type of problem for AMD through their mistake.

    • intothemild a day ago |
      I remember that 386-40. That was a great time.
    • gkanai a day ago |
      Token ring networks! So glad we moved on from that.
      • crest a day ago |
        Quick! Everyone! Someone dropped the token. Get up and look behind your desks.
      • addaon a day ago |
        > So glad we moved on from that.

        Don't look too closely at the collision avoidance mechanism in 10base-T1S, standardized in 2020. Sure looks like a virtual token ring passing mechanism if you squint...

    • gerdesj a day ago |
      NW 3.12 was the final version I think. I recall patching a couple for W2K. NetWare would crash a lot (abend) until you'd fixed all the issues and then it would run forever, unless it didn't.

      I once had a bloke writing a patch for eDirectory in real time in his basement whilst running our data on his home lab gear, on a weekend. I'm in the UK and he was in Utah. He'd upload an effort and I'd ftp it down, put it in place, reboot the cluster and test. Two iterations and job done. That was quite impressive support for a customer with roughly 5,000 users.

      For me the CPU wasn't that important, per se. NWFS ate RAM: when the volumes were mounted, the system generated all sorts of funky caches which meant that you could apply and use trustee assignments (ACLs) really fast. The RAID controller and the discs were the important thing for file serving and ideally you had wires, switches and NICs to dole the data out at a reasonable rate.

    • fijiaarone a day ago |
      In 1996 we set up a rack (department store surplus) of Cyrix 586s (running on 486-socket motherboards) at 75MHz with 16MB of RAM, and could serve 100 concurrent users with CGI scripts and image maps, doing web serving and VoIP, with over 1 million requests a month on a single T1 line.

      Good luck doing that on a load-balanced rack of 96-core AMD servers today.

      • simfree a day ago |
        Peak requests per second (and whether a SIP invite or CGI script being run) would be very useful to know.

        SIP hasn't gotten much heavier, nor CGI scripts, and tiny distros like OpenWRT can do a lot with little iron.

        Heard lots of rough ADSL era VoIP stories, hopefully you weren't struggling with that back then.

      • einsteinx2 18 hours ago |
        Are you seriously arguing that a rack of modern servers can’t handle 100 concurrent users?
        • bigfatkitten 10 hours ago |
          It would be harder than it needs to be with modern software 'engineering' practices.
  • pixelpoet a day ago |
    Please, don't talk about how well AMD is doing! You'll only make the stock price slide another 10%, as night follows day... [irrational market grumbling intensifies]
    • belval a day ago |
      The market can hardly be called irrational on this. AMD's market value already prices in that they will take over Intel's place in the datacenter: their valuation is more than double Intel's, with a P/E of 125, despite them being fabless and ARM gaining ground in the server space. That's why you are seeing big swings in prices - anything short of "we are bankrupting Intel and fighting Nvidia in the AI accelerator space" is seen as a loss.
      • dhruvdh a day ago |
        > despite them being fabless

        That's not how it works. You need to pump money into fabs to get them working, and Intel doesn't have money. If AMD had fabs to pour their money into, they would also have a much lower valuation.

        The market is completely irrational on AMD. Their 52-week high is ~$225 and their 52-week low is ~$90. $225 was hit when AMD was guiding ~$3.5B in datacenter GPU revenue. Now they're guiding to end the year at $5B+ datacenter GPU revenue, but the stock is ~$140?

        I think it's because of how early Nvidia announced Blackwell (it isn't shipping in any meaningful volume yet), and the market thinks AMD needs to compete with GB200 while it's actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year? I don't know how to explain the stock price.

        Anyway, they hit record quarterly revenue this Q3 and are guiding to beat this record by ~1B next quarter. Price might move a lot based on how AMD guides for Q1 2025.

        • belval a day ago |
          > That's not how it works.

          Being fabless does have an impact because it caps AMD's margins and makes x86 their only moat. They can only extract value if they remain competitive on price. Sure that does not impact Nvidia, but they get to have fat margins because they have virtually no competition.

          > The market is completely irrational on AMD. Their 52-week high is ~225$ and 52-week low is ~90$.

          That's volatility, not irrationality. As I wrote, AMD's valuation is built on the premise that they will keep executing in the DC space, Intel will keep shitting the bed, and their MI series will eventually be competitive with Nvidia. These facts make investors skittish, and any news about AMD causes the stock to move.

          > the market thinks AMD needs to compete with GB200 while they're actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year?

          The only hyperscaler that picked up the MI300X is Azure, and they GA'ed it 2 weeks ago; both GCP and AWS are holding off. The uncertainty about when (if) it will catch on is a factor, but the growing competition from those same hyperscalers building their own chips means the opportunity window could be closing.

          It's ok to be bullish on AMD the same way that I am bearish on it, but I would maintain that the swings have nothing to do with irrationality.

          • trogdor 14 hours ago |
            > The only hyperscaler that picked up MI300X is Azure and they GA'ed it 2 weeks ago

            What does “GA” mean in this context?

            I’m usually pretty good at deciphering acronyms, but in this case, I have no idea.

            • belval 14 hours ago |
              Sorry the corporate lingo is eating into my brain.

              GA means Generally Available. To GA something is a shorthand for "to make X generally available".

        • sam_goody 17 hours ago |
          AMD keeps projecting a message of: it's all about hardware.

          Many "influencers" have been convinced that it is all about software - especially in AI. (I happen to agree, but my opinion doesn't matter.)

          It doesn't matter how well a company is doing if they are targeting the wrong point - their future will be grim. And stock is all about the future.

      • hmm37 9 hours ago |
        But their P/E ratio is only that high because of their acquisition of Xilinx. That's why the forward P/E ratio for AMD is much, much lower than 125.
  • hasnain99 a day ago |
    great
  • INTPenis a day ago |
    I'm not a HW guy, but my HW friends have been designing HCI solutions with AMD for maximum I/O throughput because AMD CPUs have more PCIe lanes.
    • storrgie a day ago |
      I think for _most_ people it comes down to this: how much can I cram into the platform? More lanes means more high-speed storage, special-purpose processing, and networking interfaces.
      • wmf a day ago |
        VMware users are starting to say that Epyc is too powerful for one server because they don't want to lose too much capacity due to a single server failure. Tangentially related, network switch ASICs also have too much capacity for a single rack.
  • Havoc a day ago |
    Isn't that ahead of schedule?

    Everyone, I think, knew AMD was catching up, but thought this was still a year or two out.

  • seanp2k2 a day ago |
    Therefore, AMD stock is down 17.1% in the past month.
  • DeathArrow 21 hours ago |
    If Nvidia releases a good server CPU, they can eat into both Intel's and AMD's profits. Maybe it's not as lucrative as selling GPUs, but having a good portion of the market may pay bigger dividends in the future.

    If I were AMD's CEO, I would make it the top priority to have a software stack on par with CUDA, so that AMD GPUs have a chance in the data centers.

  • ssijak 18 hours ago |
    So AMD is first in datacenters and grows AI-related chip sales quarter over quarter. And with the PS5 Pro launching, hopefully their custom graphics chip sales will grow again. Looks like a solid buy to me at the moment.
    • Narishma 17 hours ago |
      PS5 Pro is a niche expensive system for enthusiasts. It won't be selling a ton.
  • snakeyjake 8 hours ago |
    Except for some very, VERY specific use cases (many of which are now irrelevant due to Optane's death), it is professionally negligent to recommend Intel in the datacenter.