• Thaxll 6 days ago |
    "in Geekbench 6."
    • ceejayoz 6 days ago |
      "8 too long", says HN.
  • eigenspace 6 days ago |
    I'll admit to some reflexive skepticism here. I know Geekbench at least used to be considered an entirely unserious indicator of performance, and any discussion relating to its scores used to be drowned out by people explaining why it was so bad.

    Do those criticisms still hold? Are serious people nowadays taking Geekbench to be a reasonably okay (though obviously imperfect) performance metric?

    • whynotminot 6 days ago |
      You’re not the only one — but just curious where this skepticism comes from.

      This is M4 — Apple has now made four generations of chips, and each one was class-leading upon release. What more do you need to see?

      • LeafItAlone 6 days ago |
        I don’t think they are skeptical of the chip itself. Just asking about the benchmark used.

        If I were reviewing cars and used the number of doors as a benchmark for speed, surely I'd get laughed at.

        • whynotminot 6 days ago |
          Right, but we keep repeating this cycle. A new M-series chip comes out, the Geekbench scores leak, and it's class-leading.

          Immediately people “but geEk BeNcH”

          And then actual people get their hands on the machines for their real workloads and essentially confirm the geekbench results.

          If this was the first time, then fair enough. But it’s a Groundhog Day style sketch comedy at this point with M4.

          • philistine 6 days ago |
            I blame it on the PC crowd being unconsciously salty that the most prestigious CPU is not available to them. You heard the same stuff when talking about Android performance versus iPhone.

            There is a lot to criticize about Apple's silicon design, but they are leading the CPU market in terms of mindshare and attention. All the other chipmakers feel like they're just trying to follow Apple's lead. It's wild.

            • MBCook 5 days ago |
              I was surprised and disappointed to see that the industry didn't start prioritizing heat output more after the M1 generation came out. That was absolutely my favorite thing about it: it made my laptop silent and cool.

              But anyway, what is it you see to criticize about Apple's silicon design? The way RAM is locked on-package so it's not upgradable, or something else?

              I'm kind of surprised; I don't hear a lot of people suggesting it has a lot to be criticized for.

              • lispm 5 days ago |
                It was wild to see the GHz overclocking competition still going on when one could suddenly use a laptop with good performance, no fans, and no noise, even on the go.
              • philistine 4 days ago |
                The lack of multiple display support early on. The M2 generation produced much more heat than the M1, but that could be down to the new MacBook Air. GPU weakness with the bigger chips.
        • speedgoose 5 days ago |
          By the way, have you heard about the recent Xiaomi SU7 being the fastest four-door car on the Nürburgring Nordschleife?

          It has 4 doors! It's all over the shitty car news media. The car is a prototype with only one seat though.

      • eigenspace 6 days ago |
        If that's all we cared about we wouldn't be discussing a Geekbench score in the first place. The OP could have just posted the statement without ever mentioning a benchmark.

        I was just curious if people had experience with how reliable Geekbench has been at showing relative performance of CPUs lately.

      • nabakin 6 days ago |
        In power efficiency maybe, but not top performance
      • sangnoir 5 days ago |
        > Apple has now made four generations of chips and each one were class leading upon release.

        Buying up most of TSMC's latest node capacity certainly helps. Zen chips on the same node turn out to be very competitive, but AMD don't get first dibs.

        • rowanG077 5 days ago |
          I disagree with the wording "AMD don't get first dibs". It's more like "AMD won't pay for first dibs"
          • whynotminot 5 days ago |
            I don't think a lot of people fully understand how closely Apple works with TSMC on this: funding them, holding them accountable, and providing the capital needed for the big foundry bets. It's kind of one of those IYKYK things, but Apple is a big reason TSMC actually is the market leader.
        • stouset 5 days ago |
          It’s more like Apple fronts the cash for TSMC’s latest node. But regardless, in what way does that detract from their chips being class-leading at release?
          • fragmede 5 days ago |
            Because the others can't use that node, there are no others in that same class. If there's a race where one person is on foot and the other is in a car, it's not surprising that the person in the car finishes first.
            • whynotminot 5 days ago |
              Eventually Apple moves off a node and the others move on.

              People pretend like this isn't a thing that's already happened, and that there aren't fair comparisons. But there are. And even when you compare like for like, Apple Silicon tends to win.

              Line up the node, check the wattages, compare the parts. I trust you can handle the assignment.

            • cbzbc 5 days ago |
              I have some sympathy with this view because it's in no way mass market.

              Nevertheless, product delivery is a combination of multiple things of which the basic tech is just one component.

      • gamblor956 5 days ago |
        As demonstrated by the M1-M3 series of chips, essentially all of that lead was due to being the first chips on a smaller process, rather than to anything inherent to the chip design. Indeed, the Mx series of chips tend to be on the slower side of chips for their process sizes.
        • whynotminot 5 days ago |
          Show your work.

          Most people who say things like this tend to deeply misunderstand TDP and end up making really weird comparisons. Like high wattage desktop towers compared to fan-less MacBook Airs.

          The process lead Apple tends to enjoy no doubt plays a huge role in their success. But you could also turn around and say that’s the only reason AMD has gained so much ground against Intel. Spoiler: it’s not. Process node and design work together for the results you see. People tend to get very stingy with credit for this though if there’s an Apple logo involved.

    • ceejayoz 6 days ago |
      I'd be reflexively skeptical if I didn't have an M1 Mac. It really is something.
      • eigenspace 6 days ago |
        I'm not skeptical of Apple's M-series chips. They have proven themselves to be quite impressive and indeed quite competitive with traditional desktop CPUs even at very low wattages.

        I'm skeptical of Geekbench being able to indicate that this specific new processor is robustly faster than say a 9950x in single-core workloads.

        • selectodude 6 days ago |
          It's robustly faster at the things that Geekbench is measuring. You can find issue with the test criteria (measures meaningless things or is easy to game) but the tests themselves are certainly sound.
          • hu3 6 days ago |
            > You can find issue with the test criteria (measures meaningless things or is easy to game).

            That's exactly their point.

            • stouset 5 days ago |
              On the other hand, I have yet to see any benchmark where people didn’t crawl out of the woodwork to complain about it.
    • zamadatix 6 days ago |
      It's by no means a be-all, end-all "read this number and know everything you need to know" benchmark, but it tends to be good enough to give you a decent idea of how fast a device will be for a typical consumer.

      If I could pick one "generic" benchmark to base things on, I'd pick PassMark though. It tends to agree with Geekbench on Apple Silicon performance, but it is a bit more useful when comparing non-typical corner cases (high core count CPUs and the like).

      Best of all is to look at a full test suite and compare for the specific workload types that matter to you... but that can often be overkill if all you want to know is "yep, Apple is pulling ahead on single thread performance".

    • TiredOfLife 6 days ago |
      You are thinking of AnTuTu.
    • llm_nerd 6 days ago |
      Geekbench is an excellent benchmark, and has a pretty good correlation with the performance people see in the real world where there aren't other limitations like storage speed.

      There is a sort of whack-a-mole thing where adherents of particular makers or even instruction sets dismiss evidence that benefits their alternatives, and you find that at the root of almost all of the "my choice doesn't win in a given benchmark means the benchmark is bad" rhetoric. Then they demand you only respect some oddball benchmark where their favoured choice wins.

      AMD fans long claimed that Geekbench was in cahoots with Intel. Then when Apple started dominating, that it was in cahoots with ARM, or favoured ARM instruction sets. It's endless.

      • BoingBoomTschak 5 days ago |
        Any proprietary benchmark that's compiled with mystery-meat compilers and flags isn't "excellent" in any way.

        SPECint compiled with either the vendor compiler (ICC, AOCC) or the latest gcc/clang would be a good neutral standard, though I'd also want to compare SIMD units more closely with x265 and Highway-based stuff (vips, libjxl).

        And how do you handle the fact that you can't really (yet) use the same OS for both platforms? Scheduler and power management counts, even for dumb number crunching.

        • janwas 5 days ago |
          Good points. gemma.cpp can also be an interesting benchmark; it also uses Highway.
        • llm_nerd 5 days ago |
          Geekbench is a highly regarded benchmark because it effectively reflects the overall performance of various platforms as experienced by the average user. By "platform," we mean the combination of hardware and software—how systems are actually used in day-to-day scenarios.

          Specint, on the other hand, is useful for assessing specific tasks if you plan to run identical workloads. However, its individual test results vary widely. For example, Apple Silicon chips generally perform well in Specint but might match a competing chip in one test and be three times faster in another. These tests focus on very narrow tasks that can highlight the unique strengths of certain instructions or system features but are not representative of overall real-world performance.

          The debate over benchmarks is endless and, frankly, exhausting, as it often rehashes the same arguments. In practice, most people accept that Geekbench is a reliable indicator of performance, and I maintain it’s an excellent benchmark. You might disagree, but my stance stands.

          • BoingBoomTschak 4 days ago |
            Lots of appeal to popularity; "most people accept" a lot of things.

            >Specint, on the other hand, is useful for assessing specific tasks if you plan to run identical workloads. [...] These tests focus on very narrow tasks that can highlight the unique strengths of certain instructions or system features but are not representative of overall real-world performance.

            What? First, SPECint is an aggregate of 12 benchmarks (https://en.wikipedia.org/wiki/SPECint#Benchmarks), none of them synthetic in any way. They also range from low-level to high-level; it's not just number crunching. Sure, it's missing stuff like browser benchmarks to better represent the average user, but it's certainly not as useless as you seem to imply.

            Any "system wide" benchmark is aggregating too much into a single number to mean anything, in any case.

            And this subthread is about using benchmarks to compare HARDWARE, not whole systems, so this discussion is pretty much meaningless.

            • llm_nerd 4 days ago |
              I never said SPECint was synthetic though, did I? What are you arguing against?

              Yet a benchmark of how Xalan-C++ transforms XML documents has shockingly little relevance to most of the things I do. And the M1 runs the 400.perlbench benchmark slower than the 5950X, yet it runs the 456.hmmer benchmark twice as quickly, both I guess mattering if I'm running those specific programs?

              As with the strawman that I said it was synthetic, I also didn't say it was useless. Not sure why you're making things up. It's an interesting benchmark, but most people (yup, there's that appeal again) find Geekbench more informative.

              And, again, most people, including the vast majority of experts in this field, respect Geekbench as a decent broad-spectrum benchmark. As with all things there are always contrarians.

              >And this subthread is about using benchmarks to compare HARDWARE, not whole system

              Bizarre. This submission is specifically about Geekbench, specifically about the M4 running, of course, macOS. This subthread is someone noting that they can't escape the negatron contrarians who always pipe up with the No True Benchmark noise.

    • trynumber9 5 days ago |
      It'll still be at the top of SPECint 2017 which is the real industry standard. Geekbench 6.3 slightly boosted Apple Silicon scores by adding SME - a very niche instruction set extension which is never used in SPECint workloads. So the gap may not be as wide as GB6.3 implies.
    • 2OEH8eoCRo0 5 days ago |
      If it shows a good result for Apple then it's perfectly accurate, otherwise it's flawed.
    • jwr 5 days ago |
      I verified Geekbench results to be very tightly correlated with my use case and workloads (JVM, Clojure development and compilation) as measured by my wall times. So yes, I consider it to be a very reliable indicator of performance.
      • stogot 5 days ago |
        Curious how you verified that? I should possibly do the same
        • qeternity 5 days ago |
          Run Geekbench on a sample of hardware. Run your workload on the same hardware. Regress.
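
          In Python, that's something like this minimal sketch (the scores and wall times here are invented, purely to illustrate the method):

              import statistics

              # Hypothetical Geekbench 6 single-core scores and measured wall
              # times (seconds) for the same workload on each machine.
              scores = [1750, 2300, 2800, 3200]
              wall_times = [95.0, 72.0, 60.0, 52.0]

              # Workload speed should scale with the score, so regress 1/time.
              speeds = [1.0 / t for t in wall_times]
              slope, intercept = statistics.linear_regression(scores, speeds)
              print(f"r = {statistics.correlation(scores, speeds):.3f}")
              print(f"speed ~= {slope:.2e} * score + {intercept:.2e}")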
          • m00x 5 days ago |
            That's not very scientific at all. With how close the CPUs are, how would you compare the tiny differences?
        • jwr 4 days ago |
          Run my compilation on a CPU, note down the time it took and the Geekbench score for that CPU.

          Run the compilation on another CPU, note down the time it took and the Geekbench score for that CPU.

          Now look at the ratios — if Geekbench scores implied the faster CPU was, say, 20% faster, is my compilation 20% faster?

          I'm looking at my notes and without digging too much, I can see two reasonably recent cases: 30% faster compilation (Geekbench said 30%), and another one: 40% faster compilation (Geekbench said 38%).

          So yes, I do consider Geekbench to be a very reliable indicator of performance.
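
          For anyone wanting to repeat the check, it boils down to comparing two ratios. A minimal sketch (the Geekbench scores below are invented; only the arithmetic matters):

              # Predicted speedup from Geekbench vs. measured speedup from wall times.
              def speedup_check(gb_old, gb_new, secs_old, secs_new):
                  predicted = gb_new / gb_old - 1.0   # e.g. 0.38 -> "38% faster"
                  measured = secs_old / secs_new - 1.0
                  return round(predicted, 3), round(measured, 3)

              # Invented scores and wall times matching the ~38% vs ~40% case above:
              print(speedup_check(gb_old=2400, gb_new=3312, secs_old=70.0, secs_new=50.0))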

    • m00x 5 days ago |
      Did no one check the scores? They're not the top consumer CPU by quite a margin. It's probably the best performance per watt, but not the most powerful CPU.
      • jocaal 5 days ago |
        The performance-per-watt isn’t necessarily the best. These scores are achieved when boosted and allowed to draw significantly more power. Apple CPUs may seem efficient because, most of the time, computers don’t require peak performance. Modern ARM microarchitectures have been optimized for standby and light usage, largely due to their extensive use in mobile devices. Some of MediaTek and Qualcomm's CPUs can offer better performance-per-watt, especially at lower than peak performance. The issue with these benchmarks is that they overlook these nuances in favor of a single number. Even worse, people just accept these numbers without thinking about what they mean.
        • m00x 4 days ago |
          M4 is also an ARM architecture; why would Qualcomm's be more efficient?
    • skavi 5 days ago |
      GB6 is great. Older versions weren’t always very representative of real workloads. Mostly because their working data sets were way too small.

      But GB6 aligns pretty well with SPEC2017.

    • quitit 5 days ago |
      Benchmarking itself is of limited usefulness, since in reality it is only an internally-relevant comparison.

      Translating these scores into the real world is problematic. There are numerous examples of smart phones powered by Apple chips versus Qualcomm chips having starkly different performance with actual use. This is in spite of the chips themselves scoring similarly in benchmarks.

      The interesting thing here isn't really how high it's scored against other chip brands, but how it outperformed the M2 Ultra. There was some hum of expectation on HN that the differences between M1, M2, M3, etc. would be token and that Apple's chip division was losing its touch. Yet the M2 Ultra in the Mac Studio was released in June 2023, and the M4 Pro is in the mini now for November 2024. That is quite the jump in performance over time and a huge change in bang for buck.

  • carlgreene 6 days ago |
    I have been out of the PC world for a long time, but in terms of performance efficiency, is Apple running away from the competition? Or are AMD and Intel producing similar performing chips at the same wattage?
    • api 6 days ago |
      There have always been higher-performing x64 chips than the M series, but they use several times more power to get there.

      Apple seems to be reliably annihilating everyone on performance per watt at the high end of the performance curve. It makes sense since the M series are mobile CPUs on ‘roids.

      • MBCook 5 days ago |
        Irony being that’s the same thing Intel learned from the P4.

        They gave it up for Core, grown from their low power chips, which took them far further with far less power than the P4 would have used.

        • api 5 days ago |
          Since power is the limiting factor due to heat, optimize for power first then scale up.
    • LorenDB 6 days ago |
      AMD's latest Strix Point mobile chips are on par with M3 silicon: https://youtube.com/watch?v=Z8WKR0VHfJw
      • radicalbyte 6 days ago |
        I was looking into this recently, as my M1 Max's screen died out of the blue within warranty and Apple are complete crooks wrt honouring warranties.

        The AMD mobile chips are right there with the M3 for battery life and have excellent performance; only I couldn't find a complete system that shipped with the same size battery as the MBP16. They're either half or 66% of the capacity.

        • matwood 6 days ago |
          > and Apple are complete crooks wrt honouring warranties

          Huh? I've used AppleCare for both MBPs and iPhones a number of times over the years, and never had an issue. They are known for some of the highest customer satisfaction ratings in the industry.

          • radicalbyte 5 days ago |
            They claimed that it wasn't covered because the machine was bought in Germany. I live in The Netherlands and brought it here. Also, I contacted Apple Support, who checked my serial number and then gave me the address to take it to. Which I did.

            They charged me $100 to get my machine back without repair.

            Also bear in mind that the EU is a single market; warranties etc. are, by law, required to be honoured across the ENTIRE single market, not just one country.

            Especially when the closest Apple Store to me is IN GERMANY.

            I have since returned it to Amazon who will refund it (they're taking their sweet time though, I need to call them next week as they should have transferred already).

            • bartekrutkowski 5 days ago |
              So you haven't purchased it from Apple; instead you've purchased it from Amazon. This may change things. In Europe you have two ways of dealing with it: either by manufacturer warranty (completely goodwill and on terms set by the manufacturer) or by consumer rights (guaranteed to you by law, overruling any warranty restrictions).

              Sellers will often try to steer you to use the warranty as it removes their responsibility; Amazon is certainly shady here. Apple will often straight up give you a full refund or a new device (often a newer model); that happened to me with quite a few iPhones and MacBooks.

              Know your rights.

              • radicalbyte 4 days ago |
                Amazon helped instantly; my mistake was talking to Apple. They didn't even ask if I'd spoken to the retailer. I was, at the time, focused on just getting it fixed as I needed to get the data off of it (the entire Apple + external monitors thing is also a shit-show: terrible UX, terrible design and terrible documentation).

                I'll keep buying from Amazon as their support is great and their prices competitive. I don't trust Apple enough to buy from them directly.

            • dkarbayev 5 days ago |
              I had a MacBook Pro 2018 that a friend of mine bought for me in Moscow because it was much cheaper there (due to grey import, I think). I didn't have AppleCare or anything. When its touchbar stopped working in 2020, I brought it to the Apple Store in Amsterdam and complained about it and also about faulty butterfly keys (one keycap fell off, and the "t" and "e" keys were registering double presses each time). So the guys at the Apple Store simply took it and replaced the whole top case, so I got a new keyboard, new touchbar, and - the best part - a new battery.
          • trimbo 5 days ago |
            10 years ago, the Genius Bar would fix my problem to my satisfaction in almost no time -- whether or not I had AppleCare. They'd send it off for repair immediately or fix it in less than an hour. For 2 out of 3 iPhone 6s that had an issue, they just handed me a new device.

            Today, Apple wastes my time.

            Instead of the old experience of common sense, today the repair people apparently must do whatever the diagnostic app on the iPad says. My most recent experience was spending 1.5 hours to convince the guy to give me a replacement Airpods case. Time before that was a screen repair where they broke a working FaceID ... but then told me the diagnostics app told them it didn't work, so they wouldn't fix it.

            I'm due for another year of AppleCare on my MBP M1, and I'm leaning towards not re-upping it. Even though it'd be 1/20th of the cost of a new one, I don't want to waste time arguing with them anymore.

          • seec 5 days ago |
            That was before the Tim Cook bean-counter takeover. Apple used to be generous with accessories and stuff before, too. Now they even remove stickers to save a few cents per device.

            Apple still makes good hardware but the scrooge attitude is disgusting for such premium products.

        • lancesells 6 days ago |
          I would try multiple routes with Apple if you're able. They tend to be pretty good with in-warranty and even out-of-warranty repairs.
      • MBCook 5 days ago |
        Is that in performance, or performance while matching thermals?

        If a competing laptop has to run significantly hotter/louder, then in my mind that's not on par.

    • ttul 6 days ago |
      My assessment is that ARM is running away from the competition. Apple is indeed designing the chip, but without the ARM architecture, Apple would have nothing to work with. This is not to diminish the incredible work of Apple’s VLSI team who put the chip architecture together and deftly navigated the Wild West of the fabrication landscape, but if you look at the specialized server chip side, it’s now dominated by ARM IP. I think ARM is the real winner here.
      • torginus 6 days ago |
        Even compared to other ARM cores, Apple is in a league of its own.
        • jsheard 6 days ago |
          They have a good silicon design team, but having so much money that they can just buy out exclusive access to TSMC's most advanced processes doesn't hurt either. The closest ARM competitor to the M4, the Snapdragon X Elite, is a full node behind on 4nm while Apple is already using second-generation 3nm.
          • nottorp 6 days ago |
            So then it should be comparable to the M1 or M2? Which isn't bad at all, if true.

            But is it, for the same power consumption?

            • philistine 6 days ago |
              For some benchmarks the Snapdragon is on par with the M3. But the weirdo tests I found online did not say which device they compared, which matters since the M3 is available in fan-less machines, which limits its potential.
        • tempest_ 6 days ago |
          Outside of Ampere (who are really more server focused), who else is designing desktop/laptop ARM CPUs?
          • ttul 6 days ago |
            That’s a really fair point. I think it’s tough for anyone else to break into the consumer / desktop segment with ARM chips. Apple can do it because they control the whole stack.
          • wmf 5 days ago |
            Qualcomm and Nvidia.
            • dagmx 5 days ago |
              NVIDIA don’t have custom cores
              • wmf 5 days ago |
                What if I told you they don't need them.
                • dagmx 5 days ago |
                  What if I told you that the rest of the context was about custom cores.
        • IshKebab 5 days ago |
          They also have the advantage that they could break software compatibility with the M1, e.g. using 16 kB pages and 128 byte cache blocks.
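
          Both are easy to check on a given machine. A quick Python sketch (assuming macOS with the stock sysctl command-line tool):

              import os
              import subprocess

              # Page size as seen by userspace: 16384 on Apple Silicon Macs,
              # 4096 on Intel Macs.
              print("page size:", os.sysconf("SC_PAGE_SIZE"))

              # Cache line size reported by the kernel: 128 on M-series chips.
              out = subprocess.run(["sysctl", "-n", "hw.cachelinesize"],
                                   capture_output=True, text=True, check=True)
              print("cache line:", out.stdout.strip())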
          • dagmx 5 days ago |
            They do mixed page mode depending on the program running.
      • amarshall 5 days ago |
        Is this really about the architecture itself, or about the licensing of it? AMD and Intel are, afaik, the only ones legally allowed to use x86, and likely have no plans to allow anyone else.
    • tempest_ 6 days ago |
      Their margins tend to allow them to always use the latest TSMC process so they will often be pretty good just based on that. They are also ARM chips which obviously have been more focused on efficiency historically.
      • philistine 6 days ago |
        Oh how the mighty have fallen. For decades, when comparing Macs versus PCs, it was always about performance, with any other consideration derided.

        Yet here we are, with the excuses of margins and silicon process generations. But you haven't answered the question. Is Apple pulling ahead or is the x86 cabal able to keep up?

        • wmf 5 days ago |
          Apple is ahead. The fab stuff is an explanation of why they are ahead, not an excuse for being behind.
      • omikun 6 days ago |
        They actually work with TSMC to develop the latest nodes. They also fund the bulk of the development. It's not as if, without Apple's funds, someone else would get the same leading-edge node.
      • FuckButtons 5 days ago |
        To a greater or lesser extent, Apple funds TSMC's latest nodes.
    • juancn 6 days ago |
      The M1 was a complete surprise, it was so far ahead that it was ridiculous.

      The M2-4 are still ahead (in their niche), but since the M1, Intel and AMD have been playing catchup.

      • moffkalast 6 days ago |
        Or more accurately AMD is playing catch up with the Strix series, while Intel seems too busy shooting themselves in the foot to bother.
    • trynumber9 5 days ago |
      Apple is slightly pulling away. AMD's top desktop chips were on par with M1/M2/M3 1T but now they cannot match even M4 despite releasing a new design (Zen 5) this year.

      It's partially because AMD is on a two-year cadence while Apple is on approximately a yearly cadence. And AMD has no plans to increase the cadence of their Zen releases.

      2020 - M1, Zen 3

      2021 - ...

      2022 - M2, Zen 4

      2023 - M3

      2024 - M4, Zen 5

      Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.

      • rbanffy 5 days ago |
        > Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.

        Indeed. Anything that approaches Apple performance does so at a much higher power consumption. Which is no biggie for a large-ish desktop (I often recommend getting middle-of-the-road tower servers for workstations).

        • MBCook 5 days ago |
          Don’t thermals basically explode non-linearly with speed?

          It’s possible Apple’s chips could be dramatically faster if they were willing to use up 300W.

          I remember seeing an anecdote where Johny Srouji, the chief Apple Silicon designer, said something like the efficiency cores get 90% of the performance of the performance cores at like 10% of the power.

          I don’t remember the exact numbers but it was staggering. While the single core wouldn’t be as high, it sounded as if they could (theoretically) make a chip of only 32 efficiency cores and just sip power.

          • rbanffy 5 days ago |
            I’m now imagining an “M4 Hot” for the next MacPro with an Intel Xeon W power budget.

            > they could (theoretically) make a chip of only 32 efficiency cores and just sip power.

            Intel and AMD did that with their latest data-center chips to compete with Ampere and AWS’s Graviton. I’d love to build a workstation out of one such beast.

            • MBCook 5 days ago |
              Yeah it makes a lot of sense for servers. For normal desktop/laptop users who don’t need much parallelism it would just be wasted silicon.
              • rbanffy 5 days ago |
                > For normal desktop/laptop users

                NEVER call me that ;-)!

      • kuschku 2 days ago |
        Well, the two major factors for the quality of a CPU are the node it's made on and the architecture.

        If you want to evaluate the quality of two different architectures, you should be comparing samples on the same fab node.

        M1 and Zen 4 were on the same node, and M3 and Zen 5 are on the same node. In both cases they're within spitting distance of one another.

        The majority of Apple's advantage is just Apple paying for early access to TSMC's newest node.

    • 2OEH8eoCRo0 5 days ago |
      TSMC is running away from the competition
    • spockz 5 days ago |
      For many workloads I think they are definitely pulling ahead. However, I think there is still much to gain in software. For example, my Linux/Fedora desktop with a 5900X is many times more responsive than my 16” M1 Pro.

      Java runs faster. GraalVM-generated native images run way faster. Golang runs faster. x86_64 has seen more love from optimisations than aarch64 has. One of the things I hit was different GC/memory performance due to different page sizes. Moreover, Docker runs natively on Linux, and the network stack itself is faster.

      But even given all of that, the 16” M1 Pro edges close to the desktop. (When it is not constrained by antivirus.) And it does this in a portable form factor, with way less power consumption. My 5900X tops out at about 180W.

      So yes, I would definitely say they are pulling ahead.

      • MBCook 5 days ago |
        I suspect that’s an OS issue. Linux is simply more optimized and faster at equivalent OS stuff.

        Which isn’t too surprising given a lot of the biggest companies in the world have been optimizing the hell out of it for their servers for the last 25+ years.

        On the flipside of the coin though Apple also clearly optimizes their OS for power efficiency. Which is likely paying some good dividends.

        • spockz 5 days ago |
          For sure, the networking is at the OS level because in both cases I'm using the TCP stack provided by the OS. (Also the GUI in Linux has fewer transitions and feels snappier in general.)

          The remainder can be attributed to compiler optimisations or lack thereof.

    • jeswin 5 days ago |
      > is Apple running away from the competition?

      No.

      On the same node, the performance is quite similar. Apple's previous CPU (M3) was a 3nm part, while AMD's latest and greatest Zen 5 is still on TSMC's 4nm.

  • hanniabu 6 days ago |
    Good enough to be used for gaming? Really want Apple to get into that because dealing with Windows sucks.
    • whynotminot 6 days ago |
      They’ve always been good enough for gaming. The problem has just been whether or not publishers would bother releasing the games. It’s unfortunate that Apple can’t seem to really build enough momentum here to become a gaming destination.
      • yalogin 6 days ago |
        I think they will get there in time. They like to focus on things and not spread themselves thin. They always wanted to get the gaming market share but AI is taking all their time now.
        • macintux 6 days ago |
          I'm not sure how many chances they'll get to persuade developers that this time they really mean it. It sounds like Apple Arcade is a flop.
        • rswail 6 days ago |
          Given that a Mac mini with an M4 is basically the same size and shape as an Apple TV, they could make a new Apple TV that was a gaming console as well.

          Why is the Apple TV only focused on passive entertainment?

      • crazygringo 6 days ago |
        It has always baffled me why Apple doesn't take gaming seriously. It's another revenue stream, it would sell more Macs. It's profit.

        Is it just some weird cultural thing? Or is there some kind of genuine technical reason for it, like it would involve some kind of tradeoffs around security or limiting architecture changes or something?

        Especially with the M-series chips, it feels like they had the opportunity to make a major gaming push and bring publishers on board... but just nothing, at least with AAA games. They seem content with cartoony titles in Apple Arcade, solely on mobile.

        • ceejayoz 6 days ago |
          > It has always baffled me why Apple doesn't take gaming seriously.

          They aren't really the ones that have to.

          • crazygringo 6 days ago |
            But they are. They need to subsidize porting AAA games to solve the chicken-and-egg problem.

            Gaming platforms don't just arise organically. They require partnership between platform and publishers, organized by the platform and with financial investment by the platform.

            • talldayo 6 days ago |
              > They need to subsidize porting AAA games to solve the chicken-and-egg problem.

              glances at the Steam Machine

              And how long do they have to fail at that before trying a new approach?

        • whynotminot 6 days ago |
          If you look at things like Game Porting Toolkit, Apple actually is investing resources here.

          It just feels like they came along so late to really trying that it’s going to be a minute for things to actually happen.

          I would love to buy the new Mac Mini and sit it under my TV as a mini console. But it just feels like we’re not quite there yet for that purpose, even though the horse power is there.

        • fastball 6 days ago |
          Apple does take gaming seriously. They've built out comprehensive graphics APIs and things like the GPTK to make migrating games to Apple's ecosystem actually not too bad for developers. The problem is that a lot of game devs just target Windows because every "serious" gamer has a Windows PC. It's a chicken-and-egg problem that results from Apple always having a minority share of the desktop market. So historically Apple has focused on the segments of the market that they can more easily break into.
        • have_faith 6 days ago |
          I always assumed the nature of the gaming workload on the hardware is why they don't ever promote it. AAA games pegging the CPU/GPU at near max for long periods of time goes against what they optimise their machines for. I just think they don't want to promote that sort of stress on the system. On top of that, Apple takes themselves very seriously and sees gaming as below them.
        • nottorp 6 days ago |
          > Is it just some weird cultural thing?

          I think so. I think no one in apple management has ever played computer games for fun so they simply do not understand what customers would want.

        • ffsm8 6 days ago |
          They do take gaming seriously, that's likely the bulk of their AppStore revenue after all.

          They just don't care about desktop gaming, which is somewhat understandable. While the M-series chips have a GPU, it's about as performant for games as a dedicated GPU from 10-14 years ago (it only needs a fraction of the electricity, but very few desktop gamers care about that).

          The games you can play have to run at silly low resolutions (full HD at most) and rarely even reach 60fps.

          • matwood 6 days ago |
            > They do take gaming seriously

            They do take gambling seriously.

        • Cthulhu_ 6 days ago |
          Apple has one of the, if not the, biggest gaming platforms in existence (the iPhone and iPad), but everyone seems to have a blind spot for that and disregards it. Sure, the Mac isn't a big gaming platform for them because their systems are mostly used professionally (assumption), but there too, the Mac represents only 1/10th of the sales they get from the iPhone, and that's only on the hardware.

          Mac gaming is a nice-to-have; it's possible, there's tools, there's Steam for Mac, there's toolkits to port PC games to Mac, there's a games category in the Mac App Store, but it isn't a major point in their marketing / development.

          But don't claim that Apple doesn't take gaming seriously; gaming for them is a market worth tens of billions, they're embroiled in a huge lawsuit with Epic about it, etc. Finally, AAA games get ported to mobile as well and once again earn hundreds of millions in revenue (e.g. CoD Mobile).

          • goosedragons 6 days ago |
            I feel like for myself at least, mobile gaming is more akin to casino gaming than video gaming. Sure, iOS has loads of gaming revenue but the games just ain't fun and are centred way too heavily on getting microtransactions out of people.
          • sunshowers 5 days ago |
            The iPhone gaming market is abusive and predatory, essentially a mass exploit on human psychology. Certainly not something to be proud of.
        • philistine 6 days ago |
          Apple owns the second largest gaming platform by users and games, and first by profit: iPhone.

          In terms of gaming that's only on PC and consoles, I didn't understand Apple's blasé attitude until I discovered this eye-opening fact: there are around 300 million people who are PC and console gamers, and that number is NOT growing. It's stagnant.

          Turns out Apple is uninterested by a stagnant market, and dedicates all its gaming effort where growth is: mobile.

      • jsheard 6 days ago |
        Apple's aggressive deprecation policies haven't done them any favors when it comes to games, they expect software to be updated to their latest ISAs and APIs in perpetuity but games are rarely supported forever. In many cases the developers don't even exist anymore. A lot of native Mac game ports got wiped out by 32bit being EOL'ed, and it'll probably happen again when they inevitably phase out Rosetta 2 and OpenGL support.
      • goosedragons 6 days ago |
        No they haven't. For years the best you could get was "meh" to terrible GPUs at high price points. Like $2000+ was where getting a discrete GPU began. The M series stuff finally allows the entry level to have decent GPUs but they have less storage out of the box than a $300 Xbox Series S. Apple's priorities just don't align well with gamers. They prioritize resolution over refresh rate and response time, make mice unusable for basically any FPS made in the past 20 years and way overcharge for storage and RAM.
    • evilduck 6 days ago |
      That depends on what you want to play and what other things that suck you're willing to tolerate.

      The GPUs in previous M chips aren't beating AMD or Nvidia's top offerings on anything except VRAM, but you can definitely play games with them. Apple released their Game Porting Toolkit a couple of years ago, which is basically like Wine/Proton on Linux; if you're comfortable with Wine and with approximately what a Steam Deck can run, then that's about what you can expect to run on a newer Mac.

      Installing Steam or GOG Galaxy with something like Whisky.app (which leverages the Game Porting Toolkit) opens up a large number of games on macOS. Games that need Windows rootkits (kernel-level anti-cheat) are probably a pain point, and you're probably not going to push all those video setting sliders to the far right for Ultra graphics on a 4K screen, but there's a lot of games that are very playable on macOS and M chips.

      • mattgreenrocks 6 days ago |
        Wow, had no idea this worked as well as it does. I remember the initial hype when this showed up but didn't follow along. Looks like I don't have to regard my Steam library as entirely gone.

        Steam Deck-level performance is quite fine, I mainly just want to replay the older FromSoft games and my favorite indies every now and then.

        • evilduck 6 days ago |
          Fair warning, I haven't dug that deep into compatibility issues or 32 bit gaming compatibility but it's definitely something to experiment with and for the most part you can find out for free before making a purchasing decision.

          First and foremost, it's just worth checking if your game has a native port: https://store.steampowered.com/macos People might be surprised what's already available.

          With Wine syscall translation and Rosetta x86 code translation, issues do pop up from time to time, though: games with cutscenes encoded in Windows Media Player specific formats, or other media codecs that aren't immediately available, since it's not like games advertise those technology requirements anywhere. You may also encounter video stuttering or artifacts, since the hardware is obviously dramatically different from what the game developers were originally developing against, and there are things happening in the background that an x86 Windows system never does. This isn't stuff that's overly Mac specific, since it usually impacts Linux equally, but it's a hurdle to jump that you don't have to deal with in native Windows. Like I said, playing Windows games outside of Windows is just a different set of pain points, and you have to be able to tolerate it. Some people think it's worth it, and some people would rather have higher game availability and keep the pain of Windows. Kudos to Valve for creating a Linux-based handheld, and to the Wine and Proton projects for improving this situation dramatically.

          Besides the Game Porting Toolkit (which was originally intended for game developers to create native application bundles that could be put on the App Store), there's also CrossOver for Mac, which does its own work towards resolving a lot of these issues; they have a compatibility list you can view on their site: https://www.codeweavers.com/ Alternatively, some games run acceptably inside virtualization if you're willing to still deal with Windows in a sandboxed way. Parallels is able to run many games with better compatibility since you're actually running Windows, though last I checked DX12 was a problem.

      • fioan89 6 days ago |
        With Thunderbolt 5 it should be fairly reasonable to use an external GPU for more power.
        • evilduck 6 days ago |
          Apple Silicon Macs don't have support for eGPUs: https://support.apple.com/en-us/102363

          Maybe with future TB5 support they will include that feature.

        • kimixa 5 days ago |
          Apple no longer has drivers for anything newer than AMD RDNA2 and has completely dismantled the driver team.

          Unless you're running Boot Camp you're extremely limited by driver support.

      • orangecat 5 days ago |
        In addition to Whisky, it seems to not be well known that VMWare Fusion is free for personal use and can run the ARM version of Windows 11 with GPU acceleration. I tried it on my M1 Pro MBP and mid-range Steam games ran surprisingly well; an M4 should be even better.
    • paol 6 days ago |
      Well, we're about to find out now that CDPR have announced Cyberpunk 2077 will get a native Metal port. I for one am extremely curious about the result. Apple have made very lofty claims about their GPU performance, but without any high-end games running natively, it's been hard to evaluate those claims.

      That said, expectations should be kept at a realistic level. Even if the M4 has the fastest embedded GPU (it probably does), it's still an embedded GPU. They aren't going to be topping any absolute performance charts.

    • eigenspace 6 days ago |
      Gaming isn't just about hardware, it's also about software, economics, trust and relationships.

      Apple has quite impressive hardware (though their GPUs are still not close to high-end discrete GPUs), and it's certainly fast enough. The problem now is that Apple systematically does not have a culture that respects gaming or is interested in courting gamers. Games also rely on OS stability, but Apple has famously short and severe deprecation periods.

      They occasionally make pushes in that direction, but I think they lack the will to make a concerted effort, and I also think they lack the restraint to not force everything through their own payment processors and distribution systems, which sours relations with developers.

    • braymundo 6 days ago |
      Incidentally, CD Projekt announced Cyberpunk 2077 Ultimate Edition for Mac yesterday. There is hope! :)

      https://www.cyberpunk.net/en/news/50947/just-announced-cyber...

    • Refusing23 6 days ago |
      you can game on linux though. almost all games work just fine. (well, almost)
    • dogleash 5 days ago |
      Valve has done, and continues to do, way more to make Linux a viable gaming platform than Apple will likely ever do for the Mac.

      I get it, you want to leave windows by way of mac. But your options are to either bite the bullet and expend a tiny bit of your professional skill on setting up a machine with linux, or stay on windows for the foreseeable future.

  • post_break 6 days ago |
    What's crazy is that the M4 Pro is in the Mac mini; something so tiny can handle that chip. The Mac Studio with the M4 Max will be awesome, but the Mini is remarkable.
    • carlgreene 6 days ago |
      I am still astounded by the huge change that moving from an Intel Mac to an Apple Silicon Mac (M1) has made in terms of battery performance and heat. I don't think I've heard the fans a single time in the several years I've had this machine.

      Nor have I had any desire to upgrade.

      • cloudking 6 days ago |
        M3 Pro user here, agree with the same points. It's nice to be able to actually have your laptop on your lap, without burning your legs.
        • mh- 6 days ago |
          Hey, I can have my Intel MBP on my lap without burning my legs (or making me feel nauseated).

          As long as I don't open Chrome, Safari, Apple Notes, or really any other app...

          • Etheryte 5 days ago |
            Sometimes even not opening any apps is not enough if Spotlight decides that now is the time to update its index or something similar. Honestly nuts looking back at it.
            • 0x457 5 days ago |
              I remember when macOS switched to an evented way of handling input and for some reason decided that dropping keyboard events is okay... anyway, if Spotlight was updating its index, unlocking your laptop with a password was impossible.
        • _hyn3 6 days ago |
          ARM processors have always been good at handling heat and low power (like AWS Graviton), but what laptop did you have before that would overheat that much during normal usage? That seems like a very poor design.
          • ddingus 5 days ago |
            My 2010 i7 MBP would do that under heavy loads. All aluminum body, with fans, and when that CPU really had to work it put out a lot of heat.

            Compiling gcc with multiple threads a few times would be enough.

            • MBCook 5 days ago |
              Not only that, they seemed to get hotter with each subsequent generation.

              My 2015 could get hotter than my 2010. I think my work 2019 gets hotter still.

              I think the Intels were hotter than my G4, but it’s been too long and the performance jump was worth it.

              Got an M1 Air, it blows them all out of the water (even 2019, others aren’t a surprise). And it does it fanless, as opposed to emulating a jet engine.

              And if you really push it, it gets pleasantly warm. Not possibly-burn-inducingly hot.

              • ddingus 5 days ago |
                My experience is the same. I only owned one Intel MacBook Pro.

                Was the only one I needed, thankfully. Missed the whole port reduction and Touch Bar mess.

                I love my M1 Air. It is the first general purpose computing hardware that felt like a real advance. I measured that two ways:

                How much closer to my mobile is it?

                How much faster is it?

                The Air feels like a Mobile Computer if that makes any sense. One USB port expander to serve as a dock of sorts later and it makes for a great desktop experience.

                When using it on the go, it has that light, powerful feel much like running my phone does.

                Great machine. It is easily my favorite computer Apple has ever made, the 8-bit greatness of an older age aside.

                • MBCook 5 days ago |
                  Same. Easily the best computer I’ve owned in the last 25 years in terms of satisfaction/niceness.
                  • ddingus 4 days ago |
                    People really like that machine.

                    Mine is sticky. As in when others get hold of it, next thing I hear is usually, "oooh" and then it takes some time for it to come back!

                    I got mine for a song. Sweet deal, but it is the 8GB 256GB configuration. Not too big of a deal, but more internal storage would be nice. Maybe I will send it out somewhere to get a boost.

                    Would have already, but I worry a little about those services.

          • mistrial9 5 days ago |
            Anything that pegged the CPU for extended periods of time caused many Apple laptop models to overheat. There is some design tradeoff between power specs, cooling, "typical workloads" and other things. A common and not-new example of a heat-death workload was video editing.
          • renewiltord 5 days ago |
            I’d place my top of the line Intel Mac on my feet to warm them and then bend over and write code while sitting on my chair.
          • ponector 5 days ago |
            I had an MBP 2019 which, with default fan settings, got really hot from a 1h video call in Bluejeans. Or 5 minutes navigating Google Maps and Street View in Chrome.

            The only solution was to increase the fan speed profile to max RPM.

            • MBCook 5 days ago |
              On my 2019, if a single process hits 100% of one core the fan becomes quite noticeable (not hairdryer though) and the top of the keyboard area where the CPU is gets rather toasty.

              It’s way too easy to heat up.

          • alemanek 5 days ago |
            2019 16” i9 MacBook Pro. These days it has to be on a laptop cooler when I push it so that it won’t turn itself off due to the heat.
          • antifa 3 days ago |
            > but what laptop did you have before that would overheat that much during normal usage?

            That's pretty much all Intel laptops I've owned since 2007.

        • _dain_ 5 days ago |
          you still shouldn't have it on your lap though. it's bad for your posture.
          • theshackleford 5 days ago |
            Not for everyone. It turns out by following standard ergonomic guidelines I was doing more damage. I have to actually look way down at monitors, even on my desk. It has to be well below eye height, basically slammed.
      • wing-_-nuts 6 days ago |
        I do wonder if PC desktops will eventually move to a similar design. I have a 7800x3d on my desktop, and the thing is a beast but between it and the 3090 I basically have a space heater in my room
        • Cthulhu_ 6 days ago |
          It would make sense, but it depends heavily on Windows / Linux support, compatibility with Nvidia / AMD graphics cards, and exclusivity contracts with Intel / AMD. Apple is not likely to make their chips available to OEMs at any rate, and I haven't heard of any third party working on a powerful desktop ARM-based CPU in recent years.
        • jwells89 6 days ago |
          It would be nice. Similarly have a 5950X/3080Ti tower and it’s a great machine, but if it were an option for it to be as small and low-noise as the new Mini (or even the previous mini or Studio), I’d happily take it.
          • heelix 5 days ago |
            For what it is worth, I'm running that with open-loop water cooling. If your chassis has the space for it, my rig won't even need to turn on its fans for large parts of the day. (The loop was sized for a Threadripper, which was not really around for home builders.) Size is an issue, however :)
        • philistine 6 days ago |
          I sincerely believe that the market for desktop PCs is completely co-opted by gaming machines. They do not care one whit about machine size or energy efficiency, with only one concern in mind: bare performance. This means they buy ginormous machines, incredibly inefficient CPUs and GPUs, with cavernous internals to chuck heat out with no care for decibels.

          But they spend voraciously. And so the desktop PC market is theirs and theirs alone.

          • ryandrake 6 days ago |
            Desktop PCs have become the Big Block V8 Muscle Cars of the computing world. Inefficient dinosaur technology that you pour gasoline through and the output is heat and massive raw power.
            • rpmisms 5 days ago |
              Desktops are actually pickup trucks. Very powerful and capable of everyday tasks, but less efficient at them. Unbeatable at their specialty, though.
          • wing-_-nuts 6 days ago |
            Yeah. It's been the case for a while now that if someone just wants a general computer, they buy a laptop (even commonly a mac).

            That's why the default advice if you're looking for 'value' is to buy a gaming console to complement your laptop. Both will excel at their separate roles for a decade without requiring much in the way of upgrades.

            The desktop pc market these days is a luxury 'prosumer' market that doesn't really care about value as much. It feels like we're going back to the late 90's, early 2000's.

            • 0x457 5 days ago |
              Unless you play games where you stare at the map while balancing Excel spreadsheets.
              • philistine 5 days ago |
                That's okay, Factorio has awesome Apple Silicon support.
                • 0x457 5 days ago |
                  What about Paradox games? genuinely curious about that.
                  • rpmisms 5 days ago |
                    Stellaris is great on my M2
                  • philistine 4 days ago |
                    I played a bunch of EU4 and HOI4 without any issues. But I think those use emulation under the hood.

                    That's the thing with Macs: all the strategy games tend to release there because the Venn diagram of Mac users and strategy gamers is a circle.

          • 0x457 5 days ago |
            Well, because that's the audience that upgrades before something breaks, and it also lets you capture the high-end market of professionals.
          • corimaith 5 days ago |
            The price of a high-end gaming PC (7800x3d and 4080) is around 2k USD. That's comparable to the MacBook Pro.

            Yeah sure, if you start buying unnecessary luxury cases, fans and custom water loops it can jump up high, but that's more for clueless rich kids or enthusiasts. So I wouldn't call PC gaming an expensive hobby today, especially considering Nvidia's money-grubbing practices, which won't last forever.

        • Aurornis 5 days ago |
          A game I play with friends introduced a Mac version. I thought it would be great to use my Apple Silicon MacBook Pro for some quiet, low-power gaming.

          The frame rate wasn’t even close to my desktop (which is less powerful than yours). I switched back to the PC.

          Last time I looked, the energy efficiency of Nvidia GPUs in the lower TDP regions wasn't actually that different from Apple's hardware. The main difference is that Apple hardware isn't scaled up to the level of the big Nvidia GPUs.

        • ChoGGi 5 days ago |
          That 3090 uses about 5x more power than the 7800x3d.
        • zuminator 5 days ago |
          I just bought a Beelink SER9 mini PC, about the same size as the Mac Mini. It's got the ridiculously named AMD Ryzen AI 9 HX 370 processor, a laptop CPU that is decently fast for an x64 chip (2634/12927 Geekbench 6 scores) but isn't really competition for the M4. The GPU isn't up to desktop performance levels either, but it does have a USB4 port capable of running eGPUs.
        • deafpolygon 5 days ago |
          After having my PC for (almost) 4 years, I can say that this beast is the last large form computer I will buy.
      • mdasen 6 days ago |
        > Nor have I had any desire to upgrade

        I never thought I'd see a processor that was 50% faster single-core and 80% faster multi-core and just shrug. My M1 Pro still feels so magically fast.

        I'm really happy that Apple keeps pushing things, and I'll be grateful when I do decide to upgrade, but my M1 Pro has just been such a magical machine. Every other laptop I've ever bought (Mac or PC) has run its fan regularly. I did finally get fan noise on my M1 Pro when pegging the CPU at 800% for a while (doing batch conversion of tons of PDFs to images), and to be fair, it was sitting on a blanket, which was insulating it. Still, it didn't get hot, unlike every other laptop I've ever owned, which got hot even under normal usage.

        It's just been such a joyful machine.

        I do look forward to an OLED MacBook Pro and I know how great a future Apple Silicon processor will be.

        • spiderfarmer 6 days ago |
          My best Apple purchases in 20 years of being their customer: The Macbook M1 Pro 16 inch and the Pro Display XDR. When Steve Jobs died I really thought Apple was done, but their most flawless products (imho) came much later.
          • eastbound 5 days ago |
            Yeah, don't forget the dark years from the Butterfly Keyboard MacBook Pro of 2016 and the Emoji MacBook Air, until the restoration of the USB ports… around 2022.

            That was truly the dark age of Apple.

            • jamiek88 5 days ago |
              Those were the Ive Unleashed years.

              Giving one hyper-opinionated guy unbridled control over all design was an error, and one that has since been resolved.

              • robotresearcher 5 days ago |
                The first guy did alright.
            • adriand 5 days ago |
              I had the 2015 MBP and I held onto it until the M1 came out…I still have it and tbh it’s still kind of a great laptop. The two best laptops of the past decade for sure.
            • sixothree 5 days ago |
              2008 mbp was my last apple laptop. Still feel quite slighted by Apple's treatment on that one.
            • Damogran6 5 days ago |
              That's the common hipster take on it... but I kinda liked the way the butterfly keys felt, and my impression of the Touch Bar was chaotic neutral. By the time they let up a little on the ports, I'd lived with 4 USB-C ports so long that it really wasn't that big a deal.

              What got me, however, was that this was the time when their trade-in program was really kicking in. I think I got $800 for my Touch Bar Mac, which made the jump to an M1 Pro 14 a little less painful. Now you don't seem to so much pay for hardware as lease the Apple experience, so long as the hardware is still good.

          • paraboli 5 days ago |
            What do you like about the Pro Display XDR?
            • spiderfarmer 4 days ago |
              With 6K I can have both VS Code and the website I'm working on in one display, while still having lots of vertical space and no noticeable pixels. I tried using two 4K screens next to each other, among other things, but nothing works as well ergonomically as this.
        • matwood 6 days ago |
          Yeah, I have an M1 Max 64GB and don't feel any need to upgrade. I think I'll hit the need for more ram before a processor increase with my current workload.
          • kstrauser 5 days ago |
            Same for me, in a Mac Studio. It's only 2 years old so it's not like I would expect it to suck, but it's a rocket.
        • dzhiurgis 6 days ago |
          A folding/rolling screen would be awesome: a 16” form factor that turns into a 20” screen.
        • lispm 5 days ago |
          I have the M4 iPad with the new OLED. That screen would be great in a Macbook Air.
      • dlachausse 6 days ago |
        It still blows my mind how infrequently I have to charge my M3 Pro MacBook Pro. It is a complete game changer.
      • JansjoFromIkea 6 days ago |
        If it's a M1 Macbook Air there's a very good reason you've never heard a fan!

        Blows my mind how it doesn't even have a fan and still rarely gets above body temperature. My 2015 MBP was still going strong for work when I bailed on it late last year, but the difference purely in heat/sound emitted has been colossal.

        • philistine 6 days ago |
          Factorio: Space Age is the first piece of software that my M1 shows performance issues with. I'm not building xCode projects or anything, but it is a great Mac. Maybe even the greatest.
          • collinvandyck76 5 days ago |
            There's a known issue on arm macs with external monitors that messes with the framerates. Hopefully it gets fixed soon because pre-space age factorio was near flawless in performance on my m2.
            • philistine 5 days ago |
              What! That's exactly what I'm doing. Woah, I can't wait for the inevitable fix, those guys have been releasing patches like crazy.
              • collinvandyck76 a day ago |
                There's a few threads on the technical help forums that are tracking this -- I've subscribed to them because I'm super excited about them getting around to fixing it! :)
        • alpaca128 5 days ago |
          It's not just that; at times I pushed all CPU cores to 100% on the M1 Mini and even after 30+ minutes I couldn't hear the fan. Too bad the MacBook Airs got nothing but a literal aluminium sheet as a cooling solution.
      • jihadjihad 6 days ago |
        I've got a coworker who still has an Intel MacBook Pro, 8-core i9 and all that, and I've been on M chips since they launched. The other day he was building some Docker images while we were screensharing and I was dumbfounded at how long it took. I don't think I can even remember a recent time when building images, even ones pushed to CDK etc., takes more than a minute or so. We waited and waited and finally after 8 minutes it was done.

        He told me his fans were going crazy and his entire desk was hot after that. Apple silicon is just a game changer.

        • grahamj 5 days ago |
          For sure. I had one of those for work when I got my personal M1 Air and I couldn't believe how much faster it was. A fanless ultraportable faster than an 8-core i9!

          I was so happy when I finally got an M1 MBP for work because as you say Docker is so much faster on it. I feel like I don't wait for anything anymore. Can't even imagine these new chips.

          • MBCook 5 days ago |
            Same situation for me.

            I’m going to be very happy when it’s time to replace my Intel MBP at work.

            It’s rarely loud, but boy it likes to be warm/toasty at the stupidest things.

        • rahkiin 5 days ago |
          Oh, I know the feeling… when using ICT-managed Windows laptops at work.
        • moondev 5 days ago |
          Sounds like they were building an aarch64 image; building an x86_64 image on Apple Silicon will also be slow. Unless you are saying the M* builds x86_64 faster than an i9?
          • robotresearcher 5 days ago |
            Are you saying it's faster to build a binary with native architecture? Why is that?
            • 0x457 5 days ago |
              Because there are two ways to get to the same result:

              - use the native toolchain to produce artifacts for a different architecture

              - use emulation to run the other architecture's toolchain to produce artifacts for that architecture

              The first one is fast, the second one is slow. In Docker, only the second variant is possible.
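
              (For illustration, a minimal sketch of route #1, assuming Go's toolchain: the compiler executes natively on the host while emitting binaries for another architecture via the GOOS/GOARCH environment variables. Route #2 is what `docker buildx build --platform linux/amd64` does by default on an arm64 host: running an amd64 toolchain under QEMU.)

                  // crossbuild.go: route #1 in miniature. The `go` tool runs as
                  // native code on the host (say, an arm64 Mac) but emits a
                  // linux/amd64 binary, so no emulation is involved and it is fast.
                  package main

                  import (
                      "log"
                      "os"
                      "os/exec"
                  )

                  func main() {
                      cmd := exec.Command("go", "build", "-o", "app-linux-amd64", ".")
                      // GOOS/GOARCH select only the output target; the toolchain
                      // itself still runs at native speed.
                      cmd.Env = append(os.Environ(), "GOOS=linux", "GOARCH=amd64")
                      cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
                      if err := cmd.Run(); err != nil {
                          log.Fatal(err) // cross-build failed
                      }
                  }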

              • PoignardAzur 5 days ago |
                That doesn't sound right. It's not like you need to run in WebAssembly to produce a WebAssembly binary.

                Why would you need to emulate x86 to produce x86-shaped bytes?

                • kapoolo 5 days ago |
                  Emulating the target architecture's SDK can be much easier and simpler to set up, and it avoids mistakes: you do not need to make any changes or configuration to get it to compile.

                  For example, that is generally the way you cross-compile Flatpaks from the CLI or an IDE. In GNOME Builder, for instance, you can just select the device you want to build for, like your smartphone, and it uses QEMU to emulate the entire SDK on the target architecture; you can also seamlessly run the Flatpak on your host through QEMU user-mode emulation.

                • 0x457 5 days ago |
                  Well, WASM runs in a VM, not on hardware. There isn't hardware that runs WASM directly (at least nothing serious, maybe some hobby FPGA or proof of concept).

                  So when you compile to WASM you're going with route #1. WASM was designed with this in mind.

                  • robotresearcher 4 days ago |
                    A compiler doesn’t need to run the instructions it generates. Why should cross-compilation be slower than native compilation?
                    • 0x457 4 days ago |
                      It doesn't need to (unless it does, in some cases), but the entire build process needs to be aware of what needs to be native and what doesn't. A lot of the issues come from the linker. Yes, technically it's not part of compilation, but aside from helloworld.c, everything you build will probably require the linker at one point or another.

                      Then there are some gcc quirks:

                      - for gcc, the compilation target is defined when gcc itself is compiled, so the only way to cross-compile with gcc that I know of is emulating the target arch and running a gcc built for that arch

                      However, we're talking about Docker containers here; emulation would be the default way and the path of least resistance.

                      Again, I will reiterate: every cross-compilation strategy falls into one of these two buckets. In some cases what I've described in #1 is possible (WASM, Java bytecode, or really (almost) anything that targets a VM); in some cases it isn't, and then you've got to go with #2 (Docker, gcc).

                      • robotresearcher 4 days ago |
                        How about Xcode's LLVM on an Intel Mac building for ARM iOS?
                        • 0x457 4 days ago |
                          A few things here:

                          - clang and LLVM don't have that gcc quirk (LLVM supports fewer platforms, so give gcc some credit)

                          - it was in Apple's financial interest to make sure their things are easy to cross-compile

                          - the linker's job is extremely straightforward in the case of iOS

                             - everything is provided by either Apple or the application developer
        • JohnBooty 5 days ago |
          That was what finally got me to spend the cash and go with Apple Silicon - we switched to a Docker workflow and it was just doooooog slow on the Intel Macs.

          But this M1 Max MBP is just insane. I'm nearly 50 and it's the best machine I've ever owned; nothing is even close.

      • voisin 6 days ago |
        Do you have an Air or Pro?
      • huijzer 6 days ago |
        > I don't think i've heard the fans a single time I've had this machine and it's been several years.

        Yes I agree. I sometimes compile LLVM just to check whether it all still works. (And of course to have the latest LLVM from main ready in case I need it. Obviously.)

      • dzhiurgis 6 days ago |
        16” M1 is still a perfectly good machine for my work (web dev). Got a battery replacement, which also replaced the top cover and keyboard - it's basically new. Extended AppleCare for another year, which will turn it into a fully covered 4-year device.
      • Aurornis 5 days ago |
        > I am still astounded the huge change moving from an Intel Mac to an Apple Silicon Mac (M1) has had in terms of battery performance and heat

        The battery life improvements are great. Apple really did a terrible job with the thermal management on their last few Intel laptops. My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.

        The thermal solution on those Intel MacBooks was really bad.

        • Tagbert 5 days ago |
          Those MacBooks were designed when Intel was promising new, more efficient chips and they didn’t materialize. Apple was forced to use the older and hotter chips. It was not a good combination.
          • microtherion 5 days ago |
            Another factor might be that Intel MacBook Pros got thinner and thinner. The M1 MBP was quite a bit thicker than its Intel predecessors, and I think the form factor has remained the same since then.
            • 0x457 5 days ago |
              Yes, but I've had about every generation of Intel MBP, and none was ever as good at this as the M1 MBP.
            • Tagbert 3 days ago |
              The 2012-2015 MBPs were slimmer than the previous unibodies, but a lot of that was due to dropping the optical drive and spinning hard drives. The thermals were not a particular problem. Where that did become a problem was with the 2016 redesign that introduced the wedge-shaped case. The design of that would have been locked in 2-3 years earlier. From what I have read, when the case designs were being worked on, Intel was promising Apple that their next generation of chips would be on a smaller process node and would run cooler, so that defined the thermal envelope of the new MBPs. Unfortunately, as we all saw, Intel's production stalled for 5-6 years and the only chips they could produce were power-hungry and hot. That caused problems for those thinner MBPs.

              Apple seems to have taken that to heart when they designed the cases for the Apple Silicon MBPs and those have excellent cooling (and more ports).

        • mschuster91 5 days ago |
          > My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.

          You have to really, REALLY put in effort to make it operate at rated power. My M2 MBA idles at around 5 watts, my work 2019 16-inch i9 is around 30 watts in idle.

      • varispeed 5 days ago |
        On extremely heavy workloads the fans do engage on my M1 Max, but I need to get my ear close to the machine to hear them.

        Recently my friend bought a laptop with an Intel Ultra 9 185H. Its fans roared even when opening Word. That was extraordinary, and if it had been me making the purchase I would have sent it back straight away.

        My friend did fiddle a lot with settings and had to update the BIOS, and eventually the fan situation was somewhat contained, but man, I am never going to buy an Intel/AMD laptop. You don't know how annoying fan noise is until you get a laptop that is fast and doesn't make any noise. With Intel it's like having a drill pointed at your head that can go off at any moment, and let's not mention phantom fan noise, where the sound gets so imprinted in your head that your brain makes you think the fans are on when they are not.

        Apple has achieved something extraordinary. I don't like macOS, but I am getting used to it. I hope one day the Asahi effort will let us replace it.

        • jltsiren 5 days ago |
          When I play Baldur's Gate 3 on my M2 Max, the fans get loud. You need a workload that is both CPU-heavy and GPU-heavy for that. When you are stressing only the CPU or the GPU but not both, the fans stay quiet.
        • kristianp 5 days ago |
          I have an i7 12th gen thinkpad and the fans would often be audible when I first got it with Windows 11. Then I installed linux. Now the fan is only audible when something has gone wrong and a process (usually chrome) is pinning a core.
      • hhejrkrn 5 days ago |
        Lol... you were not around for the PPC -> Intel change... the same thing happened then... a remarkable performance uplift from the last instruction set... and we had Rosetta, which allowed compatibility... the M1 and ARM took power efficiency to another level... but yeah, what has happened before will happen again.
        • nl 5 days ago |
          I was around for that.

          The thing then was that it was just Apple catching up with Windows computers, which had had a considerable performance lead for a while. It didn't really seem magical to just see it finally matched. (Yes, Intel Macs got better than Windows computers, but that was later. At launch it was just matching.)

          It's very different this time because you can't match the performance/battery trade-off in any way.

          • GeekyBear 5 days ago |
            Intel chips had better integer performance and PowerPC chips had better floating point performance, which is why Apple always used Photoshop performance tests to compare the two platforms.

            Apple adopted Intel chips only after Intel replaced the Pentium 4 with the much cooler running Core Solo and Core Duo chips, which were more suitable for laptops.

            Apple dropped Intel for ARM for the exact same reason. The Intel chips ran too hot for laptops, and the promised improvements never shipped.

            • majormajor 5 days ago |
              The G5 in desktops was more competitive but laptops were stuck on G4s that were pretty easy to beat by lots of things in the Windows world by the time of the Intel switch. And Photoshop was largely about vectorized instructions, as I recall, not just general purpose floating point.
              • GeekyBear 5 days ago |
                Yes, and when it became clear that laptop sales would one day outpace desktop sales, Apple made the Intel switch, despite it meaning they had to downgrade from 64 bit CPUs to 32 bit CPUs until Core2 launched.

                The Apple ecosystem was most popular in the publishing industry at the time, and most publishing software used floating point math on tower computers with huge cooling systems.

                Since IBM originally designed the POWER architecture for scientific computing, it makes sense that floating point performance would be what they optimized for.

          • hhejrkrn 5 days ago |
            I owned a G4 Power Mac... yes, moving to Intel at the time was magical... maybe not for you, but for me it was...
        • 0x457 5 days ago |
          Yeah, but that Rosetta was usually delivering "I guess it works?" results. It was so slow.
      • jki275 5 days ago |
        I went from an i9 MBP to an M1 Max earlier this year. I can't even describe it. Blows my mind.
      • hcarvalhoalves 5 days ago |
        Last year I bought an M1 Pro used, but the last MBP I had was an early 2015. I just didn't bother upgrading; in fact the later Intel models were markedly worse (keyboard, battery life, quality control). The Apple Silicon era is going to be the PowerPC prime all over again.
    • mortenjorck 6 days ago |
      Indeed, the $1599 M4 Pro Mac mini beats the current $3999 M2 Ultra Mac Studio on GeekBench 6 multi-core: https://www.macrumors.com/2024/10/31/m4-pro-chip-benchmark-r...
      • philistine 6 days ago |
        Back in the early 90s, when Apple was literally building their first product line with the Mac, they came out with their second big honking powerhouse Mac: the Macintosh IIx. It blew everything out of the water. Then they came out with their next budget all-in-one machine. But computing was improving so fast, with prices for components dropping so quickly, that the Macintosh SE/30 ended up as impressive as the Macintosh IIx at a much lower price. That's how the legend of the SE/30 was born, turning it into the best Mac ever for most people.

        With how fast and impressive the improvements are coming with the M-series processors, it often feels like we're back in the early 90s. I thought the M1 Macbook Air would be the epitome of Apple's processor renaissance, but it sure feels like that was only the beginning. When we look historically at these machines in 20 years, we'll think of a specific machine as the best early Apple Silicon Mac. I don't think that machine is even out yet.

        • drcongo 6 days ago |
          I think that machine is this M1 Pro MBP I'm using right now, and will probably still be using in 20 years.
          • nxobject 5 days ago |
            I look forward to seeing you at a conference in 20 years' time where we'll both be running Fedora 100 on our M1 MacBook Pros.
            • gessha 5 days ago |
              I hope the SSDs last long enough for you folks (the only thing, plus money, that holds me back from snagging one of those).
          • grahamj 5 days ago |
            Agreed. It might share the title with the M1 Air, which was incredible for an ultraportable, but the M1 MBP was just incredible, period. Three generations later it's still more machine than most people need. The M2/3/4 sped things up, but the M1 set the bar.
            • superb_dev 5 days ago |
              The airs are incredible. I’ve been doing all my personal dev work on an M2 Air for years, it’s the best laptop I’ve ever owned.

              I’m only compelled to upgrade for more ram, but I only feel the pressure of 8gb in rare cases. (I do wish I could swap the ram)

              • grahamj 5 days ago |
                I was the same with the M1 Air until a couple months ago when I decided I wanted more screen real estate. That plus the 120Hz miniLED and better battery and sound make the 16" a great upgrade as long as the size and weight aren't an issue. I just use it at home so it's fine but the Air really is remarkable for portability.
                • Reason077 5 days ago |
                  I have the M1 Air, too. I just plug in to a nice big Thunderbolt display when I need more screen!

                  I'll likely upgrade to the M4 Air when it comes out. The M4 MacBook Pro is tempting, but I value portability and they're just so chunky and heavy compared to the Air.

            • Affric 5 days ago |
              My only regret is getting base RAM.

              It’s not a server so it’s not a crime to not always be using all of it and it’s not upgradable so it needs to be right the first time. I should have got 32GB to just be sure.

              • Reason077 5 days ago |
                Apple's sky-high RAM prices and strong resale values make this a tough call, though. It might just about be better to buy only the RAM you need and upgrade earlier, considering you can often get 50% or more of the price of a new one back by selling your old one.

                Thankfully, Apple recently made 16GB the base RAM in all Macs (including the M2/M3 MacBook Airs) anyway. 8GB was becoming a bad joke and it could add 40% to the price of some models to upgrade it!

              • masklinn 5 days ago |
                Yep, that's definitely a thing I'm proud of correctly foreseeing. I was upgrading from an old machine with 8GB, but I figured, especially with the memory being non-upgradable, it was better to be safe than sorry, and if I kept the machine a decade it would come out to sandwich money in the end.
        • thfuran 5 days ago |
          In the 90s, you probably wouldn't want to be using a desktop from 4 years ago, but the M1 is already 4 years old and will probably be fine for most people for years yet.
          • zjp 5 days ago |
            No kidding. The M1 MacBook Pro I got from work is the first time I've ever subjectively considered a computer to be just as fast as it was the day I got it.
            • rconti 5 days ago |
              I think by the time my work-provided M1 MacBook Pro arrived, the M2s were already out, but of course I simply didn't care. I actually wonder when it will be worth the hassle of transferring all my stuff over to a new machine. Could easily be another 4 years.
              • readyplayernull 5 days ago |
                Funny that we can buy renewed Intel Macs for less than $200 and do exactly the same.
                • robotresearcher 5 days ago |
                  ...until the battery runs out hours earlier.
                  • lostlogin 5 days ago |
                    Or your lap gets hot. Or the fans drive you mad. Good luck with the available ports. Oh, it’s slow AF too, but if you get the right model you can use that stupid Touch Bar.
                • markdown 5 days ago |
                  Where can you get Intel Macs for $200?
                  • alwillis 5 days ago |
                    I bought an Intel 13-inch MacBook Pro for a friend that I’m working with for $200-$250 from woot.com.
                  • readyplayernull 4 days ago |
                    Amazon. Search for "renewed".
                    • markdown a day ago |
                      Thanks. Are these the same as Apple refurbished (as in completely reskinned with a new genuine battery)?
                • kalleboo 5 days ago |
                  Maybe the desktops, but the laptops were always nigh-unusable for my workloads (nothing special, just iOS dev in Xcode). The fans would spin up to jet takeoff status, it would thermal throttle, and performance would nosedive.

                  The M1 Pro was a revelation.

                  • TylerE 5 days ago |
                    There was a really annoying issue with a lot of the Intel MacBooks where, due to the board design, using one of the two power sockets would cause them to run quite a bit hotter.
                    • kalleboo 5 days ago |
                      Yeah I remember that, I posted a YouTube video complaining about it 6 years ago, before I could find any other references to the issue online. https://www.youtube.com/watch?v=Rox2IfViJLg

                      That would cause it to throttle even when idle! But even on battery or using the right-hand ports, under continuous load (edit-build-test cycles) it would quickly throttle.

                      • TylerE 5 days ago |
                        Oh yeah, I'm very aware. My work machine was a 2015 MBP until about 6 months ago. It was really bad towards the end.
                        • kalleboo 5 days ago |
                          Congratulations on upgrading away from it!
          • abrookewood 5 days ago |
            To be fair, MOST computers are like that nowadays, regardless of brand. I'm using an Intel desktop that is ~8 years old and runs fine with an upgraded GPU.
            • thfuran 5 days ago |
              Sure, Apple isn't the only one making good laptops, though they do make some of the best. My point was just that we definitely aren't back at 90s levels of progress. Frequency has barely been scaling since node shrinks stopped helping power density much, and the node shrinks are fewer and farther between.
          • Terretta 5 days ago |
            Apple's marketing is comparing this season's M4s to M1s and even to Intel chips from two generations ago. The 2x or 4x numbers suggest they are targeting and catering to this longer cycle, where the subliminally suggested updates are remarkably better, rather than suggesting an annual treadmill, even though each release is "our best ever".
            • olliej 5 days ago |
              Yeah, they did this last year and before. It's super annoying. I'd say it's super stupid, but I'm sure from a marketing point of view it isn't.

              I was going to say: why not compare it to something older! 100000x faster than a PC XT!

              • kaba0 5 days ago |
                I mean, most people don't buy a new phone each year, let alone something as expensive as a laptop. They are probably still targeting Intel Mac, or M1 users for the most part.
            • drewbeck 5 days ago |
              I think it's also part of the sales pitch, tho – a lot of folks are sitting on M1s and pretty happy but wondering if they should upgrade.
          • taftster 5 days ago |
            So long as Apple is willing to keep operating system updates available for the platform. This is by far the most frustrating thing. Apple hardware is amazing and can last for years and even decades; supported operating system updates, only a couple of years.

            I'm typing this from my mid-2012 Retina MacBook Pro. I'm on Mojave and well out of support for operating system patches. But the hardware keeps running like a champ.

            • abhinavk 5 days ago |
              Have you tried OpenCore Legacy Patcher? It allows newer macOS versions to be installed on unsupported Macs.
            • thfuran 5 days ago |
              Luckily the M1 has Linux.
            • alwillis 5 days ago |
              > Apple hardware, amazing and can last for years and even decades. Supported operating system updates, only a couple of years.

              That’s not accurate.

              Just yesterday, my 2017 Retina 4k iMac got a security update to macOS Ventura 13.7.1 and Safari even though it’s listed as “vintage.”

              Now that Apple makes their own processors and GPUs, there’s really no reason in the foreseeable future that Apple would need to stop supporting any Mac with an M-series chip.

              The first M1 Macs shipped in November 2020—four years ago but they can run the latest macOS Sequoia with Apple Intelligence.

              Unless Apple makes some major changes to the Mac’s architecture, I don’t expect Apple to stop supporting any M series Mac anytime soon.

          • DeathArrow 5 days ago |
            I bought an M1 MacBook Pro just to use it for net and watching movies when in bed or traveling. I got the Mac because of its 20 hours battery life.

            Since Snapdragon X laptops caught up to Apple on battery life, I might as well buy one of those when I need to change. I don't need the fastest mobile CPU for watching movies and browsing the internet. But I like to have a decent amount of memory to keep a hundred tabs open.

        • leptons 5 days ago |
          >the Macintosh IIx. It blew everything out of the water.

          Naa... the Amiga had the A2500 around the same time, and the Mac IIx wasn't better spec-wise in most ways. And at about $4,500 more expensive (the Amiga 2500 was around $3,300; the Mac IIx was $7,769), it was vastly overpriced, as is typical for Apple products.

          • Reason077 5 days ago |
            Worth remembering that Amiga went out of business just a few years later, while Apple today is the largest company in the world by market capitalisation. Doesn't matter how good the product is: if you're not selling it for a profit, you don't have a sustainable business. Apple products aren't overpriced as long as consumers are still buying them and coming back for more.
            • leptons 5 days ago |
              [flagged]
              • philistine 3 days ago |
                The $150 million investment from Microsoft ended up not being needed. Jobs put the hatchet to enough projects to reverse the trend himself. That investment from Microsoft was valuable because they promised to keep releasing Office for Mac.
                • leptons 3 days ago |
                  Nonetheless, Apple was almost out of business at one point. Microsoft invested in Apple instead of Commodore. If the opposite happened, we may be having a discussion about Commodore now, and not Apple.
                  • sgerenser 2 days ago |
                    That doesn't even make sense. Microsoft hadn't released any software for the Amiga AFAIK, while the Mac market for Word/Excel/Powerpoint was still a decent chunk of revenue for Microsoft at the time (obviously still much less than the Windows/PC market).
          • lispm 5 days ago |
            The NuBus in the IIx was great.
            • leptons 5 days ago |
              It never went far. NuBus was around for a few years and then got ditched when Apple went to PCI.
              • lispm 5 days ago |
                Far enough. I had a Symbolics Lisp Machine on a NuBus card, the MacIvory. Actually I still have it, in a Quadra 950.
        • gcanyon 5 days ago |
          I owned an SE/30. I watched my first computer video on that thing, marveling that it was able to rasterize (not the right word) the color video in real time. I wish I had hung onto that computer.
        • syngrog66 5 days ago |
          > Back in the early 90s, when Apple was literally building their first product line with the Mac,

          cough

          like saying, "Back in the 70s with Paul McCartney's first band, Wings (...)"

          kids? get off my lawn

          • ARandomerDude 5 days ago |
            Getting older comes faster than you think. I was an adult, blinked, and decades had passed seemingly in a moment.
          • philistine 14 hours ago |
            They had the Apple II, sure, but the Mac was the first time they built a line of different computers for different users.
        • lostlogin 5 days ago |
          Playing Chuck Yeager’s air combat or Glider Pro on the SE/30 was great.
        • TacticalCoder 5 days ago |
          > I thought the M1 Macbook Air

          I've got one and it's really not that impressive. I use it as a "desktop" though, not as a laptop (as in: it's on my desk hooked to a monitor, never on my lap).

          I'm probably gonna replace it with a Mini with that M4 chip anyway but...

          My AMD 7700X running Linux is simply a much better machine/OS than that MacBook M1 Air. I don't know if it's the RAM on the 7700X or the WD-SN850X SSD or Linux but everything is simply quicker, snappier, faster on the 7700X than on the M1.

          I hope the M4 Mini doesn't disappoint me as much as the M1 Air.

          • mkesper 5 days ago |
            There are tons of affordable and upgradable NUC-form-factor machines for running Linux, so why a Mac Mini? (If you're running LLMs locally, the good support and fast integrated RAM might be a reason.)
        • smm11 5 days ago |
          I'm reminded of the Intel dominance. Whatever happened?
      • cosmotic 6 days ago |
        Yes, but I suspect the 64GB of memory in the Studio compared to 24GB in the Mini is going to make the Studio a lot faster in many real-world scenarios.
        • losvedir 5 days ago |
          In that case, you can get the mini with 64GB of memory for $1999.
          • cosmotic 5 days ago |
            It would be $2,199 for the highest-end CPU and 64GB of memory, but I think your point remains: the Studio is not a great buy until it receives the M4 upgrades.
            • alsetmusic 4 days ago |
              But it was a great buy for the customers who needed it when it was released. I presided over IT at an architecture firm that bought a bunch of Studios when they were new. Just because it's no longer a good buy two and a half years later, when compared to the thing that ships next week, doesn't mean it wasn't a great machine.
        • nextos 5 days ago |
          And the bandwidth. An M4 Ultra would be a nice upgrade for large-LLM inference on a budget.
      • throwaway106382 5 days ago |
        This is crazy. I'm more than happy with the current performance of my M1 Max Studio, but an M4 Max or Ultra might actually be too good to pass up.
        • TylerE 5 days ago |
          I’m already planning on swapping mine for an M4 Ultra.

          I love my M1 Studio. It's the Mac I always wanted: a desktop Mac with no integrated peripherals and a ton of ports, although I still use a high-end hub to plug in… a lot more. Two big external SSDs, my input peripherals (I'm a wired mouse and keyboard kind of guy), then a bunch of audio and USB MIDI devices.

          It’s even a surprisingly capable gaming machine for what it is. Crossover is pretty darn good these days, and there are ARM native Factorio and World of Warcraft ports that run super well.

          • throwaway106382 5 days ago |
            I haven’t dug too much into gaming since I have a Linux PC that supports all my steam games but what’s the experience of running Crossover on Apple Silicon like? Can you run x86 Windows games using Rosetta (or is it some other method)?
            • TylerE 5 days ago |
              Yea, it's running via Rosetta. Just install Windows Steam and launch the game. That is with the paid version of Crossover, though.

              Don't expect to play Dark Souls on it, but for indies and the like it's fine.

              • throwaway106382 5 days ago |
                Cool. Just saw it has a 14 day trial, will give it a shot on my M1 Max Studio and see how it goes.

                (Dark Souls is my favourite game/series…how did you know)

                • TylerE 4 days ago |
                  I mean, you actually probably COULD play Dark Souls on it, but you'd be turning settings down a bit I bet (I don't actually know how optimized that game is). I'm pushing a 1440p display which certainly doesn't make things easier on the Mac.

                  The biggest annoyance I've hit actually is that game controller support is pretty bad. Don't expect generic USB HID game controllers to work; support for that isn't baked into macOS the way it is in Windows (via DirectInput, etc.).

                  The happy path is pretty much specifically a Bluetooth Xbox controller.

      • dimgl 5 days ago |
        I've been thinking about getting a Mac mini as a small server box due to how powerful it is.
        • lostlogin 5 days ago |
          What would you be running on it?

          I’d like a few VMs for a media server and the associated apps. Pihole too ideally, but I keep that separate as that VM going bad is never good.

        • thejazzman 5 days ago |
          It's my Plex server and NAS (M1). I've abandoned all the complicated setups and just have an 8-bay Thunderbolt enclosure full of drives (JBOD).

          And a Postgres server. And a few websites. And something running in OrbStack.

          It's the 8GB model and I have around 2GB free most of the time.

          • bscphil 5 days ago |
            What enclosure do you have? Do you like it?
            • thejazzman 5 days ago |
              OWC ThunderBay 8

              I like it in every way except price. It just works, comes back online after a power outage, etc. I don't recall any unscheduled disconnects.

              --

              Additional thoughts: I think there are complaints about the fan being loud so I swapped it out when I first got it. I also have it in my basement so I don't hear anything anyway -- HDDs are loud, especially the gold ones

            • alsetmusic 4 days ago |
              Not who you asked, but I have the OWC Express 4M2 [0]. Four M.2 drives RAIDed in a Thunderbolt chassis. Fast as can be. I absolutely love it.

              [0] https://eshop.macsales.com/shop/express-4m2

      • pier25 5 days ago |
        And both those machines are way faster than the last Intel Mac Pro which started at around $7000 iirc
      • wslh 5 days ago |
        Personally, the Mac Mini will be my reentry into desktop computers after more than 1.5 decades [1]. Your comment got me thinking: could this be another perfectly calculated move by Apple? After all, I’ve only bought mobile devices from them until now. I’m eager to see Apple’s financial results for Q4 2024 and Q1 2025 to understand how this strategy plays out.

        [1] https://news.ycombinator.com/item?id=41996008

    • thomassmith65 5 days ago |
      Does anyone know if this Mac Mini can survive longer than a year? Apple's approach to hardware design doesn't prioritize thermal issues*.

      In fact, the form factor is why I'm leaning toward taking a pass - I don't want a Mac Mini I would have to replace every 12 months.

      * or rather, Apple doesn't target low enough temperatures to keep machines healthy beyond warranty

      • ezfe 5 days ago |
        I'm not sure why you think it would be worse than a MacBook Air which literally has no fan
        • thomassmith65 5 days ago |
          Are the new MacBook Airs the ones that have throttling issues due to heat?
          • alpaca128 5 days ago |
            Yes, but you only really encounter that when pushing the CPU to 100% for more than a few minutes. The cooling is objectively terrible, but still easily enough for most users, that's the crazy thing.
            • nox101 5 days ago |
              Maybe? As local LLM/SD etc. get more common, it might become common to push it. I've been getting my fans to come on and the machine getting burning hot quite often lately because of new tech. I get that I'm a geek, but with Apple, Google, and everyone else trying to run local ML, it's only a matter of time.
              • nox101 5 days ago |
                After posting this I thought of a few possible use cases. They might never come to pass, but... some tech similar to DLSS might come along that lets streaming services like YouTube and Netflix send 1/10th the data and get twice as good an image, but requires extreme processing on the client. It would certainly be in their interest (less storage, less bandwidth, decompression/upscaling costs pushed to the client). Whether that will ever happen I have no idea. I was just trying to think of an example of something that might need lots of compute power at home for the masses.

                Another could be realtime video modification. People like to stream and facetime. They might like it even more if they could change their appearance more than they already can using realtime ML based image processing. We already have some of that in the various video conferencing / facetime apps but it's possible it could jump up in usage and needed compute power with the right application.

              • alpaca128 5 days ago |
                Apple's chips already have AI accelerators for things like content-based image search. They would never retroactively worsen battery life and performance just for a few more AI features when they could instead use it as selling point for the next hardware generation.

                And if you regularly use local generative AI models the Pro model is the more reasonable choice. At that point you can forget battery life either way.

                • nox101 5 days ago |
                  See above. Your comment sounds like "640K is all you need". Usage needs change as new usages appear.
                  • alpaca128 5 days ago |
                    No, I'm saying if you have 640K you can't just download more RAM, and Apple wouldn't ever try to, because that's a free new feature they can market in the next model.
          • thomassmith65 5 days ago |
            Hopefully not? I honestly don't know. It's been around three years (whichever year it was they replaced Target Disk Mode) since I followed Apple news very closely.
          • ezfe 3 days ago |
            Every Apple Silicon MacBook Air throttles after 5-10 minutes of sustained load because they have passive cooling - but the amount of load needed and the throttled speed is not noticeable to casual users.

            You only notice throttling on the MacBook Air when doing things like video renders that use max power for an extended period of time.

      • tiltowait 5 days ago |
        Do you have any source on this?
        • thomassmith65 5 days ago |
          It might be different post-Intel? I'm too lazy to dig up sources for Apple's past lost class action lawsuits, etc.

          That Rossmann guy, the internet-famous repairman, built his YouTube channel on videos about Apple's inadequate thermal management. They're probably still archived on his channel.

          Hell, I haven't owned a Mac post-2000 that didn't regularly hit temperatures above 90 Celsius.

          • johnklos 5 days ago |
            Why would you, or anyone, ever compare a line of Intel machines with a line of machines that have a vastly different architecture and power usage? It'd be like comparing Lamborghini's tractors and cars and asking if the tractors will scrape on steep driveways because you know the cars do.
            • thomassmith65 5 days ago |
              On the other hand, it is comparing Apples to Apples.

              The Gods didn't deliver specs to Apple for Intel machines, locking the company into particular placements/grades/designs/brands/sizes of chassis, fans, logic boards, paste, etc. Apple, in the Intel years, simply prioritized small form factors at the expense of longevity.

              And Apple's priorities are likely still the same.

              My concern is that, given cooler-running chips, Apple will decrease form factor until even the cooler-running chips overheat. The question, in my mind, is only whether the team at Apple who design chips can improve them to a point where the chips run so coolly that the rest of Apple can't screw it up (ie: with inadequate thermal design).

              If that has happened, then... fantastic, that's good for consumers.

              • Affric 5 days ago |
                Jony Ive left and Apple decided thinness wasn't the only value.

                Apple Silicon is 100% that shift for computers. Very rarely do my fans whizz up. It's noticeable when someone you're working with is on an x64 machine, because you will hear their computer's fans running.

                The work Apple has done to create a computer with good thermals is outrageous. Minimising the distances charges have to be induced over.

                I run Linux on my box. It’s great for what it does but these laptops are just the slickest computers I have ever used.

                Never gets hot. Fans only come on during heavy compilation tasks or graphic intensive workloads.

                • thomassmith65 5 days ago |
                  That is encouraging to read, and hopefully it truly is the case that Apple has weaned itself off its obsession with thinness.

                  Some of the choices Apple made after SJ's death left such an unpleasant taste in my mouth that I now have knee-jerk reactions to certain Apple announcements. One of those is that I experience nausea when Apple shrinks the form factor of a product. Hopefully that has clouded my judgement here, and in fact these Mac Minis have sufficient airflow to survive several years.

          • brigade 5 days ago |
            Do you disagree with Intel's stated Tjunction, or disagree that Intel is capable of controlling clocks to remain within its stated thermal limits?

            Like even with Intel chips that actually died early en masse (13th and 14th gen), the issue wasn't temperature.

            • thomassmith65 5 days ago |
              2x the correct amount of thermal paste... not good.

              Insufficient airflow from blowers... not good.

              110 Celsius heat... not good for lead-free solder... not good for the computer.

              This whole thread is starting to feel surreal to me. Pretty soon everyone will have me believing I dreamt up Apple's reputation for bad thermal management.

              • brigade 5 days ago |
                Well, when you don’t appear to know or care about the actual issues stemming from poor thermals (Intel relying too much on turbo clocks, toasty crotches, low battery life, noisy fans) and instead complain about made-up issues, yeah.
                • thomassmith65 5 days ago |
                  My frustration was with the totality of comments in the thread, not yours exclusively. I'd have no problem with any one reply in this thread, on its own. Apologies if I came across as rude.

                  There's nothing in a comment thread so cringeworthy and boring as a person trumpeting their own expertise, so I'll refrain, and leave off here.

          • hedgehog 5 days ago |
            Rossmann is like Scotty Kilmer, the automotive guy. Lots of clickbaity breathless videos, highly variable technical accuracy.
      • reaperducer 5 days ago |
        > Does anyone know if this Mac Mini can survive longer than a year? Apple's approach to hardware design doesn't prioritize thermal issues.

        I've had an M1 Mac Mini inside a hot dresser drawer with a TV on top since 2020.

        It doesn't do much other than act as a media server. But it's jammed pretty tight in there with an eero wifi router, an OTA ATSC DVR, a box that records HDMI, a 4K AppleTV, a couple of external drives, and a full power strip. That's why it's hot.

        So far, no problems. Except for once when I moved, it's been completely hands-off. Software updates are done over VNC.

      • nyarlathotep_ 5 days ago |
        I've had a Mac mini M1 on my desk with nearly 100% uptime since launch.

        It only gets powered off when there's a power outage or when I do an update.

    • pxmpxm 5 days ago |
      Here's the geekbench link https://browser.geekbench.com/v6/cpu/8593555

      How/where are they getting 128GB of RAM? I don't see that as an option on any of the pre-order pages.

      Still pretty impressive. I get 1217/10097 with dual Xeon Gold 6136s that double as a space heater in the winter.

      • abhinavk 5 days ago |
        Switch to the M4 Max with the 16-core CPU; it will unlock the 64GB and 128GB memory options.
        • pxmpxm 5 days ago |
          Mac minis aren't available with that setup; seems it's only the MacBook Pro?
          • abhinavk 4 days ago |
            There are no Mac mini SKUs with M4 Max CPUs. Only base M4 and M4 Pro.
    • djmips 5 days ago |
      Is it crazy? The chip itself is small. I'm not up on the subject but is it unusual? Are we talking power draw and cooling adding greatly to the size? I guess the M4 Pro must have great specs when it comes to running cool.
  • xbenjii 6 days ago |
    I'm confused. They're claiming "Apple’s M4 Max is the first production CPU to pass 4000 Single-Core score in Geekbench 6." yet I can see hundreds of other test results for single core performance above 4000 in the last 2 years?
    • api 6 days ago |
      Could those be overclockers? I often see strange results on there that look like either overclockers or prototypes. Maybe they mean this is the fastest general-purpose single core you can buy off the shelf, with no tinkering.
    • ceejayoz 6 days ago |
      Are those production results?

      https://browser.geekbench.com/v6/cpu/1962935 says it was running at 13.54 GHz. https://browser.geekbench.com/v6/cpu/4913899 looks... questionable.

      • xbenjii 6 days ago |
        Yeah that's fair lol
      • zeroonetwothree 6 days ago |
        • ceejayoz 6 days ago |
        • wtallis 5 days ago |
          7614 MT/s on the RAM is a pretty large overclock for desktop DDR5.
          • ThatMedicIsASpy 5 days ago |
            There are 8000MT/s CUDIMMs for the new Intel Chips now...
            • wtallis 5 days ago |
              They've been announced, within the past two weeks, and as far as I can tell aren't actually available for purchase from retailers yet: the only thing I've seen actually purchasable is Crucial's 6400MT/s CUDIMMs, and Newegg has an out-of-stock listing for a G.Skill kit rated for 9600MT/s.

              The linked Geekbench result from August running at 7614 MT/s clearly wasn't using CUDIMMs; it was a highly-overclocked system running the memory almost 20% faster than the typical overclocked memory speeds available from reasonably-priced modules.

              • m00x 5 days ago |
                Geekbench is run pre-release by the manufacturers.
                • MBCook 5 days ago |
                  But by definition that means it’s not a production machine yet.

                  So it doesn’t invalidate Apple‘s chip being the fastest in single core for a production machine.

                  • m00x 4 days ago |
                    The post doesn't say anything about production machine. It talks about consumer computing.
    • t-sauer 6 days ago |
      As far as I can tell those are all scores from overclocked CPUs.
      • zeroonetwothree 6 days ago |
        • t-sauer 6 days ago |
          That result is completely different from pretty much every other 13700k result and it is definitely not reflective of how a 13700k performs out of the box.
        • zamadatix 6 days ago |
          Geekbench doesn't really give accurate information (or enough of it) in the summary report to draw that kind of conclusion for an individual result. The one bit of information it does reliably give, memory frequency, says the CPU's memory controller was OC'd to 7600 MT/s from the stock 5600 MT/s, so it feels safe to say the result with 42% more performance than the entry in the processor chart also had some other tweaks going on (if not actual frequency OCs/static frequency locks, then exotic cooling or the like). The main processor chart at https://browser.geekbench.com/processor-benchmarks will give you a solid idea of where stock CPUs rank; if a result has double-digit differences from that number, assume it's not a stock result.

          E.g. this is one of the top single-core benchmark results for any Intel CPU, https://browser.geekbench.com/v6/cpu/5568973, and it claims the maximum frequency was stock as well (actually 300 MHz less than the thermal velocity boost limit, if you count that).

    • Refusing23 6 days ago |
      AMD's upcoming flagship desktop CPU (the 9800X3D) reaches about 3300 points on single-core (the previous X3D hit 2700ish).
      • grecy 5 days ago |
        Are you saying a product that has not been released yet will be faster than a product that has?

        And that a desktop part is going to outperform a laptop part?

        • IshKebab 5 days ago |
          I think he was backing up Apple's claim.
        • Dylan16807 5 days ago |
          No, neither of those.
  • grahamj 6 days ago |
    So Ultra used to be the max but now Max is max… until Ultra goes past the max and Max is no longer the max.

    Until the next Max that goes beyond Ultra!

    • fnikacevic 6 days ago |
      Can't wait until there's an M4 Ultramax too!
      • grahamj 6 days ago |
        Will that be the maximum Ultra or the Ultimate Max?
      • moffkalast 6 days ago |
        Until there's an Ultramax+ Pro 2
        • Keyframe 5 days ago |
          Personally, I'm waiting for Panamax edition.
      • allenrb 5 days ago |
        And these CPUs will be available in… max!
    • lancesells 6 days ago |
      Apple has gotten into Windows and PC territory with their naming for chips and models. Kind of funny to see the evolution of a compact product line and naming convention slowly turn into spreadsheet worthy comparison charts.

      That all said, I only have an M1 and it's still impressive to me.

      • piva00 6 days ago |
        I think they're still keeping it somewhat together. Agreed, it got ever more confusing with the introduction of more performance tiers, but after 3 generations it's been easy to keep track of: Mx (base) -> Pro -> Max -> Ultra.

        I think it's still quite far from PC-territory naming conventions.

        Now I'm curious what naming scheme would be clearer for Apple's performance tiers.

        • grahamj 5 days ago |
          Yeah, as I was jokingly implying, the names themselves aren't what I would have gone with, but overall sticking to generation + t-shirt size + 2 bins is about as simple as it gets.
        • ClassyJacket 5 days ago |
          Max implies it's the top. There shouldn't be anything above max
          • jibbers 5 days ago |
            “Ultra” means going beyond others.
          • tedunangst 5 days ago |
            What do you believe the word ultra means?
        • MBCook 5 days ago |
          I agree it's kind of weird. I do wonder if the Ultra was part of their original M1 plan, or if it came along somewhere in the middle of development and they just had to come up with a name to put above the Max.

          That said it’s far better than any PC scheme. It used to be easy enough when everything was megahertz. But I left the Wintel world around 2006 or so and stopped paying attention.

          I’ve been watching performance reviews of some video game things recently and to my ears it’s just total gobbledygook now. The 13900KS, 14900K, 7900X, 7950X3D, all sorts of random letters and numbers. I know there’s a method to the madness but if you don’t know it it’s a mess. At least AMD puts a generation in their names. Ryzen 9 is newer than Ryzen 7.

          Intel has been using i3, i5, i7, and i9 forever. But the problem is you can't tell what generation they are just from that, which makes the names meaningless without knowing a bunch more.

          At least as far as I know they didn't renumber everything. I remember when graphics cards were easy because a higher number meant better, until the numbers got too big, so they released the best new ones with a lower number for a while.

          At least I find Apple’s name tractable both between generations and within a generation.

          • abhinavk 5 days ago |
            Ryzen 9 refers to the flagship CPUs, not the year. It's the same scheme as i3...i9. The first digit of the 4-digit model number is the generation, though they've been incrementing it by 2.

            This year's Zen 5 lineup consists of the R9 9950X (16 cores), R9 9900X (12c), R7 9800X3D (8c with 3D V-Cache), R7 9700X (8c) and R5 9600X (6c).

            • MBCook 5 days ago |
              Oh. I guess I accidentally reinforced my own point didn’t I.

              Thanks.

      • ddingus 5 days ago |
        Same. I have an M1 Air and it is an amazing machine!
    • tiltowait 5 days ago |
      I think I finally made sense of it in my mind:

      The Max is their best CPU. The Ultra is two of their best CPUs glued together.

      The Ultra isn’t a better CPU, it’s just more.

  • ChrisArchitect 6 days ago |
    • hyperjeff 6 days ago |
      Different chips though, and different links. (Also, it’d be nice if we stopped linking directly to social media posts and instead used an intermediary that didn’t require access or accounts just to follow discussions here.)
    • ChrisArchitect 5 days ago |
      whoops, Related:

      Mac Mini with M4 Pro is the fastest Mac ever benchmarked

      https://news.ycombinator.com/item?id=42014791

  • LorenDB 6 days ago |
    How many people does this actually affect? Gamers are better off with AMD X3D chips, and most productivity workloads need good multicore performance. Obviously the M4 is great silicon and I don't want to downplay that, but I'm not sure best single-core performance is an overly useful metric for the people who need performance.
    • smith7018 6 days ago |
      It affects the millions of people that buy the machine by way of longevity.
      • Salgat 6 days ago |
        Usually when I see advances, it's less about future-proofing and more about obsolescence of old hardware. A more exaggerated case of this was in the '90s: people would upgrade to a 200 MHz Pentium thinking they were future-proofing, but in a couple of years you had 500 MHz Pentium IIs.
    • iJohnDoe 6 days ago |
      Based on my limited knowledge, most applications aren't great at using all cores, so single-core performance is really important most of the time.
      • MBCook 5 days ago |
        Yep. And even if they can use multiple threads, faster single-threaded performance means each of those threads gets done faster.

        There's a reason consumer CPUs don't ship with 1024 slower cores instead.

    • HumblyTossed 6 days ago |
      People who browse the web and want the fastest JavaScript performance they can get.
      • nicce 6 days ago |
        Or users of Slack, Spotify, Teams... you name it. But I don't want that to become an excuse for Electron-like frameworks to be used even more just because we have super-fast single cores available.
        • MBCook 5 days ago |
          Even if we ignore them, most tasks people do on a computer end up being heavily influenced by single threaded performance.

          Amdahl's law is still in control. For a great many users, single-threaded performance is extremely important.
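          To put a number on that, here's a minimal Amdahl's-law sketch (the 90% parallel fraction is just an assumed example):

            def amdahl_speedup(parallel_fraction, cores):
                # Overall speedup is capped by the serial fraction of the work.
                serial = 1.0 - parallel_fraction
                return 1.0 / (serial + parallel_fraction / cores)

            print(amdahl_speedup(0.90, 1024))  # ~9.9x: 1024 slow cores barely help
            print(amdahl_speedup(0.90, 8))     # ~4.7x
            # A 2x faster single core, by contrast, speeds up the serial part too.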

      • nottorp 6 days ago |
        But for a javascript (browser or electron) workload, the new 16 GB of starting RAM still isn't enough :)
    • nottorp 6 days ago |
      > Gamers are better off with AMD X3D chips

      Yeah but then you'd have to use Windows. I'd rather just play whatever games can be emulated and take the performance penalty.

      It helps that most AAAs put me to sleep...

      • layer8 6 days ago |
        The problem with the M chips is that you have to use macOS (or tinker with Asahi two generations behind). They are great hardware, but just not an option at all for me for that reason.
      • LorenDB 6 days ago |
        There's nothing stopping you from using Linux.
      • hu3 6 days ago |
        > Yeah but then you'd have to use Windows.

        Why? Linux gaming has been great since Wine.

        Even better now with Valve investment.

        Surely leagues better than gaming with macOS.

        • nottorp 6 days ago |
          Sir, you are not a real gamer(tm) either. Use a puny alternative OS and lose 3 fps, Valve support or not? Unacceptable!

          As for Linux, I abandoned it as my main GUI OS in favor of Macs about 10 years ago. I have Linux and Windows boxes, but the only ones with displays/keyboards are the Macs, and it will stay that way.

          • hu3 6 days ago |
            I have all 3 OSs, each on their own hardware :)

            4 if you count steamdeck.

            I do the real gaming, not some subpar emulated crap or an anemic macOS Steam library.

          • yxhuvud 5 days ago |
            It is definitely not a given that you lose FPS on Linux. It's not uncommon for games to get better FPS on Linux. It all ends up depending on the exact games you want to play.
          • realusername 5 days ago |
            That explains your comment, then; a lot has changed in 10 years, and gaming on Linux is pretty good now. The last games you can't play are basically the ones with strong anti-cheat. You can't compare that to the Mac situation, where you can't play anything.
            • nottorp 5 days ago |
              Good thing that at least the poster I replied to didn’t take it seriously :)
      • lomase 5 days ago |
        Mac OS is amazing, but using a Mac for games is not a good idea. Most AAA, AA, and indie games don't run on Macs.
        • ashirviskas 5 days ago |
          How is it amazing? In my experience it's full of bugs and bad design choices if you ever dare to stray from the path Apple expects the masses to take. If you try to use workspaces/desktops to their full extent, you know what I mean.
        • maybeben 5 days ago |
          Mac OS was awful. OS X was amazing. macOS feels like increasingly typical design-by-committee rudderless crapware from a company that wishes it didn't have to exist alongside iOS.
    • jwr 5 days ago |
      Single core performance is what I need as a developer for quick compilation or updates of Javascript in the browser, when working on a Clojure/ClojureScript app. This affects me a lot.
  • asmvolatile 6 days ago |
    All I want is a top-of-the-line MBP, with all its performance and insane battery life, but running a Linux distro of my choice :(
    • iJohnDoe 6 days ago |
      Agreed, but probably getting a Lenovo Legion will be your best bet in the near term.
      • _hyn3 6 days ago |
        I'm driving a 2022 XPS. Lots of people will (and should) disagree, but I've completely shifted over from ThinkPads to the Dell XPS (or Precision) for my laptops.
        • totalhack 5 days ago |
          Running a 2024 XPS 13 with Ubuntu for work and it's been solid. Had a Lenovo before this, which was great bang for the buck but had occasional issues with heating up during sleep. Would consider trying a Framework next.
        • volkandkaya a day ago |
          Good choice. I'm shocked that Dell hasn't decided to create their own Linux distribution or invest heavily in Ubuntu for software/hardware.
    • whalesalad 6 days ago |
      asahi linux
      • p_j_w 6 days ago |
        Doesn't run on M3 or M4 yet.
      • grahamj 5 days ago |
        I'm guessing "of my choice" was key there. Though I suppose you could use Asahi just as a virtualizer.
    • fooker 5 days ago |
      These things are so fast that you can run a virtual Linux without even noticing performance issues.
      • ttarr 3 days ago |
        I literally tried that a few days ago; what you're saying is not true.

        Compiling Linux:

        AMD 6800HS, ~4mins.

        Apple M1 Pro, Linux VM, ~10 mins.

        • fooker 3 days ago |
          What's your virtualization setup?
  • jeffbee 6 days ago |
    Anybody got a Speedometer 3.0 result from the M4 Max? It seems more relevant to "consumer computing".
    • johnklos 5 days ago |
      It has to be at least 50 times the speed of a fast m68030 ;)
      • jeffbee 5 days ago |
        There's a 20x spread in Speedometer results on OpenBenchmarking, just including modern Intel Core CPUs, so yeah I would not be surprised if an M4 outran a 68030 by anywhere from 50x to 1000x
  • bhouston 6 days ago |
    This confuses me, because I thought all of the Mx-series chips in the same generation ran at the same speed and had the same single-core capabilities?

    The main thing that caused differential single-core CPU performance was throttling under load in the devices without active cooling, such as the MacBook Air and iPad Pro.

    Based on this reasoning, the M4, M4 Pro and M4 Max in actively cooled devices (the MacBook Pro and Mac Mini) should have the same single-core performance ratings, no?

    • jsheard 6 days ago |
      It might be down to the memory latency: the base M4 uses LPDDR5X-7500 while the bigger models use LPDDR5X-8533. I think that split is new this generation; past gens used the same memory across the whole stack.
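      Back-of-the-envelope, that split also shows up in peak bandwidth; a rough sketch, assuming the bus widths behind Apple's published 120/546 GB/s figures (128-bit base, 512-bit Max):

        def peak_gbs(mts, bus_bits):
            # Peak bandwidth = transfer rate (MT/s) x bus width in bytes.
            return mts * (bus_bits / 8) / 1000

        print(peak_gbs(7500, 128))  # base M4: ~120 GB/s
        print(peak_gbs(8533, 512))  # M4 Max:  ~546 GB/s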
      • bhouston 6 days ago |
        Ah, interesting. I didn't catch that change.
    • wmf 5 days ago |
      The Pro and Max have more cache and more memory bandwidth. Apple also appears to be crippling the frequency by a percent or two so that the Max can be the top.
  • mattlondon 6 days ago |
    For how long? There are a lot of superlatives ("simply incredible" etc.) - when some new AMD or Intel CPU beats this score, will that be "simply incredible" too?

    New chips are slightly faster than previous ones; that doesn't strike me as incredible. Were it a 2x or 3x or 4x improvement, sure. But it ain't - it's incremental. I note how even Apple's marketing compares it to chips from 3 or 4 generations ago (e.g. citing gains over i7 performance from years back, not over last year's M3, because then it's "only" 12% - still good, but not "simply incredible" in my eyes).

    • ssijak 6 days ago |
      Why is it so hard for people to understand why Apple did that?

      They want the people who are still clinging to Intel Macs to finally convert. And as for the M1 comparisons: people don't change laptops every year, and that's the cohort of M-series users most likely to upgrade. It's smart to do what Apple did.

      • mattlondon 6 days ago |
        I get that argument, but it comes across as hugely disingenuous to me, especially when couched in so much glitz and glamour and showmanship. Their aim is to present these things as huge quantum leaps in performance, and it's only if you look into the details that it becomes clear they're not - they're fudging the figures to look better than they are.

        "New Car 2025 has a simply incredible top speed 30x greater than previous forms of transport!* (* - previous form of transport slow walk at 4mph)"

        It's marketing bullshit, really, let's be honest. I don't accept that their highly polished marketing spiel and song and dance is aimed 100% at people who already have a 3- or 4-generation-old Mac. They're not spending all this time and money and effort just to get those people to upgrade. If you believe that, you're in the distortion field.

        • philistine 6 days ago |
          Yet you do not propose an alternative theory that makes sense.

          Our point: Apple is laser-focused on comparing with laptops that are 4-5 years old. That's usually when Mac users start thinking about upgrading, so Apple builds its marketing for them. It causes issues when you try to compare directly with the last generation.

          Your point: Apple shouldn't be glamorous and a good showman when marketing their products, because the only true marketing is comparing directly with your very last chip. Any other type of marketing is bullshit.

          • mattlondon 6 days ago |
            The alternative theory is they are trumping up the numbers in a disingenuous way to make it sound better than it is.
            • spacedcowboy 6 days ago |
              But they're not "trumping" anything up (which makes it sound as if they're making it up). They're just

              - looking at who is likely to upgrade, and

              - targeting advertising at those people.

              Seems eminently sensible to me.

        • spacedcowboy 6 days ago |
          *shrug* I just upgraded from an M1 Ultra Studio to an M4 Max MBP. I'm not going to splash that much cash on an upgrade every year, and I don't think that's uncommon.

          Just like the phone comparisons are against models from more than one year ago, the computer comparisons (for even more expensive machines) make more sense against models from more than one year ago. I don't see why you wouldn't target the exact people you're trying to get to upgrade...

        • r00fus 5 days ago |
          No one in the industry uses Apple's marketing in any real sense. The marketing is not for you - its sole purpose is to sell more Macs to their target market.

          That you are distracted by it is not Apple's problem - and most other industry players don't GAF about Apple's self-comparisons either.

    • dogleash 5 days ago |
      Incremental progress gonna increment.

      We're on a perpetual upgrade treadmill. Even if the latest increment means an uncharacteristically good performance or longevity improvements... I can't bring myself to care.

    • dmix 5 days ago |
      > I note how even in the Apple marketing they compare it to generations 3 or 4 chips ago

      Apple is just targeting the biggest buyer group (people upgrading across 2+ generations) in their marketing material?

      This isn't like iPhones, where people buy a new one every 1-2 years (because they break, or you lose them, etc.); laptops have a longer shelf life - you usually run one into the ground over 2+ years and then begrudgingly upgrade.

      • christophilus 5 days ago |
        The + is doing some heavy lifting there. I’m on a 2019 XPS running Fedora with niri. It doesn’t feel like it’s kicking the bucket any time soon.
        • MBCook 5 days ago |
          And my 2019 Intel MBP is still working too. Use it every day.

          The idea of a 6x (or whatever) performance jump is certainly tempting. Exactly as they intend it to be. If I were in charge of replacing it, I would be far more likely to buy than if I had an M3.

          They're trying to entice likely buyers.

    • MBCook 5 days ago |
      There are a LOT of corporate Macs out there that are still on Intel.

      The replacement cycle may just be that long. Or maybe they chose to stick with Intel - because that's what they were used to, or because they had specific software needs - so they were still buying them after Apple Silicon machines had been released.

      Yeah it’s not a big deal for the enthusiast crowd. But for some of their customers it’s absolutely a consideration.

  • crazymoka 6 days ago |
    unfortunately I could only afford the M4 Pro model MBP lol
  • daft_pink 6 days ago |
    Can’t wait for the Mac Studio/Pro to be released.
  • silvestrov 6 days ago |
    So what is the role of the Mac Studio now?

    It only has:

    - faster memory and up to 192 GB.

    - 1 extra Thunderbolt port.

    That is not much for such a large price difference:

    Mac Mini (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $2500

    Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000

    • 015a 6 days ago |
      The GPU difference might be material.

      But it is obviously a bad time to invest in a Mac Studio.

    • fckgw 6 days ago |
      The Mac Studio hasn't been updated yet. The equation changes once it's also on the M4 Max and Ultra.
      • eastbound 5 days ago |
        Does it? What can it do better than M4 / 128GB…
        • burnerthrow008 5 days ago |
          Well, judging by the M1 and M2, the M4 Ultra will support 256GB of memory, so there's that. And it will have 2x the GPU and 2x the CPU cores...
          • geerlingguy 5 days ago |
            And the port assortment is overall nicer in terms of not requiring an external TB4 hub for production environments (I literally have something plugged into every port on my M1 Max Mac Studio, even on the front!)
    • burnerthrow008 5 days ago |
      > Mac Mini (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $2500

      > Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000

      In those configurations, the Studio would have roughly 2x the GPU power of the Mini, with equivalent CPU power. It also has twice as many Thunderbolt ports (albeit TB4 instead of TB5), and can support more monitors.

      • tom_ 5 days ago |
        It's probably also got better cooling. And you get some ordinary USB sockets as well!
    • nextos 5 days ago |
      AFAIK, memory bandwidth: the M2 Ultra does 800GB/s, whereas the M4 Max is just 546GB/s. Local LLM inference, for example, is heavily bottlenecked by bandwidth, and 50% extra is significant.

      I wish the Studio would get an upgrade, with a new M4 Ultra potentially going over 1TB/s. It also offers better cooling for long computations.
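      As a rough illustration of why bandwidth is the bottleneck, here's a naive batch-1 decoding roofline (the model size is an assumed example, not a measurement):

        def max_tokens_per_sec(bandwidth_gbs, model_gb):
            # Batch-1 decoding streams roughly all weights once per token,
            # so tokens/sec is approximately bandwidth / model size.
            return bandwidth_gbs / model_gb

        model_gb = 40  # e.g. a ~70B-parameter model quantized to ~4.5 bits/weight
        print(max_tokens_per_sec(546, model_gb))  # M4 Max:   ~13.7 tok/s
        print(max_tokens_per_sec(800, model_gb))  # M2 Ultra: ~20 tok/s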

  • bee_rider 6 days ago |
    This is almost certainly a dumb question, but has anybody tried using these for scientific computing/HPC type stuff?

    I mean no Infiniband of course, but how bad would a cluster of these guys using Thunderbolt 5 for networking be? 80Gbps is not terrible…

    • wmf 5 days ago |
      People are using Thunderbolt clustering for AI inference. Historically, Thunderbolt networking has been much slower than you'd expect, so people didn't bother trying HPC.
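      For a sense of scale, bandwidth alone puts Thunderbolt 5 within sight of datacenter links; a toy per-hop estimate that deliberately ignores latency and protocol overhead (which is exactly where Thunderbolt has historically fallen down):

        links_gbps = {"Thunderbolt 5": 80, "100GbE": 100, "NDR InfiniBand": 400}
        msg_gb = 1.0  # e.g. shuttling 1 GB of activations per hop
        for name, gbps in links_gbps.items():
            ms = msg_gb * 8 / gbps * 1000
            print(f"{name}: {ms:.0f} ms per GB")  # 100 / 80 / 20 ms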
  • whalesalad 6 days ago |
    • SushiHippie 5 days ago |
      Same, I have a Ryzen 9 7950X and it has 130-140% better performance (according to Geekbench)

      https://browser.geekbench.com/v6/cpu/compare/8593555?baselin...

      Though a MacBook Pro 16" with the M4 Max (that's what achieved this Geekbench score), with the same amount of memory (64GB) and storage (4TB) as my PC, would cost €6,079. That's roughly twice what my whole PC build cost, and I'm able to expand storage and upgrade my CPU and GPU in the future (for way less than buying a new Mac in a few years).

      • abhinavk 5 days ago |
        If they release a Studio without increasing prices next year, these specs will cost you 4500€. That's more comparable to your build (sans the upgrade options of course).
      • MBCook 5 days ago |
        Apple's upgrades are famously expensive; it's how they get their giant margins. As an Apple enthusiast, yeah, it sucks.

        Anyway, they made an Ultra version of the M1 and M2 that was even better than the Max versions by having significantly more cores.

        If they do that again (Mac Pro?) it will be one hell of a chip.

  • rubslopes 6 days ago |
    I have an M1 MacBook Air that I use for Docker, VSCode, etc., and it still runs very smoothly. Interestingly, the only time it slows down is when opening Microsoft Excel.
    • tacker2000 5 days ago |
      Excel performance on Mac is a disaster, and I don't understand why.

      Every time I paste something it lags for 1-2 seconds… so infuriating!!

      • Matheus28 5 days ago |
        Don’t worry, it’s bad on Windows too
        • abhinavk 5 days ago |
          Whenever it's open, system animations are very janky.
      • MBCook 5 days ago |
        It was originally a Mac app before Microsoft bought it too, wasn’t it?
        • seec 5 days ago |
          You got it all wrong. It was developed for Macs first because Microsoft is in the business of selling software, and Windows wasn't such a big thing back then - in fact it barely existed as Windows; it was mostly still MS-DOS.

          Microsoft definitely didn't buy something they created themselves.

          https://en.wikipedia.org/wiki/Microsoft_Excel#Early_history

          • MBCook 5 days ago |
            Oh. I got the Mac part right but not who made it. It looks like I was conflating it with PowerPoint, which was Mac-first and made by a third party Microsoft bought.

            https://en.m.wikipedia.org/wiki/Microsoft_PowerPoint

            • seec 3 days ago |
              Ah yes that's right.

              PowerPoint appearing first on Macs is not surprising, because historically Macs were focused on DTP applications and were much more powerful for that (still true to this day). It took quite a while for Windows to become comparably capable; I think that's why Microsoft bought PowerPoint, to make up for it. Hilariously, today PowerPoint is considered worse than Keynote. Some things never change...

      • seec 5 days ago |
        Because Microsoft takes a least-effort approach to porting to macOS. Not that they should do otherwise, in my opinion. Office runs much better on Windows and has more functionality.
    • asadm 5 days ago |
      Same, BUT my 16GB of RAM is not enough and my disk keeps filling up.
  • QuiEgo 6 days ago |
    It's so refreshing, after the usual "here's today's enshittification of this thing you used to love" threads, to read threads like this.
  • hulitu 5 days ago |
    > Apple's M4 Max chip is the fastest single-core performer in consumer computing

    Single-tasking OSs are long gone. Single-core performance is irrelevant in the world of multitasking/multithreading/preemptible threads.

    • timbit42 5 days ago |
      There are lots of apps that only run in a single thread. If you want them to run fast, you need fast single-core performance.
    • guhidalg 5 days ago |
      If that were true, why isn't my GPU running my UI loop or my JS event loop?

      Single-core performance is still king for UI latency and CPU-bound tasks.

    • wtallis 5 days ago |
      Amdahl's law has not actually been overturned.
  • 8f2ab37a-ed6c 5 days ago |
    Too bad it's still sluggish for cutting-edge game dev with engines like UE :( It'd be great to ditch the Windows ecosystem, at least at dev time.
    • miohtama 5 days ago |
      Game performance is often GPU bound, not CPU bound.
      • jrockway 5 days ago |
        It depends on the game. If there are a lot of game simulation calculations to do for every frame, then you're going to be CPU constrained. If it's a storybook that you're walking through and every pixel is raytraced using tons of 8000x8000 texture maps, then it's going to be GPU constrained.

        Most games are combinations of the two, so some people will be CPU limited and some GPU limited. For the games I play, I'm often CPU limited; I can set the graphics to low at 1280x720, or ultra at 3840x2160, and get the same FPS. That's being CPU limited.
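        That resolution test can be written down as a toy classifier (the 10% threshold is arbitrary):

          def bottleneck(fps_720p_low, fps_4k_ultra):
              # If dropping resolution/settings barely moves FPS,
              # the GPU isn't the limiter -- the CPU is.
              if fps_720p_low <= fps_4k_ultra * 1.1:
                  return "CPU limited"
              return "GPU limited (or mixed)"

          print(bottleneck(fps_720p_low=144, fps_4k_ultra=141))  # CPU limited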

        • rbanffy 5 days ago |
          > If there are a lot of game simulation calculations

          Why not move at least some of that onto the GPU as well? Is it because of the many different branchy code paths for the in-game objects?

          • TinkersW 5 days ago |
            Latency of communication with GPU is too large, and GPUs suck at things CPUs are good at.
            • rbanffy 5 days ago |
              Wouldn’t the latency be because of no shared memory and game context data having to go over the PCIe bus?

              GPUs suck at branchy code, but is that the case with this simulation data? I can see why it’d suck for NPC simulation, but not physics.

        • magicalhippo 5 days ago |
          I recently swapped my AMD 3800X for a 5900X as an alternative to a full platform upgrade. I got it mostly for non-gaming workloads, but I do enjoy games.

          Paired with my aging but still chugging 2080 Ti, max framerates in the games I play did not significantly increase.

          However, I did get a significant improvement in 99th-percentile framerates, and games feel much smoother. YMMV, but it surprised me a bit.

      • loaph 5 days ago |
        Do you have a source? In my experience the opposite is often true.
      • FactKnower69 5 days ago |
        1. The parent is talking about the Unreal Editor, not playing games.

        2. Yes, different pieces of software have different bottlenecks under different configurations... what is the point of a comment like this?

        • MBCook 5 days ago |
          Even if it's a relatively niche tool, if that's the tool your job depends on, then that's the thing that gates whether you can use the computer.
  • ggernov 5 days ago |
    Can't wait to see if/when they release the M4 Ultra.
    • rbanffy 5 days ago |
      I bet the Studio and the Pro will have that option. I'm hoping the Pro has more versatile PCIe slots as well.
  • gigel82 5 days ago |
    Was this verified independently? Because people can submit all sorts of results for Geekbench scores. Look at all these top scorers (most of which are obviously fake or overclocked chips): https://browser.geekbench.com/v6/cpu/singlecore
  • dwayne_dibley 5 days ago |
    How impressed should I be, in terms of Apple's history of manufacturing chips compared to, say, Intel's? This is their 4th generation of M chips, and it seems to be so far ahead of Intel, a company with a significantly bigger history of chip production.
    • quink 5 days ago |
      They were in the PowerPC consortium starting in 1991, co-developed the ARM6 starting around 1990, and the M series is part of the Apple Silicon family that goes back at least to 2010's Apple A4 (with non-Apple-branded chips before then).

      They've been in the chip-designing business for a while.

      • comboy 5 days ago |
        > back to at least 2010's Apple A4

        Basically, Jim Keller happened; I think they're still riding on that architecture.

        • starspangled 5 days ago |
          It's actually difficult to know if it was Keller. Apple bought PA Semi, which is where he came from. But he came on as a VP about a year after it was founded by other engineers who had worked on the Alpha chips. Did he make the difference? Who knows.

          What does seem to be constant is that the best CPU designs have been touched by the hands of people who can trace their roots to the northeastern US. Maybe the correlation doesn't exist, and the industry is small and incestuous enough that most designs are worked on by people from everywhere, but sometimes it seems like some group at DEC or Multiflow stumbled on the holy grail of CPU design and they all took a drink from the cup.

    • amarshall 5 days ago |
      It’s not quite a fair comparison, given Intel has their own fab, while Apple uses TSMC—and pays them a lot to get exclusive access to new nodes before anyone else.
      • SG- 5 days ago |
        Yet Intel is using TSMC for their latest chips now too.
        • amarshall 5 days ago |
          TIL. Crazy times, at least Intel finally admitted failure in their fab, I guess. Still doesn’t get the latest nodes, though.
    • jccalhoun 5 days ago |
      It is impressive, but it's also important to remember that Intel, AMD, and Qualcomm make dozens of different chips while Apple makes a handful. That means they can't be as focused as Apple.
      • dagmx 5 days ago |
        The majority of those different chips use the same cores though as each other, and vary mostly in packaging or binning.
  • SamAsEnd 5 days ago |
    Am I missing something? I don't know where this information came from, but you can check out the Geekbench v6 single-core benchmarks here.

    https://browser.geekbench.com/v6/cpu/singlecore

    • dialup_sounds 5 days ago |
      Something is fishy when the top two claim to be Android phones running Ryzen chips, and the third is allegedly a 13GHz Core i3.
      • alpaca128 5 days ago |
        Followed by an i9 with 5 cores and a multicore benchmark score of zero.
        • solardev 5 days ago |
          Probably crashed before it could complete the test.
      • solardev 5 days ago |
        Gosh, back in my day, we were lucky to squeeze out a few extra MHz. The overclockers today must be really skilled.
    • tester756 5 days ago |
      "Do not trust any benchmarks you did not fake yourself"
    • brigade 5 days ago |
      The better chart is https://browser.geekbench.com/processor-benchmarks/ which tries to discount outliers that might be liquid nitrogen cooled and/or faked
      • MBCook 5 days ago |
        The M4 is almost 1/3rd faster than the top Intel (on this benchmark)?

        I had no idea the difference was that big. I don't know what a normal Geekbench score is, so I just sort of assumed that the top-of-the-line Intel part would be something like 3700 or 3800 - enough that Apple clearly took a lead, but nothing crazy.

        No wonder it’s such a big deal.

      • wmf 5 days ago |
        Even though that was updated hours ago, it doesn't list Zen 5 or Arrow Lake.
  • linotype 5 days ago |
    Intel should be utterly embarrassed.
  • jchw 5 days ago |
    Well, firstly, it isn't. There are higher Geekbench 6 CPU scores, even ignoring the ones that appear to be fake or at least broken.

    But secondly, that would absolutely not indicate that it is the "fastest single-core performer in consumer computing". That would indicate that it is the highest scoring Geekbench 6 CPU in consumer computing.

    Whether or not that's actually a good proxy for the former statement is a matter of taste, but in my opinion it's not. It gives you a rough idea of where the performance stands, but what you really need to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads: the time it takes to compile some software, scores in video game benchmarks, different kinds of computations, time to render videos in Premiere or scenes in Blender, etc.
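    As a sketch of what aggregating such a mix might look like (the workloads and speedups are made up; a geometric mean keeps any single test from dominating):

      import math

      def geomean(xs):
          return math.exp(sum(math.log(x) for x in xs) / len(xs))

      # Hypothetical speedups of CPU A over CPU B, per workload.
      speedups = {"compile LLVM": 1.31, "Blender render": 1.18,
                  "x264 encode": 1.24, "game sim tick": 1.09}
      print(f"overall: {geomean(list(speedups.values())):.2f}x")  # ~1.20x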

    In practice though, it's hard to make a good Apples-to-Intels performance comparison, since it winds up crossing both OS and CPU-architecture boundaries, which adds a lot of variables. At least real-world tests will give an idea of what it would be like day-to-day, even if they don't necessarily reveal truths about which CPU is the absolute best design.

    Of course it's reasonable to use Geekbench numbers to get an idea of where a processor stands, especially relative to similar processors, but making a strong claim like this based on Geekbench numbers is pretty silly, all things considered.

    Still... these results are truly excellent. Suffice it to say that if you did take the time to benchmark these processors, you would find the M4 performs extremely well against other processors, including ones that suck up far more juice, but this isn't too surprising overall. Apple is already on TSMC's N3E process, whereas AMD is currently on TSMC N4P and Intel is on TSMC N3B for their most cutting-edge chips. So on top of any advantages they might have for other reasons (like putting the RAM on the CPU package, or simply better processor design), they also have a process-node advantage.

    • GeekyBear 5 days ago |
      SPEC has been the industry standard benchmark for comparing the performance of systems using different instruction sets for decades now.

      Traditionally, Anandtech would have been the first media outlet to publish the single-core and multicore integer and floating-point SPEC results for a new architecture, but hopefully some other trusted outlet will take up the burden.

      For instance, Anandtech's Zen 5 laptop SKU results vs the M3 from the end of July:

      > Even Apple's M3 SoC gets edged out here in terms of floating point performance, which, given that Apple is on a newer process node (TSMC N3B), is no small feat. Still, there is a sizable deficit in integer performance versus the M3, so while AMD has narrowed the gap with Apple overall, they haven't closed it with the Ryzen AI 300 series.

      https://www.anandtech.com/show/21485/the-amd-ryzen-ai-hx-370...

      Zen 5 beat Core Ultra, but given that Zen 5 only edged out the M3 in floating-point workloads, I wouldn't be so quick to claim the M4 doesn't outperform Zen 5's single-core scores before the test results come out.

      • spoaceman7777 5 days ago |
        Just considering the number of simple calculations a CPU can compute isn't a very good comparison. Apple's chips use the ARM architecture, which is a Reduced Instruction Set Computer (RISC) design, vs x86-64, which is a Complex Instruction Set Computer (CISC).

        The only good comparison is to take a variety of real-world programs compiled for each architecture and run them.

        • erik_seaberg 5 days ago |
          https://en.wikipedia.org/wiki/SPECint talks about a dozen programs they selected for single-core logic and discrete math, including gcc and bzip2 (there are more than a dozen others using floats).

          Over time, RISC and CISC borrowed from each other: https://cs.stanford.edu/people/eroberts/courses/soco/project...

        • GeekyBear 5 days ago |
          > The only good comparison is to judge a variety of real world programs compiled for each architecture, and run them.

          I'm guessing that you don't realize that you are describing SPEC?

          It's been around since the days when every workstation vendor had their own bespoke CPU design and it literally takes hours to run the full set of workloads.

          From the same Anandtech article linked above:

          > SPEC CPU 2017 is a series of standardized tests used to probe the overall performance between different systems, different architectures, different microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. It covers a range of integer and floating point workloads, and can be very optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.

          More info:

          > SPEC is the Standard Performance Evaluation Corporation, a non-profit organization founded in 1988 to establish standardized performance benchmarks that are objective, meaningful, clearly defined, and readily available. SPEC members include hardware and software vendors, universities, and researchers.

          SPEC was founded on the realization that "An ounce of honest data is worth a pound of marketing hype".

          https://www.spec.org/cpu2017/Docs/overview.html
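          For the curious, a SPEC-style score is essentially a geometric mean of runtime ratios against a fixed reference machine; a sketch with invented timings:

            import math

            def spec_style_score(ref_times, measured_times):
                # Per benchmark: ratio = reference time / measured time.
                # The reported score is the geometric mean of those ratios.
                ratios = [r / m for r, m in zip(ref_times, measured_times)]
                return math.exp(sum(math.log(x) for x in ratios) / len(ratios))

            print(spec_style_score([1000, 1600, 2500], [110, 180, 260]))  # ~9.2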

          • spoaceman7777 5 days ago |
            Took a look at these benchmarks. They appear to be using some extremely antiquated code and workloads that do not take advantage of the advanced features and instructions introduced over the past 15-20 years in the x86-64 architecture.

            Additionally, the only updates they appear to have made in the last 5+ years involve optimizing the suite for Apple chips.

            Thus, it leaves out massive parts of modern computing, and the (many) additions to x86-64 that have been introduced since the '00s.

            I'd encourage you to look into the advancements that have occurred in SIMD instructions since the olden days, and the ways in which various programs and compilers have changed to take advantage of them.

            ARM is nice and all, but the benchmark you've linked appears to be extremely outdated schlock peddled for $1000 a pop from a web page out of the history books. Really. Take a look at what the benchmarks on that page actually use for tooling.

            I'd consider the results valid if they were calculated using an up-to-date, maintained toolset, like the one provided by openbenchmarking.org (whose owner has been producing some excellent ARM64 vs Intel benchmarks on various workloads, particularly recently).

            • PUSH_AX 5 days ago |
              Does optimising the tests towards one architecture seem like a fair way of testing?
              • kaba0 5 days ago |
                It's "optimizing" in the sense that Apple hardware previously got far less attention.
            • GeekyBear 5 days ago |
              > the only updates they appear to have made in the last 5+ years involve optimizing the suite for Apple chip

              How do you theorize that generic C or C++ code compiled with GCC has been "optimized for an Apple chip"?

              Frankly, it's impossible to take any of this comment seriously.

    • throw_that_away 5 days ago |
      Remember, too, back in the day, when you'd be looking at a Mac with 1/4 the power of a PC at 4x the price? I think we're starting to see those ratios completely reversed. And at the same time, power draw, heat, etc. are sitting at the floor.
      • sho_hn 5 days ago |
        Yeah, this rings true. I'm not an Apple customer, but I certainly remember the days when Mac users had to justify the much worse bang-for-the-buck of Apple hardware (then) with it being the only ticket to their preferred software experience.

        These days it appears more that the hardware is fantastic, especially in the laptop form factor and thermal envelope, and perhaps the downside is a languishing macOS.

        • bogeholm 5 days ago |
          I use macOS daily for dev and office work, and to me it doesn’t feel languishing at all. Add Homebrew and Docker or Podman, and we’re off.

          The only places I can see there could be features missing are:

          - IT management type stuff, where it looks like Apple is happy just delegating to Microsoft (e.g. my workstation is managed with Intune and runs Microsoft Defender pushed by IT),

          - CUDA support if you’re into AI on NVIDIA

          - Gaming I hear, but I don’t have time for that anyway :)

          Of course this is biased, because I also generally just _like_ the look and feel of macOS

          • pipodeclown 5 days ago |
            Just basic customisation functionality is missing in macOS. I recently bought a MacBook and am constantly amazed at the stuff you can't do. Oh, you want to use normal scrolling on your external Bluetooth mouse (like everybody who uses a mouse does) and natural scrolling on the trackpad? Nah bro, choose one for both. You want to just plug in some generic USB hub connected to multiple monitors, like you did with your 500-euro laptop? No can do. Want to specify SDR content brightness separately from HDR content brightness when enabling HDR on an external display? Nah bro, we're just gonna blast full brightness at you all the time. And I can go on. The hardware is second to none, but the software is really holding the machine back.
            • bogeholm 5 days ago |
              Scrolling, you're right. I usually run AltTab, Rectangle and BetterTouchTool to solve scrolling and a couple of other annoyances.
            • 3dtopo 4 days ago |
              Any decent mouse should have software that lets you customize that device.
          • numpad0 5 days ago |
            Isn't it the other way around? Most of the noise around macOS seems to come from {front-end|back-end} {mobile|app|Web} {development|asset creation|content authoring}. The way I see it, macOS outside the blog-media hypesphere is still not much more relevant than an OS like Haiku, and CUDA and gaming are just glitches in the Matrix: the distance from the macOS hype-core to the glitch walls is not uniform, and those two domains are the closest to the core.
        • 3dtopo 4 days ago |
          One thing to consider is Apple has always made high-quality hardware that often still works after a decade or more.
    • mabedan 5 days ago |
      > you really need to be able to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads

      I like how Apple got roasted on every forum for using real-world workloads to compare M-series processors to other processors. The moment there's a statistic pointing to "theoretical" numbers, we're back to preferring real-world workload comparisons.

      • jchw 5 days ago |
        You need both; this is just standard practice. Check, for example, how CPUs are compared on outlets like Gamers Nexus or Phoronix.

        Apple didn't get roasted for presenting real world performance, they got roasted for doing the kinds of things that marketing people do: making vague blanket claims about performance that couldn't actually be reasonably validated. (Intel and AMD routinely get roasted for similar things.)

    • adrian_b 5 days ago |
      The higher single-thread GB6 scores are from overclocked Intel or AMD CPUs.

      The M4 core @ 4.5 GHz has significantly higher ST GB6 performance than Lion Cove @ 5.7 GHz or Zen 5 @ 5.7 GHz (the latter two are almost equal at the same clock frequency, with at most a 2% advantage for Lion Cove).
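      Normalizing by clock makes the per-cycle gap explicit (a sketch; the scores are ballpark illustrations, not official figures):

        chips = {  # (approx. GB6 single-thread score, max clock in GHz)
            "Apple M4":  (3900, 4.5),
            "Lion Cove": (3450, 5.7),
            "Zen 5":     (3400, 5.7),
        }
        for name, (score, ghz) in chips.items():
            print(f"{name}: {score / ghz:.0f} points/GHz")  # ~867 vs ~605 vs ~596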

      Having higher GB6 scores should be representative of general-purpose computing, but there are application domains where the performance of the Apple cores has been poor in the past, and M4 is unlikely to have changed anything, e.g. computations with big integers, or array operations done on the CPU cores rather than on the AMX/SME accelerators.

      Nevertheless, I believe your point about the existence of higher ST GB6 scores is not weakened by the fact that those CPUs are overclocked.

      For the majority of computer users in the world, the existence of higher ST performance - whether from overclocked Intel/AMD CPUs or from CPUs made by Apple - is equally irrelevant, because those users would never choose either kind of CPU for their work.

      • jchw 5 days ago |
        Honestly, I do agree; it is kind of ridiculous that it is fighting neck and neck with overclocked AMD and Intel CPUs. I always do a little bit of overclocking myself, but some of these overclocks are pretty aggressive.

        Still, I do think some of Apple's inherent advantages make it less surprising that they're able to win on benchmarks. Again: the process node, the RAM directly on the package. And hell, they probably also benefit from targeting AArch64 with no 32-bit ARM compatibility.

        Either way, Apple appears to be a couple years ahead of the competition right now when it comes to efficient processors, just like they were with the M1.

  • Rapzid 5 days ago |
    The perf and power use are nice, but I don't need the dual-architecture stuff in my professional life.

    I've been extremely happy with Windows and WSL the last couple of years, so I'm happy to be a node or two behind on AMD laptops.

    Otherwise I use a workstation primarily anyway.

  • haccount 5 days ago |
    And priced like all the rest of them put together.
  • DeathArrow 5 days ago |
    I don't trust Geekbench.
  • rldjbpin 5 days ago |
    besides the usual caveat of an unreleased-hardware leak, relying on geekbench as a metric for performance is about as useful as userbenchmark.

    semantics rant: on another note, where is the line between "consumer" and "prosumer"/"enthusiast" hardware in terms of pricing? over 3000 usd before taxes seems well into the latter camp in my book.

    on the other hand, most enterprise hardware is not optimized for single-core performance. to my knowledge, even the configurations for algo-trading machines are comparable to consumer hardware, if not slower.

  • necovek 4 days ago |
    Obviously, it's not all about the power: I see a lot of comments here about heat, fans and comfort. And everyone is comparing against old Intel Macs, but if that's what you cared about, the benchmark should have been the ThinkPad X1 Carbon.

    And Macs still can't match it on keyboards (theirs are crappy), case strength and ergonomics (no sharp edges), spill protection, or non-glossy screens (remember, these are laptops to be used out and about - though I've only used M1/M2 Airs, an M1 Max Pro 14 and M2 Pro 13/14s, so maybe the newer ones are better).

    X1 Carbons have mostly moved to higher-performing Intel chips, and thus lost those silent and cool characteristics, but other than the obvious (performance vs comfort), they are still my preferred choice in laptops.