Do those criticisms still hold? Are serious people nowadays taking Geekbench to be a reasonably okay (though obviously imperfect) performance metric?
This is M4 — Apple has now made four generations of chips, and each one was class-leading upon release. What more do you need to see?
If I was reviewing cars and used the number of doors as a benchmark for speed, surely I’d get laughed at.
Immediately people “but geEk BeNcH”
And then actual people get their hands on the machines for their real workloads and essentially confirm the geekbench results.
If this was the first time, then fair enough. But it’s a Groundhog Day style sketch comedy at this point with M4.
There is a lot to criticize about Apple's silicon design, but they are leading the CPU market in terms of mindshare and attention. All the other chipmakers feel like they're just trying to follow Apple's lead. It's wild.
But anyway, what is it you see to criticize about Apple's Apple Silicon design? The way RAM is locked on package so it's not upgradable, or something else?
I'm kind of surprised; I don't hear a lot of people suggesting it has a lot to be criticized for.
It has 4 doors! It's all over the shitty car news media. The car is a prototype with only one seat though.
I was just curious if people had experience with how reliable Geekbench has been at showing relative performance of CPUs lately.
https://www.cpubenchmark.net/singleThread.html
https://www.cpu-monkey.com/en/cpu_benchmark-cinebench_r23_si...
Buying up most of TSMC's latest node capacity certainly helps. Zen chips on the same node turn out to be very competitive, but AMD don't get first dibs.
People pretend like this isn’t a thing that’s already happened, and that there aren’t fair comparisons. But there are. And even when you compare like for like Apple Silicon tends to win.
Line up the node, check the wattages, compare the parts. I trust you can handle the assignment.
Nevertheless, product delivery is a combination of multiple things of which the basic tech is just one component.
Most people who say things like this tend to deeply misunderstand TDP and end up making really weird comparisons. Like high wattage desktop towers compared to fan-less MacBook Airs.
The process lead Apple tends to enjoy no doubt plays a huge role in their success. But you could also turn around and say that’s the only reason AMD has gained so much ground against Intel. Spoiler: it’s not. Process node and design work together for the results you see. People tend to get very stingy with credit for this though if there’s an Apple logo involved.
I'm skeptical of Geekbench being able to indicate that this specific new processor is robustly faster than say a 9950x in single-core workloads.
That's exactly their point.
If I could pick 1 "generic" benchmark to base things off of I'd pick PassMark though. It tends to agree with Geekbench on Apple Silicon performance but it is a bit more useful when comparing non-typical corner cases (high core count CPUs and the like).
Best of all is to look at a full test suite and compare for the specific workload types that matter to you... but that can often be overkill if all you want to know is "yep, Apple is pulling ahead on single thread performance".
There is a sort of whack-a-mole thing where adherents of particular makers or even instruction sets dismiss evidence that benefits their alternatives, and you find that at the root of almost all of the "my choice doesn't win in a given benchmark means the benchmark is bad" rhetoric. Then they demand you only respect some oddball benchmark where their favoured choice wins.
AMD fans long claimed that Geekbench was in cahoots with Intel. Then when Apple started dominating, that it was in cahoots with ARM, or favoured ARM instruction sets. It's endless.
SPECint compiled with either the vendor compiler (ICC, AOCC) or the latest gcc/clang would be a good neutral standard, though I'd also want to compare SIMD units more closely with x265 and Highway based stuff (vips, libjxl).
And how do you handle the fact that you can't really (yet) use the same OS for both platforms? Scheduler and power management counts, even for dumb number crunching.
Specint, on the other hand, is useful for assessing specific tasks if you plan to run identical workloads. However, its individual test results vary widely. For example, Apple Silicon chips generally perform well in Specint but might match a competing chip in one test and be three times faster in another. These tests focus on very narrow tasks that can highlight the unique strengths of certain instructions or system features but are not representative of overall real-world performance.
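To make that concrete: SPEC's composite number is a geometric mean of the per-subtest ratios, so a 3x win in one test and a tie in another get smoothed into one figure. Here's a minimal sketch in Go with made-up ratios (nothing from actual SPEC runs):

```go
// A minimal sketch (hypothetical per-test ratios) of how an aggregate like
// SPECint hides per-test variance: the rating is a geometric mean of the
// individual benchmark ratios, so one 3x win and one tie average out.
package main

import (
	"fmt"
	"math"
)

// geomean returns the geometric mean of a slice of ratios.
func geomean(ratios []float64) float64 {
	logSum := 0.0
	for _, r := range ratios {
		logSum += math.Log(r)
	}
	return math.Exp(logSum / float64(len(ratios)))
}

func main() {
	// Hypothetical "chip A time / chip B time" ratios across subtests:
	// a tie in one test, a 3x win in another, modest wins elsewhere.
	ratios := []float64{1.0, 3.0, 1.4, 1.2, 1.1, 1.6}
	fmt.Printf("aggregate (geometric mean): %.2fx\n", geomean(ratios))
}
```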
The debate over benchmarks is endless and, frankly, exhausting, as it often rehashes the same arguments. In practice, most people accept that Geekbench is a reliable indicator of performance, and I maintain it’s an excellent benchmark. You might disagree, but my stance stands.
>Specint, on the other hand, is useful for assessing specific tasks if you plan to run identical workloads. [...] These tests focus on very narrow tasks that can highlight the unique strengths of certain instructions or system features but are not representative of overall real-world performance.
What? First, SPECint is an aggregate of 12 benchmarks (https://en.wikipedia.org/wiki/SPECint#Benchmarks), none of them synthetic in any way. They're also ranging from low to high level, it's not just number crunching. Sure, it's missing stuff like browser benchmarks to better represent the average user, but it's certainly not as useless as what you seem to imply.
Any "system wide" benchmark is aggregating too much into a single number to mean anything, in any case.
And this subthread is about using benchmarks to compare HARDWARE, not whole systems, so this discussion is pretty much meaningless.
Yet a benchmark of how Xalan-C++ transforms XML documents has shockingly little relevance to most of the things I do. And the M1 runs the 400.perlbench benchmark slower than the 5950X, yet it runs the 456.hmmer benchmark twice as quickly, both I guess mattering if I'm running those specific programs?
As with the strawman that I said it was synthetic, I also didn't say it was useless. Not sure why you're making things up. It's an interesting benchmark, but most people (yup, there's that appeal again) find Geekbench more informative.
And, again, most people, including the vast majority of experts in this field, respect geekbench as a decent broad-spectrum benchmark. As with all things there are always contrarians.
>And this subthread is about using benchmarks to compare HARDWARE, not whole system
Bizarre. This submission is specifically about Geekbench, specifically about the M4 running, of course, macOS. This subthread is someone noting that they can't escape the negatron contrarians who always pipe up with the No True Benchmark noise.
Run the compilation on another CPU, note down the time it took and the Geekbench score for that CPU.
Now look at the ratios — if Geekbench scores implied the faster CPU was, say, 20% faster, is my compilation 20% faster?
I'm looking at my notes and without digging too much, I can see two reasonably recent cases: 30% faster compilation (Geekbench said 30%), and another one: 40% faster compilation (Geekbench said 38%).
So yes, I do consider Geekbench to be a very reliable indicator of performance.
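For anyone who wants to repeat that sanity check, here's a minimal sketch in Go of the ratio comparison. The build times and Geekbench scores below are hypothetical stand-ins, not my actual notes:

```go
// Sketch: check whether Geekbench score ratios track measured compile-time
// ratios between two CPUs. All numbers here are made up for illustration.
package main

import "fmt"

func main() {
	// Hypothetical wall-clock build times in seconds (old CPU vs new CPU).
	buildOld, buildNew := 312.0, 240.0
	// Hypothetical Geekbench 6 single-core scores for the same two CPUs.
	gbOld, gbNew := 2400.0, 3120.0

	measured := buildOld/buildNew - 1 // observed speedup of the build
	predicted := gbNew/gbOld - 1      // speedup implied by the scores

	fmt.Printf("measured: %.0f%% faster, Geekbench implies: %.0f%% faster\n",
		measured*100, predicted*100)
}
```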
But GB6 aligns pretty well with SPEC2017.
Translating these scores into the real world is problematic. There are numerous examples of smart phones powered by Apple chips versus Qualcomm chips having starkly different performance with actual use. This is in spite of the chips themselves scoring similarly in benchmarks.
The interesting thing here isn't really how high it's scored against other chip brands, but how it outperformed the M2 Ultra. There was some hum of expectation on HN that the differences between M1, M2, M3 etc would be token and that Apple's chip division is losing its touch. Yet the M2 Ultra in the Mac Studio was released in June 2023, and the M4 Pro in the mini now for November 2024. That is quite the jump in performance over time and a huge change in bang for buck.
Apple seems to be reliably annihilating everyone on performance per watt at the high end of the performance curve. It makes sense since the M series are mobile CPUs on ‘roids.
They gave it up for Core, grown from their low power chips, which took them far further with far less power than the P4 would have used.
The AMD mobile chips are right there with M3 for battery life and have excellent performance, but I couldn't find a complete system which shipped with the same size battery as the MBP16. They're either half or 66% of the capacity.
Huh? I've used AC for both MBP and iPhones a number of times over the years, and never had an issue. They are known for some of the highest customer ratings in the industry.
They charged me $100 to get my machine back without repair.
Also bear in mind that the EU is a single market, warranties etc are, by law, required to be honoured over the ENTIRE single market. Not just one country.
Especially when the closest Apple Store to me is IN GERMANY.
I have since returned it to Amazon who will refund it (they're taking their sweet time though, I need to call them next week as they should have transferred already).
Sellers often will try to steer you to use the warranty as it removes their responsibility; Amazon is certainly shady here. Apple will often straight up give you a full refund or a new device (often a newer model), which happened to me with quite a few iPhones and MacBooks.
Know your rights.
I'll keep buying from Amazon as their support is great and prices competitive. I don't trust Apple buying from them directly.
Today, Apple wastes my time.
Instead of the old experience of common sense, today the repair people apparently must do whatever the diagnostic app on the iPad says. My most recent experience was spending 1.5 hours to convince the guy to give me a replacement Airpods case. Time before that was a screen repair where they broke a working FaceID ... but then told me the diagnostics app told them it didn't work, so they wouldn't fix it.
I'm due for another year of AppleCare on my MBP M1, and I'm leaning towards not re-upping it. Even though it'd be 1/20th of the cost of a new one, I don't want to waste time arguing with them anymore.
Apple still makes good hardware but the scrooge attitude is disgusting for such premium products.
If a competing laptop has to run significantly hotter/louder, then in my mind that's not on par.
But is it, for the same power consumption?
Yet here we are, with the excuses of margins and silicon processes generations. But you haven't answered the question. Is Apple pulling ahead or is the x86 cabal able to keep up?
The M2-4 are still ahead (in their niche), but since the M1, Intel and AMD have been playing catchup.
It's partially because AMD is on a two year cadence while Apple is on approximately a yearly cadence. And AMD has no plans to increase the cadence of their Zen releases.
2020 - M1, Zen 3
2021 - ...
2022 - M2, Zen 4
2023 - M3
2024 - M4, Zen 5
Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.
Indeed. Anything that approaches Apple performance does so at a much higher power consumption. Which is no biggie for a large-ish desktop (I often recommend getting middle-of-the-road tower servers for workstations).
It’s possible Apple’s chips could be dramatically faster if they were willing to use up 300W.
I remember seeing an anecdote where Johny Srouji, the chief Apple Silicon designer, said something like the efficiency cores get 90% of the performance of the performance cores at like 10% of the power.
I don’t remember the exact numbers but it was staggering. While the single core wouldn’t be as high, it sounded as if they could (theoretically) make a chip of only 32 efficiency cores and just sip power.
> they could (theoretically) make a chip of only 32 efficiency cores and just sip power.
Intel and AMD did that with their latest data-center chips to compete with Ampere and AWS’s Graviton. I’d love to build a workstation out of one such beast.
NEVER call me that ;-)!
If you want to evaluate the quality of two different architectures, you should be comparing samples on the same fab node.
M1 and Zen 4 were on the same node, and M3 and Zen 5 are on the same node. In both cases they're within spitting distance of one another.
The majority of Apple's advantage is just Apple paying for early access to TSMC's newest node.
Java runs faster. GraalVM-generated native images run way faster. Golang runs faster. X86_64 has seen more love from optimisations than aarch64 has. One of the things I hit was different GC/memory performance due to different page sizes. Moreover, docker runs natively on Linux, and the network stack itself is faster.
But even given all of that, the 16” M1 PRO edges close to the desktop. (When it is not constrained by anti virus.) And it does this in a portable form factor, with way less power consumption. My 5900X tops out at about 180W.
So yes, I would definitely say they are pulling ahead.
Which isn’t too surprising given a lot of the biggest companies in the world have been optimizing the hell out of it for their servers for the last 25+ years.
On the flipside of the coin though Apple also clearly optimizes their OS for power efficiency. Which is likely paying some good dividends.
The remainder can be attributed to compiler optimisations or lack thereof.
No.
On the same node, the performance is quite similar. Apple's previous CPU (M3) has been a 3nm part, while AMD's latest and greatest Zen 5 is still on TSMC's 4nm.
It is, isn't it.
https://forums.appleinsider.com/discussion/234969/apple-arca...
Why is the Apple TV only focused on passive entertainment?
Is it just some weird cultural thing? Or is there some kind of genuine technical reason for it, like it would involve some kind of tradeoffs around security or limiting architecture changes or something?
Especially with the M-series chips, it feels like they had the opportunity to make a major gaming push and bring publishers on board... but just nothing, at least with AAA games. They're content with cartoony content in Apple Arcade solely on mobile.
They aren't really the ones that have to.
Gaming platforms don't just arise organically. They require partnership between platform and publishers, organized by the platform and with financial investment by the platform.
glances at the Steam Machine
And how long do they have to fail at that before trying a new approach?
It just feels like they came along so late to really trying that it’s going to be a minute for things to actually happen.
I would love to buy the new Mac Mini and sit it under my TV as a mini console. But it just feels like we're not quite there yet for that purpose, even though the horsepower is there.
I think so. I think no one in apple management has ever played computer games for fun so they simply do not understand what customers would want.
They just don't care about desktop gaming, which is somewhat understandable. While the m-series chips have a GPU, it's about as performant for games as a dedicated GPU from 10-14 years ago (It only needs a fraction of the electricity though, but very few desktop gamers care about that).
The games you can play have to run at silly low resolutions (full HD at most) and rarely even reach 60fps.
They do take gambling seriously.
Mac gaming is a nice-to-have; it's possible, there's tools, there's Steam for Mac, there's toolkits to port PC games to Mac, there's a games category in the Mac App Store, but it isn't a major point in their marketing / development.
But don't claim that Apple doesn't take gaming seriously; gaming for them is a market worth tens of billions, they're embroiled in a huge lawsuit with Epic about it, etc. Finally, AAA games get ported to mobile as well and once again earn hundreds of millions in revenue (e.g. CoD mobile).
In terms of gaming that's only on PC and consoles, I didn't understand Apple's blasé attitude until I discovered this eye-opening fact: there are around 300 million people who are PC and console gamers, and that number is NOT growing. It's stagnant.
Turns out Apple is uninterested in a stagnant market, and dedicates all its gaming effort where the growth is: mobile.
The GPUs in previous M chips aren't beating AMD or NVidia's top offerings on anything except VRAM, but you can definitely play games with them. Apple released their Game Porting Toolkit a couple of years ago, which is basically like Wine/Proton on Linux, and if you're comfortable with Wine and approximately what a Steam Deck can run, then that's about what you can expect to run on a newer Mac.
Installing Steam or GOG Galaxy with something like Whisky.app (which leverages the Game Porting Toolkit) opens up a large number of games on macOS. Games that need Windows rootkits are probably a pain point, and you're probably not going to push all those video setting sliders to the far right for Ultra graphics on a 4K screen, but there's a lot of games that are very playable on macOS and M chips.
Steam Deck-level performance is quite fine, I mainly just want to replay the older FromSoft games and my favorite indies every now and then.
First and foremost, it's just worth checking if your game has a native port (https://store.steampowered.com/macos). People might be surprised what's already available.
With Wine syscall translation and Rosetta x86 code translation, issues do pop up from time to time: games with cutscenes encoded in Windows Media Player specific formats, or other media codecs that aren't immediately available (it's not like games advertise those technology requirements anywhere), and you may encounter video stuttering or artifacts since the hardware is obviously dramatically different from what the game developers were originally developing against, and there are things happening in the background that an x86 Windows system never does. This isn't overly Mac specific, since it usually impacts Linux equally, but it's a hurdle to jump that you don't have to deal with in native Windows. Like I said, playing Windows games outside of Windows is just a different set of pain points and you have to be able to tolerate it. Some people think it's worth it; some people would rather have higher game availability and keep the pain of Windows. Kudos to Valve for creating a Linux-based handheld, and to the Wine and Proton projects, for improving this situation dramatically though.
Besides the Game Porting Toolkit (which was originally intended for game developers to create native application bundles that could be put on the App Store), there's also CrossOver for Mac, which does its own work towards resolving a lot of these issues and has a compatibility list you can view on their site: https://www.codeweavers.com/. Alternatively, some games run acceptably inside virtualization if you're willing to still deal with Windows in a sandboxed way. Parallels is able to run many games with better compatibility since you're actually running Windows, though last I checked DX12 was a problem.
Maybe with future TB5 support they will include that feature.
Unless you're running bootcamp you're extremely limited by driver support.
That said, expectations should be kept at a realistic level. Even if the M4 has the fastest embedded GPU (it probably does), it's still an embedded GPU. They aren't going to be topping any absolute performance charts.
Apple has quite impressive hardware (though their GPUs are still not close to high-end discrete GPUs), but they're also fast enough. The problem now is that Apple systematically does not have a culture that respects gaming or is interested in courting gamers. Games also rely on OS stability, but Apple has famously short and severe deprecation periods.
They occasionally make pushes in that direction, but I think they lack the will to make a concerted effort, and I also think they lack the restraint to not try and force everything through their own payment processors and distribution systems, souring relations with developers.
https://www.cyberpunk.net/en/news/50947/just-announced-cyber...
I get it, you want to leave windows by way of mac. But your options are to either bite the bullet and expend a tiny bit of your professional skill on setting up a machine with linux, or stay on windows for the foreseeable future.
Nor have I had any desire to upgrade.
As long as I don't open Chrome, Safari, Apple Notes, or really any other app...
Compiling GCC multi-threaded a few times would be enough.
My 2015 could get hotter than my 2010. I think my work 2019 gets hotter still.
I think the Intels were hotter than my G4, but it’s been too long and the performance jump was worth it.
Got an M1 Air, it blows them all out of the water (even 2019, others aren’t a surprise). And it does it fanless, as opposed to emulating a jet engine.
And if you really push it, it gets pleasantly warm. Not possibly-burn-inducingly hot.
Was the only one I needed thankfully. Missed the whole port reduction and power bar mess.
I love my M1 Air. It is the first general purpose computing hardware that felt like a real advance. I measured that two ways:
How much closer to my mobile is it?
How much faster is it?
The Air feels like a Mobile Computer if that makes any sense. One USB port expander to serve as a dock of sorts later and it makes for a great desktop experience.
When using it on the go, it has that light, powerful feel much like running my phone does.
Great machine. It is easily my favorite computer Apple has ever made, 8 bit greatness and an older age aside.
Mine is sticky. As in when others get hold of it, next thing I hear is usually, "oooh" and then it takes some time for it to come back!
I got mine for a song. Sweet deal, but it is the 8GB 256GB configuration. Not too big of a deal, but more internal storage would be nice. Maybe I will send it out somewhere to get a boost.
Would have already, but I worry a little about those services.
Only solution was to increase fan speed profile to max rpm.
It’s way too easy to heat up.
That's pretty much all Intel laptops I've owned since 2007.
But they spend voraciously. And so the desktop PC market is theirs and theirs alone.
That's why the default advice if you're looking for 'value' is to buy a gaming console to complement your laptop. Both will excel at their separate roles for a decade without requiring much in the way of upgrades.
The desktop pc market these days is a luxury 'prosumer' market that doesn't really care about value as much. It feels like we're going back to the late 90's, early 2000's.
That's the thing with Macs: all the strategy games tend to release there, because the Venn diagram of Mac users and strategy gamers is a circle.
Yeah sure, if you start buying unnecessary luxury cases, fans and custom water loops it can jump up high, but that's more for clueless rich kids or enthusiasts. So I wouldn't class PC gaming as an expensive hobby today, especially considering Nvidia's money-grubbing practices, which won't stay forever.
The frame rate wasn’t even close to my desktop (which is less powerful than yours). I switched back to the PC.
Last time I looked, the energy efficiency of nVidia GPUs in the lower TDP regions wasn’t actually that different from Apple’s hardware. The main difference is that Apple hardware isn’t scaled up to the level of big nVidia GPUs.
I never thought I'd see a processor that was 50% faster single-core and 80% faster multi-core and just shrug. My M1 Pro still feels so magically fast.
I'm really happy that Apple keeps pushing things and I'll be grateful when I do decide to upgrade, but my M1 Pro has just been such a magical machine. Every other laptop I've ever bought (Mac or PC) has run its fan regularly. I did finally get fan noise on my M1 Pro when pegging the CPU at 800% for a while (doing batch conversion of tons of PDFs to images) - and to be fair, it was sitting on a blanket which was insulating it. Still, it didn't get hot, unlike every other laptop I've ever owned did even under normal usage.
It's just been such a joyful machine.
I do look forward to an OLED MacBook Pro and I know how great a future Apple Silicon processor will be.
That was truly the dark age of Apple.
Concentrating unbridled control over all design in one hyper-opinionated guy was an error, now well resolved.
What got me, however, was that was the time where their trade-in program was really kicking in. I think I got $800 for my touchbar mac which made the jump to an M1 Pro 14 a little less painful. Now you don't seem to so much pay for hardware as lease the Apple experience, so long as the hardware is still good.
Blows my mind how it doesn't even have a fan and is still rarely even anything above body temperature. My 2015 MBP was still going strong for work when I bailed on it late last year but the transition purely on the heat/sound emitted has been colossal.
He told me his fans were going crazy and his entire desk was hot after that. Apple silicon is just a game changer.
I was so happy when I finally got an M1 MBP for work because as you say Docker is so much faster on it. I feel like I don't wait for anything anymore. Can't even imagine these new chips.
I’m going to be very happy when it’s time to replace my Intel MBP at work.
It’s rarely loud, but boy it likes to be warm/toasty at the stupidest things.
- use native toolchain to produce artifacts for a different architecture
- use emulation to run different arch toolchain to produce different arch artifacts
First one is fast, second one is slow. In Docker, only the second variant is possible.
Why would you need to emulate x86 to produce x86-shaped bytes?
For example, that is generally the way you cross-compile Flatpaks in a CLI or IDE. In, for example, GNOME Builder you can just select the device you want to build for, like your smartphone, and it uses QEMU to emulate the entire SDK in the target architecture; you can also seamlessly run the Flatpak on your host through QEMU user-mode emulation too.
So when you compile to WASM you're going with route #1. WASM was designed with this in mind.
Then there are some gcc quirks:
- for gcc, the compilation target is defined when gcc itself is compiled, so the only way to cross-compile with gcc that I know of is emulating the target arch and running a gcc built for that arch
However, we're talking about docker containers here, emulation would be the default way and path of least resistance.
Again, I will reiterate: every cross-compilation strategy falls into one of these two buckets. In some cases what I've described in #1 is possible (WASM, Java bytecode, or really (almost) anything that targets a VM); in some cases it isn't, and then you gotta go with #2 (docker, gcc).
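To make route #1 concrete, here's a toy Go example of my own (purely illustrative, assuming Go's toolchain counts as a "native toolchain"): Go emits artifacts for a different arch with nothing but environment variables, whereas route #2 would run the whole foreign-arch toolchain under emulation, e.g. QEMU behind `docker buildx build --platform linux/arm64`.

```go
// Minimal sketch of route #1: a native toolchain emitting artifacts for a
// different architecture, no emulation involved. Build on an x86-64 (or
// Apple Silicon) host with, for example:
//
//	GOOS=linux GOARCH=arm64 go build -o hello-arm64 .
//
// Route #2 would instead run the whole arm64 toolchain under QEMU, e.g.
// inside `docker buildx build --platform linux/arm64`.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS/GOARCH report what the binary was compiled for, so the
	// arm64 artifact prints "linux/arm64" even though it was built on amd64.
	fmt.Println("compiled for:", runtime.GOOS+"/"+runtime.GOARCH)
}
```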
- clang and LLVM don't have that GCC quirk (LLVM supports fewer platforms, so give gcc some credit)
- it was in Apple's financial interest to make sure their things are easy to cross-compile.
- the linker's job is extremely straightforward in the case of iOS
- everything is provided by either apple or application developer
But this M1 Max MBP is just insane. I'm nearly 50 and it's the best machine I've ever owned; nothing is even close.
Yes I agree. I sometimes compile LLVM just to check whether it all still works. (And of course to have the latest LLVM from main ready in case I need it. Obviously.)
The battery life improvements are great. Apple really did a terrible job with the thermal management on their last few Intel laptops. My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.
The thermal solution on those Intel MacBooks was really bad.
Apple seems to have taken that to heart when they designed the cases for the Apple Silicon MBPs and those have excellent cooling (and more ports).
You have to really, REALLY put in effort to make it operate at rated power. My M2 MBA idles at around 5 watts, my work 2019 16-inch i9 is around 30 watts in idle.
Recently my friend bought a laptop with an Intel Ultra 9 185H. Its fans roared even when opening Word. That was extraordinary, and if it were me making the purchase I would have sent it back straight away.
My friend did fiddle a lot with settings and had to update the BIOS, and eventually the fan situation was somewhat contained, but man, I am never going to buy an Intel / AMD laptop. You don't know how annoying fan noise is until you get a laptop that is fast and doesn't make any noise. With Intel it's like having a drill pointed at your head that can go off at any moment, and let's not mention phantom fan noise, where it gets so imprinted in your head that your brain makes you think the fans are on even when they are not.
Apple has achieved something extraordinary. I don't like MacOS, but I am getting used to it. I hope one day this Asahi effort will let us replace it.
The thing then was it was just Apple catching up with Windows computers, which had had a considerable performance lead for a while. It didn't really seem magical to just see it finally matched. (Yes, Intel Macs got better than Windows computers, but that was later. At launch it was just matching.)
It's very different this time because you can't match the performance/battery trade-off in any way.
Apple adopted Intel chips only after Intel replaced the Pentium 4 with the much cooler running Core Solo and Core Duo chips, which were more suitable for laptops.
Apple dropped Intel for ARM for the exact same reason. The Intel chips ran too hot for laptops, and the promised improvements never shipped.
The Apple ecosystem was most popular in the publishing industry at the time, and most publishing software used floating point math on tower computers with huge cooling systems.
Since IBM originally designed the POWER architecture for scientific computing, it makes sense that floating point performance would be what they optimized for.
With how fast and impressive the improvements are coming with the M-series processors, it often feels like we're back in the early 90s. I thought the M1 Macbook Air would be the epitome of Apple's processor renaissance, but it sure feels like that was only the beginning. When we look historically at these machines in 20 years, we'll think of a specific machine as the best early Apple Silicon Mac. I don't think that machine is even out yet.
I’m only compelled to upgrade for more ram, but I only feel the pressure of 8gb in rare cases. (I do wish I could swap the ram)
I'll likely upgrade to the M4 Air when it comes out. The M4 MacBook Pro is tempting, but I value portability and they're just so chunky and heavy compared to the Air.
It’s not a server so it’s not a crime to not always be using all of it and it’s not upgradable so it needs to be right the first time. I should have got 32GB to just be sure.
Thankfully, Apple recently made 16GB the base RAM in all Macs (including the M2/M3 MacBook Airs) anyway. 8GB was becoming a bad joke and it could add 40% to the price of some models to upgrade it!
The M1 Pro was a revelation.
That would cause it to throttle even when idle! But even on battery or using the right-hand ports, under continuous load (edit-build-test cycles) it would quickly throttle.
I was going to say why not compare it to something older! 100000x faster than a pc-xt!
I'm typing this from my mid-2012 retina MacBook Pro. I'm on Mojave and I'm well out of support for the operating system patches. But the hardware keeps running like a champ.
That’s not accurate.
Just yesterday, my 2017 Retina 4k iMac got a security update to macOS Ventura 13.7.1 and Safari even though it’s listed as “vintage.”
Now that Apple makes their own processors and GPUs, there’s really no reason in the foreseeable future that Apple would need to stop supporting any Mac with an M-series chip.
The first M1 Macs shipped in November 2020—four years ago but they can run the latest macOS Sequoia with Apple Intelligence.
Unless Apple makes some major changes to the Mac’s architecture, I don’t expect Apple to stop supporting any M series Mac anytime soon.
Since Snapdragon X laptops caught up to Apple on battery life I might as well buy one of those when I'll need to change. I don't need the fastest mobile CPU for watching movies and browsing the internet. But I like to have a decent amount of memory to keep a hundred tabs open.
naa... Amiga had the A2500 around the same time, the Mac IIx wasn't better with regards to specs in most ways. And at about $4500 more expensive (Amiga 2500 was around $3300, Mac IIx was $7769), it was vastly overpriced as is typical for Apple products.
cough
like saying, "Back in the 70s with Paul McCartney's first band, Wings (...)"
kids? get off my lawn
I've got one and it's really not that impressive. I use it as a "desktop" though and not as a laptop (as in: it's on my desk hooked to a monitor, never on my laps).
I'm probably gonna replace it with a Mini with that M4 chip anyway but...
My AMD 7700X running Linux is simply a much better machine/OS than that MacBook M1 Air. I don't know if it's the RAM on the 7700X or the WD-SN850X SSD or Linux but everything is simply quicker, snappier, faster on the 7700X than on the M1.
I hope the M4 Mini doesn't disappoint me as much as the M1 Air.
I love my M1 Studio. It's the Mac I always wanted: a desktop Mac with no integrated peripherals and a ton of ports, although I still use a high end hub to plug in… a lot more. Two big external SSDs, my input peripherals (I'm a wired mouse and keyboard kind of guy), then a bunch of audio and USB MIDI devices.
It’s even a surprisingly capable gaming machine for what it is. Crossover is pretty darn good these days, and there are ARM native Factorio and World of Warcraft ports that run super well.
Don’t expect to play dark souls on it, but for indies and the like it’s fine.
(Dark Souls is my favourite game/series…how did you know)
The biggest annoyance I've hit actually is that game controller support is pretty bad. Don't expect generic USB HID game controllers to work; support for that isn't baked into macOS the way it is on Windows (via DirectInput, etc).
The happy path is pretty much specifically a Bluetooth Xbox controller.
I’d like a few VMs for a media server and the associated apps. Pihole too ideally, but I keep that separate as that VM going bad is never good.
And a pg server. And a few web site servers. And something running in OrbStack.
It's the 8GB model and I have around 2GB free most of the time
I like it in every way except price. It just works, comes back online after a power outage, etc. I don't recall any unscheduled disconnects.
--
Additional thoughts: I think there are complaints about the fan being loud so I swapped it out when I first got it. I also have it in my basement so I don't hear anything anyway -- HDDs are loud, especially the gold ones
In fact, the form factor is why I'm leaning toward taking a pass - I don't want a Mac Mini I would have to replace every 12 months.
* or rather, Apple doesn't target low enough temperatures to keep machines healthy beyond warranty
Another could be realtime video modification. People like to stream and facetime. They might like it even more if they could change their appearance more than they already can using realtime ML based image processing. We already have some of that in the various video conferencing / facetime apps but it's possible it could jump up in usage and needed compute power with the right application.
And if you regularly use local generative AI models the Pro model is the more reasonable choice. At that point you can forget battery life either way.
You only notice throttling on the MacBook Air when doing things like video renders that use max power for an extended period of time.
That Rossmann guy, the internet-famous repairman, built his YouTube channel on videos about Apple's inadequate thermal management. They're probably still archived on his channel.
Hell, I haven't owned a Mac post the year 2000 that didn't regularly hit temperatures above 90 celsius.
The Gods didn't deliver specs to Apple for Intel machines locking the company to placement/grades/design/brands/sizes of chassis, fans, logic board, paste etc. Apple, in the Intel years, just prioritized small form factor, at the expense of longevity.
And Apple's priorities are likely still the same.
My concern is that, given cooler-running chips, Apple will decrease form factor until even the cooler-running chips overheat. The question, in my mind, is only whether the team at Apple who design chips can improve them to a point where the chips run so coolly that the rest of Apple can't screw it up (ie: with inadequate thermal design).
If that has happened, then... fantastic, that's good for consumers.
100%. Apple Silicon is that for computers. Very rarely do my fans whizz up. It's noticeable when someone is using an x64 machine and you're working with them, because you will hear their computer's fans.
The work Apple has done to create a computer with good thermals is outrageous. Minimising distances for charges to be induced over.
I run Linux on my box. It’s great for what it does but these laptops are just the slickest computers I have ever used.
Never gets hot. Fans only come on during heavy compilation tasks or graphic intensive workloads.
Some of the choices Apple made after SJ's death left such an unpleasant taste in my mouth that I now have knee-jerk reactions to certain Apple announcements. One of those is that I experience nausea when Apple shrinks the form factor of a product. Hopefully that has clouded my judgement here, and in fact these Mac Minis have sufficient airflow to survive several years.
Like even with Intel chips that actually died early en masse (13th and 14th gen), the issue wasn't temperature.
Insufficient airflow from blowers... not good.
110 celsius heat... not good for lead-free solder... not good for computer.
This whole thread is starting to feel surreal to me. Pretty soon everyone will have me believing I dreamt up Apple's reputation for bad thermal management.
There's nothing in a comment thread so cringeworthy and boring as a person trumpeting their own expertise, so I'll refrain, and leave off here.
I've had an M1 Mac Mini inside a hot dresser drawer with a TV on top since 2020.
It doesn't do much other than act as a media server. But it's jammed pretty tight in there with an eero wifi router, an OTA ATSC DVR, a box that records HDMI, a 4K AppleTV, a couple of external drives, and a full power strip. That's why it's hot.
So far, no problems. Except for once when I moved, it's been completely hands-off. Software updates are done over VNC.
It only gets powered off when there's a power outage or when I do an update.
How/where are they getting 128GB of RAM? I don't see that as an option on any of the pre-order pages.
Still pretty impressive, I get 1217/10097 with dual xeon gold 6136 that doubles as a space heater in the winter.
https://browser.geekbench.com/v6/cpu/1962935 says it was running at 13.54 GHz. https://browser.geekbench.com/v6/cpu/4913899 looks... questionable.
The linked Geekbench result from August running at 7614 MT/s clearly wasn't using CUDIMMs; it was a highly-overclocked system running the memory almost 20% faster than the typical overclocked memory speeds available from reasonably-priced modules.
So it doesn't invalidate Apple's chip being the fastest in single core for a production machine.
E.g. this is one of the top single core benchmark result for any Intel CPU https://browser.geekbench.com/v6/cpu/5568973 and it claims the maximum frequency was stock as well (actually 300 MHz less than thermal velocity boost limits if you count those).
And that a desktop part is going to outperform a laptop part?
Until the next Max that goes beyond Ultra!
That all said, I only have an M1 and it's still impressive to me.
I think it's still quite far away from the naming conventions of PC territory.
Now I got curious on what naming scheme could be clearer for Apple's performance tiers.
That said it’s far better than any PC scheme. It used to be easy enough when everything was megahertz. But I left the Wintel world around 2006 or so and stopped paying attention.
I’ve been watching performance reviews of some video game things recently and to my ears it’s just total gobbledygook now. The 13900KS, 14900K, 7900X, 7950X3D, all sorts of random letters and numbers. I know there’s a method to the madness but if you don’t know it it’s a mess. At least AMD puts a generation in their names. Ryzen 9 is newer than Ryzen 7.
Intel has been using i3, i5, i7, and i9 forever. But the problem is you can’t tell what generation they are just from that. Making them meaningless without knowing a bunch more.
At least as far as I know they didn't renumber everything. I remember when graphics cards were easy because a higher number meant better, until the numbers got too big, so they released the best new ones with a lower number for a while.
At least I find Apple’s name tractable both between generations and within a generation.
This year's Zen 5 lineup consists of R9 9950x (has 16 cores), R9 9900x (12c), R7 9800x3d (8c with 3D Cache), R7 9700x (8c) and R5 9600x (6c).
Thanks.
The Max is their best CPU. The Ultra is two of their best CPUs glued together.
The Ultra isn’t a better CPU, it’s just more.
Mac Mini with M4 Pro is the fastest Mac ever benchmarked
There’s a reason consumer CPUs aren’t slower with 1024 cores instead.
Amdahl's law is still in control. For a great many users, single-threaded performance is extremely important.
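A quick back-of-the-envelope illustration of that Amdahl's law point, in Go, assuming (purely for the sake of the example) a workload that is 80% parallelizable:

```go
// Amdahl's law: with parallel fraction p on n cores, speedup = 1/((1-p) + p/n).
// With p = 0.8 (an assumption), piling on cores quickly stops helping, which
// is why per-core speed still dominates interactive use.
package main

import "fmt"

// speedup returns the Amdahl's-law speedup for parallel fraction p on n cores.
func speedup(p float64, n int) float64 {
	return 1 / ((1 - p) + p/float64(n))
}

func main() {
	p := 0.8 // assumed parallel fraction of the workload
	for _, n := range []int{1, 4, 16, 1024} {
		fmt.Printf("%4d cores: %.2fx\n", n, speedup(p, n))
	}
	// Even with 1024 cores the speedup is capped near 1/(1-p) = 5x.
}
```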
Yeah but then you'd have to use Windows. I'd rather just play whatever games can be emulated and take the performance penalty.
It helps that most AAAs put me to sleep...
Why? Linux gaming has been great since Wine.
Even better now with Valve investment.
Surely leagues better than gaming with macOS.
As for Linux, I abandoned it as the main GUI OS for Macs about 10 years ago. I have linux and windows boxes but the only ones with displays/keyboards are the macs and it will stay that way.
4 if you count steamdeck.
I do the real gaming, not some subpar emulated crap or anemic macOS steam library.
Compiling Linux:
AMD 6800HS, ~4mins.
Apple M1 Pro, Linux VM, ~10 mins.
The main thing that caused differential single-core CPU performance was just throttling under load for the devices that didn't have active cooling, such as the MacBook Air and iPad Pros.
Based on this reasoning, the M4, M4 Pro and M4 Max in active cooled devices, the MacBook Pro and Mac Mini, should have the same single-core performance ratings, no?
New chips are slightly faster than previous ones. I am not incredulous about this. Were it a 2x or 3x or 4x improvement or something, sure. But it ain't - it's incremental. I note how even in Apple's marketing they compare it to chips from 3 or 4 generations ago (e.g. comparing increases against i7 performance from years ago, not against the M3 from a year or so ago, because then it is "only" 12% - still good, but not "simply incredible" in my eyes).
They want the people who are still clinging to Intel Macs to finally convert. And as for M1 comparisons, people are not changing laptops every year, and that is the cohort of M users that is most likely to upgrade. It's smart to do what Apple did.
"New Car 2025 has a simply incredible top speed 30x greater than previous forms of transport!* (* - previous form of transport slow walk at 4mph)"
It's marketing bullshit really, let's be honest. I don't accept that their highly-polished entire marketing spiel and song and dance is aimed 100% only at people who have a 3 or 4 generation old Mac already. They're not spending all this time and money and effort just to try and get those people to upgrade. If you believe that, then you are in the distortion field.
Our point: Apple is laser-focused on comparing with laptops that are 4-5 years old. That's usually when Mac users start thinking about upgrading. They're building their marketing for them. It causes issues when directly trying to compare with the last generation.
Your point: Apple shouldn't be glamorous and a good showman when marketing their products because they know the only true marketing is comparing directly with your very last chip. Any other type of marketing is bullshit.
- who is likely to upgrade.
- target advertising at those people.
Seems eminently sensible to me.
Just like the phone comparisons are from more than one year ago, the computer comparisons (which are even more expensive) make more sense to be from more than one year ago. I don't see why you wouldn't target the exact people you're trying to get to upgrade...
That you are distracted by it is not Apple's problem - and most other industry players don't GAF about Apple's self-comparisons either.
We're on a perpetual upgrade treadmill. Even if the latest increment means an uncharacteristically good performance or longevity improvements... I can't bring myself to care.
Apple is just marketing to the biggest buyer group (2 generation upgrades) in their marketing material?
This isn't like iPhones, where people buy them every 1-2 years (because they break or you lose them etc); laptops have a longer shelf life, you usually run them into the ground over 2+ yrs and then begrudgingly upgrade.
The idea of a 6x (or whatever) performance jump is certainly tempting. Exactly as they intend it to be. If I was in charge of replacing it I would be far more likely to buy than if I had an M3.
They're trying to entice likely buyers.
The replacement cycle may just be that long. Or maybe they chose to stick with Intel. Maybe because that's what they were used to, or maybe because they had specific software needs. So they were still buying them after Apple Silicon machines had been released.
Yeah it’s not a big deal for the enthusiast crowd. But for some of their customers it’s absolutely a consideration.
It only has:
- faster memory and up to 192 GB.
- 1 extra Thunderbolt port.
That is not much for such a large price difference:
Mac Mini (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $2500
Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000
But it is obviously a bad time to invest in a Mac Studio.
> Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000
In those configurations, the Studio would have roughly 2x the GPU power of the Mini, with equivalent CPU power. It also has twice as many Thunderbolt ports (albeit TB4 instead of TB5), and can support more monitors.
I wish the Studio received an upgrade, with a new M4 Ultra potentially going over 1TB/s. It also offers better cooling for long computations.
I mean no Infiniband of course, but how bad would a cluster of these guys using Thunderbolt 5 for networking be? 80Gbps is not terrible…
https://browser.geekbench.com/v6/cpu/compare/8593555?baselin...
Though a MacBook Pro 16" with M4 Max (that's what achieved this Geekbench score), with the same amount of memory (64GB) and the same amount of storage (4TB) as my PC, would cost 6079€. That is roughly twice as much as my whole PC build cost, and I'm able to expand storage and upgrade my CPU and GPU in the future (for way less than buying a new Mac in a few years).
Anyway, they made an Ultra version of the M1 and M2 that was even better than the Max versions by having significantly more cores.
If they do that again (Mac Pro?) it will be one hell of a chip.
Everytime I paste something it lags for 1-2 seconds… so infuriating!!
Microsoft definitely didn't buy something they created themselves.
PowerPoint appearing first on Macs is not surprising because historically Macs have been focused on DTP applications, and were much more powerful for that (still true to this day). It took quite a while for Windows to be comparatively capable; I think this is why Microsoft bought PowerPoint, to make up for it. Hilariously, today PowerPoint is considered to be worse than Keynote; some things never change...
Single-tasking OSes are long gone. Single-core performance is irrelevant in the world of multitasking/multithreading/preemptible threads.
Single-core performance is still king for UI latency and CPU-bound tasks.
Most games are combinations of the two, and so some people are going to be CPU limited and some people are going to be GPU limited. For games I play, I'm often CPU limited; I can set the graphics to low at 1280 x 720, or ultra at 3840 x 2160 and get the same FPS. That's CPU limiting.
Why not move at least some of that into the GPU as well? Lots of different branchy code paths for the in-game objects?
GPUs suck at branchy code, but is that the case with this simulation data? I can see why it’d suck for NPC simulation, but not physics.
Paired with my aging but still chugging 2080Ti, the max framerates in games I play did not significantly increase.
However I did get a significant improvement in 99-percentile framerate, and the games feel much smoother. YMMV, but it surprised me a bit.
https://www.cyberpunk.net/en/news/50947/just-announced-cyber...
2. yes, different pieces of software have different bottlenecks under different configurations... what is the point of a comment like this?
They've been in the chip designing business for a while.
basically Jim Keller happened, I think they are still riding on that architecture
What does seem to be constant is that the best CPU designs have been touched by the hands of people who can trace their roots to North Eastern US. Maybe the correlation doesn't exist and the industry is small and incestuous enough that most designs are worked on by people from everywhere, but sometimes it seems like some group at DEC or Multiflow stumbled on the holy grail of CPU design and all took a drink from the cup.
I had no idea the difference was that big. I don't know what a normal Geekbench score is, so I just sort of assumed that the top of the line Intel part would be something like 3700 or 3800. Enough that Apple clearly took a lead, but nothing crazy.
No wonder it’s such a big deal.
But secondly, that would absolutely not indicate that it is the "fastest single-core performer in consumer computing". That would indicate that it is the highest scoring Geekbench 6 CPU in consumer computing.
Whether or not that's actually a good proxy for the former statement is up to taste, but in my opinion it's not. It gives you a rough idea of where the performance stands, but what you really need to be able to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads. Things like the time it takes to compile some software, scores in video game benchmarks, running different kinds of computations, time to render videos in Premiere or scenes in Blender, etc.
In practice though, it's hard to make a good Apples-to-Intels performance comparison, since it will wind up crossing both OS boundaries and CPU architecture boundaries, which adds a lot of variables. At least real world tests will give an idea of what it would be like day-to-day even if it doesn't necessarily reveal truisms about which CPU is the absolute best design.
Of course it's reasonable to use Geekbench numbers to get an idea of where a processor stands, especially relative to similar processors, but making a strong claim like this based off of Geekbench numbers is pretty silly, all things considered.
Still... these results are truly quite excellent. It would suffice to say that if you did take the time to benchmark these processors you would find the M4 processor performs extremely well against other processors, including ones that suck up more juice for sure, but this isn't too surprising overall. Apple is already on the TSMC N3E process, whereas AMD is currently using TSMC N4P and Intel is currently using TSMC N3B on their most cutting edge chips. So on top of any advantages they might have for other reasons (like jamming the RAM onto the CPU die, or simply better processor design) they also have a process node advantage.
Traditionally, Anandtech would have been the first media outlet to publish the single core and multicore integer and floating point SPEC test results for a new architecture, but hopefully some trusted outlet will take up the burden.
For instance, Anandtech's Zen 5 laptop SKU results vs the M3 from the end of July:
> Even Apple's M3 SoC gets edged out here in terms of floating point performance, which, given that Apple is on a newer process node (TSMC N3B), is no small feat. Still, there is a sizable deficit in integer performance versus the M3, so while AMD has narrowed the gap with Apple overall, they haven't closed it with the Ryzen AI 300 series.
https://www.anandtech.com/show/21485/the-amd-ryzen-ai-hx-370...
Zen 5 beat Core Ultra, but given that Zen 5 only edged out the M3 in floating point workloads, I wouldn't be so quick to claim the M4 doesn't outperform Zen 5 single core scores before the test results come out.
The only good comparison is to judge a variety of real world programs compiled for each architecture, and run them.
Over time, RISC and CISC borrowed from each other: https://cs.stanford.edu/people/eroberts/courses/soco/project...
I'm guessing that you don't realize that you are describing SPEC?
It's been around since the days when every workstation vendor had their own bespoke CPU design and it literally takes hours to run the full set of workloads.
From the same Anandtech article linked above:
> SPEC CPU 2017 is a series of standardized tests used to probe the overall performance between different systems, different architectures, different microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. It covers a range of integer and floating point workloads, and can be very optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
More info:
> SPEC is the Standard Performance Evaluation Corporation, a non-profit organization founded in 1988 to establish standardized performance benchmarks that are objective, meaningful, clearly defined, and readily available. SPEC members include hardware and software vendors, universities, and researchers.
SPEC was founded on the realization that "An ounce of honest data is worth a pound of marketing hype".
additionally, the only updates they appear to have made in the last 5+ years involve optimizing the suite for Apple chips.
thus, it leaves out massive parts of modern computing, and the (many) additions to x86-64 that have been introduced since the 00s.
i'd encourage you to look into the advancements that have occurred in SIMD instructions since the olden days, and the way in which various programs, and compilers, are changed to take advantage of them
ARM is nice and all, but the benchmark you've linked appears to be some extremely outdated schlock that is peddled for $1000 a pop from a web page out of the history books. Really. Take a look through what the benchmarks on that page are actually using for tooling.
I'd consider the results valid if they were calculated using an up to date, and maintained, toolset, like that provided by openbenchmarking.org (the owner of which has been producing some excellent ARM64 vs Intel benchmarks on various workloads, particularly recently).
How do you theorize that generic C or C++ code that you compile using GCC has been "optimized for an Apple chip"?
Frankly, it's impossible to take any of this comment seriously.
These days it appears more that the hardware is fantastic, especially in the laptop form factor and thermal envelope, and perhaps the downside is a languishing macOS.
The only places I can see there could be features missing are:
- IT management type stuff where it looks like Apple is happy just delegating to Microsoft (e.g. my workstation is managed with Intune and runs Microsoft Defender pushed by IT),
- CUDA support if you’re into AI on NVIDIA
- Gaming I hear, but I don’t have time for that anyway :)
Of course this is biased, because I also generally just _like_ the look and feel of macOS
I like how Apple got roasted on every forum for using real world workloads to compare M series processors to other processors. The moment there's a statistic pointing to "theoretical" numbers, we're back to using real world workload comparisons.
Apple didn't get roasted for presenting real world performance, they got roasted for doing the kinds of things that marketing people do: making vague blanket claims about performance that couldn't actually be reasonably validated. (Intel and AMD routinely get roasted for similar things.)
The M4 core @ 4.5 GHz has a significantly higher ST GB6 performance than Lion Cove @ 5.7 GHz or Zen 5 @ 5.7 GHz (which are almost equal at the same clock frequency, with at most a 2% advantage for Lion Cove).
Having higher GB6 scores should be representative for general-purpose computing, but there are application domains where the performance of the Apple cores has been poor in the past and M4 is unlikely to have changed anything, i.e. the computations with big integer numbers or the array operations done on the CPU cores, not on the AMX/SME accelerators.
Nevertheless, I believe that your point about the existence of higher ST GB6 scores is not weakened by the fact that those CPUs are overclocked.
For a majority of the computer users in the world, the existence of higher ST performance that can be provided by either overclocked Intel/AMD CPUs or by CPUs made by Apple is equally irrelevant, because those users would never choose any of these 2 kinds of CPUs for their work.
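For what it's worth, the per-clock comparison being made above is just score divided by frequency. A tiny Go sketch with made-up scores (the clocks match the figures above; the scores are placeholders, not from any real run):

```go
// Crude "score per GHz" comparison: dividing a single-thread score by the
// peak clock shows how a lower-clocked core can still lead per clock.
// All scores below are hypothetical placeholders.
package main

import "fmt"

type core struct {
	name  string
	score float64 // hypothetical GB6 single-core score
	ghz   float64 // peak clock while running the benchmark
}

func main() {
	cores := []core{
		{"M4 (hypothetical)", 3800, 4.5},
		{"Lion Cove (hypothetical)", 3200, 5.7},
		{"Zen 5 (hypothetical)", 3150, 5.7},
	}
	for _, c := range cores {
		fmt.Printf("%-26s %6.0f pts  %.1f GHz  %4.0f pts/GHz\n",
			c.name, c.score, c.ghz, c.score/c.ghz)
	}
}
```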
Still, I do think some of Apple's inherent advantages make it less surprising that they're able to win on benchmarks. Again, the process node, the RAM directly on the chip. And hell, they are probably also able to get here because of targeting AArch64 with no 32-bit ARM compatibility.
Either way, Apple appears to be a couple years ahead of the competition right now when it comes to efficient processors, just like they were with the M1.
Been extremely happy with Windows and WSL last couple years, so happy to be a node or two behind on AMD laptops.
Otherwise I use a workstation primarily anyway.
semantics rant: on another note, where is the line between "consumer" and "prosumer"/"enthusiast" hardware in terms of pricing? over 3000 usd before taxes seems well in the latter camp in my books.
on the other hand, most enterprise hardware is not optimized for single-core performance. to my knowledge, even the configurations for algo-trading machines are comparable to consumer hardware, if not slower.
And Macs still can't match that with their crappy keyboards, case strength and ergonomics (no sharp edges), spill-protection and non-glossy screens (remember, these are laptops to be used out and about — though I've only used M1/M2 Airs, M1 Max Pro 14 and M2 Pros 13/14, so maybe the newer ones are better).
X1 Carbons have mostly moved to higher performing Intel chips, and thus lost those silent & cool characteristics, but other than the obvious (performance vs comfort), they are still my preferred choice in laptops.