Have to say I do enjoy all the old-school whimsy with the witch costume and whatnot.
In fact, the company is so closed that Rosenzweig is the one we should consult when we encounter aliens.
I tried googling, but trying to find the specific result I'm interested in amongst all the blog-spam garbage related to PowerPoint is beyond me. Even Google's own AI couldn't help. Sad times!
In fairness to you I think a lot of the stuff involving hardware goes over everyone's heads :D
I've seen comments in a number of articles (and I think a few comments in this thread) saying that there are a few features in Vulkan/OpenGL/Direct3D that were standardized ("standardized" in the D3D case?)/required but turned out to be really expensive to implement - hard to implement fast in hardware, anyway - and not necessarily useful in practice. I think geometry shaders may have been one of those cases, but I can't recall for sure.
https://youtu.be/pDsksRBLXPk?t=2895
The whole thing is worth watching to be honest, it's a privilege to watch someone share their deep knowledge and talent in such an engaging and approachable way.
...so far. The presenter is only 23 apparently. Maybe I'm speaking only for myself here, but I think career unhingedness does not go down over time as much as one might hope.
In all seriousness, she does really impressive work, so when she says these 2,000 lines of C++ are inscrutable, that gives one pause. Glad it's working nonetheless.
The original 2600+ lines of C++: https://gitlab.freedesktop.org/asahi/mesa/-/blob/main/src/ga...
The translated code: https://gitlab.freedesktop.org/asahi/mesa/-/blob/main/src/as...
The code is quite low on comments and doesn't really explain the math there. It probably makes sense if you have background in the lofty math side of computer graphics, but that's a slightly different skillset than being able to reverse-engineer and bring up exotic hardware.
A lot of the size is just because the code deals with a lot of 3- and 4-dimensional stuff, and some things that are a bit more verbose in code translate to something short in assembly.
Yes, well, from the article:
> That works, but "tessellator.cl is the most unhinged file of my career"; doing things that way was also the most unhinged thing she has done in her career "and I'm standing up here in a witch hat for the fifth year in a row". The character debuted in the exact same room in 2019 when she was 17 years old, she recalled.
17. That's impressive.
> Where is it appropriate to post a subscriber link?
> Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.
> The following subscription-only content has been made available to you by an LWN subscriber.
I might be wrong but I read that as there being funding to make the previously paywalled content available, probably on an article-specific basis. Does anyone know?
> What are subscriber links
> A subscriber link is a mechanism by which LWN subscribers may grant free access to specific LWN articles to others. It takes the form of a special link which bypasses the subscription gate for that article.
The LWN paywall is unique in that all the content becomes freely available after a week. The subscriber links are there to encourage you to subscribe if you are in a position to do so.
FWIW an LWN subscription is pretty affordable and supports some of the best in-depth technical reporting available about Linux and Linux-related topics.
(I am not affiliated with LWN, just a happy subscriber - I also credit some of my career success to the knowledge I've gained by reading their articles).
So n=1 it’s an effective advertising tactic even though I can read the specific article for free.
"I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Montreal for XDC."
Users here seem to not care about those "ethics"
It's a pretty absurd expectation.
In order to subscribe people need to know that LWN exists.
But mesh shaders are fairly new; it will take a few years for the hardware and software to adapt.
AMD GPUs have them starting with RDNA 2.
It is also true, however, that advances in APIs and hardware designs allowed some parts that were troublesome at the time of geometry shaders to not be so troublesome anymore.
It's basically all emulated. One of the reasons GPU manufacturers are unwilling to open source their drivers is because a lot of their secret sauce actually happens in software in the drivers on top of the massively parallel CUDA-like compute architecture.
EDIT: to be precise, yes, of course every chip is a massively parallel array of compute units, but CUDA has absolutely nothing to do with it, and no, not every company buries the functionality in the driver.
I have signed NDAs and don't feel comfortable going into any detail, other than saying that there is a TON going on inside GPUs that is not "basically all emulated".
I don't know why you think there's anything resembling "good stories" (I don't even know what would constitute a good story - swashbuckling adventures?). It's just grimy-ass runtime/driver/firmware code interfacing with hardware features/flaws.
95% difficult, stressful and surprisingly tedious work
4% panic, despair, insecurities
1% friendly banter with very smart people
If I had to do it all over again, I wouldn't, even though it allowed me to retire early.
But, in summary, it was grinding, grinding, and more grinding, with large dollops of impostor syndrome on top of it.
Met lots of nice smart people, but none were as impressive as the folks at NVidia. I was not worthy of being there.
Calling it "all emulated" is very very far from the truth.
You can independently verify this by digging into open source graphics drivers.
If anything, a working, fully baked desktop Linux on ARM Mac hardware would drive sales, and Apple would still get the profits. Besides, their services are mostly still available too: Tux-loving folks can subscribe to Apple TV, Apple Music, etc.
Do they? I can't remember ever seeing any mention of Geometry Shader performance in a GPU review I've read/watched. The one thing I've ever heard about it was about how bad they were.
> Since this was going to be the first Linux Rust GPU kernel driver, I had a lot of work ahead! Not only did I have to write the driver itself, but I also had to write the Rust abstractions for the Linux DRM graphics subsystem. While Rust can directly call into C functions, doing that doesn’t have any of Rust’s safety guarantees. So in order to use C code safely from Rust, first you have to write wrappers that give you a safe Rust-like API. I ended up writing almost 1500 lines of code just for the abstractions, and coming up with a good and safe design took a lot of thinking and rewriting!
Also https://github.com/AsahiLinux/linux/blob/de1c5a8be/drivers/g... where "drm" is https://en.wikipedia.org/wiki/Direct_Rendering_Manager
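For readers unfamiliar with the pattern the quote describes, here is a minimal sketch of what such a "safe wrapper over C" looks like in general. The C function names and the GpuObject type are made up purely for illustration; the real DRM abstractions are far more involved than this.

    use core::ffi::{c_int, c_void};

    // Imagined C API, as it would appear to Rust through bindgen-style
    // declarations; not the actual DRM interface.
    extern "C" {
        fn gpu_object_create(size: usize) -> *mut c_void;
        fn gpu_object_destroy(obj: *mut c_void) -> c_int;
    }

    // Safe wrapper: the raw pointer never escapes, and Drop guarantees the
    // destroy call happens exactly once, so callers never write `unsafe`.
    pub struct GpuObject {
        ptr: *mut c_void,
    }

    impl GpuObject {
        pub fn new(size: usize) -> Option<Self> {
            // SAFETY: the (hypothetical) C API returns a valid pointer or NULL.
            let ptr = unsafe { gpu_object_create(size) };
            if ptr.is_null() { None } else { Some(Self { ptr }) }
        }
    }

    impl Drop for GpuObject {
        fn drop(&mut self) {
            // SAFETY: `ptr` came from gpu_object_create and has not been freed.
            unsafe { gpu_object_destroy(self.ptr); }
        }
    }

Multiply that by every object type, lock, and callback the C side exposes and you can see where 1,500 lines of abstractions come from.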
That's incredibly arrogant. The whole industry is adopting ray tracing, and it is a very desired feature people are upgrading video cards to get working on games they play.
Ray tracing is, by comparison, the holy grail of all realtime lighting effects. Global illumination makes the current raster lighting techniques look primitive by comparison. It is not an exaggeration to say that realtime graphics research largely revolves around using hacks to imitate a fraction of ray tracing's power. They are piling on pipeline-after-pipeline for ambient occlusion, realtime reflections and shadowing, bloom and glare as well as god rays and screenspace/volumetric effects. These are all things you don't have to hack together when your scene is already path tracing the environment with the physical properties accounted for. Instead of stacking hacks, you have a coherent pipeline that can be denoised, antialiased, upscaled and post-processed in one pass. No shadowmaps, no baked lighting, all realtime.
There is a reason why even Apple quit dragging their feet here - the modern real-time raster stack is the gimmick; ray tracing is the production-quality alternative.
Path tracing is the gold standard for computer graphics. It is a physically based rendering model that is based on how lighting actually works. There are degrees of path tracing of varying quality, but there is nothing else that is better from a visual quality and accuracy standpoint.
Your modern AAA title does a massive amount of impressive hacks to get rasterization into the uncanny valley, as rasterization has nothing to do with how photons work. That can all be thrown out and replaced with “model photons interacting with the scene” if the path tracing hardware was powerful enough. It’d be simpler and perfectly accurate. The end result would not live in the uncanny valley, but would be indistinguishable from reality.
Assuming the hardware was fast enough. But we’ll get there.
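To make "model photons interacting with the scene" concrete, here is a toy sketch of the core path tracing recursion: follow a ray, add what the surface emits, and recurse along one randomly sampled bounce to estimate the reflected light. The Scene/Hit types and the bounce sampling are hypothetical placeholders, not any real renderer's API.

    struct Hit {
        emitted: [f32; 3],    // light the surface emits on its own
        albedo: [f32; 3],     // fraction of incoming light it reflects
        point: [f32; 3],      // where the ray hit
        bounce_dir: [f32; 3], // randomly sampled outgoing direction
    }

    trait Scene {
        fn intersect(&self, origin: [f32; 3], dir: [f32; 3]) -> Option<Hit>;
    }

    fn radiance(scene: &impl Scene, origin: [f32; 3], dir: [f32; 3], depth: u32) -> [f32; 3] {
        if depth == 0 {
            return [0.0; 3]; // cut the recursion off after a fixed bounce count
        }
        let Some(hit) = scene.intersect(origin, dir) else {
            return [0.0; 3]; // ray escaped the scene
        };
        // One-sample Monte Carlo estimate of the light arriving via the bounce.
        let incoming = radiance(scene, hit.point, hit.bounce_dir, depth - 1);
        [
            hit.emitted[0] + hit.albedo[0] * incoming[0],
            hit.emitted[1] + hit.albedo[1] * incoming[1],
            hit.emitted[2] + hit.albedo[2] * incoming[2],
        ]
    }

Shadows, reflections and GI all fall out of averaging many samples of this one recursion instead of being separate passes, which is the "coherent pipeline" point above.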
You can scale these same principles to even less photorealistic games like the Borderlands series or a Grand Theft Auto game. Ray tracing is less about photorealism (although that is a potent side effect) and more about creating dynamic lighting conditions for an interactive scene. By simulating the behavior of light, you get realistic lighting; photoreal or not, a lot of games rely on SSR and shadow maps that can be replaced with realtime RT.
As long as traditional lighting is enough (and it has been for the last 2 decades or so), RT remains a gimmick.
In any case, RT isn't just about getting pretty graphics. It massively lowers the artists' workload since there's no need to painstakingly go through each in-game area and add fake lights to make everything look good.
No gaming studio will pass up on the opportunity to get better graphics and better productivity. It really is just a matter of waiting a few years for the hardware to get good enough.
Yes, because "rendering light the exact same way reality itself does it" was never the assignment, outside of Nvidia desperately trying to find excuses to sell more GPUs.
Maybe some games would benefit from it in some cases... but you have to weigh that marginal improvement against the increased costs, both economic and ecological.
> It massively lowers the artists' workload since there's no need to painstakingly go through each in-game area and add fake lights to make everything look good.
This is a laughable claim: as long as you're going for anything more than fixed ambient lighting, you're going to need to massage and QA the lighting to make sure it works for what you're trying to show.
---
In short, yes. Ray tracing as a headline feature is a gimmick, made to appeal to the same clueless HNers who think AI will eliminate our need to understand anything or express ourselves.
I guess I can't blame HN for courting pedestrian takes, but man, you're going to be disappointed by Apple's roadmap going forward if this is your opinion. And I say this, defending Apple, as someone who treats Tim Cook and Steve Jobs as the living and buried devil, respectively. Having hardware-accelerated ray tracing is no more of a "gamer" feature than a high-impedance headphone jack is an "audiophile" feature. It is intelligent design motivated by a desire to accommodate a user's potential use case. Removing either would be a net negative at this point.
Why waste computing power on 'physically correct' algorithms, when the 'cheap hacks' can do so much more in less time, while producing nearly the same result?
Accelerating ray tracing in-hardware is a literal no-brainer unless you deliberately want to ostracize game developers and animators on Mac. I understand the reactionary "But I don't care about dynamic shadows!" opinion, but there's practically no opportunity cost here. If you want to use traditional lighting techniques, you can still render it on an RT-enabled GPU. You just also have the option of not wanting to pull out your fingernails when rendering a preview in Blender.
The other question is how much of the raytracing features in 3D APIs need to be implemented in fixed-function hardware units, or whether this is just another area where the pendulum will swing back and forth between hardware and software (running on the GPU of course).
But then maybe in 10 years we'll have 'signed distance field units' in GPUs and nobody talks about raytracing anymore ;)
The architecture will probably improve on both Nvidia and Apple's side going forward, but in theory both GPUs should be easier to support for longer since they're not focused on accelerating obsolete junk like MSAA or HBAO. It enables a better convergence of features, it makes porting to-and-from Mac easier, and with any luck Apple might even quit being so sheepish about working with Khronos since they're not actively neglecting the Vulkan featureset anymore.
Personally I think it’s useful for a few things, but it’s not the giant game changer I think they want you to think it is.
Raytraced reflections are a very nice improvement. Using it for global illumination and shadows is also a very good improvement.
But it’s not exactly what the move to multi texturing was, or the first GPUs. Or shaders.
So I think calling it "a bit of a gimmick" is accurate for many of the games it shipped in, even if not all of them.
Replacing all that effort with raytracing and having one unified lighting system would be a _major_ time saver, and allows much more dynamic lighting than was previously possible. So yeah some current games don't look much better with RT, but the gameplay and art direction was designed without raytracing in mind in the first place, and had a _lot_ of work put into it to get those results.
Sure fully pathtraced graphics might not be 100% usable currently, but the fact that they're even 70% usable is amazing! And with another 3-5 years of algorithm development and hardware speedups, and developers and artists getting familiar with raytracing, we might start seeing games require raytracing.
Games typically take 4+ years to develop, so anything you're seeing coming out now was probably started when the best GPU you could buy for raytracing was an RTX 2080 Ti.
Aside from Cyberpunk 2077 and a handful of ancient games with NVIDIA-sponsored remakes, what even offers fully path traced lighting as an option? The way it went for CP2077 makes your "70% usable" claim seem like quite an exaggeration: performance is only good if you have a current-generation GPU that cost at least $1k, the path tracing option didn't get added until years after the game originally shipped, and they had to fix a bunch of glitches resulting from the game world being built without path tracing in mind. We're clearly still years away from path tracing being broadly available among AAA games, let alone playable on any large portion of gaming PCs.
For the foreseeable future, games will still need to look good without fully path traced lighting.
And in 7-10 years when the software stack is matured, we'll be thanking ourselves for doing this in hardware the right way. I don't understand why planning for the future is considered so wasteful - this is an architecture Apple can re-use for future hardware and scale to larger GPUs. Maybe it doesn't make sense for Macs today, but in 5 years that may no longer be the case. Now people don't have to throw away a perfectly good computer made in these twilight years of Moore's law.
For non-games applications like Blender or Cinema4D, having hardware-accelerated ray tracing and denoising is already a game-changer. Instead of switching between preview and render layers, you can interact with a production-quality render in real time. Materials are properly emissive and transmissive, PBR and normal maps composite naturally instead of needing different settings, and you can count the time it takes before getting an acceptable frame in milliseconds, not minutes.
I don't often give Apple the benefit of the doubt, but hardware-accelerated ray tracing is a no-brainer here. If they aren't going to abandon Metal, and they intend to maintain their minuscule foothold in PC gaming, they have to lay the groundwork for future titles to get developed on. They have the hardware investment, they have the capital to invest in their software, and their competitors like Khronos (apparently) and Microsoft both had ray tracing APIs for years before Apple finally released theirs.
That's why I said games aren't currently designed with only pathtracing in mind, but in 3-5 years, with faster hardware and better algorithms, we'll probably start to see it be more widespread. That's typically how graphics features develop: something that's only for high-end GPUs eventually becomes accessible to everyone. SSAO used to be considered extremely demanding, and now it's accessible to even the weakest phone GPU with good enough quality.
Again the fact that it's feasible at all, even if it requires a $1000 GPU, is amazing! 5 years ago real time path tracing would've been seen as impossible.
> The way it went for CP2077 makes your "70% usable" claim seem like quite an exaggeration
Based on the raw frame timing numbers and temporal stability, I don't think it is. RT GI is currently usually around ~4ms, which is at the upper edge of usable. However the temporal stability is usually the bigger issue - at current ray counts, with current algorithms, either noise or slow response times is an inevitable tradeoff. Hence, 70% usable. But with another few years of improvements, we'll probably get to the point where we can get it down to ~2.5ms with the current stability, or 4ms and much more stable. Which would be perfectly usable.
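For context on why ~4 ms sits at "the upper edge of usable" - assuming the usual 60 fps target, which the comment doesn't state explicitly - the frame-budget arithmetic is roughly:

    t_{\text{frame}} = \frac{1000\ \text{ms}}{60} \approx 16.7\ \text{ms}, \qquad
    \frac{4\ \text{ms}}{16.7\ \text{ms}} \approx 24\%, \qquad
    \frac{2.5\ \text{ms}}{16.7\ \text{ms}} \approx 15\%

i.e. at 4 ms, GI alone eats roughly a quarter of the frame before any geometry, shading, or post-processing.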
Maybe you should be saying 70% feasible rather than 70% usable. And you seem to be very optimistic about what kind of improvements we can expect for affordable, low-power GPU hardware over a mere 3-5 years. I don't think algorithmic improvements to denoisers and upscalers can get us much further unless we're using very wrong image quality metrics to give their blurriness a passing grade. Two rays per pixel is simply never going to suffice.
Right now, an RTX 4090 ($1700+) runs at less than 50 fps at 2560x1440, unless you lie to yourself about resolution using an upscaler. So the best consumer GPU at the moment is about 70-80% of what's necessary to use path tracing and hit resolution and refresh rate targets typical of high-end gaming in 2012.
Having better-than-4090 performance trickle down to a more mainstream price point of $300-400 is going to take at least two more generations of GPU hardware improvements even with the most optimistic expectations for Moore's Law, and that's the minimum necessary to do path tracing well at a modest resolution on a game that will be approaching a decade old by then. It'll take another hardware generation for that level of performance to fit in the price and power budgets of consoles and laptops.
So I guess it's just a "gimmick" in that relatively few games properly take advantage of this currently, rather than the effect not being good enough.
Unfortunately most people’s experience with raytracing is turning it on for a game that was not designed for it, but it was added through a patch, which results in worse lighting. Why? Because the rasterized image includes baked-in global illumination using more light sources than whatever was hastily put together for the raytracing patch.
WoW Burning Crusade launched in *2007* originally. The "Classic" re-release of the game uses the modern engine but with the original game art assets and content.
Does it do anything in the 'modern' WoW game? Probably! In Classic though all it did was tank my framerate.
Since then I also played the unimaginable disaster that was Cyberpunk 2077. For as "pretty" as I suppose the game looked, I can't exactly say the ray tracing improved anything.
If you want to see what raytracing can do, I'd only look at very, very recent UE5 titles that are designed for raytracing from the ground-up.
The other parts of ray tracing, like shading and so on, are usually just done on the general compute units.
The problem with doing so, though, and with GPU physics in general is that it's too high latency to incorporate effectively into a typical game loop. It'll work fine for things that don't impact the world, like particle simulations or hair/cloth physics, but for anything interactive the latency cost tends to kill it. That and also the GPU is usually the visual bottleneck anyway so having it spend power on stuff the half-idle CPU could do adequately isn't a good use of resources.
The faster your fps, the more flickery this ends up looking as you're constantly pulling back the camera or player or whatever a frame after it collided. The lower the fps, the more sluggish and unresponsive it feels.
Alternatively, you need to pipeline your entire world state, and now you've got some pretty bad input latency.
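A toy illustration of that trade-off (made-up names, no real GPU API - the "GPU work" here is just a stand-in function): results kicked off in frame N are only consumed in frame N+1.

    fn gpu_simulate(frame: u64) -> String {
        // Stand-in for dispatching physics to the GPU and reading results back later.
        format!("collision results for frame {frame}")
    }

    fn main() {
        // Result produced last frame but not yet consumed by gameplay code.
        let mut in_flight: Option<(u64, String)> = None;

        for frame in 0..5u64 {
            // 1. Consume whatever the GPU finished *last* frame.
            if let Some((produced_on, result)) = in_flight.take() {
                println!("frame {frame}: reacting to {result} (one frame after {produced_on})");
            }
            // 2. Kick off this frame's GPU work; it is not visible until frame + 1.
            in_flight = Some((frame, gpu_simulate(frame)));
            // 3. Render with last frame's physics state - the one-frame lag described above.
        }
    }

Gameplay either reacts a frame late (the flicker/correction case) or the whole world state gets pipelined and input latency grows.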
It's not "just math", it's data structures and algorithms for tree traversal, with a focus on memory cache hardware friendliness.
The math part is trivial. It's the memory part that's hard.
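A rough sketch of what that traversal looks like and why the node layout matters more than the arithmetic; the node format and field names here are illustrative, not any vendor's actual encoding.

    struct Aabb {
        min: [f32; 3],
        max: [f32; 3],
    }

    struct BvhNode {
        bounds: Aabb,
        // Inner node: index of the first child (children stored adjacently).
        // Leaf: index into a primitive list. Packed to keep nodes small and
        // cache-line friendly - that layout is where the real work goes.
        first_child_or_prim: u32,
        prim_count: u32, // 0 => inner node
    }

    fn hit_aabb(_ray_o: [f32; 3], _ray_d: [f32; 3], _b: &Aabb) -> bool {
        true // slab test omitted; the math is cheap next to the memory traffic
    }

    fn traverse(nodes: &[BvhNode], ray_o: [f32; 3], ray_d: [f32; 3]) -> Vec<u32> {
        let mut hits = Vec::new();
        let mut stack = vec![0u32]; // start at the root node
        while let Some(idx) = stack.pop() {
            let node = &nodes[idx as usize]; // the expensive part: a memory fetch
            if !hit_aabb(ray_o, ray_d, &node.bounds) {
                continue;
            }
            if node.prim_count > 0 {
                // Leaf: record candidate primitives for exact intersection later.
                for p in 0..node.prim_count {
                    hits.push(node.first_child_or_prim + p);
                }
            } else {
                // Inner node: push both children, which sit next to each other.
                stack.push(node.first_child_or_prim);
                stack.push(node.first_child_or_prim + 1);
            }
        }
        hits
    }

Dedicated RT hardware essentially runs this loop (plus BVH building) for you, with node formats tuned to its memory system.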
Ray tracing hardware and acceleration structures are highly specific to ray tracing and not really usable for other kinds of spatial queries. That said, ray tracing has applications outside of computer graphics. Medical imaging for example.
However, it is important to put things in context. Something can be a 'gimmick' on several-year-old integrated mobile hardware that can't run most games at a reasonable FPS even without it, and not a 'gimmick' on cutting-edge space heaters.
Ray tracing is also extremely heavily marketed, and at a time when GPU upgrades have become infrequent for most people due to extremely high prices and relatively weak overall performance gains year over year.
The industry is adopting ray tracing because they need to be able to offer something new when the overall performance gains generation over generation have been slowing for years. They've been leveraging marketing hard to try to generate demand for upgrades among gamers, knowing cryptocurrency won't buoy them forever.
That ray tracing is cited as a reason among gamers who do upgrade their GPUs is unsurprising and not really a strong indicator that ray tracing is a great feature.
At any rate, both the gaming industry and the wider tech industry are known to be extremely faddish at times. So is consumer behavior. Things 'the whole industry is doing' has always been a category that includes many gimmicks.
It's truly stunning that anyone could do what she did, let alone a teenager (yes I know, she's not a teenager anymore, passage of time, etc :D)
oh my.
Sounds like a win-win situation.
[1] https://www.tomshardware.com/video-games/pc-gaming/steam-lik...
You don't have to boot Linux to play PC games on a Mac.
Apple already provides the tools you need to build a Mac native equivalent to Proton.
There are several options built using those tools, both paid (CrossOver) and free/open source (Whisky).
Whisky combines the same open source Wine project that Proton leverages with Apple's Rosetta x86 emulation and Apple's DirectX emulation layer (part of the Game Porting Toolkit) into a single easy-to-use tool.
Proton and Alyssa's solution use Vulkan on Linux as their native graphics API.
Regardless, you have to provide a translation layer so that Windows games written to call DirectX APIs use the native graphics layer of the platform you are running on.
Unless you happen to be emulating a Windows game written to use Vulkan instead of DirectX, Vulkan really doesn't matter at all on the Mac.
If you do want to emulate one of the rare Vulkan based Windows games on a Mac, the MoltenVK translation layer handles that.
I used it and found it confusing at first. Imagine your average gamer: they'd never get past dragging in the app. The Steam Deck (and Proton) have been so successful because you can just power on the Deck and use it. No tinkering or third-party tools to do what you want.
Allegedly NVidia is working on a WoA SoC, now that Qualcomm's contract with MS has ended. If NVidia puts their recent capital gains into a performant ARM chip like Apple (tapeout alone would likely run in the billion-dollar range), we can hopefully expect AMD to respond with something soon after. Once the chipsets are available, it's a matter of getting Linux to work with mixed page-size processes.
I have no idea how possible that is, but if the emulation performance is anything like Rosetta and the M1 proved possible, most users won't even notice.
I know this type of work can be challenging to say the least. My own dabble with Zig and Pinephone hardware drivers reminded me of some of the pain of poorly documented hardware, but what a reward when it works.
My own M1 was only purchased because of this project and Alyssa's efforts with OpenGL+ES. It only ever boots Asahi Linux. Thank-you very much for your efforts.
Second, since it's open source, Apple themselves are probably paying attention; I didn't read the whole thing because it's going over my head, but she discussed missing features in the chip that are being worked around.
The hardware on Thinkpad T-models should last longer than just 5 years in general.
My daily-driver laptop at home is a T420 from 2011 with a Core 2 Duo, SSD and 8GB RAM. Works fine still.
I run Linux + OpenBox, so it is a somewhat lightweight setup to be fair.
I am not sure I would be productive with that. Any Core 2 Duo is 10x slower single core and 20x slower multi-core than a current generation laptop CPU at this point.
Eg: https://browser.geekbench.com/v6/cpu/compare/8588187?baselin...
I think it would mostly be good as an SSH terminal, but doing any real work locally on it seems frankly unfeasible.
So, yes, a lot of this comes down to software and a massive waste of cycles. I remember one bug in Electron/Atom where a blinking cursor caused something like 10% CPU load. They fixed it, but it tells you a lot about how broken the entire software stack was at that time, and it hasn't gotten better since then.
I mean, think about this: I used 1280x1024 on a 20" screen back in the mid '90s on (Unix!) machines that are insanely less powerful than even this X200s. The biggest difference: now you can move windows around visually; back then you moved the outer frame to the new place and then the window got redrawn. And the formatting options in browsers are better, i.e. it is easier to design the layout you want. Plus there is no need for palette changes when switching windows anymore ("true color"). The overall productivity hasn't kept up with the increase in computing power, though. Do you think a machine with 100x the performance will give you 100x the productivity? With some exceptions, the weak link in the chain was, is, and will always be humans, and if there are delays, we are almost always talking about badly "optimized" software (aka bloat). That was an issue back then already and, unfortunately, it didn't get better.
I do development and DevOps on it. Sure there are some intense workloads that I probably couldn’t run, but it works just fine as my daily driver.
I also have a corporate/work laptop from Dell with 32GB RAM, 16 cores @ 4.x GHz etc. - a beast - but it runs Windows (+ antivirus, group policy crap etc.) and is slower in many aspects.
Sure I can compile a single file faster and spin up more pods/containers etc. on the Dell laptop, but I am usually not constrained on my T420.
I generally don’t spend much time waiting for my machine to finish things, compared to the time I spend e.g. writing text/code/whatever.
I'm usually fairly careful with my things, so my Gen 8 HP EliteBook still has all its bits together, but I've never really enjoyed using it. The screen, in particular, has ridiculous viewing angles, to the point that it's impossible not to have a color cast on some region.
It got worse with Gen4/5 which now have an awful hump (reverse notch) like a smartphone.
How long the X220 lasts depends on the build quality but also on the five-year replacement-part support: new batteries and a new palm rest (cracked during a journey). It's not just quality you pay for, it is this support level. And of course more memory. Apple still fails in this regard and barely does anything unless forced by the European Union. Anyway - Apple doesn't officially support Linux, therefore I cannot buy them for work.
This is the part which saddens me: they do good work, and the next MacBook will also not run fully with Linux. This kind of catch-up game by hackers cannot be won - until the vendor decides you're a valuable customer. Therefore, don't buy them on the assumption that you can run Linux. Maybe you can. But these devices are made for macOS only.
But if you want to run Linux on a MacBook? Talk to your politicians! And send „messages" with your money to Apple, like buying ThinkPads, Dell's Developer Edition, Purism, System76 and so on :)
Just curious, how does it feel better? My Framework apparently has an aluminium lid and a magnesium base, and the magnesium feels "smoother" than the slightly more textured aluminium… however, my iPad is apparently aluminium too and is smooth to the touch.
Actually the magnesium bottom feels like some high-quality polymer/plastic. And it is lightweight. Therefore it doesn't feel like metal and doesn't transport heat/cold. Aluminium is a nice material for planes, cars, or bikes, but I avoid skin contact because it is uncomfortable.
I guess, as so often, it depends on how the magnesium is made. Lenovo also uses high-quality plastics for other parts (the keyboard of course, and probably the palm rest).
The X220/230 had that godawful 1366x768 panel. A shame when the 13" Air at the time had a 1440x900 panel, which while it wasn't amazing with the viewing angles and colors, it was light years ahead of the screen in something like a T430.
Lenovo should only ship the good ones! If I buy a ThinkPad I want the good one.
I struggled to get the HiDPI panel for the X13 and ordered it from the replacement-parts channel. Same for the X220: I replaced the TN panel with (drumroll) an IPS panel.
Apple takes ridiculous amounts for memory or disk but you always get the good stuff with the base model. This makes it simple and reliable for customers.
Except keyboards
The ThinkPads always win: convex key caps, better switches and pressure/release point, more key travel, a better layout, and a TrackPoint. In some models you can still replace the keyboard within 30 seconds (T14). With the X13 or a MacBook it is a horror, requiring removal of the mainboard. Not to mention Apple's failure with the Touch Bar, which "feels" like they tried to save money on expensive switches by replacing them with a cheap touchscreen and selling this horrible downgrade as a feature. And the infamous butterfly switches are the reason to avoid any used older MacBook (often defective).
I have a quad-core W520 from 2011; it's still VERY serviceable for modern usage. Not the fastest, but it even has USB 3 and a 1080p screen, which an equivalent-ish MBP from the time would not have had.
I still have an old iPhone 8 that I test with that runs well, I’ve had numerous Android devices die in that timeframe, slow to a crawl, or at best their performance is erratic.
Memory (both system and GPU) is usually the best way to future-proof a computer at buy time, especially as it's not user-replaceable anymore.
You also don't have to get the base model. You can stay under $1000 while increasing to 24 GB of RAM.
I have an iPhone 3G from 2008 that is currently playing music as I work.
After 16 years, it still syncs perfectly with Apple Music on the current version of macOS.
Unfortunately, since it can't handle modern encryption, I can't get it to connect to any wifi access point.
I considered upgrading, but it's hard to care because my M1 is just so good for what I need it for.
When considering longevity, I will agree that Thinkpads are probably the only device that can compete with MacBooks. But there are trade-offs.
MacBooks are going to be lighter, have better battery life, and have better displays. Not to mention macOS, which is my preferred OS.
Thinkpads usually have great Linux support and swappable hardware for those who like to tinker with their devices. They also tend to be more durable, but this adds more weight.
Not going to let Macs have this one: my X1 Carbon is considerably lighter than a MBA.
But generally agreeing. My last X1C lasted 8 years and still works perfectly, I just wanted an upgrade. My new one is even lighter than my old one. I opt for the FHD w/o touch screen and the second best processor to balance battery life and performance. Definitely not getting 18hrs battery life but 8-10 isn't something to laugh at.
Sadly, I don't think we'll ever get a computer that good from apple (or anyone) again.
The Air is the "light" one at 2.7 lbs for the 13" and 3.3 lbs for the 15".
For reference, there are several 13" laptops on the market that are around 2 - 2.5 lbs and 15"+ that are less than 3 lbs
That sounds like an Apple sound bite, and it is wrong, compared to pretty much any MacBook competitor out there...
The ultimate issue was that the batteries degraded incredibly fast. I don't know if that's been fixed, but the ease of replacing (at least one of) the batteries was more than canceled out by the short life compared to a Mac.
Another one managed to sort out their dead TP motherboard while remoting from a small town in SEA.
All covered under warranty, no questions asked.
So what's to be done? Buy their insanely costly "limited" extended warranty for "2 more" years? And let's say we do that, then? I am sure there is an argument for that.
I am typing this from a M1 MacBook Pro and if it dies I am really not sure whether I will even be able to get it repaired without "feeling" fleeced, or might as well move back to the normal laptop + Linux world and know that a laptop repair will never be a "minor bankruptcy" event ;-)
No, "but Apple devices last long" doesn't cut it. So do non-Apple devices, yes they do. And if they need repair you don't at all fret w/ or w/o warranty.
I am not sure how many here on HN will be able to connect with this but that hanging Damocles' sword is not a nice thing to have around when you use a costly Apple device.
Making it easy and cheap/affordable for their devices to be repaired should not be something left optional for OEMs.
In the case of MacBooks, it's the fact that they refuse to provide an official GPU driver for Linux and general poor support for things outside the walled garden. The Asahi stuff is cool and all, but come on, is a 3.4 trillion dollar company really going to just stand there and watch some volunteers struggling to provide support for their undocumented hardware without doing anything substantial to help? That sounds straight up insulting to me, especially for such a premium product.
For iPhones, it's the fact that you are not allowed to run your own code on YOUR OWN DEVICE without paying the Apple troll toll and passing the honestly ridiculous App Store requirements.
And of course, in both cases, they actively sabotage third party repairs of their devices.
I know you admit right after that your opinion is biased, but it's almost ludicrous to assert that all the programmers and engineers using Macs and iPhones by choice must just not be tech literate.
> In the case of MacBooks, it's the fact that they refuse to provide an official GPU driver for Linux
MBPs are so much better than any other laptop that, with a few caveats[1], running Linux in a VM on a top-of-the-line MBP is a much nicer experience than using Linux natively on any other laptop. So while it'd be nice if there were more first-party support for Linux, it's certainly not a deal-breaker for "tech-literate" people. (Not to mention the fact that there are "tech-literate" people who use macOS and not Linux, so it wouldn't matter to them at all).
> general poor support for things outside the walled garden
macOS isn't a walled garden, so I don't know what you mean. You can download any software you want from anywhere you want and run it on your laptop, and Apple doesn't do anything to try to prevent this.
> The Asahi stuff is cool and all, but come on, is a 3.4 trillion dollar company really going to just stand there and watch some volunteers struggling to provide support for their undocumented hardware without doing anything substantial to help? That sounds straight up insulting to me, especially for such a premium product.
Now it's unclear whether your point is "I don't understand why people use Macs because there are objective drawbacks" or "I don't think people should use Macs because Apple does stuff that I find annoying". You're blending the two here but they are meaningfully separate points. I've discussed the practical point already above, but as for the stuff you subjectively find annoying: surely the only real answer is that lots of other people just subjectively don't care as much as you.
> For iPhones, it's the fact that you are not allowed to run your own code on YOUR OWN DEVICE without paying the Apple troll toll and passing the honestly ridiculous App Store requirements.
I don't care about this at all. I've never wanted to run my own code on my own iOS device except when I was working on iOS apps and had an Apple developer account through work. I, like the vast majority of people, use my phone as a browsing/messaging/media consumption device, not as a general-purpose computer.
If Apple tried to prevent me from running my own code on my own MacBook, it would be a deal-breaker, but as I already said above, they don't.
In conclusion I think you've confused "tech-literate person" and "geek who wants to be able to tweak/configure everything". Indeed there is a large overlap between those sets, but they're not quite the same thing.
I feel like there used to be a higher concentration of those people here.
Are your laptops not lasting 10 years? (battery swaps are a must though)
The only reason I switched laptops was that I wanted to do AI Art and local LLMs.
I have so many old laptops and desktops that each of my 5 kids have their own. They are even playing half-modern games on them.
Plenty of people have no need for more than 8GB. My spouse. My mom. My English-major son. Meanwhile, my M2 is saying I have 72GB in use, with IntelliJ #1 at 6.05GB. I would not be happy with 8GB. Sounds like you wouldn't be either. So don't buy one.
This is no longer true for me. I've been an Apple fan since the Apple ][ days, and reluctantly left the ecosystem last year. The hardware walled garden, with soldered-on components and components tied to specific units for ostensible privacy and security reasons (I don't buy those reasons), combined with the steadily degrading OS polish and attention to detail, meant that for me personally I could no longer justify the cognitive load of continuing with a Mac laptop as my daily driver. While others might point to a cost and/or value differential, I'm in the highly privileged position to be insensitive to those factors.
The last straw was a board-soldered SSD that quit well before I was willing to upgrade, and even Louis Rossmann's shop said it would cost way more to desolder and solder a new one on than the entire laptop is worth. Bought a Framework the same day; when it arrived I restored my data files to it and have been running it as my daily driver ever since. The Mac laptop is still sitting here, as I keep hoping to find time to develop my soldering skills and try my hand at saving it from the landfill, or break down and unsustainably pay for the repair (I do what I can to avoid perpetuating dark patterns, but it is a Sisyphean effort).
I found myself in a position of having to think increasingly more about working around the Mac ecosystem instead of working invisibly within it (like a fish in water not having to think about water), that it no longer made sense to stick with it. It has definitively lost the "It Just Works" polish that bound me so tightly to the ecosystem in the past. I see no functional difference in my daily work patterns using a Mac laptop versus a Framework running Fedora.
To be sure, there are a lot of areas I have to work around on the Framework-Fedora daily driver, but for my personal work patterns, in my personal needs, I evaluated them to be roughly the same amount of time and cognitive load I spent on the Mac. Maybe Framework-Fedora is slightly worse, but close enough that I'd rather throw my hat into the more open ring than the increasingly closed walled garden Apple's direction definitely is taking us, that does not align with my vision for our computing future. It does not hurt that experimenting with local LLM's and various DevOps tooling for my work's Linux-based infrastructure is way easier and frictionless on Fedora for me, though YMMV for certain. It has already been and will be an interesting journey, it has been fun so far and brought back some fond memories of my early Apple ][, Macintosh 128K, and Mac OS X days.
The jump from an i9 to an M1 moved a lot of tasks from group 3 into 2, some tasks from group 2 into group 1, and was the biggest perceived single-machine performance leap for me in my professional career. I have an M1 Max or Ultra on my work machine and an M3 Ultra in my personal machine - after two years of the work machine being so visibly faster, I caved and upgraded my personal. The M3 Ultra moves a handful of things from group 2 to group 1 but it's not enough to move anything out of group 3.
Anecdotal but I have a White Macbook from 2010. It's sitting on a shelf not because it doesn't work (minus the battery), but because it's too outdated to be of much use. And there's a small crack in the case.
I have a Macbook Pro from 2016. I still whip it out from time to time when I don't want to use my latest one for whatever reason (say, network troubleshooting). It works fine. Even the battery still holds charge. If those two had USB-C (and charging over that port) I'd probably use them more often. Their keyboards is also pleasant (since they are before the scissor key nonsense).
My company has thousands of Macbooks. It's rare that I see anyone at all with issues. They aren't perfect, but the build quality is far better than most PC laptops and so is the attention to detail. The price premium kinda blows, but the M line made them way better.
As someone who's been coding for more than 20 years, the happiest and the most depressed moments in my career both came during a hardware project I participated in for only 4 months.
Well.... (from the article):
>"frankly, I think ray tracing is a bit of a gimmick feature"
I couldn't agree more, on both counts.
With Apple's knowledge of the internal documentation, they are best positioned to produce an even better low-level implementation.
At this point the main roadblock is the opinionated stance that porting to Metal is the only officially supported way to go.
If Valve pulls off a witch-crafted way to run AAA games on Mac without Apple's support, that would be an interesting landscape. And maybe it would force Apple to reconsider their approach if they don't want to be cornered on their own platform...
Right, except that Game Porting Toolkit and D3DMetal was an exact response to this scenario. Whether it's the right approach, time will tell, but Apple definitely already headed this one off at the pass.
Apple already provides its own native implementation of a DirectX to Metal emulation layer.
And this is especially true for existing games that "aren't worth porting" but are still fun to play. (Is there a Vulkan-to-Metal / OpenGL-to-Metal layer in the Game Porting Toolkit? Is it the NexStep?)
There is actually a sweet spot here for Valve that could benefit everyone:
- Valve as a necessary third party
- Gamers to access a wider catalog
- Apple so they don't have to bother developing a porting process for old games
The Steam Deck is also just emulating native Windows APIs, but on Linux.
https://www.wikipedia.org/wiki/Proton_(software)
Game compatibility isn't 100% with either, and both have community maintained lists with compatibility information.
This also touches on a broader question about the future of open-source efforts on platforms that are inherently restrictive. While it's inspiring to see games like Control running at 45fps on an M1 Max with open-source drivers, it raises the question: should the community continue to invest significant resources into making closed systems more open, or should efforts be redirected toward promoting more open hardware standards?
Apple's approach to hardware design warrants criticism. By creating GPUs with limitations that hinder standard functionalities like tessellation shaders and using non-standard page sizes, Apple places unnecessary obstacles in the path of developers. This closed ecosystem not only complicates the work for open-source contributors but also stifles innovation that could benefit all users.
In her initial announcement, she mentions VM memory overhead as the reason that 16 Gigs of RAM will be the minimum requirement to emulate most Windows games.
It's a question of whether they're _not_ investing resources to maintain standard behavior or they are actively investing resources to diverge from it. If it's the former, I don't find any fault in it, personally speaking.
False dichotomy. Do both!
Apple designs its hardware to suit its own ends, and its own ends only. It's obvious to everyone here that this supports their closed business model, which actually works for them very well - they make excellent hardware and (some software flakiness more recently notwithstanding) the median user will generally have an excellent time with their hardware + software as a result.
So they're not placing "unnecessary obstacles in the path of developers" at all by designing their hardware as they do - they're just focused on designing hardware to suit their own needs.
(Also note that if their hardware wasn't excellent, there wouldn't be such interest in using it in other, non-Apple-intended ways.)
from their perspective, motivated devs are doing all the heavy-lifting for them. from this side of ecosystem, they would mainly care about native app compatibility and comparable AI (inference) experience.
both of the above seem to be taken care of, sometimes through combined efforts. other than this, they are happy to lock things down as much as they can get away with. the community unfortunately gravitates towards overall appeal rather than good open initiatives.