An Update on Apple M1/M2 GPU Drivers
622 points by MrBuddyCasino 6 days ago | 265 comments
  • renewiltord 6 days ago |
    The work by Alyssa R and Asahi Lina is great stuff. I have to say that a lot of this is really inscrutable unless you’re used to driver code. I wish it were much easier to write this stuff but hardware stuff is so idiosyncratic.

    Have to say I do enjoy all the old school style whimsy with the witch costume and whatnot.

    • dylan604 6 days ago |
      what they do is just this side of magic, so maybe it's not really a costume?
      • amelius 6 days ago |
        Any sufficiently closed technology is indistinguishable from magic.

        In fact, the company is so closed that Rosenzweig is the one we should consult when we encounter aliens.

        • dylan604 6 days ago |
          I don't think the work they do is what one could call closed though, is it?
    • hi_hi 6 days ago |
      I've just been watching her recent talk. I noticed she appears to change slides with a wave of her wand. Is there a known piece of hardware one can purchase to do this?

      I tried googling, but trying to find the specific result I'm interested in amongst all the blog spam garbage related to PowerPoint is beyond me. Even Google's own AI couldn't help. Sad times!

      • sen 6 days ago |
        She might be doing some OpenCV/MediaPipe gesture tracking? There are lots of tutorials out there, and it's not super difficult to get some basic gesture tracking going on your laptop.
      • bryanlarsen 6 days ago |
        Purchasing one has been illegal since 1863, but the technology typically used is a human assistant, although the signal is usually more subtle.
      • sroussey 6 days ago |
        Ug… a person is watching and clicking for her when she does that?
      • Onavo 6 days ago |
        Is Leap Motion still a thing?
      • m4rtink 6 days ago |
        Maybe some ESP32 or other MCU and an accelerometer? That should comfortably fit into a magic wand.
      • jeroenhd 6 days ago |
        In this case there's probably a human doing the slides, but smart watches have apps that can control slideshows in a variety of ways. There's at least one WearOS app (WowMouse) that does some gesture based presentation control.
  • UncleOxidant 6 days ago |
    Will M3/M4 need completely different drivers?
    • wmf 6 days ago |
      Probably. Apple made major changes to the GPU in M3.
      • a_wild_dandan 6 days ago |
        What were the major differences?
      • coldtea 6 days ago |
        Major base changes, or just added more stuff on top of the same base?
        • wtallis 6 days ago |
          https://forum.beyond3d.com/threads/apple-dynamic-caching-on-... Changing the register file into a cache sounds like a major base change. Raytracing is a major feature added on top of existing functionality. So I'd say the answer is: plenty of both.
        • ferbivore 6 days ago |
          M3 has mesh shader support. The geometry pipeline they inherited from PowerVR fundamentally doesn't support them, for reasons that go way over my head. They probably changed a good chunk of it.
          • olliej 6 days ago |
            > for reasons that go way over my head

            In fairness to you I think a lot of the stuff involving hardware goes over everyone's heads :D

            I've seen comments in a number of articles (and I think a few comments in this thread) saying that there are a few features in Vulkan/OpenGL/Direct3D that were standardized ("standardized" in the D3D case?)/required but turned out to be really expensive to implement, hard to implement fast in hardware anyway, and not necessarily actually useful in practice. I think geometry shaders may have been one of those cases but I can't recall for sure.

            • ferbivore 6 days ago |
              Mesh shaders are actually useful. Or at least game engine people love them, which was not the case for geometry shaders or even tessellation really. They are extremely painful to add support for though. Aside from Apple I don't think any mobile IHVs have a working implementation.
              • olliej 5 days ago |
                I think (prior to this article) I had assumed they were synonymous :D
        • sroussey 6 days ago |
          How it handles memory and registers is quite different.
          • TylerE 6 days ago |
            How much will this matter to (somewhat graphics demanding) end users? I'm somewhat eagerly awaiting swapping out my M1 Studio with the M4 Studio that is all but confirmed to be coming at some point next year... More GPU grunt would certainly make me happy. Even the M1 is a far more competent gaming machine than I expected but I came from an i9/3080 machine so, well, more is more, as long as they can keep it near silent and relatively cool running.
            • sroussey 5 days ago |
              The GPU speed, like the number of GPU cores, hasn’t moved a lot. Clock speed has moved up a bit.
    • hi_hi 6 days ago |
      I think the answer is yes. I'm making assumptions based on this part of Alyssa's talk from a couple of weeks ago, where she talks about M3 having specific driver support for raytracing which doesn't exist in previous versions.

      https://youtu.be/pDsksRBLXPk?t=2895

      The whole thing is worth watching to be honest, it's a privilege to watch someone share their deep knowledge and talent in such an engaging and approachable way.

      • olliej 6 days ago |
        ooh, I missed this, thanks for the link!
    • hellavapid 6 days ago |
      'twould be very apple
  • scottlamb 6 days ago |
    > tessellator.cl is the most unhinged file of my career

    ...so far. The presenter is only 23 apparently. Maybe I'm speaking only for myself here, but I think career unhingedness does not go down over time as much as one might hope.

    In all seriousness, she does really impressive work, so when she says these 2,000 lines of C++ are inscrutable, that gives one pause. Glad it's working nonetheless.

    • dividuum 6 days ago |
      • whatever1 6 days ago |
        She is not doing CI/CD so the PMs would fire her in any modern company. "Where are the unit tests?!? How is it possible to write code without tests and 100% coverage!"
        • wmf 6 days ago |
          OpenGL and Vulkan have tons of tests and Alyssa runs them. I don't know if it's automated through CI.
          • manmal 6 days ago |
            Reportedly, the lack of M3 support so far is because there is no M3 Mac Mini which they can use for CI.
            • fl0id 6 days ago |
              That was one reason for the delay, but another, more important reason they gave was implementing GPU features like this and other missing functionality.
            • spockz 6 days ago |
              If that is the issue, I'm sure we could together find a way to sponsor a MacBook with an M3? Or does it specifically have to be a Mac mini?
              • manmal 5 days ago |
                I can only guess, but it might be a requirement to have the machine hosted by a cloud platform.
      • worstspotgain 6 days ago |
        Is there a blog post about what specifically she found to be inscrutable? The C++ doesn't look all that terse at a syntactic level, and has plenty of comments. Are the problems at the domain level?
        • raggi 6 days ago |
        • samus 6 days ago |
          I can imagine she was glad it took only some gentle massaging to make it work. If she had encountered road blocks there, she would have had to dig in for real.

          The code is quite low on comments and doesn't really explain the math there. It probably makes sense if you have a background in the lofty math side of computer graphics, but that's a slightly different skill set than being able to reverse-engineer and bring up exotic hardware.

      • raverbashing 6 days ago |
        Honestly it doesn't look too bad

        A lot of the size is just because the code deals with a lot of 3/4-dimensional stuff, and some things are a bit more verbose in code but translate to something short in assembly.
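
        For example (a made-up Rust sketch, not code from tessellator.cl): interpolating a 4-component attribute is several lines of source but compiles down to a handful of fused multiply-adds:

            // Hypothetical example: barycentric interpolation of a vec4 vertex attribute.
            fn lerp_vec4(a: [f32; 4], b: [f32; 4], c: [f32; 4], w: [f32; 3]) -> [f32; 4] {
                let mut out = [0.0f32; 4];
                for i in 0..4 {
                    // Verbose in source, but only a few SIMD multiply-adds once compiled.
                    out[i] = a[i] * w[0] + b[i] * w[1] + c[i] * w[2];
                }
                out
            }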

    • nwallin 6 days ago |
      It's unfathomable that she's 23 and wrote an open source graphics driver for a closed-source graphics card. At least Einstein had the decency to wait until he was 26 to invent special relativity.
      • TheCycoONE 6 days ago |
        Multiple. Before this she wrote Panfrost for Arm Mali GPUs, starting in 2018.
    • jjmarr 6 days ago |
      I didn't even know this until now. I looked at her resume and I can't grasp how she's been doing an undergraduate degree from 2019-2023, meaning that writing an entire GPU driver was the side project she did during her degree.
      • I_AM_A_SMURF 6 days ago |
        For gifted people undergraduate degrees are easy (unless you want to go through them really fast). Some people I know barely studied for their exams and spent all their time on research projects / their own thing.
      • saagarjha 6 days ago |
        There’s a lot of free time when you’re in college.
        • _zoltan_ 5 days ago |
          So much this. I wish I had a fraction of the free time I had when I was doing my degrees...
        • jjmarr 5 days ago |
          Depends on major. I'm in engineering so it's more work than a full time job for whatever reason.
    • Dah00n 6 days ago |
      >The presenter is only 23 apparently

      Yes, well, from the article:

      >That works, but "tessellator.cl is the most unhinged file of my career"; doing things that way was also the most unhinged thing she has done in her career "and I'm standing up here in a witch hat for the fifth year in a row". The character debuted in the exact same room in 2019 when she was 17 years old, she recalled.

      17. That's impressive.

  • kachapopopow 6 days ago |
    I always wondered about these /SubscriberLink/ links. Is sharing them considered unethical?
    • anamexis 6 days ago |
      From the LWN FAQ:

      > Where is it appropriate to post a subscriber link?

      > Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.

    • vintagedave 6 days ago |
      It says,

      > The following subscription-only content has been made available to you by an LWN subscriber.

      I might be wrong but I read that as there being funding to make the previously paywalled content available, probably on an article-specific basis. Does anyone know?

      • anamexis 6 days ago |
        Also from the LWN FAQ:

        > What are subscriber links

        > A subscriber link is a mechanism by which LWN subscribers may grant free access to specific LWN articles to others. It takes the form of a special link which bypasses the subscription gate for that article.

    • cbhl 6 days ago |
      The purpose of these links is to be shared, see the FAQ: https://lwn.net/op/FAQ.lwn#slinks

      The LWN paywall is unique in that all the content becomes freely available after a week. The subscriber links are there to encourage you to subscribe if you are in a position to do so.

    • sophacles 6 days ago |
      Representatives of LWN have posted here before saying they are OK with it, along with a polite request not to have it go overboard since they need to fund the site and writers, editors, etc. That funding comes from subscriptions only IIUC.

      FWIW an LWN subscription is pretty affordable and supports some of the best in-depth technical reporting about Linux and linux-related topics available.

      (I am not affiliated with LWN, just a happy subscriber - I also credit some of my career success to the knowledge I've gained by reading their articles).

      • ewoodrich 6 days ago |
        A couple times now I’ve remembered to refill my subscription after my old one ran out of months thanks to an LWN free article being posted to HN.

        So n=1 it’s an effective advertising tactic even though I can read the specific article for free.

      • awsrhawrhawrh 6 days ago |
        Funding also comes from sponsors, as the article obviously states:

        "I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Montreal for XDC."

    • flykespice 6 days ago |
      It's no different from articles from subscriber-only newsletters shared here being accompanied by an archive link in the topmost comment.

      Users here seem to not care about those "ethics"

      • wtallis 6 days ago |
        There's nothing ethically wrong about sharing LWN subscriber links on HN when the LWN editor has repeatedly posted on HN that doing so is fine.
      • rstat1 6 days ago |
        There's a difference between posting a sub-only link that's intended to be shared in moderation, and posting a link to a paywalled article as if expecting the clicker of that link to pay for a sub to that website just so they can read that article.

        It's a pretty absurd expectation.

      • samus 6 days ago |
        It's really not such a big deal since everything on LWN becomes public after some weeks. Rather the opposite, it ensures that the site remains well-known.
    • zarzavat 6 days ago |
      I'm sure that they are very, very happy that HN is directing a firehose of traffic at their site on the regular.

      In order to subscribe people need to know that LWN exists.

      • jlarocco 6 days ago |
        I know this is off on a tangent, but I recently signed up for LWN and it's well worth the price. The articles and news items alone are worth it, but the archive is just amazing.
        • pimeys 6 days ago |
          Same here. I originally found LWN through a HN post. I've been a subscriber for a few years now, and I'm reading almost everything they publish, even if I'm not always understanding everything they talk about. The quality of writing is very high.
  • gigatexal 6 days ago |
    Is anyone else astonished at how much is missing in the hardware and how much is emulated?
    • jsheard 6 days ago |
      The things being emulated are mostly legacy features that are barely used in modern software, if at all, so the overhead of emulating them for backward compatibility isn't the end of the world. I can't blame Apple for not supporting geometry shaders in hardware, when they're widely considered to be a mistake that never should have been standardized in the first place, and Metal never supported them at all so they could only ever come up in old OpenGL code on macOS.

      https://x.com/pointinpolygon/status/1270695113967181827

      • parl_match 6 days ago |
        I wouldn't go so far as to say "mistake that should never have been standardized". Their intended use was always pretty limited, though. There's zero reason for anything built in recent memory to use them.
        • jsheard 6 days ago |
          They had limited uses and turned out to be incredibly hard to implement efficiently in hardware, so in practice it was nearly always faster to just keep using the proven techniques that GS was supposed to replace.

          http://www.joshbarczak.com/blog/?p=667

          • RantyDave 6 days ago |
            So why does instancing suck? I would have thought it would be heavily optimised in the driver...
            • p_l 6 days ago |
              It seems Device Generated Commands might be better case for instancing these days?
          • comex 6 days ago |
            And yet one of the fancy new features being advertised in recent years (in multiple APIs including Metal) is support for mesh shaders – which seem to have a lot in common with geometry shaders, including the output ordering property that that post blames for geometry shaders’ bad performance. I’m not a graphics programmer myself, but this makes me suspect there’s more to the story.
            • dgfitz 6 days ago |
              If you’re not a graphics programmer, how did you learn of this? I’d love to read about it.
            • HeuristicsCG 6 days ago |
              if your hardware supports mesh shaders properly, it won't be very hard for it to also support these other features emulated in software (geometry shaders, tessellation, lines etc).

              But mesh shaders are fairly new, will take a few years for the hardware and software to adapt.

              • adrian_b 6 days ago |
                They are fairly new at the other GPU makers, but the NVIDIA GPUs have them starting with Turing, 6 years ago.

                AMD GPUs have them starting with RDNA 2.

            • winterismute 6 days ago |
              I am not an expert on geometry processing pipelines, however Mesh Shaders are specced differently from GS, essentially one of the big problems with GS is that it's basically impossible for the HW, even after all the render state is set and a shader is bound and compiled (and "searched"), to understand how much memory and compute the execution will take, which breaks a lot of the assumptions that allow SIMD machines to work well. In fact, the main advertised feature of GS was to create geometry out of nothing (unbounded particle effects), while the main advertised feature of Mesh Shaders is GPU-driven and efficient culling of geometry (see for example the recent mesh shader pipeline talk from Remedy on Alan Wake 2). It is true that Mesh Shaders are designed also for amplification, and that word has been chosen specifically to hint that you will be able to "multiply" your primitives but not generating random sequences out of thin air.

              It is also true, however, that advances in APIs and HW designs have allowed some parts that were troublesome at the time of GS not to be so troublesome anymore.

      • raverbashing 6 days ago |
        Tessellation does not seem to fit this description though
    • dtquad 6 days ago |
      The graphics pipeline in modern GPUs is mostly a thin low-level Vulkan/Metal-like layer on top of a massively parallel CUDA-like compute architecture.

      It's basically all emulated. One of the reasons GPU manufacturers are unwilling to open source their drivers is because a lot of their secret sauce actually happens in software in the drivers on top of the massively parallel CUDA-like compute architecture.

      • almostgotcaught 6 days ago |
        I didn't read the article and don't know about Apple but that's definitely not true for everyone. Source: see amdgpu built on top of HSA.

        EDIT: to be precise yes ofc every chip is a massively parallel array of compute units but CUDA has absolutely nothing to do with it and no not every company buries the functionality in the driver.

      • david-gpu 6 days ago |
        As a former insider, this is NOT remotely how I would describe reality.

        I have signed NDAs and don't feel comfortable going into any detail, other than saying that there is a TON going on inside GPUs that is not "basically all emulated".

        • user_7832 6 days ago |
          Maybe a tall ask, but have you considered writing down all your experiences into a book, and release it after all NDAs expire? I’d love to read more about low level hardware and the behind the scenes stuff involved. I’m sure you’ve got a lot of good stories.
          • almostgotcaught 6 days ago |
            > I’d love to read more about low level hardware and the behind the scenes stuff involved. I’m sure you’ve got a lot of good stories.

            I don't know why you think there's anything resembling "good stories" (I don't even know what would constitute a good story - swash buckling adventures?). It's just grimy-ass runtime/driver/firmware code interfacing with hardware features/flaws.

            • david-gpu 5 days ago |
              Exactly right.

              95% difficult, stressful and surprisingly tedious work

              4% panic, despair, insecurities

              1% friendly banter with very smart people

              If I had to do it all over again, I wouldn't, even though it allowed me to retire early.

              • almostgotcaught 5 days ago |
                My experience to a T, down to the 1% friendly banter (because while there are lots of smart people working on hardware, almost none of them know how to banter lol).
          • snewman 6 days ago |
            NDAs don't generally have an expiration date. (As opposed to non-competes and non-solicitation agreements, which generally do.) An NDA typically ends only if the information in question becomes public, and then only for that particular information.
          • david-gpu 5 days ago |
            Nope in all accounts, I'm afraid.

            But, in summary, it was grinding, grinding, and more grinding, with large dollops of impostor syndrome on top of it.

            Met lots of nice smart people, but none were as impressive as the folks at NVidia. I was not worthy of being there.

      • exDM69 6 days ago |
        This statement isn't true at all. There are tons of fixed function hardware units for graphics in GPUs in addition to compute cores: triangle rasterizers, texture units, raytracing units, blitters, copy engines, video codecs, etc. They interact with the shader/compute cores and it's getting more common that the shader core is driving the rasterizer etc than vice versa (mesh shaders and ray tracing for example).

        Calling it "all emulated" is very very far from the truth.

        You can independently verify this by digging into open source graphics drivers.

        • samtheprogram 6 days ago |
          How is verifying against a worse performing, open source GPU driver any sort of verification that proprietary drivers do not do special things you wouldn’t expect?
          • anonymous_user9 5 days ago |
            Because it would reveal the lack of emulation in the open-source driver, demonstrating that there are specialized hardware blocks performing those functions.
      • samus 6 days ago |
        A lot of it is software, but not necessarily in the driver. Nouveau folks pretty much gave up and use NVidia's firmware blob going forward. While that's mostly due to NVidia not cooperating in making crucial capabilities of the GPU available to non-signed firmware blobs, the upside is that it will hopefully significantly reduce the effort connected with wiring up new hardware components on the GPU.
      • gigatexal 5 days ago |
        You know, it's not like Apple has a lot to lose in terms of market share if they opened even the specs of their hardware up to make the lives of folks like the Asahi team easier. What does Apple have in terms of the desktop market? 5%?

        If anything, working, fully baked desktop Linux on ARM Mac hardware would drive sales and Apple would still get the profits. Besides, their services are still mostly available too. Tux-loving folks can subscribe to Apple TV, and music, etc.

    • tedunangst 6 days ago |
      Is this really so different from any other mobile derived GPU?
      • refulgentis 6 days ago |
        Idk, TIL, had a career in mobile for 15 years running and I didn't know this was a distinctive quality of mobile GPUs. (makes sense! but all that to say, I'm very interested to hear more, and I'll trade you an answer to that question: "maybe not! sounds like you got some smart stuff to share :)")
      • ferbivore 6 days ago |
        Yes. Apple have their own graphics API. They were able to decide that, say, geometry shaders aren't worth the chip area or engineering effort to support. Other IHVs don't get that choice; for geometry shaders, for instance, they're part of both Vulkan and OpenGLES, there are games and benchmarks that use them, and customers (end-users, gamedevs, review/benchmark sites, SoC vendors) will evaluate GPUs based, in some small part, on how good their geometry shader support is. Same story for tessellation, transform feedback, and whatever else Apple dropped.
        • fingerlocks 6 days ago |
          I don’t think it’s accurate to say Apple “dropped” these features. Tessellation is done with general purpose compute shaders. You can do the same with “geometry shaders” if the need arises as well
        • dagmx 6 days ago |
          Other hardware vendors absolutely do the same. Geometry shaders are poor performers on several other vendors for precisely the same reason
          • ferbivore 6 days ago |
            At least some Qualcomm, Imagination and Broadcom GPUs support geometry shaders in hardware. Not entirely sure about Arm. To be fair, it could be the support isn't very good.
            • dagmx 6 days ago |
              Some do, but a lot will have their firmware actually do the translation into other compute types that align better with the hardware.
        • rstat1 6 days ago |
          >>(end-users, gamedevs, review/benchmark sites, SoC vendors) will evaluate GPUs based, in some small part, on how good their geometry shader support is

          Do they? I can't remember ever seeing any mention of Geometry Shader performance in a GPU review I've read/watched. The one thing I've ever heard about it was about how bad they were.

          • lewispollard 6 days ago |
            Yeah, geometry shaders aren't even widely used, they weren't great in the first place and devs have moved on.
  • egwor 6 days ago |
    Really impressive. Well done (and thanks for the laughs. Starting in French would be so funny)
  • recvonline 6 days ago |
    Any link to the fact that the drivers are written in Rust?
    • hoherd 6 days ago |
      https://asahilinux.org/2022/11/tales-of-the-m1-gpu/

      > Since this was going to be the first Linux Rust GPU kernel driver, I had a lot of work ahead! Not only did I have to write the driver itself, but I also had to write the Rust abstractions for the Linux DRM graphics subsystem. While Rust can directly call into C functions, doing that doesn’t have any of Rust’s safety guarantees. So in order to use C code safely from Rust, first you have to write wrappers that give you a safe Rust-like API. I ended up writing almost 1500 lines of code just for the abstractions, and coming up with a good and safe design took a lot of thinking and rewriting!

      Also https://github.com/AsahiLinux/linux/blob/de1c5a8be/drivers/g... where "drm" is https://en.wikipedia.org/wiki/Direct_Rendering_Manager
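
      As a rough illustration of the wrapper pattern she describes (hypothetical names, not the actual DRM abstractions), a safe Rust API over an unsafe C binding looks something like this:

          // Imaginary C binding, roughly what bindgen would generate.
          extern "C" {
              fn gpu_queue_submit(queue: *mut core::ffi::c_void, len: usize) -> i32;
          }

          /// Safe wrapper: ownership and error handling live here, so callers never
          /// touch the raw pointer or interpret the C error code themselves.
          pub struct Queue {
              raw: *mut core::ffi::c_void,
          }

          impl Queue {
              pub fn submit(&mut self, len: usize) -> Result<(), i32> {
                  // SAFETY: `raw` is a valid queue pointer for the lifetime of `self`.
                  let ret = unsafe { gpu_queue_submit(self.raw, len) };
                  if ret == 0 { Ok(()) } else { Err(ret) }
              }
          }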

      • karunamurti 6 days ago |
        I particularly like lines 1322 and 1344
  • computersuck 6 days ago |
    That's not even a costume because she's definitely a wizard
    • indrora 6 days ago |
      (:
  • adastra22 6 days ago |
    > frankly, I think ray tracing is a bit of a gimmick feature

    That's incredibly arrogant. The whole industry is adopting ray tracing, and it is a very desired feature people are upgrading video cards to get working on games they play.

    • madeofpalk 6 days ago |
      The same was true about 3D TVs. Also a gimmick.
      • talldayo 6 days ago |
        3D TVs failed before they existed as a product category. You cannot encode enough information through stereoscopy to create a convincing or "realistic" 3D effect. The conditions for making it work have always been fickle and as 6DOF VR/AR is proving, it was never enough to be convincing anyways. Encoding two slightly different versions of a video and layering them on top of each other is a dirty hack. It is not a holistic solution to a problem they seriously intended to solve in the first place. It is a novelty designed from the bottom-up as a scarcely-usable toy.

        Ray tracing is, by comparison, the holy grail of all realtime lighting effects. Global illumination makes the current raster lighting techniques look primitive by comparison. It is not an exaggeration to say that realtime graphics research largely revolves around using hacks to imitate a fraction of ray tracing's power. They are piling on pipeline-after-pipeline for ambient occlusion, realtime reflections and shadowing, bloom and glare as well as god rays and screenspace/volumetric effects. These are all things you don't have to hack together when your scene is already path tracing the environment with the physical properties accounted for. Instead of stacking hacks, you have a coherent pipeline that can be denoised, antialiased, upscaled and post-processed in one pass. No shadowmaps, no baked lighting, all realtime.

        There is a reason why even Apple quit dragging their feet here - modern real-time graphics are the gimmick, ray tracing is the production-quality alternative.

        • adastra22 6 days ago |
          Thank you. This guy gets it.

          Path tracing is the gold standard for computer graphics. It is a physically based rendering model that is based on how lighting actually works. There are degrees of path tracing of varying quality, but there is nothing else that is better from a visual quality and accuracy standpoint.

          Your modern AAA title does a massive amount of impressive hacks to get rasterization into the uncanny valley, as rasterization has nothing to do with how photons work. That can all be thrown out and replaced with “model photons interacting with the scene” if the path tracing hardware was powerful enough. It’d be simpler and perfectly accurate. The end result would not live in the uncanny valley, but would be indistinguishable from reality.

          Assuming the hardware was fast enough. But we’ll get there.
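
          Concretely, "model photons interacting with the scene" means estimating the rendering equation (the standard formulation, nothing vendor-specific) with Monte Carlo sampling:

              L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

          Rasterization approximates pieces of that integral with separate special-purpose passes; a path tracer samples it directly.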

          • Nullabillity 6 days ago |
            This is all built on the, frankly, nonsensical assumption that photorealism is the, let alone a, primary goal.
            • adastra22 6 days ago |
              In most cases of high end / AAA game titles, it is. Not all, but certainly most.
            • talldayo 6 days ago |
              Not really. Tiny Glade is a recent ray-traced title that has no goal of photorealism whatsoever, and it looks gorgeous with RT lighting and shadows: https://youtu.be/QAUSBxxgIbQ

              You can scale these same principles to even less photorealistic games like the Borderlands series or a Grand Theft Auto game. Ray tracing is less about photorealism (although it is a potent side effect) and more about creating dynamic lighting conditions for an interactive scene. By simulating the behavior of light, you get realistic lighting - photoreal or not a lot of games rely on SSR and shadowmaps that can be replaced with realtime RT.

              • Nullabillity 5 days ago |
                The question isn't "can you achieve your look with RT?" but "do you need RT to achieve your look?".

                As long as traditional lighting is enough (and given that it has for the last 2 decades or so), RT remains a gimmick.

                • Mawr 5 days ago |
                  Let me get this 100% straight, rendering light the exact same way reality itself does it is a gimmick?

                  In any case, RT isn't just about getting pretty graphics. It massively lowers the artists' workload since there's no need to painstakingly go through each in-game area and add fake lights to make everything look good.

                  No gaming studio will pass up on the opportunity to get better graphics and better productivity. It really is just a matter of waiting a few years for the hardware to get good enough.

                  • Nullabillity 5 days ago |
                    > Let me get this 100% straight, rendering light the exact same way reality itself does it is a gimmick?

                    Yes, because "Rendering light the exact same way reality itself does it" was never the assignment, outside of nvidia desperately trying to find excuses to sell more GPUs.

                    Maybe some games would benefit from it in some cases... but you have to weigh that marginal improvement against the increased costs, both economic and ecological.

                    > It massively lowers the artists' workload since there's no need to painstakingly go through each in-game area and add fake lights to make everything look good.

                    This is a laughable claim, as long as you're going for anything more than fixed ambient lighting you're going to need to massage and QA that the lighting works for what you're trying to show.

                    ---

                    In short, yes. Ray tracing as a headline feature is a gimmick, made to appeal to the same clueless HNers who think AI will eliminate our need to understand anything or express ourselves.

                    • talldayo 5 days ago |
                      Apple wishes ray tracing was AI acceleration, because then they could at least ride the Nvidia valuation pump. But the truth is even less glamorous than that - this is a feature for the future, something for posterity. It's a well-researched field of fixed-function acceleration with multiple competitive implementations, unlike AI architectures. What's more, in 3-5 years you can bet your bottom dollar Apple will announce their CUDA alternative alongside a new Mac Pro. You can't even confidently say I'm joking since they've already done this (and subsequently deprecated it) once before: https://en.wikipedia.org/wiki/OpenCL#OpenCL_1.0

                      I guess I can't blame HN for courting pedestrian takes, but man you're going to be disappointed by Apple's roadmap going forward if this is your opinion. And I say this, defending Apple, as someone that treats Tim Cook and Steve Jobs as the living and buried devil respectively. Having hardware-accelerated ray tracing is no more of a "gamer" feature than the high-impedance headphone jack is an "audiophile" feature. It is intelligent design motivated by a desire to accommodate a user's potential use-case. Removing either would be a net-negative at this point.

          • flohofwoe 6 days ago |
            Think of the 'impressive hacks' used in realtime 3D rendering as compression techniques to reduce bandwidth (and this goes all the way back to 8-bit home computers with their tricky video memory encoding, this was essentially hardware image compression to reduce memory bandwidth).

            Why waste computing power on 'physically correct' algorithms, when the 'cheap hacks' can do so much more in less time, while producing nearly the same result.

            • talldayo 6 days ago |
              Because a lot of the "cheap hacks" aren't even cheap. Famously, using real-time SSR to simulate ray-traced reflections can end up slower than ray tracing them from the jump, the same goes for high-res shadowmaps. Ambient occlusion is a lighting pass you can skip entirely if you render shadows the right way with RT from the start. If you keep stacking these hacks until you have every feature that a RT scene does, you're probably taking longer to render each frame than a globally illuminated scene would.

              Accelerating ray tracing in-hardware is a literal no-brainer unless you deliberately want to ostracize game developers and animators on Mac. I understand the reactionary "But I don't care about dynamic shadows!" opinion, but there's practically no opportunity cost here. If you want to use traditional lighting techniques, you can still render it on an RT-enabled GPU. You just also have the option of not wanting to pull out your fingernails when rendering a preview in Blender.

              • flohofwoe 6 days ago |
                Yes, for some specific problems, raytracing definitely makes a lot of sense, but rasterization and 'geometry compression' via triangles also makes a lot of sense for other situations. I think the best approach will always be a hybrid approach instead of a 100% pure raytracing pipeline.

                The other question is how much of the raytracing features in 3D APIs need to be implemented in fixed-function hardware units, or whether this is just another area where the pendulum will swing back and forth between hardware and software (running on the GPU of course).

                But then maybe in 10 years we'll have 'signed distance field units' in GPUs and nobody talks about raytracing anymore ;)

                • talldayo 5 days ago |
                  From everything I've seen, hardware acceleration seems to be the only option. Software-accelerated RT existed for years and was mainly the reason why it was disregarded as a technique in the first place. Here's the OG M1 Pro, manufactured on TSMC 5nm, getting blown the fuck out by a 3070 Ti on Samsung 8nm: https://youtu.be/NgDTCNPm0vo

                  The architecture will probably improve on both Nvidia and Apple's side going forward, but in theory both GPUs should be easier to support for longer since they're not focused on accelerating obsolete junk like MSAA or HBAO. It enables a better convergence of features, it makes porting to-and-from Mac easier, and with any luck Apple might even quit being so sheepish about working with Khronos since they're not actively neglecting the Vulkan featureset anymore.

    • MBCook 6 days ago |
      That doesn’t necessarily mean it’s important, just that it’s the only thing they can think of to sell. Much like some AI stuff we’re seeing right now.

      Personally I think it’s useful for a few things, but it’s not the giant game changer I think they want you to think it is.

      Raytraced reflections are a very nice improvement. Using it for global illumination and shadows is also a very good improvement.

      But it’s not exactly what the move to multi texturing was, or the first GPUs. Or shaders.

    • nfriedly 6 days ago |
      I don't know, Hardware Unboxed recently did a review of 30-some games comparing the visuals with and without ray tracing, and the conclusion for the majority of them was that it wasn't worth it. There were only maybe half a dozen games where the ray tracing was an unambiguous improvement.

      So I think calling it "a bit of a gimmick" is accurate for many of the games it shipped in, even if not all of them.

      • wtallis 6 days ago |
        It's also a pretty large performance hit, even when the quality improvement is subtle. A major and unfortunate consequence is that it has driven even more games to rely on upscaling and heavy post-processing, all of which tend to blur the output image and introduce artifacts and add latency.
      • jms55 6 days ago |
        You're looking at years of careful artist and engineer work to get something that looks almost as good as pathtraced visuals. The fact that it's so good without any raytracing is a credit to the developers.

        Replacing all that effort with raytracing and having one unified lighting system would be a _major_ time saver, and allows much more dynamic lighting than was previously possible. So yeah some current games don't look much better with RT, but the gameplay and art direction was designed without raytracing in mind in the first place, and had a _lot_ of work put into it to get those results.

        Sure fully pathtraced graphics might not be 100% usable currently, but the fact that they're even 70% usable is amazing! And with another 3-5 years of algorithm development and hardware speedups, and developers and artists getting familiar with raytracing, we might start seeing games require raytracing.

        Games typically take 4+ years to develop, so anything you're seeing coming out now was probably started when the best GPU you could buy for raytracing was an RTX 2080 TI.

        • wtallis 6 days ago |
          > Sure fully pathtraced graphics might not be 100% usable currently, but the fact that they're even 70% usable is amazing!

          Aside from Cyberpunk 2077 and a handful of ancient games with NVIDIA-sponsored remakes, what even offers fully path traced lighting as an option? The way it went for CP2077 makes your "70% usable" claim seem like quite an exaggeration: performance is only good if you have a current-generation GPU that cost at least $1k, the path tracing option didn't get added until years after the game originally shipped, and they had to fix a bunch of glitches resulting from the game world being built without path tracing in mind. We're clearly still years away from path tracing being broadly available among AAA games, let alone playable on any large portion of gaming PCs.

          For the foreseeable future, games will still need to look good without fully path traced lighting.

          • talldayo 6 days ago |
            > For the foreseeable future, games will still need to look good without fully path traced lighting.

            And in 7-10 years when the software stack is matured, we'll be thanking ourselves for doing this in hardware the right way. I don't understand why planning for the future is considered so wasteful - this is an architecture Apple can re-use for future hardware and scale to larger GPUs. Maybe it doesn't make sense for Macs today, but in 5 years that may no longer be the case. Now people don't have to throw away a perfectly good computer made in these twilight years of Moore's law.

            For non-games applications like Blender or Cinema4D, having hardware-accelerated ray tracing and denoising is already a game-changer. Instead of switching between preview and render layers, you can interact with a production-quality render in real time. Materials are properly emissive and transmissive, PBR and normal maps composite naturally instead of needing different settings, and you can count the time it takes before getting an acceptable frame in milliseconds, not minutes.

            I don't often give Apple the benefit of the doubt, but hardware-accelerated ray tracing is a no-brainer here. If they aren't going to abandon Metal, and they intend to maintain their minuscule foothold in PC gaming, they have to lay the groundwork for future titles to get developed on. They have the hardware investment, they have the capital to invest in their software, and their competitors like Khronos (apparently) and Microsoft both had ray tracing APIs for years when Apple finally released theirs.

            • rowanG077 6 days ago |
              I think you have the wrong impression. Apple M2 does not have hw ray tracing. M3 and M4 do.
          • porphyra 6 days ago |
            UE5 supports fully path traced lighting and Black Myth Wukong is a recent example.
          • jms55 6 days ago |
            > performance is only good if you have a current-generation GPU that cost at least $1k

            That's why I said games aren't currently designed with only pathtracing in mind, but in 3-5 years with faster hardware and better algorithms, we'll probably start to see it be more widespread. That's typically how graphics usually develop; something that's only for high end GPUs eventually becomes accessible to everyone. SSAO used to be considered extremely demanding, and now it's accessible to even the weakest phone GPU with good enough quality.

            Again the fact that it's feasible at all, even if it requires a $1000 GPU, is amazing! 5 years ago real time path tracing would've been seen as impossible.

            > The way it went for CP2077 makes your "70% usable" claim seem like quite an exaggeration

            Based on the raw frame timing numbers and temporal stability, I don't think it is. RT GI is currently usually around ~4ms, which is at the upper edge of usable. However the temporal stability is usually the bigger issue - at current ray counts, with current algorithms, either noise or slow response times is an inevitable tradeoff. Hence, 70% usable. But with another few years of improvements, we'll probably get to the point where we can get it down to ~2.5ms with the current stability, or 4ms and much more stable. Which would be perfectly usable.

            • wtallis 6 days ago |
              > RT GI is currently usually around ~4ms, which is at the upper edge of usable. However the temporal stability is usually the bigger issue - at current ray counts, with current algorithms, either noise or slow response times is an inevitable tradeoff. Hence, 70% usable. But with another few years of improvements, we'll probably get to the point where we can get it down to ~2.5ms with the current stability, or 4ms and much more stable. Which would be perfectly usable.

              Maybe you should be saying 70% feasible rather than 70% usable. And you seem to be very optimistic about what kind of improvements we can expect for affordable, low-power GPU hardware over a mere 3-5 years. I don't think algorithmic improvements to denoisers and upscalers can get us much further unless we're using very wrong image quality metrics to give their blurriness a passing grade. Two rays per pixel is simply never going to suffice.

              Right now, an RTX 4090 ($1700+) runs at less than 50 fps at 2560x1440, unless you lie to yourself about resolution using an upscaler. So the best consumer GPU at the moment is about 70-80% of what's necessary to use path tracing and hit resolution and refresh rate targets typical of high-end gaming in 2012.

              Having better-than-4090 performance trickle down to a more mainstream price point of $300-400 is going to take at least two more generations of GPU hardware improvements even with the most optimistic expectations for Moore's Law, and that's the minimum necessary to do path tracing well at a modest resolution on a game that will be approaching a decade old by then. It'll take another hardware generation for that level of performance to fit in the price and power budgets of consoles and laptops.

          • paavohtl 6 days ago |
            Alan Wake 2 has had path tracing from day 1, and it looks excellent with (and without) it.
      • porphyra 6 days ago |
        Hmm I watched the video [1] and in the games where it had an unambiguous improvement, the improvement in reflection quality is extremely stark and noticeable. Anecdotally, when I was playing Black Myth Wukong recently, there were some shiny environments where full path tracing really made a huge difference.

        So I guess it's just a "gimmick" in that relatively few games properly take advantage of this currently, rather than the effect not being good enough.

        [1] https://www.youtube.com/watch?v=DBNH0NyN8K8

        • adastra22 6 days ago |
          Using raytracing on a very recent, high end game that has been designed for raytracing has an extremely noticeable positive improvement on visual quality. Wukong is a perfect example.

          Unfortunately most people’s experience with raytracing is turning it on for a game that was not designed for it, but it was added through a patch, which results in worse lighting. Why? Because the rasterized image includes baked-in global illumination using more light sources than whatever was hastily put together for the raytracing patch.

          • IntelMiner 6 days ago |
            The first game I encountered with RayTracing support was World of Warcraft "Classic"'s Burning Crusade launch a couple years back.

            WoW Burning Crusade launched in *2006* originally. The "Classic" re-release of the game uses the modern engine but with the original game art assets and content.

            Does it do anything in the 'modern' WoW game? Probably! In Classic though all it did was tank my framerate.

            Since then I also played the unimaginable disaster that was Cyberpunk 2077. For as "pretty" as I suppose the game looked I can't exactly say if the ray tracing improved anything

            • adastra22 6 days ago |
              What's perhaps not obvious is that if you take a 2006 game and "add" raytracing, there's no relevant information available for the raytracer. Whatever original light sources were used to bake in global illumination does not exist in the game assets used for rasterization. In the best case the original art assets can be used to create raytracing scenes, but it's not as straightforward as it might seem, and most conversions are pretty sloppy.

              If you want to see what raytracing can do, I'd only look at very, very recent UE5 titles that are designed for raytracing from the ground-up.

    • kllrnohj 6 days ago |
      The M1-M4 GPUs are also nowhere close to fast enough for it to be useful. Just like Snapdragon having ray tracing support is ridiculous.
    • musictubes 6 days ago |
      Can the hardware for ray tracing be used for anything else? Ray tracing is just math; is that math applicable to any non-graphics use?
      • boulos 6 days ago |
        You'll have to see what the instruction set / features are capable of, but most likely the "hardware ray tracing" support means it can do ray-BVH and ray-triangle intersection in hardware. You can reuse ray-box and ray-triangle intersection for collision detection.

        The other parts of ray tracing, like shading and so on, are usually just done on the general compute units.
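
        For a sense of how small the reusable kernel is, here is a standard Möller-Trumbore ray-triangle test sketched in Rust (generic math, not tied to any particular GPU's intersection unit):

            type Vec3 = [f32; 3];

            fn sub(a: Vec3, b: Vec3) -> Vec3 { [a[0] - b[0], a[1] - b[1], a[2] - b[2]] }
            fn cross(a: Vec3, b: Vec3) -> Vec3 {
                [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
            }
            fn dot(a: Vec3, b: Vec3) -> f32 { a[0]*b[0] + a[1]*b[1] + a[2]*b[2] }

            /// Returns the distance t along the ray if it hits triangle (v0, v1, v2).
            fn ray_triangle(orig: Vec3, dir: Vec3, v0: Vec3, v1: Vec3, v2: Vec3) -> Option<f32> {
                let (e1, e2) = (sub(v1, v0), sub(v2, v0));
                let p = cross(dir, e2);
                let det = dot(e1, p);
                if det.abs() < 1e-8 { return None; }          // ray parallel to the triangle plane
                let inv = 1.0 / det;
                let t_vec = sub(orig, v0);
                let u = dot(t_vec, p) * inv;
                if !(0.0..=1.0).contains(&u) { return None; }
                let q = cross(t_vec, e1);
                let v = dot(dir, q) * inv;
                if v < 0.0 || u + v > 1.0 { return None; }
                let t = dot(e2, q) * inv;
                if t > 1e-8 { Some(t) } else { None }         // hit in front of the ray origin
            }

        A collision query against a mesh runs the same test over candidate triangles, which is why the hardware intersection units are reusable there.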

        • kllrnohj 6 days ago |
          > You can reuse ray-box and ray-triangle intersection for collision detection.

          The problem with doing so, though, and with GPU physics in general is that it's too high latency to incorporate effectively into a typical game loop. It'll work fine for things that don't impact the world, like particle simulations or hair/cloth physics, but for anything interactive the latency cost tends to kill it. That and also the GPU is usually the visual bottleneck anyway so having it spend power on stuff the half-idle CPU could do adequately isn't a good use of resources.

          • adastra22 5 days ago |
            I don't follow. You would do intersection tests every frame, right? Run the intersection tests, feed the results into the next frame calculation. Results may be delayed 1 frame, but at 60Hz or 120Hz that isn't really a problem.
            • kllrnohj 5 days ago |
              It is a problem. You've generated an entire impossible world state and shown it to the user before discovering it's bad and needing to rewind it.

              The faster your fps, the more flickery this ends up looking as you're constantly pulling back the camera or player or whatever a frame after it collided. The lower the fps, the more sluggish and unresponsive it feels.

              Alternatively you need to pipeline your entire world state and now you've got some pretty bad input latency

      • flohofwoe 6 days ago |
        It should be useful for anything that involves stabbing checks on large triangle soups. Whether it's worth "wasting" die space for this is debatable of course.
      • exDM69 6 days ago |
        > Ray tracing is just math, is that math applicable for any non graphics use?

        It's not "just math", it's data structures and algorithms for tree traversal, with a focus on memory cache hardware friendliness.

        The math part is trivial. It's the memory part that's hard.

        Ray tracing hardware and acceleration structures are highly specific to ray tracing and not really usable for other kinds of spatial queries. That said, ray tracing has applications outside of computer graphics. Medical imaging for example.
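
        For a flavor of the traversal side, a toy BVH walk with an explicit stack looks roughly like this (illustrative only, nothing like a real hardware traversal unit):

            struct BvhNode {
                aabb_min: [f32; 3],
                aabb_max: [f32; 3],
                // Leaf if count > 0; otherwise `first` is the index of the left child,
                // and the right child is stored immediately after it.
                first: u32,
                count: u32,
            }

            /// Collect leaves whose bounding boxes the ray may hit.
            /// The cost is dominated by chasing nodes through memory, not by the math.
            fn traverse(nodes: &[BvhNode], ray_hits_box: impl Fn(&BvhNode) -> bool) -> Vec<u32> {
                let mut stack = vec![0u32];
                let mut leaves = Vec::new();
                while let Some(i) = stack.pop() {
                    let node = &nodes[i as usize];
                    if !ray_hits_box(node) { continue; }
                    if node.count > 0 {
                        leaves.push(i);                 // leaf: triangles get intersected here
                    } else {
                        stack.push(node.first);         // left child
                        stack.push(node.first + 1);     // right child
                    }
                }
                leaves
            }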

    • zamadatix 6 days ago |
      While it may eventually be a non-gimmick in the industry as the compute needed for it advances more it's always going to be quite the gimmick in terms of what the M1/M2 GPU can do with it.
    • rowanG077 6 days ago |
      Wow! if the industry is pushing it must be good right? Just like the megapixels wars for cameras.
    • flohofwoe 6 days ago |
      ...and yet the most popular use case for raytracing seems to be more correct reflections in puddles ;)
    • ZiiS 6 days ago |
      Not sure if I can process "incredibly arrogant" in relation to the Herculean work and insight demonstrated.

      However, it is important to put things in context. Something can be a 'gimmick' on several-year-old integrated mobile hardware that can't run most games at a reasonable FPS even without it, and not a 'gimmick' on cutting-edge space heaters.

    • pxc 6 days ago |
      > it is a very desired feature people are upgrading video cards to get working on games they play.

      Ray tracing is also extremely heavily marketed, and at a time when GPU upgrades have become infrequent for most people due to extremely high prices and relatively weak overall performance gains year over year.

      The industry is adopting ray tracing because they need to be able to offer something new when the overall performance gains generation over generation have been slowing for years. They've been leveraging marketing hard to try to generate demand for upgrades among gamers, knowing cryptocurrency won't buoy them forever.

      That ray tracing is cited as a reason among gamers who do upgrade their GPUs is unsurprising and not really a strong indicator that ray tracing is a great feature.

      At any rate, both the gaming industry and the wider tech industry are known to be extremely faddish at times. So is consumer behavior. Things 'the whole industry is doing' has always been a category that includes many gimmicks.

    • ozgrakkurt 6 days ago |
      It is marketing BS; it was not needed and is not needed, in my opinion and many other people's. Expressing an opinion can't be arrogant, imho.
  • olliej 6 days ago |
    Alyssa is amazing, I remember the first article about the GPU work, and then learning she was only 17 and poof mind blown.

    It's truly stunning that anyone could do what she did, let alone a teenager (yes I know, she's not a teenager anymore, passage of time, etc :D)

  • gcr 6 days ago |
    I’ve been trained to expect articles with this headline to say something like “we’re dropping support and are getting acqui-hired.”
  • m463 6 days ago |
    "our target hardware is running literally none of those things". What is needed is to somehow translate DirectX to Vulkan, Windows to Linux, x86 to Arm64, and 4KB pages to 16KB pages.

    oh my.

  • kristianp 6 days ago |
    I was going to say she should work for Valve to help get Steam working on Linux on Macs, but it seems she already does? [1]

    [1] https://en.wikipedia.org/wiki/Alyssa_Rosenzweig#Career

    • jillesvangurp 6 days ago |
      Nice; that raises some interesting questions as to what Valve is planning here. Steam/Proton on Macs would make a lot of sense for them, hard as it may be. People booting Linux on their Macs to play games would probably really annoy Apple.
      • Dah00n 6 days ago |
        >Steam/Proton on Macs would make a lot of sense ..... would probably really annoy Apple

        Sounds like a win-win situation.

      • galleywest200 6 days ago |
        There was a Proton for ARM test build [1] that got talked about recently, so your guess is entirely possible.

        [1] https://www.tomshardware.com/video-games/pc-gaming/steam-lik...

      • GeekyBear 6 days ago |
        > People booting linux on their macs to play games would really annoy Apple probably.

        You don't have to boot Linux to play PC games on a Mac.

        Apple already provides the tools you need to build a Mac native equivalent to Proton.

        There are several options built using those tools, both paid (CrossOver) and free/open source (Whisky).

        Whisky combines the same open source Wine project that Proton leverages with Apple's Rosetta x86 emulation and Apple's DirectX emulation layer (part of the Game Porting Toolkit) into a single easy-to-use tool.

        https://getwhisky.app/

        • flyingpenguin 6 days ago |
          The problem here is that Apple only provides Metal as the graphics driver. This solution instead creates a native Vulkan driver, which has solutions to hardware<->Vulkan incompatibilities built in at the driver level.
          • GeekyBear 5 days ago |
            macOS provides Metal as its native graphics API.

            Proton and Alyssa's solution use Vulkan on Linux as their native graphics API.

            Regardless, you have to provide a translation layer so that Windows games written to call DirectX APIs use the native graphics layer of the platform you are running on.

            Unless you happen to be emulating a Windows game written to use Vulkan instead of DirectX, Vulkan really doesn't matter at all on the Mac.

            If you do want to emulate one of the rare Vulkan based Windows games on a Mac, the MoltenVK translation layer handles that.

        • a_vanderbilt 5 days ago |
          The problem is that Whisky is not for the casual user.

          I used it and found it confusing at first. Imagine your average gamer, they'd never get past dragging in the app. The Steam Deck (and Proton) have been so successful because you can just power on the Deck and use it. No tinkering or third-party tools to do what you want.

          • GeekyBear 5 days ago |
            That's certainly not what the reviews of Whisky had to say, although I have seen it said that buying a Crossover license gets you more hand holding than a free copy of Whisky does.
      • a_vanderbilt 5 days ago |
        I have a bet going that their next Steam Deck is going to have an ARM processor.

        Allegedly, Nvidia is working on a WoA SoC now that Qualcomm's contract with MS has ended. If Nvidia puts its recent capital gains into a performant ARM chip like Apple's (tapeout alone would likely run in the billion-dollar range), we can hopefully expect AMD to respond with something soon after. Once the chipsets are available, it's a matter of getting Linux to work with mixed page-size processes.

        I have no idea how feasible that is, but if the emulation performance is anything like what Rosetta and the M1 proved possible, most users won't even notice.

  • whitehexagon 6 days ago |
    What I really respect is the dedication to completing support for the M1/M2. Too many projects get dropped the moment the next shiny ray-traced toy comes along.

    I know this type of work can be challenging to say the least. My own dabble with Zig and Pinephone hardware drivers reminded me of some of the pain of poorly documented hardware, but what a reward when it works.

    My own M1 was only purchased because of this project and Alyssa's efforts with OpenGL+ES. It only ever boots Asahi Linux. Thank-you very much for your efforts.

    • Cthulhu_ 6 days ago |
      One thing I noticed in the M4 MacBook announcement comments was how many people were happy with their M1 laptop, and how many people kept their Macbooks for nearly a decade; these devices are built to last, and I applaud long-term support from Apple itself and the Linux community.

      Second, since it's open source, Apple themselves are probably paying attention; I didn't read the whole thing because it's going over my head, but she discussed missing features in the chip that are being worked around.

      • TrainedMonkey 6 days ago |
        Underrated point. Maybe it's the aluminum unibody or the more stable OS, but in my experience the average MBP lifetime is meaningfully longer than a Windows machine's. My longest-lasting Windows machine was a T400 ThinkPad, which lasted 5 years before the Core 2 Duo architecture stopped being able to keep up. It got replaced with an HP Envy with great specs but made out of plastic, which barely lasted 1.5 years before the screen fell off (literally). That was replaced with a 17" 2014 MBP which is still alive after an SSD replacement.
        • throw88888 6 days ago |
          If you replaced the T400 because it felt slow, maybe it’s just a software/OS issue.

          The hardware on Thinkpad T-models should last longer than just 5 years in general.

          My daily-driver laptop at home is a T420 from 2011 with a Core 2 Duo, SSD and 8GB RAM. Works fine still.

          I run Linux + OpenBox, so it is a somewhat lightweight setup to be fair.

          • dm319 6 days ago |
            My daughter just asked me for the 'tiny' laptop. She has taken my Thinkpad X60 which runs linux mint. It's getting on for 20 years old soon!
          • bhouston 6 days ago |
            > My daily-driver laptop at home is a T420 from 2011 with a Core 2 Duo, SSD and 8GB RAM. Works fine still.

            I am not sure I would be productive with that. Any Core 2 Duo is 10x slower single core and 20x slower multi-core than a current generation laptop CPU at this point.

            Eg: https://browser.geekbench.com/v6/cpu/compare/8588187?baselin...

            I think it would mostly be good as an SSH terminal, but doing any real work locally on it seems frankly unfeasible.

            • pixelfarmer 6 days ago |
              The problem is software, though. I have an X200s with 4 GiB RAM from 2009. It was interesting to see how Firefox got slower and slower over the years. Granted, it is not only Firefox but also bloated websites which use loads and loads of JS to display what is, in the end, static content. Then again, it is not like JS didn't exist back then: the XMLHttpRequest thingy for dynamic website updates, or whatever the name for that was, had been added years prior.

              So, yes, a lot of this comes down to software and a massive waste of cycles. I remember one bug in Electron/Atom where a blinking cursor caused something like 10% CPU load. They fixed it, but it says a lot about how broken the entire software stack was at that time, and it hasn't gotten better since.

              I mean, think about this: I used 1280x1024 on a 20" screen back in the mid '90s on (Unix!) machines that were insanely less powerful than even this X200s. The biggest difference: now you can move windows around visually, whereas back then you moved the outer frame to the new place and the window got redrawn there. And the formatting options in browsers are better, i.e. it is easier to design the layout you want. Plus there is no need for palette changes when switching windows anymore ("true color"). Overall productivity hasn't kept up with the increase in computing power, though. Do you think a machine with 100x the performance will give you 100x the productivity? With some exceptions, the weak link in the chain was, is, and will always be humans, and if there are delays, we are almost always talking about badly "optimized" software (aka bloat). That was an issue back then already and, unfortunately, it hasn't gotten better.

              • TrainedMonkey 5 days ago |
                This depends heavily on the workflow. For working with text, listening to music, or even doing some light paint work, my museum-piece 75 MHz K5 running Windows 2000 is enough. For building a multi-platform Python package embedding a compiler, you really want lots of cores; at that point we are talking about a 20x+ difference between a Core 2 Duo and a modern part. For the modern-day web experience you want something in between.
            • throw88888 6 days ago |
              Horses for courses ¯\_(ツ)_/¯

              I do development and DevOps on it. Sure there are some intense workloads that I probably couldn’t run, but it works just fine as my daily driver.

              I also have a corporate/work laptop from Dell with 32GB RAM, 16 cores @ 4.x GHz etc. - a beast - but it runs Windows (+ antivirus, group policy crap etc.) and is slower in many aspects.

              Sure I can compile a single file faster and spin up more pods/containers etc. on the Dell laptop, but I am usually not constrained on my T420.

              I generally don’t spend much time waiting for my machine to finish things, compared to the time I spend e.g. writing text/code/whatever.

        • vladvasiliu 6 days ago |
          There's also the fact that the quality is all-round higher, making them more enjoyable to use. The current HP Elitebooks have much crappier screens than my 2013 MBP. Touchpads have improved, but they're still leagues behind that 11-year-old machine.

          I'm usually fairly careful with my things, so my gen8 hp elitebook still has all its bits together, but I've never really enjoyed using it. The screen, in particular, has ridiculous viewing angles, to the point it's impossible to not have any color cast on some region.

        • ho_schi 6 days ago |
          ThinkPad X220 here. In service from late 2012 until late 2023 with Linux. It is still usable, but I finally replaced it with an X13 Gen3 AMD, also running Linux. The magnesium body is a blessing and feels a lot better on the skin than aluminium. The HiDPI display is the biggest upgrade. The six-row keyboard is sturdy but a downgrade from the seven-row keyboard. I miss the notch to open the lid.

          It got worse with Gen4/5 which now have an awful hump (reverse notch) like a smartphone.

          The long life of the X220 came down to the build quality but also to the five-year replacement-part support: new batteries and a new palm rest (cracked during a journey). It's not just quality you pay for, it is this level of support. And of course more memory. Apple still fails in this regard and barely does anything unless forced by the European Union. Anyway, Apple doesn't officially support Linux, therefore I cannot buy them for work.

          This is the part which saddens me: they do good work, and the next MacBook will again not fully run Linux. This kind of catch-up game by hackers cannot be won until the vendor decides you're a valuable customer. Therefore, don't buy them assuming you can run Linux. Maybe you can. But these devices are made for macOS only.

          But if you want to run Linux on a MacBook? Talk to your politicians! And send "messages" to Apple with your money, by buying ThinkPads, Dell's Developer Edition, Purism, System76 and so on :)

          • user_7832 6 days ago |
            > The magnesium body is a blessing and feels a lot better on the skin than aluminium.

            Just curious, how does it feel better? My framework apparently has an aluminium lid and a magnesium base, and the mg feels “smoother” than the slightly more textured al… however my iPad is apparently aluminium too and is smooth to the touch.

            • ho_schi 5 days ago |
              It is not about smoothness.

              Actually the magnesium bottom feels like some high-quality polymer/plastic. And it is lightweight. Therefore it doesn't feel like metal and doesn't conduct heat/cold. Aluminium is a nice material for planes, cars or bikes, but I avoid skin contact because it is uncomfortable.

              I guess, as so often, it depends on how the magnesium is made. Lenovo also uses high-quality plastics for other parts (the keyboard of course, and probably the palm rest).

          • bluedino 6 days ago |
            I never kept my ThinkPads around long because of the screens (which they have fixed, somewhere around the T450 timeframe).

            The X220/230 had that godawful 1366x768 panel. A shame, when the 13" Air at the time had a 1440x900 panel which, while not amazing in viewing angles and colors, was light years ahead of the screen in something like a T430.

            • ho_schi 5 days ago |
              ThinkPads usually come with two panel options: the cheap one and the good one. Often even a third or fourth.

              Lenovo should only ship the good ones! If I buy a ThinkPad I want the good one.

              I struggled to get the HiDPI panel for the X13 and ordered it from the replacement parts. Same for the X220: I replaced the TN panel with, drumroll, an IPS panel.

              Apple takes ridiculous amounts for memory or disk but you always get the good stuff with the base model. This makes it simple and reliable for customers.

              Except keyboards

              The ThinkPads always win: convex key caps, better switches, better pressure/release points and key travel, a better layout and a TrackPoint. In some models you can still replace the keyboard within 30 seconds (T14). With the X13 or a MacBook it is a horror that requires removal of the mainboard. Not to mention Apple's failure with the Touch Bar, which "feels" like they tried to save money on expensive switches by replacing them with a cheap touchscreen and selling this horrible downgrade as a feature. And the infamous butterfly switches are the reason to avoid any used older MacBook (often defective).

              • bluedino 5 days ago |
                The Carbon X1 2560x1440 screen was _fantastic_.
        • Sakos 6 days ago |
          I have a T420 with a Core i5 2500 in great condition which still runs great on Windows 11. Core 2 Duo just didn't have the performance to have longevity. Sandy Bridge and later really do. Windows devices last for ages and it's really weird pretending this is an Apple only thing.
        • goosedragons 6 days ago |
          Wouldn't an equivalent Core 2 Duo Mac be just as bad if not worse in the same time frame due to Apple's constant OS updates?

          I have a quad-core W520 from 2011; it's still VERY serviceable for modern usage. Not the fastest, but it even has USB 3 and a 1080p screen, which an equivalent-ish MBP from the time would not have.

      • duxup 6 days ago |
        People talk about Apple prices being higher but the longevity of their devices really erases that price difference for me.

        I still have an old iPhone 8 that I test with that runs well. I've had numerous Android devices in that timeframe die, slow to a crawl, or at best turn erratic.

        • exe34 6 days ago |
          my mid 2012 MacBook air refuses to die.
        • BirAdam 6 days ago |
          I am not so sure. I had a Pentium 4 eMachine that made it all the way to around 2012, and my first Athlon 64 machine from the same company had the same kind of life. In both cases the machines ran Linux, and they still work today. They were only retired because no meaningful software could run on them any longer. My PowerBook G4 got nowhere near that longevity, and owners of late-era Intel MacBooks are probably going to have similar issues.
          • bluGill 6 days ago |
            Those machines should have been retired long ago because modern machines use much less power (the Pentium 4 was really bad). Of course there is a difference between a machine that you turn on for an hour once a week and a machine you leave on 24x7. The latter use would pay for a new computer in just a few years from savings on the electric bill alone (assuming the same workload; if you then load the new computer with more things to do, you won't see those savings).
            • bityard 6 days ago |
              The non-trivial environmental damage of building the new computer AND disposing of the old one should not be ignored. There are plenty of us who would rather pay a few extra bucks on our power bill rather than send e-waste to the landfill before it needs to be.
              • bluGill 6 days ago |
                You are instead sending CO2 into the atmosphere (depending on how you get your power, but for most of us, burning things is how we get it; renewables are coming online fast, though).
                • BirAdam 5 days ago |
                  E-waste will create CO2 and physical environmental pollution. Much of it is just dumped on the third world, where some parts will be burnt, some melted down, and some subjected to chemical baths for metals extraction. Those chemicals will then be dumped.
          • duxup 6 days ago |
            Longevity (and certainly not performance over time) is not something I've ever heard associated with an eMachine. I think that machine might be an outlier.
        • BolexNOLA 6 days ago |
          I also point people to the Mac mini line whenever they talk about price. The new M4 at ~$700 is a steal, but they have been affordable since the M1 refresh. A real "bang for your buck" computer IMO.
          • opan 6 days ago |
            With soldered RAM and storage it seems quite risky to get the lowest spec version of any new Mac, so I don't see much point in discussing the entry-level price point. Do they still start at 8GB? I recall hearing that negatively impacted the SSD's performance significantly and people were recommending 16GB minimum for M1 stuff.
            • cdelsolar 6 days ago |
              I have an 8GB M1 MacBook Pro and it’s the perfect computer. I always have lots of tabs and Docker open etc. It works great. I want to upgrade at some point but probably don’t need to anytime soon.
              • hylaride 6 days ago |
                8GB is fine (on macos anyways) for what you're doing, but like it or not more and more AI is being (unnecessarily) shoved into applications and it needs memory for the GPU.

                Memory (both system and GPU) is usually the best thing to future-proof a computer with at buy time, especially as it's not user-replaceable anymore.

                • BolexNOLA 6 days ago |
                  You can get a 512GB/24GB for $1104.00 USD. Still seems pretty great to me. If you're disciplined with your internal/external storage you could also stay at a 256GB SSD and get 32GB of RAM.
            • BolexNOLA 6 days ago |
              Base model is 16GB as of the M4 release.

              You also don't have to get the base model. You can stay under $1000 while increasing to 24GB of RAM.

        • pjmlp 6 days ago |
          I still manage PCs that are about 15 years old, working happily for what their users care about.
        • reaperducer 6 days ago |
          > I still have an old iPhone 8 that I test with that runs well

          I have an iPhone 3G from 2008 that is currently playing music as I work.

          After 16 years, it still syncs perfectly with Apple Music on the current version of macOS.

          Unfortunately, since it can't handle modern encryption, I can't get it to connect to any wifi access point.

      • mjangle1985 6 days ago |
        I still have an M1 Air. I love it.

        I considered upgrading, but it's hard to care because my M1 is just so good for what I need it for.

      • attendant3446 6 days ago |
        I have the opposite experience. Apple is incredibly difficult and expensive to repair. But I have been pleasantly surprised by the longevity and repairability of ThinkPads. I like those Apple M processors, but I know where I'm spending my money.
        • jhenkens 6 days ago |
          I agree on modern Macs being difficult to repair. I will also say that a decade or two ago, it was likely you'd need to repair your computer after four years. Now, a four-year-old MacBook Air still feels brand new to me.
        • windowsrookie 6 days ago |
          Yes, MacBooks are generally more expensive to repair, but they also tend to not need repairs. It’s quite normal to hear from people who are using their 7+ year old MacBook without any issues and are still perfectly happy with it. I myself still use my 2018 MacBook Pro as my main device.

          When considering longevity, I will agree that Thinkpads are probably the only device that can compete with MacBooks. But there are trade-offs.

          MacBooks are going to be lighter, have better battery life, and have better displays. Not to mention macOS, which is my preferred OS.

          Thinkpads usually have great Linux support and swappable hardware for those who like to tinker with their devices. They also tend to be more durable, but this adds more weight.

          • throwaway24124 6 days ago |
            I still use a 2014 macbook air as my home server. It was my daily driver for 7 years. No complaints, still works perfectly. Haven't been careful with it either.
          • MSFT_Edging 6 days ago |
            > MacBooks are going to be lighter

            Not going to let Macs have this one: my X1 Carbon is considerably lighter than an MBA.

            But generally I agree. My last X1C lasted 8 years and still works perfectly; I just wanted an upgrade. My new one is even lighter than my old one. I opt for the FHD panel without touch screen and the second-best processor to balance battery life and performance. Definitely not getting 18 hrs of battery life, but 8-10 isn't something to laugh at.

            • windowsrookie 6 days ago |
              I admit I was assuming they would be heavier, I didn't consider the X1 Carbon. When I think of Thinkpads I still picture the traditional IBM-style Thinkpad. A quick look at the specs shows the 14" X1 Carbon at 2.42lbs, 13" MacBook Air at 2.7 lbs, and a 14" Thinkpad E series at 3.17 lbs.
            • PittleyDunkin 5 days ago |
              The 12-inch macbook was probably the best computer I've ever used in my life. I don't think anyone has come close to its weight. Apparently it weighs in at 2.03 pounds.

              Sadly, I don't think we'll ever get a computer that good from apple (or anyone) again.

          • FuriouslyAdrift 6 days ago |
            MacBooks are some of the heaviest laptops on the market.

            The Air is the "light" one at 2.7 lbs for the 13" and 3.3 lbs for the 15".

            For reference, there are several 13" laptops on the market that are around 2 - 2.5 lbs and 15"+ that are less than 3 lbs

            • windowsrookie 6 days ago |
              Do any of those lighter laptops match the battery life and performance of the MacBook Air, while also being completely silent? I suppose I should have been more specific and stated I don't believe there are any laptops that can match the MacBook in all categories while being lighter.
          • Dah00n 6 days ago |
            >MacBooks are going to be lighter

            That sounds like an Apple sound bite, and it is wrong, compared to pretty much any MacBook competitor out there...

          • majormajor 6 days ago |
            I haven't gotten a new ThinkPad since the 25th anniversary one, but that was the last in a line of them I used as daily drivers for a decade starting in 2008.

            The ultimate issue was that the batteries degraded incredibly fast. I don't know if that's been fixed, but the ease of replacing (at least one of) the batteries was more than canceled out by the short battery lifespan compared to a Mac's.

          • 3np 5 days ago |
            It's also hard to beat the global network of on-site warranty service. One colleague had their broken TP keyboard replaced within an hour right next to us, and got a fresh battery replacement more than two years in.

            Another one managed to sort out their dead TP motherboard while remoting from a small town in SEA.

            All covered under warranty, no questions asked.

          • lostlogin 3 days ago |
            Please mention the trackpad. Apple’s ones are hard to beat.
        • javier2 6 days ago |
          Yes, modern Macs are stupidly hard to repair. But I am never using any Lenovo product again since the whole rootkit incident.
          • fsflover 5 days ago |
            How about Purism and System76?
            • commandersaki 4 days ago |
              So the choice is now mediocre and high quality?
              • fsflover 4 days ago |
                Mediocre hardware and high-quality software or vice versa.
        • jrochkind1 6 days ago |
          Agreed, hard to repair (which is a problem), but my experience is owning one for 8 years, and another for 4 years now (with hopes to keep it another 6), and neither has ever needed a repair.
          • TylerE 5 days ago |
            In over 25 years of Mac ownership (my first Mac was black, had a G3 in it, and ran OS 8), I've experienced a grand total of one hardware failure, and that was 15+ years ago.
            • jrochkind1 5 days ago |
              Oh you know what, I realized I wasn't accurate. The 8-year MacBook I had _did_ have a failure at one point. They replaced the entire keyboard/trackpad for me for free when it did.
        • crossroadsguy 6 days ago |
          Indeed. However, once you need a repair it's so daunting. Now factor in non-developed nations (prices there are usually the same or more for both parts and service) and it's just insane. I had a 7-8 year old MacBook Air that I had bought for ~60K INR, and I had to replace its SSD. Just because it was to be done on an Apple device, even the outside repairperson charged ~16K (the Apple "authorised" service centre quoted 28K + taxes with a straight face). That outside repair was way too costly in late 2022 for a 128GB SSD. Same goes for their other devices.

          So what's to be done? Buy their insanely costly "limited" extended warranty for "2 more" years? And let's say we do that, then? I am sure there is an argument for that.

          I am typing this from an M1 MacBook Pro, and if it dies I am really not sure whether I will even be able to get it repaired without "feeling" fleeced, or whether I might as well move back to the normal laptop + Linux world and know that a laptop repair will never be a "minor bankruptcy" event ;-)

          No, "but Apple devices last long" doesn't cut it. So do non-Apple devices, yes they do. And if they need repair, you don't fret at all, with or without warranty.

          I am not sure how many here on HN will be able to connect with this, but that sword of Damocles hanging overhead is not a nice thing to have when you use a costly Apple device.

          Making it easy and cheap/affordable to repair their devices should not be something left optional for OEMs.

        • PittleyDunkin 5 days ago |
          I found the thinkpads were too easily warped.
      • hurutparittya 6 days ago |
        I'm always surprised when people speak highly of Apple devices here. While they do have certain advantages, there are some issues that should be dealbreakers for tech literate people. (in my own, possibly biased opinion at least)

        In case of Macbooks, it's the fact that they refuse to provide an official GPU driver for Linux and general poor support for things outside the walled garden. The Asahi stuff is cool and all, but come on, is a 3.4 trillion dollar company really going to just stand there and watch some volunteers struggling to provide support for their undocumented hardware without doing anything substantial to help? That sounds straight up insulting to me, especially for such a premium product.

        For iphones, it's the fact that you are not allowed to run your own code on YOUR OWN DEVICE without paying the Apple troll toll and passing the honestly ridiculous Apple Store requirements.

        And of course, in both cases, they actively sabotage third party repairs of their devices.

        • umanwizard 5 days ago |
          > there are some issues that should be dealbreakers for tech literate people. (in my own, possibly biased opinion at least)

          I know you admit right after that your opinion is biased, but it's almost ludicrous to assert that all the programmers and engineers using Macs and iPhones by choice must just not be tech literate.

          > In case of Macbooks, it's the fact that they refuse to provide an official GPU driver for Linux

          MBPs are so much better than any other laptop that, with a few caveats[1], running Linux in a VM on a top-of-the-line MBP is a much nicer experience than using Linux natively on any other laptop. So while it'd be nice if there were more first-party support for Linux, it's certainly not a deal-breaker for "tech-literate" people. (Not to mention the fact that there are "tech-literate" people who use macOS and not Linux, so it wouldn't matter to them at all).

          > general poor support for things outside the walled garden

          macOS isn't a walled garden, so I don't know what you mean. You can download any software you want from anywhere you want and run it on your laptop, and Apple doesn't do anything to try to prevent this.

          > The Asahi stuff is cool and all, but come on, is a 3.4 trillion dollar company really going to just stand there and watch some volunteers struggling to provide support for their undocumented hardware without doing anything substantial to help? That sounds straight up insulting to me, especially for such a premium product.

          Now it's unclear whether your point is "I don't understand why people use Macs because there are objective drawbacks" or "I don't think people should use Macs because Apple does stuff that I find annoying". You're blending the two here but they are meaningfully separate points. I've discussed the practical point already above, but as for the stuff you subjectively find annoying: surely the only real answer is that lots of other people just subjectively don't care as much as you.

          > For iphones, it's the fact that you are not allowed to run your own code on YOUR OWN DEVICE without paying the Apple troll toll and passing the honestly ridiculous Apple Store requirements.

          I don't care about this at all. I've never wanted to run my own code on my own iOS device except when I was working on iOS apps and had an Apple developer account through work. I, like the vast majority of people, use my phone as a browsing/messaging/media consumption device, not as a general-purpose computer.

          If Apple tried to prevent me from running my own code on my own MacBook, it would be a deal-breaker, but as I already said above, they don't.

          In conclusion I think you've confused "tech-literate person" and "geek who wants to be able to tweak/configure everything". Indeed there is a large overlap between those sets, but they're not quite the same thing.

          [1] https://news.ycombinator.com/item?id=41997107

          • hurutparittya 5 days ago |
            I agree, "tech-literate" was a poor choice of words on my part. Tech enthusiast or tinkerer would have been much better options to convey my opinion.

            I feel like there used to be a higher concentration of those people here.

      • resource_waste 6 days ago |
        >how many people kept their Macbooks for nearly a decade

        Are your laptops not lasting 10 years? (battery swaps are a must though)

        The only reason I switched laptops was that I wanted to do AI Art and local LLMs.

        I have so many old laptops and desktops that each of my 5 kids has their own. They are even playing half-modern games on them.

      • Neonlicht 6 days ago |
        Using a laptop with 8gb RAM for a decade is an exercise in frustration
        • bzzzt 6 days ago |
          Only if you insist on running software that needs more RAM, in which case you shouldn't have bought it.
          • Der_Einzige 6 days ago |
            Apple should not have sold devices with 8GB or less back in 2018. Them doing it in 2024 is a sign that they think their users are idiots.
            • lowbloodsugar 5 days ago |
              I just bought the original M1 Air 8GB for my spouse from Walmart for $650, to replace an aging Intel Air 8GB. The difference is stark. The M1 Air is a pleasure to use, even with the inevitable library's worth of browser tabs.

              Plenty of people have no need for more than 8GB. My spouse. My mom. My English-major son. Meanwhile, my M2 is saying I have 72GB in use, with IntelliJ #1 at 6.05GB. I would not be happy with 8GB. Sounds like you wouldn't be either. So don't buy one.

      • TigerAndDragonB 6 days ago |
        > ...how many people kept their Macbooks for nearly a decade; these devices are built to last...

        This is no longer true for me. I've been an Apple fan since the Apple ][ days, and reluctantly left the ecosystem last year. The hardware walled garden, with soldered-on components and components tied down to specific units for ostensible privacy and security reasons (I don't buy those reasons), combined with the steadily degrading OS polish and fine attention to detail, meant that for me personally I could no longer justify the cognitive load of continuing with a Mac laptop as my daily driver. While others might point to a cost and/or value differential, I'm in the highly privileged position of being insensitive to those factors.

        The last straw was a board-soldered SSD that quit well before I was willing to upgrade, and even Louis Rossmann's shop said it would cost way more to desolder and solder on a new one than the entire laptop is worth. I bought a Framework the same day; when it arrived I restored my data files to it and have been running it as my daily driver ever since. The Mac laptop is still sitting here, as I keep hoping to find time to develop my wave soldering skills and try my hand at saving it from the landfill, or break down and unsustainably pay for the repair (I do what I can to avoid perpetuating dark patterns, but it is a Sisyphean effort).

        I found myself in a position of having to think increasingly about working around the Mac ecosystem instead of working invisibly within it (like a fish in water not having to think about water), so much so that it no longer made sense to stick with it. It has definitively lost the "It Just Works" polish that bound me so tightly to the ecosystem in the past. I see no functional difference in my daily work patterns using a Mac laptop versus a Framework running Fedora.

        To be sure, there are a lot of areas I have to work around on the Framework-Fedora daily driver, but for my personal work patterns, in my personal needs, I evaluated them to be roughly the same amount of time and cognitive load I spent on the Mac. Maybe Framework-Fedora is slightly worse, but close enough that I'd rather throw my hat into the more open ring than the increasingly closed walled garden Apple's direction definitely is taking us, that does not align with my vision for our computing future. It does not hurt that experimenting with local LLM's and various DevOps tooling for my work's Linux-based infrastructure is way easier and frictionless on Fedora for me, though YMMV for certain. It has already been and will be an interesting journey, it has been fun so far and brought back some fond memories of my early Apple ][, Macintosh 128K, and Mac OS X days.

      • bluedino 6 days ago |
        I have a personal M1 13" Air, and a work M3 16" Pro, and other than the silly 8GB limitation, I don't notice much of a difference in what I do when using the Air.
        • oorza 6 days ago |
          There are three buckets of performance in interactive software: so fast it doesn't matter to the user, slow enough that the user notices but doesn't lose focus, and slow enough that the user has time to lose focus. The lines are obviously different for each person, which is why some people feel that software is "fast enough" well before others do.

          The jump from an i9 to an M1 moved a lot of tasks from group 3 into 2, some tasks from group 2 into group 1, and was the biggest perceived single-machine performance leap for me in my professional career. I have an M1 Max or Ultra on my work machine and an M3 Ultra in my personal machine - after two years of the work machine being so visibly faster, I caved and upgraded my personal. The M3 Ultra moves a handful of things from group 2 to group 1 but it's not enough to move anything out of group 3.

      • outworlder 5 days ago |
        > how many people kept their Macbooks for nearly a decade; these devices are built to last, and I applaud long-term support from Apple itself and the Linux community.

        Anecdotal, but I have a white MacBook from 2010. It's sitting on a shelf not because it doesn't work (minus the battery), but because it's too outdated to be of much use. And there's a small crack in the case.

        I have a MacBook Pro from 2016. I still whip it out from time to time when I don't want to use my latest one for whatever reason (say, network troubleshooting). It works fine. Even the battery still holds a charge. If those two had USB-C (and charging over that port) I'd probably use them more often. Their keyboards are also pleasant (since they are from before the scissor key nonsense).

        My company has thousands of Macbooks. It's rare that I see anyone at all with issues. They aren't perfect, but the build quality is far better than most PC laptops and so is the attention to detail. The price premium kinda blows, but the M line made them way better.

        • jpurnell 5 days ago |
          I power up my Titanium Powerbook at least once a year or so. 23 years old and works just fine. I last opened it July 4, 2023, and there was a software update waiting for me.
    • patates 6 days ago |
      > what a reward when it works

      as someone who's been coding for more than 20 years, the happiest and the most depressed moments of my career both came during a hardware project I participated in for only 4 months.

    • Dah00n 6 days ago |
      >Too many projects get dropped the moment the next shiny ray-traced toy comes along.

      Well.... (from the article):

      >"frankly, I think ray tracing is a bit of a gimmick feature"

      I couldn't agree more, on both counts.

  • skoczko 6 days ago |
    Since bringing modern OpenGL and Vulkan onto Apple Silicon is impossible without an emulation layer anyway, could, theoretically, a native Metal API for Linux be created? Or is Metal too ingrained in macOS SDKs? MoltenVK is attempting to solve the same issues Alyssa was talking about in her talk [1, the last comment on the issue is hers]

    [1] https://github.com/KhronosGroup/MoltenVK/issues/1524

    • Twisell 6 days ago |
      Nothing is barring Apple from supporting Vulkan natively on macOS. This is essentially the closing statement of Alyssa Rosenzweig's talk.

      With Apple's knowledge of its internal documentation, they are the best positioned to produce an even better low-level implementation.

      At this point the main roadblock is the opinionated stance that Metal porting is the only officially supported way to go.

      If Valve pulls off a witch-crafted way to run AAA games on Macs without Apple's support, that would be an interesting landscape. And maybe it would force Apple to reconsider their approach if they don't want to be cornered on their own platform...

      • bri3d 6 days ago |
        > If Valve pulls off a witch-crafted way to run AAA games on Macs without Apple's support, that would be an interesting landscape. And maybe it would force Apple to reconsider their approach if they don't want to be cornered on their own platform...

        Right, except that Game Porting Toolkit and D3DMetal were an exact response to this scenario. Whether it's the right approach, time will tell, but Apple definitely already headed this one off at the pass.

        • 6SixTy 5 days ago |
          Game Porting Toolkit isn't a response to this scenario. All advertising for GPTK is aimed squarely at publishers, and even Whisky has to pull a binary out of a back alley for D3DMetal. Apple is essentially doing nothing and hoping it works.
      • GeekyBear 6 days ago |
        > Metal porting is the only officially supported way to go.

        Apple already provides its own native implementation of a DirectX to Metal emulation layer.

        • Twisell 5 days ago |
          And yet I see more games available for the Steam Deck than for Apple Silicon... Maybe because porting as opposed to emulating requires action on developer side.

          And this is especially true for existing games that "aren't worth porting" but are still fun to play. (Is there a Vulkan-to-Metal / OpenGL-to-Metal path in the Game Porting Toolkit? Is it the NexStep?)

          There is actually a sweet spot here for Valve that could benefit everyone:

            - Valve as a necessary third party
          
            - Gamers to access a wider catalog
          
            - Apple so they don't have to bother developing a porting process for old games
          • GeekyBear 5 days ago |
            > Maybe because porting as opposed to emulating requires action on developer side.

            The Steam Deck is also just emulating native Windows APIs, but on Linux.

            https://www.wikipedia.org/wiki/Proton_(software)

            Game compatibility isn't 100% with either, and both have community maintained lists with compatibility information.

    • aseipp 6 days ago |
      I don't see why not. There are, after all, implementations of DirectX for Linux too, which is how Proton works. But I'm not sure if it would be better to build that API as a layer on top of Vulkan (completely "client side", like MoltenVK or dxvk do) or actually integrate it more deeply into Mesa. The first is certainly easier to start with, I guess.
  • wwalexander 6 days ago |
    Alyssa Rosenzweig deserves a Turing Award!
  • helsinki 6 days ago |
    Alyssa's solution to the 4KB vs. 16KB page size discrepancy by running everything in a virtual machine feels like both a clever hack and a potential performance bottleneck. It makes me wonder about the long-term implications of such workarounds. Are we reaching a point where the complexity of bridging these gaps outweighs the benefits, especially when dealing with proprietary hardware designed to be a closed ecosystem?
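
    To make that first point concrete, here is a minimal sketch (Linux/C, my own illustration rather than anything from the talk) of what userspace sees. The page size is a property of the running kernel, and every mapping comes in whole pages, which is why software built around 4KB assumptions can trip on a 16KB kernel; on an Asahi kernel this should print 16384, while inside the 4K-page VM it should print 4096.

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void) {
          /* Page size is a property of the running kernel: Asahi Linux on
             Apple Silicon uses 16 KiB pages, most x86 distros use 4 KiB. */
          long page = sysconf(_SC_PAGESIZE);
          printf("kernel page size: %ld bytes\n", page);

          /* Mappings are handed out in whole pages, so binaries and allocators
             that bake in a 4 KiB assumption can misbehave on a 16 KiB kernel;
             hence the 4K-page VM as a compatibility shim. */
          void *p = mmap(NULL, 1, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED) {
              perror("mmap");
              return 1;
          }
          printf("asked for 1 byte, got a %ld-byte page at %p\n", page, p);
          munmap(p, 1);
          return 0;
      }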

    This also touches on a broader question about the future of open-source efforts on platforms that are inherently restrictive. While it's inspiring to see games like Control running at 45fps on an M1 Max with open-source drivers, it raises the question: Should the community continue to invest significant resources into making closed systems more open, or should efforts be redirected toward promoting more open hardware standards?

    Apple's approach to hardware design warrants criticism. By creating GPUs with limitations that hinder standard functionalities like tessellation shaders and using non-standard page sizes, Apple places unnecessary obstacles in the path of developers. This closed ecosystem not only complicates the work for open-source contributors but also stifles innovation that could benefit all users.

    • i000 6 days ago |
      What do you mean by "should the community do X"? The community is not some form of organisation with a mission and objective; it is a loose collection of individuals free to put their talents into, and explore their interests in, whatever they please. You imply that this creative and inspiring work somehow stifles innovation and hurts users, which is frankly absurd.
      • helsinki 6 days ago |
        You should re-read the last paragraph of my comment and note that it is directed towards Apple.
        • i000 6 days ago |
          I said 'imply' because your response is not to an article on Apple's walled garden but to the work of open-source developers. I agree with you 100% on criticizing Apple, but not on whether someone should put effort into making it more open.
        • umanwizard 6 days ago |
          OP was talking about the middle paragraph of your comment, not the last one. I.e. the one in which you're talking about what "the community" should do, which is not really a meaningful question.
    • GeekyBear 6 days ago |
      > Alyssa's solution to the 4KB vs. 16KB page size discrepancy by running everything in a virtual machine feels like both a clever hack and a potential performance bottleneck.

      In her initial announcement, she mentions VM memory overhead as the reason that 16 Gigs of RAM will be the minimum requirement to emulate most Windows games.

    • oorza 6 days ago |
      I am genuinely curious if those barriers have technical justifications. There's a pretty stark difference (to me, at least) between ignoring standards in order to reinvent better wheels and intentionally diverging from standards to prevent compatibility.

      It's a question of whether they're _not_ investing resources to maintain standard behavior or they are actively investing resources to diverge from it. If it's the former, I don't find any fault in it, personally speaking.

    • jahewson 5 days ago |
      > Should the community continue to invest significant resources into making closed systems more open, or should efforts be redirected toward promoting more open hardware standards?

      False dichotomy. Do both!

    • mft_ 5 days ago |
      > Apple's approach to hardware design warrants criticism. By creating GPUs with limitations that hinder standard functionalities like tessellation shaders and using non-standard page sizes, Apple places unnecessary obstacles in the path of developers. This closed ecosystem not only complicates the work for open-source contributors but also stifles innovation that could benefit all users.

      Apple designs its hardware to suit its own ends, and its own ends only. It's obvious to everyone here that this supports their closed business model, which actually works for them very well - they make excellent hardware and (some software flakiness more recently notwithstanding) the median user will generally have an excellent time with their hardware + software as a result.

      So they're not placing "unnecessary obstacles in the path of developers" at all by designing their hardware as they do - they're just focused on designing hardware to suit their own needs.

      (Also note that if their hardware wasn't excellent, there wouldn't be such interest in using it in other, non-Apple-intended ways.)

    • rldjbpin 5 days ago |
      > This closed ecosystem not only complicates the work for open-source contributors but also stifles innovation that could benefit all users.

      from their perspective, motivated devs are doing all the heavy-lifting for them. from this side of ecosystem, they would mainly care about native app compatibility and comparable AI (inference) experience.

      both of the above seem to be taken care of, sometimes through combined efforts. other than this, they are happy to lock things down as much as they can get away with. the community unfortunately gravitates towards overall appeal rather than good open initiatives.

  • flkenosad 4 days ago |
    I love her. Keep up the incredible work Alyssa.