The Eternal Mainframe (2013)
75 points by w3ll_w3ll_w3ll a day ago | 12 comments
  • jmclnx a day ago |
Interesting article, and it makes a lot of sense to me.
  • worewood a day ago |
    At least in my experience, when I've heard that the "mainframe is going to die", it's specifically referencing the IBM ecosystem - COBOL and friends.

To me it looks like the author is grasping at straws when saying "well, a server rack is just like a mainframe, so the mainframe is not dying!".

    To me it's the opposite: yes, a server rack is just like a mainframe. That's why the mainframe is dying - a bunch of servers can do the same work much more cheaply.

    • sillywalk a day ago |
A mainframe is also a rack server now. Newer IBM mainframes come in configurations that fit in standard 19" racks.
      • dapperdrake a day ago |
        It seems to be more about the data tables, data models and use cases, rather than the hardware.

Maybe we can build a CSV file that only fits into a 19-inch rack. Someone would buy it for owning their own vendor lock-in story…

    • dapperdrake a day ago |
Well, in a rather simplistic worldview, the argument boils down to the following, which does seem to play out in practice:

      (A) There are two dominating uses for large computers:

(Use 1) HPC, i.e. floating-point numbers, Fortran, LLMs, CERN, NASA, GPGPU, numerical analysis, etc. These examples all fall into the same bucket at this level of (coarse) granularity.

(Use 2) Accounting. Yes, accounting. Append-only logs, git, Merkle trees, PostgreSQL MVCC (vacuum is necessary because the storage boils down to an append-only log that is cleaned up after the fact; see also Reddit's two giant key-value tables), CRDTs, credit cards, insurance, and accounting ledgers.
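
      (A minimal, hypothetical sketch of the "accounting is an append-only log" idea; all names here are made up for illustration:)

          # Append-only ledger: history is never mutated, state is derived.
          from dataclasses import dataclass

          @dataclass(frozen=True)
          class Entry:
              account: str
              amount: int  # cents; negative for debits

          log: list[Entry] = []

          def append_entry(e: Entry) -> None:
              log.append(e)  # the only write operation there is

          def balance(account: str) -> int:
              # Current state is a fold over the immutable history,
              # exactly like replaying a journal, a git log, or MVCC.
              return sum(e.amount for e in log if e.account == account)

          append_entry(Entry("alice", 500))
          append_entry(Entry("alice", -200))
          assert balance("alice") == 300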

Use (2) is dominated by the CAP theorem and by favoring consistency over availability during network partitions, because it has to provide and enforce a central, coherent view of the world. Even Bitcoin cannot fork too hard, or account balances stop being meaningful. (Philosophical nitpicking: how does this relate to General Relativity and differential geometry? Can this then only ever be "local" in the sense of general relativity?)

This is where mainframe-style hardware redundancy always enters the picture (or your system sucks). Examples: (i) RAIM (RAID for RAM), (ii) basically everything ZFS does, and (iii) running VMs/Docker containers concurrently in two data centers in the same region (old style: two mainframes within 50 miles and a "coupling facility").
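
      (A rough sketch of the redundancy pattern in (ii): checksummed, self-healing mirrored reads in the spirit of ZFS; the function names are invented:)

          import hashlib

          def checksum(data: bytes) -> bytes:
              return hashlib.sha256(data).digest()

          def read_mirrored(replicas: list[bytes], expected: bytes) -> bytes:
              # Accept the first replica whose checksum matches...
              good = next((d for d in replicas if checksum(d) == expected), None)
              if good is None:
                  raise IOError("all replicas corrupt")
              # ...and repair any silently bit-rotted copies (a ZFS
              # scrub does essentially this across the whole pool).
              for i, d in enumerate(replicas):
                  if checksum(d) != expected:
                      replicas[i] = good
              return good

          blocks = [b"payload", b"paylaod"]  # second copy is bit-rotted
          assert read_mirrored(blocks, checksum(b"payload")) == b"payload"
          assert blocks[1] == b"payload"     # corrupt copy was healed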

      (B) All other uses like playing Minecraft or Factorio, smartphones, game consoles and running MS Excel locally are rounding error in the grand scheme of things.

Note: Even Oxide Computer seems to be going down this route. IBM ended up there because every customer who can pay for the problem to be fixed makes you fix your hardware and processes. Period.

In the end, all processes and hardware are locked down and redundant/HA/failure-resistant/anti-fragile. It is a mainframe in all but name, isomorphic in every conceivable aspect. This is Hyrum’s law for our physical and geometric environment. The other systems die out.

Even the Linux userspace ABI, the JVM, SQLite, cURL, and (partially) JavaScript are intensely focused on backward compatibility. Everything else breaks and is abandoned. Effectively, every filesystem that isn’t at least as good as ZFS is a waste of time.

      (C) https://datademythed.com/posts/3-tier_data_solution/

(D) Look at what MongoDB promised in the beginning, what they actually ended up delivering, what problems they had to solve along the way, and how much work that turned out to be.

EDIT: Added points (C) and (D).

    • brazzy a day ago |
      Did you stop reading somewhere in the middle? I cannot fathom how else you could miss the point so completely.

      The point is not about technological similarities at all. It's about who controls the hardware and thus ultimately has the power over its use.

      "The mainframe" which the author is talking about is not characterized by COBOL, but by having huge corporations control the hardware which everyone is using in their daily lives, giving them power over everyone.

      • dapperdrake 13 hours ago |
        Luckily you both missed the point.

        You missed this one here:

        "The data is most valuable when it is in the mainrack. Your Facebook data isn't nearly as useful without the ability to post to the pages of your friends. Your Google Docs files aren't as useful without the ability to collaborate with others. Dynamic state matters; it's the whole point of having computers because it allows automation and communication."

Data tables, data tables, and data tables. Data tables over flowcharts. Fred Brooks all the way down.

    • rbanffy 14 hours ago |
      > a bunch of servers can do the same work much more cheaply.

If you want to think like that, you need to factor in the reliability aspect as well. The fact that mainframes are still selling well suggests it's very expensive to get 99.999% reliability out of a rack of 1U servers.
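
      (Back-of-the-envelope arithmetic for what five nines means, and what redundancy buys you; the 99% per-server availability is an assumed figure:)

          # Five nines of uptime leaves about five minutes a year.
          minutes_per_year = 365.25 * 24 * 60
          print((1 - 0.99999) * minutes_per_year)  # ~5.3 minutes/year

          # N independent servers at an assumed 99% each, with the
          # service up if at least one server is up:
          for n in (1, 2, 3):
              print(n, 1 - (1 - 0.99) ** n)  # 0.99, 0.9999, 0.999999
          # Redundancy buys nines fast, but only if failures are truly
          # independent -- which is exactly the expensive part.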

  • nayuki 21 hours ago |
    Great article reflecting on who controls computational resources - the user or the company.

    I want to respond to one point mentioned:

    > Those who continue to do significant work offline will become the exception; meaning they will be an electoral minority. With so much effort being put into web apps, they may even be seen as eccentrics; meaning there may not be much sympathy for their needs in the halls of power.

    What I find scary is that developers see web apps (and hence the open web platform) as no longer fashionable, and instead focus on developing mobile apps (iOS and Android).

There are various services that are only available to mobile users, not PC / web browser users. One I can recall off the top of my head is Snapchat, a decade ago. Other examples today include various bike-sharing apps, and possibly some banking apps too. Often the web app and mobile app don't reach feature parity, and often the company pushes people to download the mobile app and discourages visiting the website (e.g. Reddit).

    • privong 18 hours ago |
      > What I find scary is that developers see web apps (and hence the open web platform) as no longer fashionable, and instead focus on developing mobile apps (iOS and Android).

      Based on my reading of the original article, I think your response here might be a bit tangential to the point about offline work? Web apps and mobile apps are both in practice largely the same in terms of who controls the computing resources -- they generally only work by speaking to the remote servers of the company.

      But in detail, mobile apps seem like they would have a slight edge for promoting offline work. While they in practice "phone home" (or even store data remotely), they could in principle be written without needing to contact external servers and in a way that all data is stored locally.
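
      (To make that concrete, a hypothetical sketch of an app whose state lives entirely on the device, using only local storage:)

          import sqlite3

          # A local file; no network calls anywhere, so it works offline.
          db = sqlite3.connect("notes.db")
          db.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")

          def add_note(body: str) -> None:
              with db:  # commits on success
                  db.execute("INSERT INTO notes VALUES (?)", (body,))

          def all_notes() -> list[str]:
              return [row[0] for row in db.execute("SELECT body FROM notes")]

          add_note("written on a plane, no server involved")
          print(all_notes())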

I do agree that apps often attempt to silo users and can do this more effectively than web apps, but as used now, both are mostly online tools and so don't really address the question of tools that are usable offline.

    • rbanffy 14 hours ago |
      > What I find scary is that developers see web apps (and hence the open web platform) as no longer fashionable, and instead focus on developing mobile apps (iOS and Android).

Few of those apps are standalone mobile apps. More often than not, the mobile app is a thin layer on top of the web app.

  • ggm 18 hours ago |
    I think "looks like" is very ephemeral to what being a mainframe IS. it's true that MP systems can look like a single entity, but they are highly asynchronous. The prism architecture meant a single clock state propagating cleanly across the CPU(s) provided a consistent, time managed framework to be all things to all people. (thats how I understand it)

A Sun E10000 is, to my mind, as close as a mainframe gets in the post-SPARC era. The Tandem NonStops had some of it, and the final DEC cluster model was getting there, but in a distributed, asynchronously clocked manner.

In the end, the point of the mainframe was TPC-style transaction throughput: the sustained rate at which it could process edge-device request-response sequences within the bounds of your choices in the CAP theorem. Distributed systems solve the same problem, but with other consequences. It still freaks me out that IBM had so many irons in the fire that they had room to do this, AND to make Unix work inside this, and manage legacy, and scale out to whole-of-government or SAGE or SABRE or you-name-it.

I hated using AIX as a sysadmin coming from the DEC world view, btw. I'm not an IBM fanboi. I lived in the competition.