• OtomotO 8 days ago |
    Right.

    Depending on my client's needs we do it oldschool and just rent a beefy server.

    Using your brain to actually assess a situation and decide without any emotional or monetary attachment to a specific solution actually works like a charm most of the time.

    I also had customers who run their own cloud based on k8s.

    And I heard some people have customers that are on a public cloud ;-)

    Choose the right solution for the problem at hand.

  • lkrubner 8 days ago |
    Interesting that the mania for over-investment in devops is beginning to abate. Here on Hacker News I was a steady critic of both Docker and Kubernetes, going to at least 2017, but most of these posts were unpopular. I have to go back to 2019 to find one that sparked a conversation:

    https://news.ycombinator.com/item?id=20371961

    The stuff I posted about Kubernetes did not draw a conversation, but I was simply documenting what I was seeing: vast over-investment in devops even at tiny startups that were just getting going and could have easily dumped everything on a single server, exactly as we used to do things back in 2005.

    • pclmulqdq 8 days ago |
      The attraction of this stuff is mostly the ability to keep your infrastructure configurations as code. However, I have previously checked in my systemd config files for projects and set up a script to pull them on new systems.

      It's not clear that docker-compose or even kubernetes* is that much more complicated if you are only running 3 things.

      * if you are an experienced user
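
      For the three-things case it really is just a handful of lines. As a rough sketch (service names, images, and paths below are made up):

        # docker-compose.yml -- hypothetical three-service stack
        services:
          web:
            image: myorg/web:latest          # placeholder image
            ports:
              - "8080:8080"
            env_file: .env                   # keep secrets out of the file itself
            restart: unless-stopped
          worker:
            image: myorg/worker:latest       # placeholder background worker
            restart: unless-stopped
          db:
            image: postgres:16
            volumes:
              - ./pgdata:/var/lib/postgresql/data   # plain directory on the host
            restart: unless-stopped

      Check that in, and `docker compose up -d` on a new box gets you roughly what the systemd-units-plus-pull-script approach did.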

      • honkycat 8 days ago |
        Having done both: running a small Kubernetes cluster is simpler than managing a bunch of systemd files.
        • worldsayshi 7 days ago |
          Yeah this is my impression as well which makes me not understand the k8s hate.
          • pclmulqdq 7 days ago |
            The complexity of k8s comes the moment you need to hold state of some kind. Now instead of one systemd entry, we have to worry about persistent volume claims and other such nonsense. When you are doing things that are completely stateless, it's simpler than systemd.
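
            Concretely, where systemd state is "point the unit at a directory", each stateful thing in k8s grows an extra object along these lines (size and storage class below are made up):

              # hypothetical claim for a service's data directory
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: app-data               # placeholder name
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: standard   # cluster-specific assumption
                resources:
                  requests:
                    storage: 10Gi

            ...plus the matching volumes/volumeMounts stanzas in the pod spec, which is the kind of ceremony I mean.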
            • p_l 7 days ago |
              If you need to care about state with systemd, you still have the "nonsense" of persistent volume claims; they are just kept in notes somewhere instead, in my experience usually in the heads of the sysadmins, or in an Excel sheet or a text file that tries to track which server has what data connected how.
              • pclmulqdq 7 days ago |
                Understand that in the hypothetical system we are discussing, there are something like 1-2 servers. In that case the "volume claim" is just "it's a file on the obvious filesystem" and does not actually need to be spelled out the way you need to spell it out in k8s. The file path you give in environment variables is where the most up-to-date version of the volume claim is. And that file is free to expand to hundreds of GB without bothering you.
                • p_l 7 days ago |
                  Things get iffier when you start doing things like running multiple instances of something (maybe you're standing up two test environments for your developers), or you suddenly grow a bit, no longer fit on the server, and start migrating around.

                  The complexity of PVCs in my experience isn't really that big compared to this, possibly lower, and I did stuff both ways.

    • OtomotO 8 days ago |
      It's just the hype moving on.

      Every generation has to make similar mistakes again and again.

      I am sure that if the opportunity and the hype had been there, we would've used k8s in 2005 as well.

      The same thing is true for e.g. JavaScript on the frontend.

      I am currently migrating a project from React to HTMX.

      Suddenly there is no build step anymore.

      Some people were like: "That's possible?"

      Yes, yes it is and it turns out for that project it increases stability and makes everything less complex while adding the exact same business value.

      Does that mean that React is always the wrong choice?

      Well, yes, React sucks, but solutions like React? No! It depends on what you need, on the project!

      Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Though it's less clear-cut for us than for the carpenter, granted.)

      • augbog 8 days ago |
        It's actually kinda hilarious how RSC (React Server Components) is pretty much going back to what PHP was. But yeah, it proves your point: as hype moves on, people begin to realize why certain things were good vs. not.
      • ajayvk 8 days ago |
        Along those lines, I am building https://github.com/claceio/clace for teams to deploy internal tools. It provides a Cloud Run type interface to run containers, including scaling down to zero. It implements an application server that runs containerized apps.

        Since HTMX was mentioned, Clace also makes it easy to build Hypermedia driven apps.

        • MortyWaves 7 days ago |
          Would you be open to non-Python support as well? This tool seems useful, very useful in fact, but I mainly use .NET (which, yes, can run very well in containers).
          • ajayvk 7 days ago |
            Starlark (a Python-like config language) is used to configure Clace. For containerized apps, Python frameworks are supported without a Dockerfile being required. All other languages currently require a user-provided Dockerfile; the `container` spec can be used for that.

            I do plan to add specs for other languages. New specs have to be added here: https://github.com/claceio/appspecs. New specs can also be created locally in the config; see https://clace.io/docs/develop/#building-apps-from-spec

      • esperent 8 days ago |
        > Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job

        I think this is a gross misunderstanding of the complexity of tools available to carpenters. Use a saw? Sure, but electric or hand-powered? Bandsaw, chop saw, jigsaw, scroll saw? What about using CAD to control the saw?

        > Suddenly there is no build step anymore

        How do you handle making sure the JS you write works on all the browsers you want to support? Likewise for CSS: do you use something like autoprefixer? Or do you just memorize all the vendor prefixes?

        • OtomotO 8 days ago |
          Htmx works on all browsers I want to support.

          I don't use any prefixed CSS and haven't for many years.

          Last time I did knowingly and voluntarily was about a decade ago.

        • creesch 7 days ago |
          As far as browser prefixes go, you know that browser vendors have largely stopped using those? And not even recently: that process started way back in 2016. Chances are that if you are using prefixes in 2024, you are supporting browser versions that, by all logic, should no longer have internet access because of all the security implications...
      • fud101 8 days ago |
        Where does Tailwind stand on this? You can use it without a build step, but a build step is strongly recommended for production.
        • fer 8 days ago |
          A build step in your pipeline is fine because, chances are, you already have a build step in there.
          • fud101 4 days ago |
            no, having a build step kills the magic of interactivity when developing for the web.
            • fer 2 days ago |
              And that's why you can have the giant tailwind css file instead of "building" when you're developing.
      • sarchertech 7 days ago |
        >Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Albeit it's less clear than for the carpenter, granted)

        The problem is that most devs don’t view themselves as carpenters. They view themselves as hammer carpenters or saw carpenters etc…

        It’s not entirely their fault, some of the tools are so complex that you really need to devote most of your time to 1 of them.

        I realize that this kind of tool specialization is sometimes required, but I think it's overused by at the very least an order of magnitude.

        The vast majority of companies that are running k8s, react, kafka etc… with a team of 40+, would be better off running rails (or similar) on heroku (or similar), or a VPS, or a couple servers in the basement. Most of these companies could easily replace their enormous teams of hammer carpenters and saw carpenters with 3-4 carpenters.

        But devs have their own gravity. The more devs you have the faster you draw in new ones, so it’s unclear to me if a setup like the above is sustainable long term outside of very specific circumstances.

        But if it were simpler there wouldn’t be nearly as many jobs, so I really shouldn’t complain. And it’s not like every other department isn’t also bloated.

    • valenterry 8 days ago |
      So, let's say you want to deploy server instances. Let's keep it simple and say you want to have 2 instances running. You want to have zero-downtime-deployment. And you want to have these 2 instances be able to access configuration (that contains secrets). You want load balancing, with the option to integrate an external load balancer. And, last, you want to be able to run this setup both locally and also on at least 2 cloud providers. (EDIT: I meant to be able to run it on 2 cloud providers. Meaning, one at a time, not both at the same time. The idea is that it's easy to migrate if necessary)

      This is certainly a small subset of what kubernetes offers, but I'm curious, what would be your goto-solution for those requirements?
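
      (For reference, the k8s shape of most of that, leaving aside the external load balancer integration, is roughly the following; the names, ports, and image are placeholders:)

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: api                                # placeholder name
        spec:
          replicas: 2                              # the two instances
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 0                    # zero downtime: old pods stay until new ones are ready
              maxSurge: 1
          selector:
            matchLabels:
              app: api
          template:
            metadata:
              labels:
                app: api
            spec:
              containers:
                - name: api
                  image: registry.example.com/api:1.2.3   # placeholder image
                  envFrom:
                    - secretRef:
                        name: api-secrets          # configuration incl. secrets
                  readinessProbe:
                    httpGet:
                      path: /healthz               # assumes the app exposes a health endpoint
                      port: 8080

      ...plus a Service in front of it for the load balancing. So the question is what gets me the same properties with less machinery.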

      • shrubble 8 days ago |
        Do you know of actual (not hypothetical) cases, where you could "flip a switch" and run the exact same Kubernetes setups on 2 different cloud providers?
        • hi_hi 8 days ago |
          Yes, but it would involve first setting up a server instance and then installing k3s :-)
          • valenterry 8 days ago |
            I actually also think that k3s probably comes closest to that. But I have never used it, and ultimately it also uses k8s.
        • threeseed 8 days ago |
          Yes. I've worked on a number of very large banking and telco Kubernetes platforms.

          All used multi-cloud and it was about 95% common code with the other 5% being driver style components for underlying storage, networking, IAM etc. Also using Kind/k3d for local development.

        • devops99 8 days ago |
          Both EKS (Amazon) and GKE (Google Cloud) run Cilium for the networking part of their managed Kubernetes offerings. That's the only real "hard part". From the users' point of view, the S3 buckets, the network-attached block devices, and compute (CRIO container runtime) are all the same.

          If you are using some other cloud provider or want uniformity, there's https://Talos.dev

        • InvaderFizz 8 days ago |
          I run clusters on OKE, EKS, and GKE. Code overlap is like 99% with the only real differences all around ingress load balancers.

          Kubernetes is what has provided us the abstraction layer to do multicloud in our SaaS. Once you are outside the k8s control plane, it is wildly different, but inside is very consistent.

        • brodo 7 days ago |
          If you are located in Germany and run critical IT infrastructure (banks, insurance companies, energy companies), you have to be able to deal with a cloud provider completely going down within 24 hours. Not everyone who has to can really do it, but the big players can.
        • stitched2gethr 7 days ago |
          I'm just happy to see the tl;dr at the TOP of the document.
      • CharlieDigital 8 days ago |
        Serverless containers.

        Effectively using Google and Azure managed K8s. (Full GKE > GKE Autopilot > Google Cloud Run). The same containers will run locally, in Azure, or AWS.

        It's fantastic for projects big and small. The free monthly grant makes it perfect for weekend projects.
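
        For flavor, a Cloud Run service can even be declared as Knative-style YAML and applied with `gcloud run services replace` (name and image here are placeholders):

          apiVersion: serving.knative.dev/v1
          kind: Service
          metadata:
            name: hello                  # placeholder service name
          spec:
            template:
              spec:
                containers:
                  - image: gcr.io/my-project/hello:latest   # placeholder image
                    ports:
                      - containerPort: 8080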

      • caseyohara 8 days ago |
        I think you are proving the point; there are very, very few applications that need to run on two cloud providers. If you do, sure, use Kubernetes if that makes your job easier. For the other 99% of applications, it’s overkill.

        Apart from that requirement, all of this is very doable with EC2 instances behind an ALB, each running nginx as a reverse proxy to an application server with hot restarting (e.g. Puma) launched with a systemd unit.

        • osigurdson 8 days ago |
          To me that sounds harder than just using EKS. Also, other people are more likely to understand how it works, can run it in other environments (e.g. locally), etc.
        • valenterry 8 days ago |
          Sorry, that was a misunderstanding. I meant that I want to be able to run it on two cloud providers, but one at a time is fine. It just means that it would be easy to migrate/switch over if necessary.
        • globular-toast 7 days ago |
          Hmm, let's see, so you've got to know: EC2, ALB, Nginx, Puma, Systemd, then presumably something like Terraform and Ansible to deploy those configs, or write a custom set of bash scripts. And after all of that, you're still tied to one cloud provider.

          Or, instead of reinventing the same wheels for Nth time, I could just use a set of abstractions that work for 99% of network services out there, on any cloud or bare metal. That set of abstractions is k8s.

      • whatever1 8 days ago |
        Why does a startup need zero-downtime-deployment? Who cares if your site is down for 5 seconds? (This is how long it takes to restart my Django instance after updates).
        • valenterry 8 days ago |
          Because it increases development speed. It's maybe okay to be down for 5 seconds. But if I screw up, I might be down until I fix it. With zero-downtime deployment, if I screw up, then the old instances are still running and I can take my time to fix it.
        • everfrustrated 7 days ago |
          If you're doing CD where every push is an automated deploy a small company might easily have a hundred deploys a day.

          So you need seamless deployments.

          • xdennis 7 days ago |
            I think it's a bit of an exaggeration to say a "small" company easily does 100 deployments a day.
            • valenterry 7 days ago |
              Not necessarily. Some companies prefer to have a "push to master -> auto deploy" workstyle.
      • osigurdson 8 days ago |
        "Imagine you are in a rubber raft, you are surrounded by sharks, and the raft just sprung a massive leak - what do you do?". The answer, of course, is to stop imagining.

        Most people on the "just use bash scripts and duct tape" side of things assume that you really don't need these features, that your customers are ok with downtime, and that the project you are working on is just your personal cat photo catalog anyway. So, stop pretending that you need anything at all and get a job at the local grocery store.

        The bottom line is there are use cases, that involve real customers, with real money that do need to scale, do need uptime guarantees, do require diverse deployment environments, etc.

        • QuiDortDine 8 days ago |
          Yep. I'm one of 2 DevOps engineers at an R&D company with about 100 employees. They need these services for development; if an important service goes down you can multiply that downtime by 100, turning hours into man-days and days into man-months. K8s is simply the easiest way to reduce the risk of having to plead for your job.

          I guess most businesses are smaller than this, but at what size do you start to need reliability for your internal services?

        • ozim 8 days ago |
          You know that you can scale servers just as well; you can use good practices with scripts and deployments in bash, and have them documented and in version control.

          Equating bash scripts and running servers with duct tape and poor engineering, versus k8s YAML being "proper engineering", is, well, wrong.

      • tootubular 8 days ago |
        My personal goto-solution for those requirements -- well 1 cloud provider, I'll follow up on that in a second -- would be using ECS or an equivalent service. I see the OP was a critic of Docker as well, but for me, ECS hits a sweet spot. I know the compute is at a premium, but at least in my use-cases, it's so far been a sensible trade.

        About the 2 cloud providers bit. Is that a common thing? I get wanting migrate away from one for another, but having a need for running on more than 1 cloud simultaneously just seems alien to me.

        • valenterry 8 days ago |
          Actually, I totally agree. ECS (in combination with secret manager) is basically fulfilling all needs, except being not so easy to reproduce/simulate locally and of course with the vendor lock-in.
        • mkesper 7 days ago |
          Last time I checked, ECS was even more expensive than using Lambda but without the ability to start your container quickly, so I really don't get the niche it fits into, compared to Lambda on one side and self-hosting Docker on minimal EC2 instances on the other side.
          • tootubular 7 days ago |
            I may need to look at Lambda closer! At least way back, I thought it was a no-go since the main runtime I work with is Ruby. As for minimal EC2 instances, definitely, I do that for environments where it makes sense and that's the case fairly often.
      • bruce511 8 days ago |
        That's an interesting set of requirements though. If that is indeed your set of requirements then perhaps Kubernetes is a good choice.

        But the set seems somewhat arbitrary. Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

        Indeed, given that you have 4 machines (2 instances x 2 providers), could a human manage this? Is Kubernetes overkill?

        I ask this merely to wonder. Naturally if you are rolling out hundreds of machines you should, and no doubt by then you have significant revenue (and are thus able to pay for dedicated staff), but where is the cross-over?

        Because to be honest most startups don't have enough traction to need 2 servers, never mind 4, never mind 100.

        I get the aspiration to be large. I get the need to spend that VC cash. But I wonder if Devops is often just premature and that focus would be better spent getting paying customers.

        • valenterry 8 days ago |
          > Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

          I think the "2 cloud providers" criteria is maybe negotiable. Also, maybe there was a misunderstanding: I didn't mean to say I want to run it on two cloud providers. But rather that I run it on one of them but I could easily migrate to the other one if necessary.

          The zero-downtime one isn't. It's not necessarily so much about actually having zero downtime; it's that I don't want to have to think about it. Anything besides zero-downtime deployment adds complexity to the development process. It has nothing to do with trying to be large, actually.

          • AznHisoka 8 days ago |
            I disagree with that last part. By default, having a few seconds of downtime is not complex. The easiest thing you could do to a server is restart it. It's literally just a restart!
            • valenterry 8 days ago |
              It's not. Imagine there is a bug that stops the app from starting. It could be anything, from a configuration error (e.g. against the database) to a problem with warmup (if necessary) or any kind of other bug like an exception that only triggers in production for whatever reasons.

              EDIT: and worse, it could be something that just started and would even happen when trying to deploy the old version of the code. Imagine a database configuration change that allows the old connections to stay open until they are closed but prevents new connections from being created. In that case, even an automatic roll back to the previous code version would not resolve the downtime. This is not theory; I've had those cases quite a few times in my career.

            • globular-toast 7 days ago |
              I managed a few production services like this and it added a lot of overhead to my work. On the one hand I'd get developers asking me why their stuff hasn't been deployed yet. But then I'd also have to think carefully about when to deploy and actually watch it to make sure it came back up again. I would often miss deployment windows because I was doing something else (my real job).

              I'm sure there are many solutions but K8s gives us both fully declarative infrastructure configs and zero downtime deployment out of the box (well, assuming you set appropriate readiness probes etc)

              So now I (a developer) don't have to worry about server restarts or anything for normal day to day work. We don't have a dedicated DevOps/platforms/SRE team or whatnot. Now if something needs attention, whatever it is, I put my k8s hat on and look at it. Previously it was like "hmm... how does this service deployment work again..?"
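
              ("Appropriate readiness probes" sounds scary, but it's a few lines per container, something like the below, assuming your app exposes a health endpoint:)

                readinessProbe:             # sits under the container entry in the pod spec
                  httpGet:
                    path: /healthz          # assumed health endpoint
                    port: 8080
                  initialDelaySeconds: 5
                  periodSeconds: 10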

      • amluto 8 days ago |
        For something this simple, multi-cloud seems almost irrelevant to the complexity. If I’m understanding your requirements right, a deployment consists of two instances and a load balancer (which could be another instance or something cloud-specific). Does this really need to have fancy orchestration to launch everything? It could be done by literally clicking the UI to create the instances on a cloud and by literally running three programs to deploy locally.
      • kccqzy 8 days ago |
        I've worked at tiny startups before. Tiny startups don't need zero-downtime-deployment. They don't have enough traffic to need load balancing. Especially when you are running locally, you don't need any of these.
        • anon7000 8 days ago |
          Tiny startups can’t afford to lose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

          Tiny startups are rarely trying to build projects for small customer bases (e.g. little scaling required). They’re trying to be the next unicorn. So they should probably make sure they can easily scale beyond tossing everything on the same server.

          • lmm 8 days ago |
            > Tiny startups can’t afford to lose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

            Having too many (or too big) customers to handle is a nice problem to have, and one you can generally solve when you get there. There are a handful of giant customers that would want you to be giant from day 1, but those customers are very difficult to land and probably not worth the effort.

          • jdlshore 8 days ago |
            Startups need product-market fit before they need scale. It’s incredibly hard to come by and most won’t get it. Their number one priority should be to run as many customer acquisition experiments as possible for as little as possible. Every hour they spend on scale before they need it is an hour less of runway.
            • lkjdsklf 8 days ago |
              While true, zero-downtime deployments are... trivial... even for a tiny startup. So you might as well do it.
              • supersixirene 7 days ago |
                Zero downtime deployments were a thing long before K8S
        • p_l 7 days ago |
          Tiny startups don't have money to spend on too much PaaS or too many VMs, or time to faff around with custom scripts for all sorts of work.

          Admittedly, if you don't know k8s, it might be a non-starter... but if you have some knowledge, k3s plus a cheap server is a wonderful combo

      • lkjdsklf 8 days ago |
        We’ve been deploying software like this for a long ass time before kubernetes.

        There’s shitloads of solutions.

        It’s like minutes of clicking in the UI of any cloud provider to do any of that. So doing it multiple times is a non-issue.

        Or automate it with like 30 lines of bash. Or chef. Or puppet. Or salt. Or ansible. Or terraform. Or or or or or.

        Kubernetes brings in a lot of nonsense that isn’t worth the tradeoff for most software.

        If you feel it makes your life better, then great!

        But there’s way simpler solutions that work for most things

        • valenterry 8 days ago |
          I'm actually not using kubernetes because I find it too complex. But I'm looking for a solution for that problem and I haven't found one, so I was wondering what OP uses.

          Sorry, but I don't want to "click in a UI". And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

          • lkjdsklf 8 days ago |
            > And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

            Maybe not literally 30... I didn't bother actually writing it. Also, bash was just a single example. It's way less terraform code to do the same thing. You just need an ELB backed by an autoscaling group. That's not all that much to set up. That gets you the two load-balanced servers and zero-downtime deploys. When you want to deploy, you just create a new scaling group and launch configuration, attach them to the ELB, and ramp down the old one. Easy peasy. For the secrets, you need at least KMS, and maybe Secrets Manager if you're feeling fancy. That's not much to set up. I know for sure AWS and Azure provide nice CLIs that would let you do this in not that many commands, or just use terraform.

            Personally if I really cared about multi cloud support, I'd go terraform (or whatever it's called now).

            • valenterry 8 days ago |
              > You just need an ELB backed by an autoscaling group

              Sure, and then you can neither 1.) test your setup locally nor 2.) easily move to another cloud provider. So that doesn't really fit what I asked.

              If the answer is "there is nothing, just accept the vendor lock-in" then fine, but please don't reply with "30 lines of bash" and get my hopes up. :-(

      • rozap 8 days ago |
        A script that installs some dependencies on an Ubuntu VM. A script that rsyncs the build artifact to the machine. The script can drain connections and restart the service using the new build, then on to the next VM. The cloud load balancer points at those VMs and has a health check. It's very simple. Nothing fancy.

        Our small company uses this setup. We migrated from GCP to AWS when our free GCP credits from YC ran out and then we used our free AWS credits. That migration took me about a day of rejiggering scripts and another of stumbling around in the horrible AWS UI and API. Still seems far, far easier than paying the kubernetes tax.

        • valenterry 8 days ago |
          I guess the cloud load balancer is the most custom part. Do you use the ALB from AWS?
      • wordofx 7 days ago |
        0 downtime. Jesus Christ. Nginx and HAProxy solved this shit decades ago. You can drop out a server or group. Deploy it. Add it back in. With a single telnet command. You don’t need junk containers to solve things like “0 downtime deployments”. That was a solved problem.
        • valenterry 7 days ago |
          Calm down my friend!

          You are not wrong, but that only covers a part of what I was asking. How about the rest? How do you actually bring your services to production? I'm curious.

          And, PS, I don't use k8s. Just saying.

      • gizzlon 7 days ago |
        Cloud Run. Did you read the article?

        Migrating to another cloud should be quite easy. There are many PaaS solutions. The hard parts will be things like migrating the data, making sure there's no downtime AND no drift/diff in the underlying data when some clients write to Cloud-A and some write to Cloud-B, etc. But k8s doesn't fix these problems, so...

        • htgb 7 days ago |
          Came here to say the same thing: PaaS. Intriguing that none of the other 12 sibling comments mention this… each in their bubble I guess (including me). We use Azure App Service at my day job and it just works. Not multi-cloud obviously, but the other stuff: zero downtime deploys, scale-out with load balancing… and not having to handle OS updates etc. And containers are optional, you can just drop your binaries and it runs.
    • honkycat 8 days ago |
      Start-ups that don't need to scale will quickly go away, because how else are you going to make a profit?

      How have you been going since 2005 and still don't understand the economics of software?

      • ndriscoll 8 days ago |
        CPUs are ~300x more powerful and storage offers ~10,000x more IOPS than 2005 hardware. More efficient server code exists today. You can scale very far on one server. If you were bootstrapping a startup, you could probably plan to use a pair of gaming PCs until at least the first 1-10M users.
        • shakiXBT 7 days ago |
          10 million users on a pair of gaming PCs is ridiculous. What's your product, a website that tells the current time?
          • ndriscoll 7 days ago |
            How many requests do you expect users actually do? Especially if you're serving a B2B market; not everything is centered around addiction/"engagement". My 8 year old PC can do over 10k page requests/second for a reddit or myspace clone (without getting into caching). A modern high end gaming PC should be around 10x more capable (in terms of both CPU and storage IOPS). The limit in terms of needing to upgrade to "unusual" hardware for a PC would likely be the NIC. Networking is one place where typical consumer gear is stuck in 2005.

            Webapps might make it hard to tell, but a modern computer (or even an old computer like mine) is mindbogglingly fast.

      • Vespasian 8 days ago |
        Just to make it clear: There are a million use cases that don't involve scaling fast.

        For example B2B businesses where you have very few but extremely high value customers for specialized use cases.

        Another one is building bulky hardware. Your software infrastructure does not need to grow any faster than your shop floor is building it.

        Whether you want to call that a "startup" is up for debate (and mostly semantics if you ask me) but at one point they were all zero-employee companies and needed to survive their first 5 years.

        In general you won't find their products on the app store.

      • infecto 7 days ago |
      It's disappointing to see how tone-deaf some users like yourself are. Such an immature way to speak.
    • sobellian 8 days ago |
      I've worked at a few tiny startups, and I've both manually administered a single server and run small k8s clusters. k8s is way easier. I think I've spent 1, maybe 2 hours on devops this year. It's not a full-time job, it's not a part-time job, it's not even an unpaid internship. Perhaps at a bigger company with more resources and odd requirements...
      • nicce 8 days ago |
        But how much extra does this cost? Sounds like you are using cloud-provided k8s.
        • sobellian 8 days ago |
          EKS is priced at $876 / yr / cluster at current rates.

          Negligible for me personally, it's much less than either our EC2 or RDS costs.

          • fer 7 days ago |
            Yeah, using EKS isn't the same thing as "administering k8s", unless I misread you above. The actual administration is already done for you; it's batteries-included, turn-key, and integrated with everything AWS.

            A job ago we had our own k8s cluster in our own DC, and it required a couple of teams to keep running and reasonably integrated with everything else in the rest of the company. It was probably cheaper overall than cloud given the compute capacity we had, but also probably not by much given the amount of people dedicated to it.

            Even my 3-node k3s at home requires more attention than what you described.

            • sobellian 7 days ago |
              You did misread me, I never said I administered k8s. The quoted phrase does not exist :)
        • p_l 7 days ago |
          I currently use k8s to control a bunch of servers.

          The amount of work/cost of using k8s for handling them in comparison to doing it "old style" is probably negative by now.

    • santoshalper 8 days ago |
      As an industry, we spent so much time sharpening our saw that we nearly forgot to cut down the tree.
    • rozap 8 days ago |
      ZIRP is over.
    • harrall 8 days ago |
      People gravely misunderstand containerization and Docker.

      All it lets you do is put shell commands into a text file and be able to run it self-contained anywhere. What is there to hate?

      You still use the same local filesystem, the same host networking, still rsync your data dir, still use the same external MySQL server even if you want -- nothing has changed.

      You do NOT need a load balancer, a control plane, networked storage, Kubernetes or any of that. You ADD ON those things when you want them like you add on optional heated seats to your car.
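
      To make that concrete, a single-container setup that changes nothing about how the host works can be as small as this (image, paths, and hostname are made up):

        # docker-compose.yml -- one service, no orchestration add-ons
        services:
          app:
            image: myorg/app:latest          # placeholder image
            network_mode: host               # same host networking as before
            volumes:
              - /srv/app/data:/data          # same local filesystem, still rsync-able
            environment:
              DB_HOST: db.internal.example   # still the same external MySQL server
            restart: unless-stopped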

      • skydhash 7 days ago |
        Why would you want to run it anywhere? People mostly select an OS and just update that. It may be great when distributing applications for others to host, but not when it’s the only strategy. I have to reverse-engineer Dockerfiles when the developer wouldn’t provide proper documentation.
        • ftmch 7 days ago |
          OS upgrades are a pain. Even just package updates could break everything. Having everything in containers makes migrating to another system much easier.
    • sunshine-o 7 days ago |
      Kubernetes, as an industry standard that a lot of people complain about, is just a sitting duck waiting to be disrupted.

      Anybody who doesn't have the money, time or engineering resources will jump on whatever appears to be a decent alternative.

      My intuition is that the alternative already exists, but I can't see it...

      A bit like Spring emerged as an alternative to J2EE or what HTMX is to React & co.

      Is it k3s or something more radical?

      Is it on a Chinese GitHub?

      • ftmch 7 days ago |
        I wish Docker Swarm would get more attention. It could be the perfect lightweight Kubernetes alternative. Instead, it seems like it could get deprecated any day now.
  • dhorthy 8 days ago |
    i mean control loops are good but you don't need hundreds of them
  • gavindean90 8 days ago |
    K8s always seems like the tool that people choose to avoid cloud vendor lock-in, but there is something to be said for k8s lock-in as well, as the article points out.

    If you end up with exotic networking or file system mounts you can just be stuck maintaining k8s forever, and some updates aren’t so stable, so you have to be more vigilant than with Windows updates.

    • osigurdson 8 days ago |
      I don't think it makes sense to conflate vendor lock-in with taking a dependency on a given technology. Do we then have "Linux lock-in" and "Postgres lock-in"? The term "lock-in" shouldn't be stretched to cover this concept imo.
      • gavindean90 7 days ago |
        It’s not quite the same thing as commercial vendor lock-in, but it’s close.

        You can have Postgres lock-in as much as WordPress has MySQL lock-in.

        I agree that you have less Linux lock-in, but Docker still requires a Linux kernel everywhere it goes. BSD need not apply.

  • osigurdson 8 days ago |
    The "You Probably Don't Either" is a little presumptuous. Many projects probably don't need cloud run either. Certainly, many projects shouldn't even be started in the first place.
    • stitched2gethr 8 days ago |
      I work with Kubernetes enough that I would answer to the title "kubernetes developer" and I would recommend you don't use kubernetes. In the same way I would recommend you don't use a car when you can walk.

      Your friend lives 1/8 miles away. You go to see them every day so why wouldn't you drive? Well, cars are expensive and you should avoid them if you don't need them. There are a TON of downsides to driving a car 1/4 of a mile every day. And there are a TON of benefits to using a car to drive 25 miles every day.

      I hate to quash a good debate but this all falls under the predictable but often forgotten "it depends". Usually do you need kubernetes == do you have a lot of shit to run.

  • semitones 8 days ago |
    Kubernetes has a steep learning curve, and certainly a lot of complexity, but when used appropriately for the right case, by god it's glorious
    • threeseed 8 days ago |
      Kubernetes has a proportional learning curve.

      If you're used to managing platforms e.g. networking, load balancers, security etc. then it's intuitive and easy.

      If you're used to everything being managed for you then it will feel steep.

      • alienchow 8 days ago |
        That's pretty much it. I think the main issue nowadays is that companies think full stack engineering means the OG full stack (FE, BE, DB) + CI/CD + infra + security compliance + SRE.

        If a team of 5-10 SWEs has to do all of that while only being graded on feature releases, k8s would massively suck.

        I also agree that experienced platform/infra engineers tend to whine less about k8s.

      • ikiris 8 days ago |
        Nah, the difference between managing k8s and the system it was based on is VAST. K8s is much harder than it needs to be because for a long time there wasn't tooling to manage it well. Going from Google internal to k8s is incredibly painful.
      • t-writescode 8 days ago |
        I think this is only true if the original k8s cluster you're operating against was written by an expert and laid out as such.

        If you're entering into k8s land with someone else's very complicated mess across hundreds of files, you're going to be in for a bad time.

        A big problem, I feel, is that if you don't have an expert design the k8s system from the start, it's just going to be a horrible time; and, many people, when they're asked to set up a k8s setup for their startup or whatever, aren't already experts, so the thing that's produced is not maintainable.

        And then everyone is cursed.

        • threeseed 7 days ago |
          The exact same can be said for your Terraform, Pulumi, shell scripts etc. Not to mention unique config for every component and piece of infrastructure.

          At least Kubernetes is all YAML, consistent and can be tested locally.

          • t-writescode 7 days ago |
            My experience with k8s is with terraform building the k8s environment for me :D
        • p_l 7 days ago |
          Thanks to kubernetes "flattening" that mess into a somewhat flat object map (I like to call it a blackboard system :D), it can be reasonably easy to figure out what's the desired state and what's the current state for a given cluster, even if the files are a mess.

          However...

          Talking with people who started using kubernetes later than me[1], it seems like a lot of confusion comes from trying to start with a somewhat complete example, like using a Deployment + Ingress + Services to deploy, well, a typical web application. The stuff that would be trivial to run in a typical PaaS.

          The problem is that then you do not know what a lot of those magic incantations mean, the actually very, very simple mechanisms of how things work in k8s are lost on you, and you can't find your way in a running cluster.

          [1] I started learning around 1.0, went with dev deployment with 1.3, graduated it to prod with 1.4. Smooth sailing since[2]

          [2] The worst issues since involved dealing with what was actually a global GCP networking outage that we were extra sensitive to due to extensive DNS use in kubernetes, and once naively assuming that the people before me had set sensible sizes for various nodes, only to find a combination of too-small-to-live EC2 instances choking until control plane death, and an outdated etcd (because the rest of the company was too conservative about updating) hitting a rare but possible bug that corrupted data, triggered by the flapping caused by the too-small instances. Neither do I count as a k8s issue; they would have killed anything else I could set up given the same constraints.

    • jauntywundrkind 8 days ago |
      And there are very few investment points below it.

      You can cobble together your own unique special combination of services to run apps on! It's an open-ended adventure in itself!

      I'm all for folks doing less, if it makes sense! But there's basically nothing except slapping together the bits yourself & convincing yourself your unique home-made system is fine. You'll be going it alone, & figuring it out on the fly, all to save yourself from getting good at the one effort that has a broad community, practitioners, and massive extensibility via CRDs & operators.

    • winwang 7 days ago |
      Using it to run easily-scalable Spark clusters. Previously used it for large distributed builds. It's been pretty great (even if annoying at times).
  • sneak 8 days ago |
    Except that all of this subjects you and all of your workloads to warrantless US government surveillance due to running in the big public clouds.

    I personally don’t want the federal government being able to peek into my files and data at any time, even though I’ve done nothing wrong. It’s the innocent people who have most to lose from government intrusion.

    It seems insane to me to just throw up one’s hands and store private data in Google or Amazon clouds, especially when not doing so is so much cheaper.

  • ChrisArchitect 8 days ago |
    Related:

    Dear friend, you have built a Kubernetes

    https://news.ycombinator.com/item?id=42226005

    • doctorpangloss 8 days ago |
      People don’t want solutions, they want holistic experiences. Kubernetes feels bad and a pile of scripts feels good. Proudly declaring how much you buy into feelings feels even better!
      • Timber-6539 7 days ago |
        Those piles of scripts are the solutions on which Kubernetes was built. And the fact that they never break is why greybeards still use them over the complexity of shiny new things.
        • p_l 7 days ago |
          Scripts break so hard that one of the first times I felt a "grey beard moment" (with an actual grey beard, even) was telling a junior that "running ansible from their laptop is not an acceptable solution".
  • honkycat 8 days ago |
    are we really still doing this lol?

    >> Kubernetes is feature-rich, yet these “enterprise” capabilities turned even simple tasks into protracted processes.

    I don't agree. After learning the basics I would never go back. It doesn't turn simple tasks into a long process. It massively simplifies a ton of functionality. And you really only need to learn 4 or 5 new concepts to get it off the ground.

    If you have a simple website you don't need Kubernetes, but 99% of devs are working in medium sized shops where they have multiple teams working across multiple functionalities and Kubernetes helps this out.

    Karpenter is not hard to set up at all. It solves the problem about over-provisioning out of the box and has for almost 5 years.

    It's like writing an article: "I didn't need redis, and you probably don't either" and then talking about how Redis isn't good for relational data.

    • osigurdson 8 days ago |
      The "...You probably don't either" part is where the argument loses all of its weight. How do they know what I or anyone else needs?
  • threeseed 8 days ago |
    Kubernetes lock-in: bad.

    Google CloudRun, Database, PubSub, Cloud Storage, VPC, IAM, Artifact Registry etc lock-in: good.

  • hinkley 8 days ago |
    I’m having to learn kubernetes to improve my job hunt. It’s really too much for most projects. Cosplaying doesn’t make you money.
  • devops99 8 days ago |
    These anti-Kubernetes articles are a major signal that the competency crisis is very real.
  • gatnoodle 8 days ago |
    "In practice, few companies switch providers unless politics are involved, as the differences between major cloud services are minimal."

    This is not always true.

  • ants_everywhere 8 days ago |
    People talk about Kubernetes as container orchestration, but I think that's kind of backwards.

    Kubernetes is a tool for creating computer clusters. Hence the name "Borg" (Kubernetes's grandpa) referring to assimilating heterogeneous hardware into a collective entity. Containers are an implementation detail.

    Do you need a computer cluster? If so k8s is pretty great. If you don't care about redundancy and can get all the compute you need out of a single machine, then you may not need a cluster.

    Once you're using containers on a bunch of VMs in different geographical regions, then you effectively have hacked together a virtual cluster. You can get by without k8s. You just have to write a lot of glue code to manage VMs, networking, load balancing, etc on the cloud provider you use. The overhead of that is probably larger than just learning Kubernetes in the long run, but it's reasonable to take on that technical debt if you're just trying to move fast and aren't concerned about the long run.

    • politelemon 8 days ago |
      I like to describe it similarly, but as a way of building platforms.
    • Spivak 8 days ago |
      This has got to be the most out there k8s take I've read in a while. k8s doesn't save you from learning your cloud provider's infrastructure; you have to learn k8s in addition to your cloud provider's infrastructure. It's all ALBs, ASGs, Security Groups, EBS Volumes and IAM policy underneath, and k8s, while very clever, isn't so clever as to abstract much of any of it away from you. On EKS you get to enjoy more odd limitations with your nodes than EC2 would give you on its own.

      You're already building on a cluster, your cloud provider's hypervisor. They'll literally build virtual compute of any size and shape for you on demand out of heterogeneous hardware and the security guarantees are much stronger than colocated containers on k8s nodes.

      There are quite a few steps between single server and k8s.

      • psini 7 days ago |
        You can self host Kubernetes on "dumb" VMs from Hetzner or OVH.
      • p_l 7 days ago |
        K8s was designed around deployment on premise on bare metal hardware.

        The cloud extensions were always just a convenience.

      • ants_everywhere 4 days ago |
        The same argument can be made about Borg. Someone at Google needs to know about things like Juniper switches and hard drive bays. Someone needs to repair or replace defective compute nodes and power switches. Before they had software load balancing, someone had to manage the hardware load balancers.

        But the idea of Borg is that all of that's abstracted away for the typical developer. It's the same with k8s. The infrastructure team in your org needs to understand the implementation details, but really only a few.

        You can also configure load balancers, IAM groups & policy etc as k8s CRDs. So all that stuff can be in one place in the code base alongside the rest of your infrastructure. So in that sense it does abstract those concepts. You still need to know something about them, but you don't have to configure them programmatically yourself since k8s will do that.
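
        (For example, even without CRDs, the load balancer itself can be declared next to the app as a core Service object and the cloud controller provisions it; the names here are placeholders:)

          apiVersion: v1
          kind: Service
          metadata:
            name: api-public           # placeholder name
          spec:
            type: LoadBalancer         # managed k8s asks the cloud for an LB on your behalf
            selector:
              app: api                 # assumed pod label
            ports:
              - port: 443
                targetPort: 8080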

    • stickfigure 8 days ago |
      K8s doesn't help you solve your geographical region problem, because the geographical region problem is not running appserver instances in multiple regions. Almost any PaaS will do that for you out of the box, with way less fuss than k8s. The hard part is distributing your data.

      Less overhead than writing your own glue code, less overhead than learning Kubernetes, is just use a PaaS like Google App Engine, Amazon Elastic Beanstalk, Digital Ocean App Platform, or Heroku. You have access to the same distributed databases you would with k8s.

      Cloud Run is PaaS for people that like Docker. If you don't even want to climb that learning curve, try one of the others.

      • vrosas 8 days ago |
        PaaS get such a bad rap from devs in my experience, even though they would solve so many problems. They'd rather keep their k8s clusters scaled to max traffic and spend their nights dealing with odd networking and configuration issues than just throw their app on Cloud Run and call it a day.
      • photonthug 8 days ago |
        > just use a PaaS like Google App Engine, Amazon Elastic Beanstalk, Digital Ocean App Platform, or Heroku.

        This is the right way for web most of the time, but most places will choose k8s anyway. It’s perplexing until you come to terms with the dirty secret of resume-driven development, which is that it’s not just junior engs but lots of seniors too, and some management, all conspiring to basically defraud business owners. I think the unspoken agreement is that hard work sucks, but easy work that teaches you no transferable skills might be worse. The way you evaluate this tradeoff predictably depends on how close you are to retirement age. Still, since engineers are often disrespected/discarded by business owners and have no job security, oaths of office, professional guilds, or fiduciary responsibility... it’s no wonder things are pretty mercenary out there.

        Pipelines are as important as web these days but of course there are many options for pipelines as a service also.

        K8s is the obviously correct choice for teams that really must build new kinds of platforms that have many diverse kinds of components, or have lots of components with unique requirements for coupling (like say “scale this thing based on that other thing”, but where you’d have real perf penalties for leaving the k8s ecosystem to parse events or whatever).

        The often mentioned concern about platform lock in is going to happen to you no matter what, and switching clouds completely rarely happens anyway. If you do switch, it will be hard and time consuming no matter what.

        To be fair, k8s also enables brand new architectural possibilities that may or may not be beautiful. But it’s engineering, not art, and beautiful is not the same as cheap, easy, maintainable, etc.

    • _flux 8 days ago |
      What is the container orchestration tool of choice beyond docker swarm, then?
      • rixed 8 days ago |
        Is nomad still around?
        • _flux 7 days ago |
          Thanks, hadn't heard of that.

          Seems pretty active per its commit activity: https://github.com/hashicorp/nomad/graphs/commit-activity

          But the fact that I hadn't heard of it before makes it sound not very popular, at least not for the bubble I live in :).

          Does anyone have any practical experiences to share about it?

          • ChocolateGod 7 days ago |
            Yes have a few Nomad clusters in production and it's been great.

            You'll certainly want to combine it with Consul and use Consul templates and service discovery though.

            I'd say the difficulty and complexity level is between Kubernetes and Docker Swarm, and not having to use YAML is a big benefit imho.

          • rixed 5 days ago |
            If that's practical experience that you need, I remember reading some studies comparing the reliability and efficiency of nomad vs k8s under load (spoiler: they do not look very good for k8s).

            If that's popularity that you need, then sure, nobody ever got fired for choosing kubernetes.

    • ashishmax31 8 days ago |
      Exactly. I've come to describe k8s as a distributed operating system for servers.

      K8s tries to abstract away individual "servers" and gives you an API to interact with all the compute/storage in the cluster.

    • davidgl 7 days ago |
      Yep, it's a cluster OS. If you need to run a cluster, you need to explore and understand the trade-offs of k8s versus other approaches. Personally I run a small cluster on k3s, for internal tools, and love it. Yes, it's a load of new abstractions to learn, but once learnt it really helps in designing large scalable systems. I manage lots of pet machines and VMs for clients, and it would be soooo much easier on k8s.
    • otabdeveloper4 7 days ago |
      > Containers are an implementation detail.

      They really aren't.

      Personally I have a big Nix derivation to deploy my (heterogeneous) cluster to bare metal.

      None of the k8s concepts or ideas apply here.

      • pas 7 days ago |
        how does it work? how many nodes? what do you use for consensus, request routing, etc...?
  • czhu12 8 days ago |
    I'm not sure google cloud run can be considered a fair comparison to Kubernetes. It would be like saying AWS Lambda is a lot easier to use than EC2. I've used both Kubernetes and GCR at the current company I cofounded, and theres pros and cons to both. (Team of about 10 engineers)

    GCR was simple for running simple workloads, but an out-of-the-box Postgres database can't just handle unlimited connections, so connecting to it from GCR without a DB connection proxy like PgBouncer risks exhausting the connection pool. For a traditional web app at any moderate scale, you typically need some fine-grained control over per-process, per-server and per-DB connection pools, which you'd lose with GCR.

    Also, to take advantage of GCR's fine grained CPU pricing, you'd have to have an app that boots extremely quickly, so it can be turned off during periods of inactivity, and rescheduled when a request comes in.

    Most of our heaviest workloads run on Kubernetes for those reasons.

    The other thing that's changed since this author probably explored Kubernetes is that there are a ton of providers now that offer a Kubernetes control plane for no cost. The ones that I know of are Digital Ocean and Linode, where the pricing for a Kubernetes cluster is the same as their droplet pricing for the same resources. That didn't use to be the case. [1] The cheapest you can get is a $12 / month, fully featured cluster on Linode.

    I've been building, in my spare time, a platform that tries to make Kubernetes more usable for single developers: https://canine.sh, based on my learnings that the good parts of Kubernetes are actually quite simple to use and manage.

    [1] Digital Ocean's pricing page references its free control plane offering https://www.digitalocean.com/pricing

    • igor47 8 days ago |
      Why are GCR and pgbouncer incompatible? Could you run a pgbouncer instance in GCR?
      • czhu12 8 days ago |
        I’m not an expert, but from what I understand, the standard set up is like:

        4x(Web processes) -> 1x(pgbouncer) -> database

        This ensures that the pgbouncer instance is effectively multiplexing all the connections across your whole fleet.

        In each individual web process, you can have another shared connection pool.

        This is how we set it up

      • seabrookmx 8 days ago |
        GCR assumes its workload is HTTP, or a "job" (a container that exits once its task has completed). It scales on request volume and CPU, and the load balancer is integrated into the service. It's not obvious to me how you'd even run a "raw" TCP service like pgbouncer on it.
    • gizzlon 7 days ago |
      > an out of the box Postgres database can't just handle unlimited connections and so connecting to it from GCR without having a DB connection proxy like PG bouncer risks exhausting the connection pool.

      Good point. How many connections can it handle? Seems like it's up to 262142 in theory? Or am I reading this wrong: https://cloud.google.com/sql/docs/postgres/flags#postgres-m ??

      But even 1000 seems ok? 1 per container, so 1000 running containers? Quite a lot in my world, especially since they can be quite beefy. Would be very worried about the cost way before 1000 simultaneously running containers :)

      • czhu12 7 days ago |
        This assumes your Postgres has an infinite amount of memory, which would also be very expensive. I think you'd probably want to assume each connection takes ~10-20MB of memory.
  • jeswin 8 days ago |
    Docker swarm would have worked for 98.5% of all users (how k8s won over swarm should be a case study). And kamal, or something like it, would work for 88.25% of all users.
  • paxys 8 days ago |
    > Kubernetes comes with substantial infrastructure costs that go beyond DevOps and management time. The high cost arises from needing to provision a bare-bones cluster with redundant management nodes.

    That's your problem right there. You really don't want to be setting up and managing a cluster from scratch for anything less than a datacenter-scale operation. If you are already on a cloud provider just use their managed Kubernetes offering instead. It will come with a free control plane and abstract away most of the painful parts for you (like etcd, networking, load balancing, ACLs, node provisioning, kubelets, proxies). That way you just bring your own nodes/VMs and can still enjoy the deployment standardization and other powerful features without the operational burden.

    • dikei 8 days ago |
      Even for the on-prem scenario, I'd rather maintain a K8S control plane and let developer teams manage their own app deployments in their own little namespace than provision a bunch of new VMs each time a team needs some services deployed.
      • spockz 8 days ago |
        I can imagine. Do you have complete automation setup around maintaining the cluster?

        We are now on-prem, using "pet" clusters with namespace-as-a-service automated on top. This causes all kinds of issues with different workloads that have different performance characteristics and requirements. They also share ingress and egress nodes, so impact on those has a large blast radius. This leads to more rules and requirements.

        Having dedicated and managed clusters where everyone can determine their sizing and granularity of workloads to deploy to which cluster is paradise compared to that.

        • solatic 8 days ago |
          > This causes all kinds of issues with different workloads with different performance characteristics and requirements.

          Most of these issues can be fixed by setting resource requests equal to limits and using integer CPU values to guarantee QoS. You should also have an interface with developers explaining which nodes in your datacenter have which characteristics, using node labels and taints, and force developers to pick specific node groups by specifying node affinity and tolerations; don't bring nodes online without taints.
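
          As an illustrative (not prescriptive) pod spec, with made-up label/taint keys and a placeholder image:

            apiVersion: v1
            kind: Pod
            metadata:
              name: example
            spec:
              # only schedule onto nodes explicitly tainted/labeled for this workload class
              tolerations:
                - key: workload-class        # made-up taint key
                  operator: Equal
                  value: io-heavy
                  effect: NoSchedule
              affinity:
                nodeAffinity:
                  requiredDuringSchedulingIgnoredDuringExecution:
                    nodeSelectorTerms:
                      - matchExpressions:
                          - key: workload-class   # made-up node label
                            operator: In
                            values: ["io-heavy"]
              containers:
                - name: app
                  image: example/app:1.0           # placeholder
                  resources:
                    # requests == limits with whole CPUs -> Guaranteed QoS
                    requests:
                      cpu: "2"
                      memory: 4Gi
                    limits:
                      cpu: "2"
                      memory: 4Gi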

          > They also share ingress and egress nodes so impact on those has a large blast radius.

          This is true regardless of whether or not you use Kubernetes.

          • spockz 5 days ago |
            For the different workloads, it's more that all the nodes in the cluster are the same, and mixing memory-intensive with CPU- or IO-intensive workloads is hard to schedule or to get to a proper utilisation rate. Beyond that, when we do set requests and limits properly, our Java apps use/reserve multiple cores even when handling little traffic and basically idling. Golang apps scale better there, especially towards 0.

            When running on “bare” VMs each VM is its own member in the network. The pods in the cluster use an overlay network and egress is limited to egress nodes which are now shared by all workloads.

            Having dedicated K8s clusters would reduce the sharing of network ingress and egress, as well as let me choose the VM size for my workloads.

      • rtpg 8 days ago |
        Even as a K8s hater, this is a pretty salient point.

        If you are serious about minimizing ops work, you can make sure people are deploying things in very simple ways, and in that world you are looking at _very easy_ deployment strategies relative to having to wire up VMs over and over again.

        Just feels like lots of devs will take whatever random configs they find online and throw them over the fence, so now you just have a big tangled mess for your CRUD app.

        • guitarbill 8 days ago |
          > Just feels like lots of devs will take whatever random configs they find online

          Well it usually isn't a mystery. Requiring a developer team to learn k8s likely with no resources, time, or help is not a recipe for success. You might have minimised someone else's ops work, but at what cost?

          • rtpg 8 days ago |
            I am partly sympathetic to that (and am a person who does this) but I think too many devs are very nihilistic and use this as an excuse to stop thinking. Everyone in a company is busy doing stuff!

            There's a lot of nuance here. I think ops teams are comfortable with what I consider "config spaghetti". Some companies are incentivised to ship stuff that's hard to configure manually. And a lot of other dynamics are involved.

            But at the end of the day if a dev copy-pastes some config into a file, taking a quick look over and asking yourself "how much of this can I actually remove?" is a valuable skill.

            Really you want the ops team to be absorbing this as well, but this is where constant atomization of teams makes things worse! Extra coordination costs + a loss of a holistic view of the system means that the iteration cycles become too high.

            But there are plenty of things where (especially if you are the one integrating something!) you should be able to look over a thing and see, like, an if statement that will always be false for your case and just remove it. So many modern ops tools are garbage and don't accept the idea of running something on your machine, but an if statement is an if statement is an if statement.

        • dikei 8 days ago |
          > Just feels like lots of devs will take whatever random configs they find online and throw them over the fence, so now you just have a big tangled mess for your CRUD app.

          Agree.

          To reduce the chance of a dev pulling some random configs out of nowhere, we maintain a Helm template that can be used to deploy almost all of our services in a sane way; just replace the container image and ports. The deployment is probably not optimal, but further tuning can be done after the service is up and we have gathered enough metrics.

          We've also put all our configs in one place, since we found that devs tend to copy from existing configs in the repo before searching the internet.
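
          To give a flavour of it, the per-service bit a dev touches is roughly this (simplified; the actual field names depend on the chart):

            # values.yaml for one service; everything else comes from the shared template
            image:
              repository: registry.example.com/team/my-service   # placeholder
              tag: "1.4.2"
            service:
              port: 8080
            # sane defaults, tuned later once we have metrics
            replicas: 2
            resources:
              requests: { cpu: 250m, memory: 256Mi }
              limits: { cpu: 250m, memory: 256Mi }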

      • DanielHB 7 days ago |
        > than provisioning a bunch of new VMs each time a team need some services deployed.

        Back in the old days before cloud providers this was the only option. I started my career in the early 2010s and got the tail end of this; it was not fun.

        I remember my IT department refusing to set up git for us (we were using SVN before), so we just asked for a VM and set up a git repo on it ourselves to host our code.

      • mzhaase 7 days ago |
        This for me is THE reason for using container management. Without containers, you end up with hundreds of VMs. Then, when the time comes that you have to upgrade to a new OS, you have to go through the dance, for every service:

        - set up new VMs

        - deploy software on new VMs

        - have the team responsible give their ok

        It takes forever, and in my experience, often never completes because some snowflake exists somewhere, or something needs a lib that doesn't exist on the new OS. VMs decouple the OS from the hardware, but you should still decouple the service from the OS. So that means containers. But then managing hundreds of containers still sucks.

        With container management, I just

        - add x new nodes to cluster

        - drain x old nodes and delete them

    • sbstp 8 days ago |
      Most control planes are not free anymore; they cost around $70/mo on AWS and GCP. They used to be, a while back.
      • szszrk 7 days ago |
        That's around the cost of a single VM (cheapest 8GB ram I found quickly).

        Azure has a free tier with control plane completely free (but no SLA) - great deal for test clusters and testing infra.

        If you are that worried about costs, then public cloud may not be for you at all, or you should look at ECS/App containers or serverless.

      • dikei 7 days ago |
        GCP has $74 free credit for Zonal cluster, so you effectively have the first cluster for free.

        And even $70 is cheap, considering that a cluster should be shared by all the services from all the teams in the same environment, bar very few exceptions.

    • oofbey 8 days ago |
      If you do find yourself wanting to create a cluster by hand, it's probably because you don't actually need lots of machines in the "cluster". In my experience it's super handy to run tests on a single-node "cluster", and then k3s is super simple. It takes something like 8 seconds to install k3s on a bare CI/CD instance, and then you can install your YAML and see that it works.

      Once you're used to it, the high-level abstractions of k8s are wonderful. I run k3s on Raspberry Pis because it takes care of all sorts of stuff for you, and it's easy to port code and design patterns from the big backend service to a little home project.

    • jillesvangurp 7 days ago |
      For most small setups, the cost of running an empty kubernetes cluster (managed) is typically higher than setting up a db, a couple of vms and a loadbalancer, which goes a long way for running a simple service. Add some buckets, a CDN and you are pretty much good to go.

      If you need dedicated people just to stay on top of running your services, you have a problem that's costing you hundreds of thousands per year. There's a lot of fun and easy stuff you can do with that kind of money. This is a pattern I see with a lot of teams that get sucked into using Kubernetes, micro services, terraform, etc. Once you need a few people just to stay on top of the complexity that comes from that, you are already spending a lot. I tend to keep things simple on my own projects because any amount of time I spend on that, I'm not spending on more valuable work like adding features, fixing bugs, etc.

      Of course it's not black and white and there's always a trade off between over and under engineering. But a lot of teams default to over engineering simply by using Kubernetes from day one. You don't actually need to. There's nothing wrong with a monolith running on two simple vms with a load balancer in front of it. Worked fine twenty years ago and it is still perfectly valid. And it's dead easy to setup and manage in most popular cloud environments. If you use some kind of scaling group, it will scale just fine.

      • dikei 7 days ago |
        > For most small setups, the cost of running an empty kubernetes cluster (managed) is typically higher than setting up a db, a couple of vms and a loadbalancer, which goes a long way for running a simple service.

        Not really, the cost of an empty EKS cluster is the management fee of $0.1/hour, or roughly the price of a small EC2 instance.

        • jillesvangurp 7 days ago |
          0.1 * 24 * 30 = 720$/month

          That's about 2x our monthly cloud expenses. That's not a small VM. You can buy a mac mini for that.

          • dikei 7 days ago |
            $72

            Though if you are only spending $350 monthly on VM, Database and Load Balancer, you can probably count resource instances by hand, and don't need a K8S cluster yet.

  • politelemon 8 days ago |
    If you're looking to run a few containers you may also want to look at Docker Swarm itself. You get some of the benefits of orchestration and a small, manageable overhead. And it's just part of Docker.
  • tracerbulletx 8 days ago |
    Cool you left Kubernetes for a more locked in abstraction around Kubernetes like automation?
  • JohnMakin 8 days ago |
    I don't understand how these posts exist when much of my consulting and career for the last few years has been at companies that basically set up a bare-bones, out-of-the-manual EKS/GCP solution and just essentially let it sit for 3+ years untouched until it got to a crisis. That, to me as a systems engineer, is nuts and a testament to how good this stuff is when you get it even kind of right. Of course, I'm referring to managed systems. Doing Kubernetes from scratch I would not dream of doing.
  • ofrzeta 8 days ago |
    so, what do you all think about CloudFoundry? :)
  • solatic 8 days ago |
    Cloud Run is fine if you're a small startup and you're thinking about your monthly bill in three-figure or even four-figure terms.

    Like most serverless solutions, it does not permit you to control egress traffic. There are no firewall controls exposed to you, so you can't configure something along the lines of "I know my service needs to connect to a database, that's permitted, all other egress attempts are forbidden", which is a foundational component of security architecture that understands that getting attacked is a matter of time and security is something you build in layers. EDIT: apparently I'm wrong on Cloud Run not being deployable within a VPC! See below.

    GCP and other cloud providers have plenty of storage products that only work inside a VPC. Cloud SQL. Memorystore. MongoDB Atlas (excluding the expensive and unscalable serverless option). Your engineers are probably going to want to use one or some of them.

    Eventually you will need a VPC. You will need to deploy compute inside the VPC. Managed Kubernetes solutions make that much easier. But 90% of startups fail, so 95% of startups will fail before they get to this point. YMMV.

    • bspammer 8 days ago |
      I’m surprised Cloud Run doesn’t let you do this. You can put an AWS lambda in a VPC no problem.
    • jedi3335 8 days ago |
      Cloud Run has had network egress control for a while: https://cloud.google.com/run/docs/configuring/vpc-direct-vpc
      • solatic 8 days ago |
        Nice, I didn't know about this, it wasn't available last time I checked.

        With that said... there are so many limitations on that list, that seriously, I can't imagine it would really be so much easier than Kubernetes.

    • p_l 7 days ago |
      kubernetes is how I keep compute costs in 2-3 digits :V
  • oron 8 days ago |
    I just use a single k3s install on a single bare-metal server from Hetzner or OVH. Works like a charm, very clean deployments, much more stable than docker-compose and 1/10 of the cost of AWS or similar.
    • usrme 7 days ago |
      Do you have a write-up about this to share, even if it's someone else's? I'd be curious to try this out.
      • fernandotakai 7 days ago |
        I was actually playing with Hetzner and k3s over the weekend and found this https://github.com/vitobotta/hetzner-k3s to be super useful.
      • globular-toast 7 days ago |
        I've done this but on EC2. What would you like to know? Installing K3s on a single node is trivial and at that point you have a fully functional K8s cluster and API.

        I have an infrastructure layer that I apply to all clusters that includes things like cert-manager, an ingress controller and associated secrets. This is all cluster-independent stuff. Then some cluster-dependent stuff like storage controllers etc. I use flux to keep this stuff under version control and automatically reconciled.

        From there you just deploy your app with standard manifests or however you want to do it (helm, kubectl, flux, whatever).

        It all works wonderfully. The one downside is all the various controllers do eat up a fair amount of CPU cycles and memory. But it's not too bad.
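
        For a flavour of the Flux side (API versions here are from memory and may differ between Flux releases), the cert-manager piece is roughly:

          apiVersion: source.toolkit.fluxcd.io/v1beta2
          kind: HelmRepository
          metadata:
            name: jetstack
            namespace: flux-system
          spec:
            interval: 1h
            url: https://charts.jetstack.io
          ---
          apiVersion: helm.toolkit.fluxcd.io/v2beta1
          kind: HelmRelease
          metadata:
            name: cert-manager
            namespace: cert-manager
          spec:
            interval: 1h
            chart:
              spec:
                chart: cert-manager
                sourceRef:
                  kind: HelmRepository
                  name: jetstack
                  namespace: flux-system
            values:
              installCRDs: true

        Flux keeps that reconciled from git, so a new cluster just needs the bootstrap and it converges to the same state.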

    • p_l 7 days ago |
      Doing the same, grabbed a reasonably cheap Ryzen (zen2) server with 64GB ECC and 4x NVMe SSDs (2x 512G + 2x 1024G).

      Runs pretty much this stack:

        "Infrastructure":
      
        - NixOS with ZFS-on-Linux as 2 mirrors on the NVMes
        - k3s (k8s 1.31)
        - openebs-zfs provisioner (2 storage classes, one normal and one optimized for postgres)
        - cnpg (cloud native postgres) operator for handling databases
        - k3s' built-in traefik for ingress
        - tailscale operator for remote access to cluster control plane and traefik dashboard
        - External DNS controller to automate DNS
        - Certmanager to handle LetsEncrypt
        - Grafana cloud stack for monitoring. (metrics, logs, tracing)
      
        Deployed stuff:
        - Essentially 4 tenants right now
        - 2x Keycloak + Postgres (2 diff. tenants)
        - 2x headscale instances with postgres (2 diff. tenants, connected to keycloak for SSO)
        - 1 Gitea with Postgres and memcached (for 1 tenant)
        - 3 postfix instances providing simple email forwarding to sendgrid (3 diff. tenants)
        - 2x dashy as homepage behind SSO for end users (2 tenants)
        - 1x Zitadel with Postgres (1 tenant, going to migrate keycloaks to it as shared service)
        - Youtrack server (1 tenant)
        - Nextcloud with postgres and redis (1 tenant)
        - tailscale-based proxy to bridge gitea and some machines that have issues getting through broken networks
      
      Plus few random things that are musings on future deployments for now.

      The server is barely loaded and I can easily clone services around (in fact a lot of the services above are instantiated from jsonnet templates).

      Deploying some stuff was more annoying than doing it by hand from a shell (specifically Nextcloud), but now I have a replicable setup, for example if I decide to move from host to host.

      The biggest downtime ever was dealing with poorly documented systemd-boot behaviour which caused the server to revert to an older configuration and not apply newer ones.

  • tombert 8 days ago |
    I’ve come to the conclusion that I hate “cloud shit”, and a small part of me is convinced that literally no one actually likes it, and everyone is playing a joke on me.

    I have set up about a dozen rack mount servers in my life, installing basically every flavor of Unix and Linux and message busses under the sun in the process, but I still get confused by all the Kubectl commands and GCP integration with it.

    I might just be stupid, but it feels like all I ever do with Kubernetes is update and break YAML files, and then spend a day fixing them by copy-pasting increasingly-convoluted things on stackexchange. I cannot imagine how anyone goes to work and actually enjoys working in Kubernetes, though I guess someone must in terms of “law of large numbers”.

    If I ever start a company, I am going to work my damndest to avoid “cloud integration crap” as possible. Just have a VM or a physical server and let me install everything myself. If I get to tens of millions of users, maybe I’ll worry about it then.

    • tryauuum 8 days ago |
      I have the same thoughts.

      The only form of Kubernetes I would be willing to try is the one with kata-containers, for having all the security of virtual machines.

    • nyclounge 8 days ago |
      Or if you've got a static IP and fast upload speed, just port forward 80 and 443 and start hosting yourself. Even an old Intel MacBook Pro from the 2000s with 4 GB of RAM may not be that hot running macOS, but install Debian with no X and it runs smooth as a whistle, running several Conduit (Matrix), Haraka, ZoneMTA, Icecast and nginx instances with no issues.

      WebRTC/TURN/STUN becomes an issue with the nginx config. May consider looking at Pingora. The whole Rust -> binary + TOML file setup is super nice to run from a sysadmin perspective.

      • d3Xt3r 8 days ago |
        > It is running smooth as a whistle

        ... until you get hit by a DDoS attack. Not much you can do about it unless your ISP offers protection, or you end up going for Cloudflare or the like instead of exposing your IP and ports.

        • PittleyDunkin 7 days ago |
          Folding the second you get DoS'd is a feature, not a liability. Staying up through those is a great way to lose a ton of money basically instantly.
        • nyclounge 7 days ago |
          Or Hetzner, which has DDoS protection for all their customers. Seems like a lot less of a rip-off and scam than AWS and Cloudflare.

          Set your TTL to a low number, and you can swap whenever you feel like it.

    • tapoxi 8 days ago |
      This read as "old man yells at cloud" to me.

      I've managed a few thousand VMs in the past, and I'm extremely grateful for it. An image is built in CI, service declares what it needs, the scheduler just handles shit. I'm paged significantly less and things are predictable and consistent unlike the world of VMs where even your best attempt at configuration management would result in drift, because the CM system is only enforcing a subset of everything that could go wrong.

      But yes, Kubernetes is configured in YAML, and YAML kind of sucks, but you rarely do that. The thing that changes is your code, and once you've got the boilerplate down CI does the rest.

      • catdog 8 days ago |
        YAML is fine, esp. compared to the huge collection of often 10x worse config formats you have to deal with in the VM world.
      • cess11 7 days ago |
        I'd prefer Ansible if I was running VMs. Did that at a gig, controlled a vCenter cluster and hundreds of machines in it; much nicer experience than Kubernetes-style ops. Easier to do ad hoc troubleshooting and logging, for one.
        • vrighter 7 days ago |
          Until, as happened to us, you're in the middle of an upgrade cycle with a mix of Red Hat 6 and Red Hat 8 servers, and Ansible decides to require support for the latest available version of Python on Red Hat 8, which isn't available on Red Hat 6, so we have no way of using Ansible to manage both sets of servers.

          The python ecosystem is a cancer.

          • cess11 7 days ago |
            Sure, and I'm also not a fan of RedHat. We ran Ubuntu and Debian on that gig, the few Python issues we ran into we could fix with some package pinnings and stuff like that.
          • mkesper 7 days ago |
            Well, you were free to install a version of Python 3 on the CentOS 6 machines; that's what we ended up doing and using for Ansible. Ansible's Python 2.6 support was a bad lie, multiple things broke already. Ten years of support without acknowledging changes in the ecosystem just doesn't work.
            • vrighter 7 days ago |
              We did, but then most Ansible modules still didn't work on the system. They advertise the no-agent thing and how it does everything over SSH, but instead require Python to be installed on all your servers, because it generates Python code and runs it on the remote machine. And some modules require specific versions sometimes.
      • mollusk 7 days ago |
        > "old man yells at cloud"

        Literally.

      • raxxorraxor 7 days ago |
        I think it is also a difference between developers and IT. Usually the requirements don't ask for thousands of VMs unless you run some kind of data center or a company that specializes in software services.
      • tombert 7 days ago |
        > But yes, Kubernetes is configured in YAML, and YAML kind of sucks, but you rarely do that.

        I'm sorry, citation needed on that. I spend a lot of time working with the damn YAML files. It's not a one-off thing for me.

        You're not the first person to say this to me, they say "you rarely touch the YAML!!!", but then I look at their last six PRs, and each one had at least a small change to the YAML setup. I don't think you or they are lying, I think people forget how often you actually have to futz with it.

    • voidfunc 8 days ago |
      I'm always kind of blown away by experiences like this. Admittedly, I've been using Kubernetes since the early days and I manage an Infra team that operates a couple thousand self-managed Kubernetes clusters so... expert blindness at work. Before that I did everything from golden images to pushing changes via rsync and kicking a script to deploy.

      Maybe it's because I adopted early and have grown with the technology it all just makes sense? It's not that complicated if you limit yourself to the core stuff. Maybe I need to write a book like "Kubernetes for Greybeards" or something like that.

      What does fucking kill me in the Kubernetes ecosystem is the amount of add-on crap that is pitched as "necessary". Sidecars... so many sidecars. Please stop. There's way too much vendor garbage surrounding the ecosystem, and devs rarely stop to think about whether they should deploy something when it's as easy as dropping in some YAML and letting the cluster magically run it.

      • t-writescode 8 days ago |
        > Admittedly, I've been using Kubernetes since the early days and I manage an Infra team

        I think this is where the big difference is. If you're leading a team and introduced all the good practices from the start, then the k8s and Terraform (or whatever) config files never get so complicated that a Gordian knot is created.

        Perhaps k8s is nice and easy to use - many of the commands certainly are, in my experience.

        Developers have, over years and decades, learned how to navigate code and hop from definition to definition, climbing the tree and learning the language they're operating in, and most of the languages follow similar-enough patterns that they can crawl around.

        Configuring a k8s cluster has absolutely none of that knowledge built up; and, reading something that has rough practices is not a good place to learn what it should look like.

        • paddy_m 7 days ago |
          Thank you. I can always xargs grep for a function name, at worst, or dir() in Python at a debugger for other things. With YAML, Kubernetes and other devops hotness, I frequently can't even find the relevant scripts/YAML that are executed, nor their codebases.

          This also happens with configuration based packaging setups. Python hatch in particular, but sometimes node/webpack/rollup/vite.

      • adastra22 8 days ago |
        I would buy that book.
      • figassis 8 days ago |
      This, all the sidecars. Use Kubernetes to run your app like you would without it, take advantage of the flexibility, avoid the extra complexity. Service discovery sidecars? Why not just use the out-of-the-box DNS features?
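
      For illustration, a plain Service already gives you a stable DNS name (names below are made up):

        apiVersion: v1
        kind: Service
        metadata:
          name: billing          # made-up service name
          namespace: payments    # made-up namespace
        spec:
          selector:
            app: billing
          ports:
            - port: 80
              targetPort: 8080

        # reachable from any pod as billing.payments.svc.cluster.local
        # (or just "billing" from within the same namespace) - no sidecar needed
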
        • tommica 8 days ago |
          Because new people don't know better - I've never used k8s, but have seen sidecars being promoted as a good thing, so I might have used them
          • namaria 7 days ago |
            Maybe the "I've heard about" approach to tooling is the problem here?
      • stiray 8 days ago |
        I would buy the book. Just translate all the "new language" concepts into well-known concepts from networking and system administration. It would be a best seller.

        If only I had a penny for each time I wasted hours trying to figure out what something in "modern IT" is, just to figure out that I already knew what it was, but it was well hidden under layers of newspeak...

        • radicalbyte 8 days ago |
          The book I read on K8S, written by a core maintainer, made it very clear.
          • schnirz 8 days ago |
            Which book would that be, out of interest?
            • radicalbyte 7 days ago |
              Kubernetes in Action
              • xorcist 6 days ago |
                A book from 2017? Is that still relevant to understand a modern Kubernetes cluster?

                The CNCF ecosystem looked a lot different back then.

                • radicalbyte 5 days ago |
                  Yup it's a great introduction as you can fly through it (3-4 hours to read most of it), at least if you already have a grasp of networking, linux, processes, threads, containers etc.

                  Then you can hit other resources (in my case working with a team who've been using K8S for a few years).

                  If you (or anyone else) has suggestions for something newer and covering more than just the core (like various different components you can use, Helm, Argo, ISTIO etc etc) then I'd appreciate it :-)

                • ofrzeta 5 days ago |
                  The second edition is being worked on for a long time: https://www.manning.com/books/kubernetes-in-action-second-ed...
          • c03 8 days ago |
            Please don't mention its name, we don't want anyone else reading it..
            • radicalbyte 7 days ago |
              Kubernetes in Action

              (I didn't have access to my email or Amazon account let alone my office when I posted so couldn't check the name of the book).

          • jpalomaki 8 days ago |
            Is it this one, Kubernetes: Up and Running, 3rd Edition by Brendan Burns, Joe Beda, Kelsey Hightower, Lachlan Evenson from 2022? https://www.oreilly.com/library/view/kubernetes-up-and/97810...

            (edit: found the 3rd edition)

            • radicalbyte 7 days ago |
              No, Kubernetes in Action, but that book was also on my radar (mainly as Kelsey Hightower's name reminds me of the Police Academy films I loved as a kid).
        • deivid 7 days ago |
          This[0] is my take on something like that, but I'm no k8s expert -- the post documents my first contact with k8s and what/how I understood these concepts, from a sysadmin/SWE perspective.

          [0]: https://blog.davidv.dev/posts/first-contact-with-k8s/

      • AtlasBarfed 8 days ago |
        So you don't run any databases in those thousands of clusters?

        To your point: I have not used k8s, I just started to research it when my former company was thinking about shoehorning Cassandra into k8s...

        But there was dogma around not allowing access to VM command exec via kubectl, while I basically needed it in the basic mode for certain one-off diagnosis needs and nodetool stuff...

        And yes, some of the floated stuff was "use sidecars" which also seemed to architect complexity for dogma's sake.

        • voidfunc 8 days ago |
          > So you don't run any databases in those thousands of clusters?

          We do, but not of the SQL variety (that I am aware of). We have persistent key-value and document store databases hosted in these clusters. SQL databases are off-loaded to managed offerings in the cloud. Admittedly, this does simplify a lot of problems for us.

          • tayo42 8 days ago |
            How much data? I keep hearing k8s isn't usable because sometimes there is too much data and it can't be moved around.
            • pletnes 7 days ago |
              The simplest approach I’m aware of is to create the k8s cluster and databases in the same datacenter / availability zone.
            • darkstar_16 7 days ago |
              In the managed k8s space, the data is on a PVC in the same availability zone as the node it is being mounted on. If the node dies, the volume is just mounted on to a new node in the same zone. There is no data movement required.
            • eek04_ 7 days ago |
              While I've not played with k8, I did run stuff in Google's Borg for a very long while, and that has a similar architecture. My team was petabyte scale and we were far from the team with the largest footprint. So it is clearly possible to handle large scale data in this type of architecture.
        • pas 7 days ago |
          postgresql operators are pretty nice, so it makes sense to run stateful stuff on k8s (ie. for CI, testing, staging, dev, etc.. and probably even for prod if there's a need to orchestrate shards)

          > exec

          kubectl exec is good, and it's possible to audit access (ie. get kubectl exec events with arguments logged)

          and I guess an admission webhook can filter the allowed commands

          but IMHO it shouldn't be necessary; the bastion host where the "kubectl exec" is run from should be accessible only through an SSH session recorder
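
          a sketch of the audit policy bit (exact setup depends on how the API server's audit backend is configured):

            apiVersion: audit.k8s.io/v1
            kind: Policy
            rules:
              # record exec/attach calls; the command shows up in the request
              - level: RequestResponse
                resources:
                  - group: ""
                    resources: ["pods/exec", "pods/attach"]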

      • theptrk 8 days ago |
        I would pay for the outline of this book.
      • mitjam 8 days ago |
        Absolutely, a reasonably sophisticated scalable app platform looks like a half-baked and undocumented reimagination of Kubernetes.

        Admittedly: The ecosystem is huge and less is more in most cases, but the foundation is cohesive and sane.

        Would love to read the k8s for greybeards book.

        • coldtea 7 days ago |
          >Absolutely, a reasonably sophisticated scalable app platform looks like a half-baked and undocumented reimagination of Kubernetes.

          Or maybe Kubernetes looks like a committee-designed, everything-and-the-kitchen-sink, over-engineered, second-system-effect-suffering, YAGNI P.O.S. that only the kind of "enterprise" mindset that really enjoyed J2EE in 2004 and XML/SOAP over JSON/REST would love...

          • p_l 7 days ago |
            You misspelled DCOS /s
      • awestroke 7 days ago |
        I would buy that book in a heartbeat. All documentation and guides on kubernetes seem to assume you already know why things are done a certain way
      • BiteCode_dev 7 days ago |
        Would buy. But you probably should teach a few live courses before writing it, because of expert blindness. Otherwise you will miss the mark.

        Would pay for a decent remote live course intro.

      • DanielHB 7 days ago |
        > What does fucking kill me in the Kubernetes ecosystem is the amount of add-on crap that is pitched as "necessary". Sidecars... so many sidecars.

        Yeah, it is the same with Terraform modules. I was trying to argue at a previous job that we should stick to a single module (the cloud provider module), but people just love adding crap if it saves them 5 lines of configuration. Said crap of course adds tons of unnecessary resources to your cloud that no one understands.

      • karmarepellent 7 days ago |
        Agreed. The best thing we did back when we ran k8s clusters was moving a few stateful services to dedicated VMs and keeping the clusters for stateless services (the bulk) only. Running k8s for stateless services was absolute bliss.

        At that time stateful services were somewhat harder to operate on k8s because statefulness (and all that it encapsulates) was kinda full of bugs. That may certainly have changed over the last few years. Maybe we just did it wrong. In any case if you focused on the core parts of k8s that were mature back then, k8s was (and is) a good thing.

      • jq-r 7 days ago |
        Those "necessary" add-ons and sidecars are out of control, but its the people problem. I'm part of the infra team and we manage just couple of k8s clusters, but those are quite big and have very high traffic load. The k8s + terraform code is simple, with no hacks, reliable and easy to upgrade. Our devs love it, we love it too and all of this makes my job pleasant and as little stressful as possible.

        But we recently hired a staff engineer to the team (now the most senior) and the guy just cannot sit still. "Oh, we need a service mesh because we need visibility! I've been using it at my previous job and it's the best thing ever." Even though we have all the visibility/metrics that we need and never needed more than that. Then it's "we need a different ingress controller, X is crap, Y surely is much better!" etc.

        So it's not inexperienced engineers wanting the newest hotness because they have no idea how to solve stuff with the tools they have; it's sometimes senior engineers trying to justify their salary and "seniority" by buying into complexity as they try to make themselves irreplaceable.

        • alienchow 7 days ago |
          How do you scale mTLS ops when the CISO comes knocking?
        • jppittma 7 days ago |
          Service mesh is complicated, but the reason you use it is to integrate services across clusters. That, and it has a bunch of useful reverse proxy features. On the other hand, it took me and 2 other guys two evenings of blood, sweat, and tears to understand what the fuck a virtual service actually does.

          It’s not strictly necessary, but if you’ve had to put in the work elsewhere, I’d use it.
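
          For the curious, the stripped-down shape of a VirtualService is roughly this (hostnames and subsets made up; subsets come from a DestinationRule):

            apiVersion: networking.istio.io/v1beta1
            kind: VirtualService
            metadata:
              name: reviews
            spec:
              hosts:
                - reviews                 # the service this routing applies to
              http:
                - match:
                    - headers:
                        x-canary:
                          exact: "true"
                  route:
                    - destination:
                        host: reviews
                        subset: v2        # defined in a DestinationRule
                - route:
                    - destination:
                        host: reviews
                        subset: v1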

        • cyberpunk 7 days ago |
          To be fair, istio and cilium are extremely useful tools to have under your belt.

          There’s always a period of “omgwhat” when new senior engineers join and they want to improve things. There’s a short window between joining and getting bogged into a million projects where this is possible.

          Embrace it, I reckon.

          • p_l 7 days ago |
            Doing it well IMO requires not deploying everything as sidecar but maybe, maybe, deploying it as shared node service.

            In fact pretty sure I've read a write up from Alibaba? on huge wins in performance due to moving Istio out of sidecar and into shared node service.

            • cyberpunk 7 days ago |
              Sure, cilium is also much faster than istio. But I guess it depends on your workload. We don't care all that much about performance vs compliance (non-hft finance transactional stuff) and I think we're doing things reasonably well. :}
              • p_l 7 days ago |
                I didn't mean replace istio with cilium, I meant running the proxy and routing operations as shared part per node instead of per pod
                • cyberpunk 7 days ago |
                  How does that even work with envoy? The magic sauce behind istio is that every connection is terminated using iptables into the envoy process (sidecar), and istiod spaffs envoy configurations around the place based on your vs/dr/pas/access controls etc.

                  I suppose you could have a giant envoy and have all the proxy-configs all mashed together but I really don't see any benefit to it? I can't even find documentation that says it's possible..

                  • p_l 7 days ago |
                    Couldn't check all details yet, but from quick recap:

                    It's called ambient mode, and it uses separate L4 and L7 processing in ways that would be familiar to people who dealt with virtual network functions - and neither the L4 nor L7 parts require a sidecar

        • withinboredom 7 days ago |
          > So its not inexperienced engineers wanting newest hotness because they have no idea how to solve stuff with the tools they have, its sometimes senior engineers trying to justify their salary, "seniority" by buying into complexity as they try to make themselves irreplaceable.

          The grass is always greener where you water it. They joined your company because the grass was greener there than anywhere else they could get an offer at. They want to keep it that way or make it even greener. Assuming that someone is doing something to become 'irreplaceable' is probably not healthy.

          • monooso 7 days ago |
            I really don't understand this comment.
          • zelphirkalt 7 days ago |
            They want to make it "greener" for whom? I think that is the question.
            • withinboredom 7 days ago |
              Wherever they came from, I suppose. There’s a reason they left.
        • ysofunny 7 days ago |
          > Then its "we need a different ingress controller, X is crap Y surely is much better!" etc.

          I regard these as traits of a junior dev. they're thinking technology-first, not problem-first

      • carlmr 7 days ago |
        >expert blindness at work.

        >It's not that complicated if you limit yourself to the core stuff.

        Isn't this the core problem with a lot of technologies? There's a right way to use it, but most ways are wrong. An expert will not look left and right anymore, but to anyone entering the technology with fresh eyes it's a field with an abundance of landmines to navigate around.

        It's simply bad UX and documentation. It could probably be better. But now it's too late to change everything because you'd annoy all the experts.

        >There's way too much vendor garbage surrounding the ecosystem

        Azure has been especially bad in this regard. Poorly documented in all respects, too many confusing UI menus that have similar or the same names and do different things. If you use Azure Kubernetes, the wrapper makes it much harder to learn the "core essentials". It's better to run minikube and get to know k8s first. Even then a lot of the Azure stuff remains confusing.

        • wruza 7 days ago |
          This, and a terminology rug pull. You wanted to upload a script and install some deps? Here’s your provisioning genuination frobnicator tutorial, at the end of which you’ll learn how to maintain the coalescing encabulation for your appliance unit schema, which is needed for automatic upload. It always feels like a thousand times more complexity (just in this part!) than your whole project.
          • jterrys 7 days ago |
            You nailed it. Genuinely the most frustrating part about learning kubernetes... is just realizing that whatever the fuck they're talking about is a fancy wrapper for a concept that's existed since 90s.
        • rbanffy 7 days ago |
          > There's a right way to use it, but most ways are wrong.

          This is my biggest complaint. There is no simple obvious way to set it up. There is no "sane default" config.

          > It's better to run minkube and get to know k8s first.

          Indeed. It should be trivial to set up a cluster from bare metal - nothing more than a `dnf install` and some other command to configure core functionality and to join machines into that cluster. Even when you go the easy way (with, say, Docker Desktop) you need to do a lot of steps just to have an ingress router.

          • p_l 7 days ago |
            The easy baremetal cluster these days is k3s.

            It includes a working out-of-the-box ingress controller.
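
            So a plain Ingress object works as-is, e.g. (hostname made up):

              apiVersion: networking.k8s.io/v1
              kind: Ingress
              metadata:
                name: hello
              spec:
                rules:
                  - host: hello.example.com        # made-up hostname
                    http:
                      paths:
                        - path: /
                          pathType: Prefix
                          backend:
                            service:
                              name: hello
                              port:
                                number: 80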

          • zelphirkalt 7 days ago |
            That is actually what my "try it out for a day" experience with Nomad was years ago. Just run the VMs, connect them, and they auto load balance. Meanwhile, it took a week or so to get even the most basic stuff working in Kubernetes, without even having 2 hosts in a cluster yet, while having to deal with hundreds of pages of bad documentation.

            I think since then the documentation probably has improved. I would hope so. But I will only touch Kubernetes again, when I need to. So maybe on a future job.

      • cryptonym 7 days ago |
        K8S is a beast and the ecosystem is wild. A newcomer won't know how to keep things simple while still understanding everything that is being used.
      • pas 7 days ago |
        My problem is the brittleness.

        Sure, I am playing with fire (k3s, bare metal, cilium, direct assigned IP to Ingresses), but a few weeks ago on one cluster suddenly something stopped working in the external IP -> internal cluster IP network path. (And after a complete restart things got worse. Oops. Well okay time to test backups.)

      • quietbritishjim 7 days ago |
        I hardly know the first thing about Kubernetes or cloud, so maybe you can help explain something to me:

        There's another Kubernetes post on the front page of HN at the moment, where they complain it's too complex and they had to stop using it. The comments are really laying into the article author because they used almost 50 clusters. Of course they were having trouble, the comments say, if you introduce that much complexity. They should only need one single cluster (maybe also a backup and a dev one at most). That's the whole point.

        But then here you are saying your team "operates a couple thousand" clusters. If 50 is far too many, and bound to be unmanageable, how is it reasonable to have more than a thousand?

        • jalk 7 days ago |
          Sounds like their primary job is to manage clusters for others, which ofc is different from trying to manage your primary service that you deployed as 50 microservices in individual clusters (didn't read the other article)
        • voidfunc 7 days ago |
          > But then here you are saying your team "operates a couple thousand" clusters. If 50 is far too many, and bound to be unmanageable, how is it reasonable to have more than a thousand?

          It's not unmanageable to have a couple thousand Kube clusters but you need to have the resources to build a staff and tool chain to support that, which most companies cannot do.

          Clusters are how we shard our customer workloads (a workload being say a dozen services and a database, a customer may have many workloads spread across the entire fleet). We put between 100 and 150 workloads per cluster. What this gives us is a relatively small impact area if a single cluster becomes problematic as it only impacts the workloads on it.

      • coldtea 7 days ago |
        >Before that I did everything from golden images to pushing changes via rsync and kicking a script to deploy.

        Sounds like a great KISS solution. Why did it regress into Kubernetes?

        • fhke 7 days ago |
          > Why did it regress into Kubernetes

          The “KISS solution” didn’t scale to the requirements of modern business. I remember running chef - essentially a complicated ruby script - on 100ks of servers, each of which with their own local daemon & a central management plane orchestrating it. The problem was that if a server failed… it failed, alongside everything on it.

          Compared to that setup, k8s is a godsend - auto healing, immutable deployments, scaling, etc - and ultimately, you were already running a node agent, API, and state store, so the complexity lift wasn’t noticeable.

          The problem came about when companies who need to run 5 containers ended up deploying a k8s cluster :-)

      • khafra 7 days ago |
        Container orchestration is second nature to us SRE's, so it's easy to forget that the average dev probably only knows the syntax for deployments and one or two service manifests.

        And pods, of course

        • pdimitar 7 days ago |
          A factor for sure, but as a programmer I find that the discoverability of stuff in code is much higher than with k8s.

          Give me access to a repo full of YAML files and I'm truly and completely lost and wouldn't even know where to begin.

          YAML is simply not the right tool for this job. Sure you got used to it but that's exactly the point: you had to get used to it. It was not intuitive and it did not come naturally to you.

      • tombert 7 days ago |
        If I had set up the infrastructure myself I'd probably have a different opinion on all of this stuff, but I came into this job where everything was set up for me. I don't know if it was done "incorrectly", and I do fear that stating as much might get into territory adjacent to the "no true Scotsman" fallacy.

        I mostly just think that k8s integration with GCP is a huge pain in the ass; every time I have to touch it, it's the worst part of my day.

        • fragmede 7 days ago |
          What about your integration makes it a huge pita?
          • tombert 7 days ago |
            It's just a lot of stuff; we have a couple hundred services, and when I've had to add shit, it ends up with me updating like two hundred files.

            Infrastructure as code is great, but let's be honest, most people are not thoroughly reading through a PR with 200+ files.

            There's of course tpl files to help reduce duplication, and I'm grateful for that stuff when I can get it, but for one reason or another, I can't always do that.

            It's also not always clear to me which YAML corresponds to which service, though I think that might be more of an issue with our individual setup.

            • fragmede 6 days ago |
              Yeah, ew, sounds like a mess. I'm sorry, thanks for sharing. Death by a thousand paper cuts is never fun. The solution I'm looking at for my problem in this area is to add another layer of indirection and make a config generator, so that there's only one config file for users to define their service in, and the program goes off and makes all the necessary changes to the pile of YAML files.
      • thiht 7 days ago |
        > It's not that complicated if you limit yourself to the core stuff

        Completely agree. I use Kubernetes (basically just Deployments and CronJobs) because it makes deployments simple, reliable and standard, for a relatively low cost (assuming that I use a managed Kubernetes like GKE where I don’t need to care at all about the Kubernetes engine). Using Kubernetes as a developer is really not that hard, and it gives you no vendor lock-in in practice (every cloud provider has a Kubernetes offer) and easy replication.
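
        As an example of the scale involved, a scheduled task is the whole of this manifest (image and schedule are placeholders):

          apiVersion: batch/v1
          kind: CronJob
          metadata:
            name: nightly-report              # placeholder name
          spec:
            schedule: "0 3 * * *"
            jobTemplate:
              spec:
                template:
                  spec:
                    restartPolicy: Never
                    containers:
                      - name: report
                        image: gcr.io/my-project/report:latest   # placeholder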

        It’s not the only solution, not the simplest either, but it’s a good one. And if you already know Kubernetes, it doesn’t cost much to just use it.

      • mmcnl 7 days ago |
        I didn't grow up in the trenches like you did, but nonetheless I think Kubernetes has a very user-friendly API. Imo Kubernetes is only complex because IT infrastructure is complex. It just makes the complexity transparent and manageable. Kubernetes is a friendly guide helping you navigate the waters.
      • kjs3 7 days ago |
        > Maybe it's because I adopted early and have grown with the technology it all just makes sense?

        Survivorship bias?

    • cheriot 8 days ago |
      > I might just be stupid, but it feels like all I ever do with ____ is update and break ____ files, and then spend a day fixing them by copy-pasting increasingly-convoluted things on stackexchange. I cannot imagine how anyone goes to work and actually enjoys working in ____, though I guess someone must in terms of “law of large numbers”.

      I'd make a similar statement about the sys admin stuff you already know well. Give me yaml and a rest api any day.

      I see where you and the article are coming from, though. The article reasonably points out that k8s is heavy for simpler workloads.

    • stiray 8 days ago |
      I agree, but what pisses me off the most is that today's higher-level abstractions (like cloud, Spring Boot, ...) hide lower-level functionality so well that you are literally forced to spend obnoxious amounts of time studying documentation (if you are in luck and it is well written), while everything is decorated with new names for known concepts, invented by people who didn't know the concept already exists and has a name, or by some marketing guy who figured it would sell better with a more "cool" name.

      Like Shakespeare's work, clumsily half-translated into French advertising jargon, and you are forced to read it and make it work on a stage.

      • datadeft 7 days ago |
        The real benefit of LLMs is giving me a good summary of these documents. Still, using these abstractions is probably not worth it.
        • PittleyDunkin 7 days ago |
          Well, a summary. It's impossible to evaluate the quality of the summary without actually reading the document you're trying to avoid reading.
        • stiray 7 days ago |
          LLMs are a symptom, not a solution. Just more over-hyped bullshit (those downvoting will agree in a few years, or never, as they are the ones riding the hype) whose only concern is to boost company stocks. Google is the proof: their search started to suck immediately when they added AI to the concept. It promotes bullshit while it doesn't hit the really relevant results, even if you specify them down to the details. If AI had any real value, it would never be given out for free, and this is the case: they have a solution and they are searching for a problem it solves.
    • devjab 8 days ago |
      I don’t mind the cloud, but even in enterprise organisations I fail to see the value of a lot of the more complex tools. I’ve always worked with Azure, because Denmark is basically Microsoft territory in a lot of non-tech organisations, because of the synergy between pricing and IT operations staff.

      I’ve done Bicep, Terraform and both Kubernetes and the managed offering (I forgot what the Azure Container Apps thing running on top of what is basically Kubernetes is called). When I can get away with it, however, I always use the Azure CLI through bash scripts in a pipeline and build directly into Azure App Services for containers, which is just so much less complicated than what you probably call “cloud shit”. The cool part about the Azure CLI and their App Services is that they haven’t really changed in the past 3 years, and they are almost one-size-fits-any-organisation. So all anyone needs to update in the YAML scripts are the variables. By contrast, working with Bicep/Terraform, Jenkins and whatever else people use has been absolutely horrible, sometimes requiring full-time staff just to keep it updated. I suppose it may be better now that Azure co-pilot can probably auto-generate what you need. A complete waste of resources in my opinion. It used to be more expensive, but with the last price hike of 600% on Azure Container Apps it’s usually cheaper. It’s also way more cost efficient in terms of maintenance since it’ll just work after the initial setup pipeline has run. This is the only way I have found that is easier than what it was when organisations ran their own servers, whether it was in the basement or at some local hardware house (not exactly sure what you call the places where you rent server rack space). Well, places like Digital Ocean are even easier, but they aren’t used by enterprise.

      I’m fairly certain I’ve never worked with an organisation that needed anything more than that, since basically nothing in Denmark scales beyond what can run on a couple of servers behind a load balancer. One of the few exceptions is the tax system, which sees almost zero usage except for the couple of weeks where the entire adult population logs in at the same time. When DevOps teams push back, I tend to remind them that StackOverflow ran on a couple of IIS servers for a while and that they don’t have even 10% of the users.

      Eventually the business case for Azure will push people back to renting hardware space or jumping to Hetzner and similar. But that’s a different story.

      • DanielHB 7 days ago |
        Terraform has the same problem as Kubernetes sidecars with terraform providers trying to do magic for you. If you stick to the cloud platform provider it is actually much nicer than using the CLI.

        Although my experience is with AWS, I find the terraform AWS provider docs better documentation than the official AWS docs for different options. If they don't answer any question I have right away they at least point me where to look for answers in the mess that is AWS docs.

      • MortyWaves 7 days ago |
        This was a good read! I have similar thoughts especially about IaC vs a bash script. Definitely clear pros and cons to both, but I’m wondering how you handle infrastructure drift with imperative bash scripts?

        I mean hopefully no one is logging into Azure to fuck with settings but I’m sure we’ve all worked with that one team that doesn’t give a flying fuck about good practices.

        Or say you wish to now scale up a VM, how does your bash script deal with that?

        Do you copy-paste the old script, pass new flags to the Azure CLI, and then run that, then delete the old infrastructure somehow?

        I’m curious because I think I’d like to try your approach.

        • devjab 7 days ago |
          I think you’ve nailed the issues with this approach. I think the best approach to control “cowboy” behaviour is to make everything run through a service connection, so that developers don’t actually need access to your Azure resources. Though to be fair, I’ve never worked with a non-tech enterprise organisation where developers didn’t have at least some access into Azure directly. I also think the best way to avoid dangers in areas like networking is to make sure the responsibility for these is completely owned by IT operations. With VNETs and private DNS zones in place, all you really need to allow is the creation of private endpoints and integration with the network resources. Similarly, I think it’s best practice to have things like key vaults managed by IT operations with limited developer access, but this can depend on your situation.

          One of the things I like about the Azure CLI is that it rarely changes. I would like to clarify that I’m mainly talking about Azure App Services and not VMs. Function apps for most things, web apps for things like APIs.

          As far as the scripts go, they are basically templates which are essentially “copy paste”. One of the things I tend to give developers in these organisations is “skeleton” projects that they can git clone. So they’ll typically also have some internal CLI scripts to automate a lot of the code generation, and an azure-pipelines-resource-creation.yml plays into this. Each part of your resource creation is its own “if not exist” task. So there is a task to create a resource group, then a task to create an app service plan, and so on.
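
          Conceptually the pipeline is nothing more than this (the service connection, names, SKUs and exact az flags below are placeholders from memory, not our real config):

            # azure-pipelines-resource-creation.yml (sketch)
            steps:
              - task: AzureCLI@2
                displayName: Ensure resource group, plan and web app
                inputs:
                  azureSubscription: my-service-connection       # placeholder
                  scriptType: bash
                  scriptLocation: inlineScript
                  inlineScript: |
                    az group create --name my-rg --location westeurope
                    az appservice plan create --name my-plan --resource-group my-rg --is-linux --sku B1
                    az webapp create --name my-app --resource-group my-rg --plan my-plan \
                      --deployment-container-image-name myregistry.azurecr.io/my-app:latest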

          It won’t scale. But it will scale enough for every organisation I’ve worked with.

          To be completely honest, it’s something which grew out of my frustration with repeating the same tasks in different ways over the years. I don’t remember exactly, but I think quite a few of the AZ CLI commands haven’t changed for the past three years. It’s really the one constant across organisations; even the Azure Portal hasn’t remained the same.
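
          To make the “if not exist” pattern concrete, here is a minimal sketch of what such a script can look like; the resource names, location and SKUs are made-up placeholders, and the exact flags depend on the app type and CLI version:

              RG=my-product-rg
              PLAN=my-product-plan
              APP=my-product-api

              # Each step only creates the resource if it doesn't already exist.
              az group exists --name "$RG" | grep -q true || \
                az group create --name "$RG" --location westeurope

              az appservice plan show --name "$PLAN" --resource-group "$RG" >/dev/null 2>&1 || \
                az appservice plan create --name "$PLAN" --resource-group "$RG" --sku B1

              az webapp show --name "$APP" --resource-group "$RG" >/dev/null 2>&1 || \
                az webapp create --name "$APP" --resource-group "$RG" --plan "$PLAN"

              # Scaling up later is just another idempotent command:
              az appservice plan update --name "$PLAN" --resource-group "$RG" --sku P1v3

          Re-running the whole thing is safe, which is also how drift gets handled in this approach: the script is the record of what should exist.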

    • ozim 8 days ago |
      I am running VPSes at our small startup-ish company on IaaS cloud.

      Every time we get a new guy I have to explain that we are already „in cloud”; there is no need to „move to cloud”.

      • rcleveng 8 days ago |
        Do they mean PAAS vs IAAS when they say "move to cloud"?
        • ozim 8 days ago |
          Mostly business guys don’t know the difference, but we are running on a local cloud provider and they think that if it is not on Azure or AWS it is not in the cloud - they understand that we run stuff on servers, but they also don’t understand that a VPS is IaaS.

          Developers want to use PaaS and also AWS or Azure so they can put it on their resume for the future.

          • bluehatbrit 7 days ago |
            > Developers want to use PaaS and also AWS or Azure so they can put it on their resume for the future.

            I think this is a little disingenuous. Developers want to use them because they already know them. The services composing them are also often well documented by the provider.

            I say all of that as someone trying to move a company away from aws, and over to our own hardware.

            • PittleyDunkin 7 days ago |
              I want to use them because I'm not paying for it and managing your own hardware is a pain in the ass I'd rather avoid when I could just code instead. Which is not to say that using your own hardware isn't smart, but it is definitely miserable.
              • regularfry 7 days ago |
                In my experience you swap managing your own hardware for managing intensely obtuse service configurations, pretty much 1:1. That might be preferable but I see a lot of folks approach it like the tradeoff doesn't exist.
              • xorcist 6 days ago |
                But you don't "manage your own hardware" if you are renting VMs, which was the question here.

                Also managing a cloud infrastructure is a lot more complex than running Debian and Ansible on a VM.

                • PittleyDunkin 6 days ago |
                  Sorry, I think I must have miscommunicated. I was communicating why I was arguing for using the cloud.
    • misswaterfairy 8 days ago |
      I hate "cloud shit" as well, though specifically that there's a vendor-specific 'app', or terminology, or both, for everything that we've had standard terms for, for decades.

      I just want a damn network, a couple of virtual machines, and a database. Why does each <cloud provider> have to create different fancy wrappers over everything, that not even their own sales consultants, or even their engineers, understand? (1)

      What I do like about Docker and Kubernetes is that shifting from one cloud provider to another, or even back to on-premises (I'm waiting for the day our organisation says "<cloud-provider> is too damn expensive; those damn management consultants lied to us!!!!") is a lot easier than re-building <cloud provider>'s bespoke shit in <another cloud provider>'s bespoke shit, or back on-premises with real tech (the right option in my opinion for anyone less than a (truly) global presence).

      I do like the feel of, and being able to touch bare metal, though the 180-proof-ether-based container stuff is nice for quick, flexible, and (sometimes) dirty. Especially for experimenting when the Directors for War and Finance (the same person) say "We don't have the budget!! We're not buying another server/more RAM/another disk/newer CPUs! Get fucked!".

      The other thing about Docker specifically I like is I can 'shop' around for boilerplate templates that I can then customise without having to screw around manually building/installing them from scratch. And if I fuck it up, I just delete the container and spin up another one from the image.

      (1) The answer is 'vendor lock-in', kids.

      (I apologise, I've had a looooooong day today.......)

      • yungporko 7 days ago |
        agreed, cloud shit can fuck off. they market themselves as the solution to problems you will never have so that you build your shit on their land and they can charge you rent, and most people eat it up and think they're doing them a favour.
    • fragmede 8 days ago |
      I'll let you in on the joke. The joke is the demand for 100% availability and instant gratification. We're making services where anything less than four nines, which is under five minutes of downtime a month, is deemed unacceptable. Three nines is about 10 minutes a week. Two nines is about 15 minutes a day. There are some things that are important enough that you can't take a coffee break and wait for them, but Kubernetes lets you push four nines of availability, no problem. Kubernetes is solving for that level of availability, but my own body doesn't have anywhere near that level of availability. Demanding it from everything and everyone else is what pushes for Kubernetes-level complexity.
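
      The arithmetic, assuming a 30-day month (a quick sketch):

          awk 'BEGIN {
            printf "four nines  (99.99%%): %.1f min/month\n", (1 - 0.9999) * 30 * 24 * 60
            printf "three nines (99.9%%):  %.1f min/week\n",  (1 - 0.999)  *  7 * 24 * 60
            printf "two nines   (99%%):    %.1f min/day\n",   (1 - 0.99)   *      24 * 60
          }'
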
      • kitd 7 days ago |
        Are you the only one waiting on your app if it goes down?
      • EVa5I7bHFq9mnYK 7 days ago |
        4 nines is a lie. That's not what I experience as a user. I experience 403, 404, 405, 500, or other forms of down, such as difficult captchas, 2FA not working, geolocation blockages, etc., several times a day.
      • spacebanana7 7 days ago |
        Most outages these days are also caused by "devops errors" that are much more likely to happen when working with complex infrastructure stacks. So there's real value in keeping things simple.
      • karmarepellent 7 days ago |
        It's a matter of evaluating what kind of infrastructure your application needs to run on. There are certainly mission-critical systems where even a sliver of downtime causes real damage, like lost revenue. If you come to the conclusion that this application and everything it involves had better run on k8s for availability reasons, you should probably focus on that and code your application in a k8s-friendly manner.

        But there are tons of applications that run on over-engineered cloud environments that may or may not involve k8s and probably cost more to operate than they should. I use some tools every day where a daily 15 min downtime would not affect me or my work in the slightest. I am not saying this would be desirable per se. It's just that a lot of people (myself included) are happy to spend an hour of their work day talking to colleagues and drinking coffee, but a 15 min downtime of some tool is seen as an absolute catastrophe.

      • tombert 7 days ago |
        I agree with you in theory, and maybe even in practice in some cases, but I do not feel like we have less downtime with k8s than using anything else.
    • cess11 8 days ago |
      Once it gets hard to run the needed services for all the applications in the organisation/team on your development machine, it'll start to look more attractive to turn to Docker/Podman and the like, and that's when automatic tests and deploys built on disgusting YAML start to make more sense.

      I've been at a place where the two main applications were tightly coupled with several support systems and they all were dependent on one or more of Oracle DB, Postgres, Redis, JBoss, Tibco EMS and quite a bit more. Good luck using your development device to boot and run the test suite without containers. Before that team started putting stuff in containers they used the CI/CD environment to run the full set of tests, so they needed to do a PR, get it accepted, maybe wait for someone else's test run to finish, then watch it run, and if something blew, go back to commit, push to PR, &c. all over again.

      Quite the nuisance. A full test suite had a run time of about an hour too. When I left we'd pushed it to forty minutes on our dev machines. They didn't use raw Kubernetes though, they had RedHat buy-in and used OpenShift, which is a bit more sane. But it's still a YAML nightmare that cuts you with a thousand inscrutable error messages.

    • teekert 8 days ago |
      Sorry to have to tell you this, but you’re old. Your neural plasticity has gone down and you feel like you have seen it all before. As a result you cling to the old and never feel like you grasp the new. The only reasonable thing to do is to acknowledge and accept this and try not to let it get in your way.

      Our generation has seen many things before, but at the same time the world has completely changed, and it’s led to the people growing up in it being different.

      You and I don’t fully grasp CPUs anymore. Some people today don’t grasp all the details of the abstractions below K8s anymore and use it when perhaps something simpler (in architecture, not necessarily in use!) could do it better. And yet, they build wondrous things. Without editing php.ini and messing up 2 services to get one working.

      Do I think K8s is the end-all? Certainly not, I agree it’s sometimes overkill. But I bet you’ll like its follow-up tech even less. It is the way of things.

      • tedk-42 8 days ago |
        > Is K8s the end all? Certainly not, I agree it’s sometimes overkill. But I bet you’ll like its follow-up tech even less. It is the way of things.

        I agree with your analysis.

        People wanna talk up about how good the old days were plugging cables into racks but it's really laborious and can take days to debug that a faulty network switch is the cause of these weird packet drop issues seen sporadically on hot days.

        Same as people saying 'oh yeah calculators are too complicated, pen and paper is what kids should be learning'.

        It's the tide of change.

        • tombert 7 days ago |
          I reject that comparison. I'm not really resistant to change, I'm resistant to the awful bureaucratic crap that k8s and its ilk force you to use. It's not fun, and as far as I can tell no one actually understands any of it (young or old); they just copy and paste large blocks of YAML from blogs and documentation and then cross their fingers.

          I'm not saying that plugging in cables and hoping power supplies don't die is "better" in any kind of objective sense, or even subjective sense really, I'm just saying that I hate this cloud cult that has decided that the only way to do anything is to add layers of annoying bureaucratic shit.

      • creesch 7 days ago |
        While this is a nice essay, it is also a purely emotional argument held together by assumptions and fallacies.

        Even if you are right in this instance, just brushing things off with the "you are old" argument will ensure that you end up in some horrible tech debt spaghetti mess in the future.

        Being critical of the infrastructure you deploy to is a good thing. Because for all the new things that do stick around, there are dozens of other shiny new hyped up things that end up in digital purgatory quite soon after the hype phase is over.

        That's not to say there isn't some truth to your statement. The older you get, the more critical you do need to be of yourself as well. Because it is indeed possible to be against something just because it is new and unfamiliar. At the same time, experience does provide insights that allow senior people to be more critical of things.

        *tl;dr:* The world is complicated, not binary.

        • teekert 7 days ago |
          Well, I fully agree with you. Perhaps the -hate “cloud shit”- remark triggered me a bit. It's just such a 'throw the baby out with the bathwater', curmudgeon thing to say. And, imho, it betrays age. It's like my father-in-law saying he hates all this digital stuff: "I will never put email on my phone because with emails come viruses." (A literal thing he always claims, and perhaps he's not even wrong; he just stopped using new things, hating and resisting change. He has that right of course. And to be fair, with his knowledge level it's perhaps even good not to have email on his phone. But it's getting more difficult, i.e. he refuses our national Digital ID, making his life a lot harder in the process, especially because he also resists help, too proud.) It's good to recognize this in oneself though, imho.
          • tombert 7 days ago |
            I don't think it betrays age really, I just think that a lot of this stuff with AWS and Azure and GCP is overly complicated. I am not convinced anyone actually enjoys working on it. I'm pretty sure that 21 year old me would have roughly the same opinion.

            As I said in a sibling comment, you can genuinely get a bachelor's degree in AWS or Azure [1], meaning that it's complicated enough that someone thought it necessitated an entire college degree.

            By "cloud shit", I don't mean "someone else hosting stuff" (which I tried to clarify by saying "give me a VM" at the end). I mostly think that having four hundred YAML files to do anything just sucks the life out of engineering. It wouldn't bother me too much if these tasks were just handled by the people who run devops, but I hate that since I am a "distributed systems engineer" that I have to LARP as an accountant and try and remember all this annoying arbitrary bureaucratic crap.

            [1] https://www.wgu.edu/online-it-degrees/cloud-computing-bachel...

      • tombert 7 days ago |
        I'm 33 dude, not exactly "old".

        I never really liked the devops stuff even when I was 20. I have no doubt that I could get better with k8s, but it's decidedly not fun.

    • bob1029 7 days ago |
      Cloud is a broad spectrum of tools. I think the Goldilocks zone is something right in between bare-ass VMs and FaaS.

      If I can restart the machine and it magically resolves the underlying hardware fault (presumably by migrating me to a new host), then I am in a happy place.

      Most of the other problems can be dealt with using modern tooling. I lean aggressively on things like SQLite and self-contained deployments in .NET to reduce complexity at hosting time.

      When you can deploy your entire solution with a single zip file you would be crazy to go for something like K8s.
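
      As a rough sketch of what that looks like (the project name, runtime identifier and output paths are placeholders):

          # Build one self-contained executable, then ship the folder as a single archive.
          dotnet publish MyApp.csproj -c Release -r linux-x64 \
            --self-contained true -p:PublishSingleFile=true -o ./publish
          zip -r myapp.zip ./publish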

      One other cloud thing that is useful is S3-style services. Clever use of these APIs can provide incredible horizontal scalability for a single instance solution. SQLite on NVMe is very fast if you are offloading your blobs to another provider.

    • grishka 7 days ago |
      > If I get to tens of millions of users, maybe I’ll worry about it then.

      Nope, then you'll set up sharded databases and a bunch of application servers behind a load balancer.

    • raffraffraff 7 days ago |
      I'm sure you can get some of the handy extras that come with a typical kubernetes deployment without the kubernetes, but overall I'll take kubernetes + cloud. Once you've got the hang of it, it's ok. I have a terraform project that deploys clusters with external-dns, external-secrets, cert-manager, metrics, monitoring stack, scalers and FluxCD. From there, pretty much everything else is done via FluxCD (workloads, dashboards, alerts). And while I detest writing helm charts (and sometimes using them, as they can get "stuck" in several ways), they do allow you to wrap up a lot of the kubernetes components into a single package that accepts more-or-less standardized yaml for stuff like resource limits, annotations (e.g. for granting an AWS role to a service) etc. And FluxCD .postBuild is extremely handy for defining environment vars to apply to more generic template yaml, so we avoid sprawl. So much so that I am the one-man-band (Sys|Dev|Sec)Ops for our small company, and that doesn't give me panic attacks.

      The cloud integration part can be hairy but I have terraform patterns that, once worked out, are cookie cutter.

      With cloud kubernetes, I can imagine starting from scratch, taking a wrong turn and ending up in hell.

      But I'm exchanging one problem set for another. Having spent years managing fleets of physical and virtual servers, I'm happier and more productive now. I never need to worry about building systems or automation for doing OS build / patching, config management, application packaging and deployment, secrets management, service discovery, external DNS, load balancing, TLS certs etc. Because while those are just "words" now, back then each one was a huge project involving multiple people fighting over "CentOS Vs Ubuntu", "Puppet Vs Ansible", "RPMs Vs docker containers", "Patching Vs swapping AMIs". If you're using Consul and Vault, good luck - you have to integrate all of that into whatever mess you've built, and you'll likely have to write puppet code and scripts to hook it all up together. I lost a chunk of my life writing 'dockerctl' and a bunch of puppet code that deployed it so it could manage docker containers as systemd services. Then building a vault integration for that. It worked great across multiple data centers, but took considerable effort. And in the end it's a unique snowflake used by exactly one company, hardly documented and likely full of undiscovered bugs and race conditions even after all the hard work. The time it took to onboard new engineers was considerable and it took time away from an existing engineer. And we still had certificates expire in production.

    • datadeft 7 days ago |
      One thing that might help you in this madness is:

      https://github.com/dhall-lang/dhall-kubernetes

      Type-safe, fat-finger-safe representation of your YAML is grossly underrated.

    • friendzis 7 days ago |
      In my limited experience k8s is treated as some kind of magic instead of what it is: a resource virtualization platform. Sort of one (or two) steps above hardware virtualization platforms (e.g. esxi).

      The fact that vmware can migrate a running vm to a different hardware node is surely a powerful feature. Do you want to pay for that with complexity? If you are one infra team serving on-prem deployments with consistent loads, you provision things when new projects are started and when things break. However, if the infra team serves internal product teams, it is nice to give them certain guarantees and limit the blast radius of how they can affect other teams.

      This is where kubernetes sits. It's a deployment platform where the deployable is a container image instead of a VM image. Slice off a namespace to an internal team and have their deployment blast radius contained within their namespace.
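
      For illustration, the "slice off a namespace" part is roughly this; the team name and limits are made up:

          kubectl create namespace team-a
          # Cap what the team can consume so a runaway deployment stays contained.
          kubectl create quota team-a-quota --namespace team-a \
            --hard=requests.cpu=8,requests.memory=16Gi,pods=40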

      Do you need such flexibility? I'm pretty sure that roughly 99% of all teams do not. A static set of VMs provisioned with some container orchestration system is more than enough for them. Loads and deployments are too static for it to matter.

      >>> But it allows seamless upgrades!

      Dude or dudette, if you can't properly drain your nodes and migrate sessions now, kubernetes will not save you.

    • torginus 7 days ago |
      Kubernetes starts sucking at first sight, and it is kind of a sign of the ailments of modern times.

      Let me try to explain:

      First, you encounter the biggest impedance mismatch between cloud and on prem: Kubernetes works with pods, while AWS works with instances as the unit of useful work, so they must map to each other, right?

      Wrong: first, each instance needs to run a Kubernetes node, which duplicates the management infrastructure hosted by AWS and reduces granularity. Like, if I need 4 cores for my workload, I start an instance with 4 cores, right? Not so with k8s: you have to start up a big node, then schedule pods there.

      Yay, extra complexity and overhead! It's like when you need 3 carrots for the meal you're cooking, but the store only sells them in packs of 8: you have to pay for the ones you don't need, and then figure out how to utilize the waste.

      I'm not even going to talk about on-prem kubernetes, as I've never seen anyone IRL use it.

      Another thing is that the need for kubernetes is a manifestation of crappy modern dev culture. If you wrote your server in Node, Python or Ruby, you're running a single-threaded app in the era of even laptops having 32-core CPUs. And individual instances are slow, so you're even more dependent on scaling.

      So, to make use of the extra CPU power, you're forced to turn to Docker/k8s and scale your infra that way, whereas if you went with something like Go, or god forbid, something as deeply uncool as ASP.NET, you could just use your 32-core server, and you get fast single-threaded perf, and perfect and simple multi-threaded utilization out of the box without any of the headaches.

      Also I've found stuff like rolling updates to be a gimmick.

      Also a huge disclaimer, I don't think k8s is a fundamentally sucky or stupid thing, it's just I've never seen it used as a beneficial architectural pattern in practice.

      • erinaceousjones 7 days ago |
        Being nitpicky about Python specifically, Python is not necessarily single-threaded; gunicorn gets you a multiprocess HTTP/WSGI server if you configure it to. asyncio and gevent have made it easier to do things actually-concurrently. Judicious use of generator functions lets you stream results back instead of blocking I/O for big chunks. And we have threads. Yeah, the Global Interpreter Lock is still hanging around, and it's not the fastest language, but there are ways to produce a Docker image of an API written in Python which can handle thousands of concurrent HTTP requests and actively use all available CPU cores to do intensive computation.
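
        For example, a common way to get a multi-process, async-worker server in front of a Python app looks something like this (the module path and worker settings are illustrative, and the gevent worker class needs the gevent package installed):

            # One worker process per core; the gevent class lets each worker
            # juggle many concurrent connections.
            gunicorn myapp.wsgi:application \
              --workers "$(nproc)" --worker-class gevent \
              --bind 0.0.0.0:8000
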
        • zelphirkalt 7 days ago |
          Here I thought you would bring up multiprocessing and process pools in Python.
      • regularfry 7 days ago |
        It really, really wants a higher-level abstraction layer over the top of it. I can see how you'd build something heroku-like with it, but exposing app developers to it just seems cruel.
        • torginus 7 days ago |
          Thing is, k8s is already an abstraction layer on top of something like AWS or GCP.
          • regularfry 7 days ago |
            Yes, and that's fine. I can see the need for the things k8s adds (well, mostly). I just don't think there's any value at all in the average dev having to care about them.
        • switch007 7 days ago |
          > I can see how you'd build something heroku-like with it

          That's what many teams end up half-arsing without realising they're attempting to build a PaaS.

          They adopt K8S thinking it's almost 90% a PaaS. It's not.

          They continue hiring, building a DevOps team just to handle K8S infrastructure things, then a Platform team to build the PaaS on top of it.

          Then because so many people have jobs, nobody at this point wants to make an argument that perhaps using an actual PaaS might make sense. Not to mention "the sunk cost" of the DIY PaaS.

          Then on top of that, realising they've built a platform mostly designed for microservices, everything then must become a microservice. 'Monolith' is a banned word.

          Argh

      • ndjdjddjsjj 7 days ago |
        Run a node process on each core?
    • jmb99 7 days ago |
      I have pretty much the exact same opinion, right down to “cloud shit” (I have even used that exact term at work to multiple levels of management, specifically “I’m not working on cloud shit, you know what I like working on and I’ll do pretty much anything else, but no cloud shit”). I have worked on too many projects where all you need is a 4-8 vCPU VM running nginx and some external database, but for some reason there’s like 30 containers and 45 different repos and 10k lines of terraform, and the fastest requests take 750ms for what should be nginx->SQL->return in <10ms. “But it’s autoscaling and load balanced!” That’s great, we have max 10k customers online at once and the most complex requests are “set this field to true” and “get me an indexed list.” This could be hosted on a raspberry pi with better performance.

      But for some reason this is what people want to do. They would rather spend hours debugging kubernetes, terraform, and docker, and spend 5 digits on cloud every month, to serve what could literally be proxied, authenticated DB lookups. We have “hack days” a few times a year, and I’m genuinely debating rewriting the entire “cloud” portion of our current product in gunicorn or something, hosting it on a $50/month VPS, pointing it at a mirror of our prod DB, and seeing how many orders of magnitude of performance I can knock off in a day.

      I’ve only managed to convert one “cloud person” to my viewpoint but it was quite entertaining. I was demoing a side project[0] that involved pulling data from ~6 different sources (none hosted by me), concatenating them, deduping them, doing some math, looking up an image in a different source (unique to each displayed item), and then displaying the list of final items with images in a list or a grid. ~5k items. Load time on my fibre connection was 200-250ms, sorting/filtering was <100ms. As I was demoing this, a few people asked about the architecture, and one didn’t believe that it was a 750-line Python file (using multiprocessing, admittedly) hosted on an 8-core VPS until I literally showed him. He didn’t believe it was possible to have this kind of performance in a “legacy monolithic” (his words) application.

      I think it’s so heavily ingrained in most cloud/web developers that this is the only option that they will not even entertain the thought that it can be done another way.

      [0] This particular project failed for other reasons, and is no longer live.

      • jiggawatts 7 days ago |
        Speaking of Kubernetes performance: I had a need for fast scale-out for a bulk testing exercise. The gist of it was that I had to run Selenium tests with six different browsers against something like 13,000 sites in a hurry for a state government. I tried Kubernetes, because there's a distributed Selenium runner for it that can spin up different browsers in individual pods, even running Windows and Linux at the same time! Very cool.

        Except...

        Performance was woeful. It took forever to spin up the pods, but even once things had warmed up everything just ran in slow motion. Data was slow to collect (single-digit kilobits!), and I even saw a few timeout failures within the cluster.

        I gave up and simply provisioned a 120 vCPU / 600 GB memory cloud server with spot pricing for $2/hour and ran everything locally with scripts. I ended up scanning a decent chunk of my country's internet in 15 minutes. I was genuinely worried that I'd get put on some sort of "list" for "attacking" government sites. I even randomized the read order to avoid hammering any one site too hard.

        Kubernetes sounds "fun to tinker with", but it's actually a productivity vampire that sucks up engineer time.

        It's the Factorio of cloud hosting.

        • pdimitar 7 days ago |
          > I gave up and simply provisioned a 120 vCPU / 600 GB memory cloud server with spot pricing for $2/hour and ran everything locally with scripts. I ended up scanning a decent chunk of my country's internet in 15 minutes.

          Now that is a blog post that I would read with interest, top to bottom.

          • jiggawatts 7 days ago |
            It was the “boring” solution so I don’t know what I could write on the topic!

            Both Azure and AWS have spot-priced VMs that are “low priority” and hence can be interrupted by customers with normal priority VM allocation requests. These have an 80% discount in exchange for the occasional unplanned outage.

            In Azure there is an option where the spot price dynamically adjusts based on demand and your VM basically never turns off.

            The trick is that obscure SKUs have low demand and hence low spot prices and low chance of being taken away. I use the HPC optimised sizes because they’re crazy fast and weirdly cheap.

            E.g.: right now I’m using one of these to experiment with reindexing a 1 TB database. With 120 cores (no hyperthreading!) this goes fast enough that I can have a decent “inner loop” development experience. The other trick is that even Windows and SQL Server are free if this is done in an Azure Dev/Test subscription. With free software and $2/hr hardware costs it’s a no-brainer!
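
            In Azure CLI terms, grabbing one of these spot boxes is roughly a single command; the names, image alias and SKU here are examples, and availability varies by region:

                az vm create --resource-group scratch-rg --name bigbox \
                  --image Ubuntu2204 --size Standard_HB120rs_v3 \
                  --priority Spot --eviction-policy Deallocate --max-price -1
                # --max-price -1 caps the spot price at the normal pay-as-you-go rate,
                # so the VM only gets evicted for capacity, not price.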

            • pdimitar 7 days ago |
              Well I mostly meant how do you supply the server resources and how do you crawl so much of the net so quickly. :)

              I thought about it many times but never did it on that scale, plus was never paid to do so and really didn't want my static IP banned. So if you ever write on that and publish it on HN you'd find a very enthusiastic audience in me.

              • jiggawatts 7 days ago |
                That was pretty boring too! The "script" was just a few hundred lines of C# code triggering Selenium via its SDK. The requirement was simply to load a set of URLs with two different browsers, an "old" one and a "new" one that included a (potentially) breaking change to cookie handling that the customer needed to check for across all sites. I didn't need to fully crawl the sites, I just had to load the main page of each distinct "web app" twice, but I had to process JavaScript and handle cookies.

                I did this in two phases:

                Phase #1 was to collect "top-level" URLs, which I did via Certificate Transparency (CT). There's online databases that can return all valid certs for domains with a given suffix. I used about a dozen known suffixes for the state government, which resulted in about 11K hits from the CT database. I dumped these into a SQL table as the starting point. I also added in distinct domains from load balancer configs provided by the customer. This provided another few thousand sites that are child domains under a wildcard record and hence not easily discoverable via CT. All of this was semi-manual and done mostly with PowerShell scripts and Excel.

                Phase #2 was the fun bit. I installed two bespoke builds of Chromium side-by-side on the 120-core box, pointed Selenium at both, and had them trawl through the list of URLs in headless mode. Everything was logged to a SQL database. The final output was any difference between the two Chromium builds. E.g.: JS console log entries that are different, cookies that are not the same, etc...

                All of this was related to a proposed change to the Public Suffix List (PSL), which has a bunch of effects on DNS domain handling, cookies, CORS, DMARC, and various other things. Because it is baked into browser EXEs, the only way to test a proposed change ahead of time is to produce your own custom-built browser and test with that to see what would happen. In a sense, there's no "non-production Internet", so these lab tests are the only way.

                Actually, the most compute-intensive part was producing the custom Chromium builds! Those took about an hour each on the same huge server.

                By far the most challenging aspect was... the icon. I needed to hand over the custom builds to web devs so that they could double-check the sites they were responsible for, and it was also needed for internal-only web app testing. The hiccup was that the two builds looked the same and ended up with overlapping Windows taskbar icons! Making them "different enough" that they don't share profiles and have distinct toolbar icons was weirdly difficult, especially the icon.

                It was a fun project, but the most hilarious part was that it was considered to be such a large-scale thing that they farmed out various major groups of domains to several consultancies to split up the work effort. I just scanned everything because it was literally simpler. They kept telling me I had "exceeded the scope", and for the life of me I couldn't explain to them that treating all domains uniformly is less work than trying to determine which one belongs to which agency.

                • pdimitar 7 days ago |
                  EXTREMELY nice. Wish I was paid to do that. :/
                  • jiggawatts 7 days ago |
                    So do I! :(

                    I only get a "fun" project like this once every year or two.

                    Selling this kind of thing is basically impossible. You can't convince anyone that you have an ability that they don't even understand, at some fundamental level.

                    At best, you can incidentally use your full set of skills opportunistically, but that's only possible for unusual projects. Deploying a single VM for some boring app is always going to be a trivial project that anyone can do.

                    With this project even after it was delivered the customer didn't really understand what I did or what they got out of it. I really did try to explain, but it's just beyond the understanding of non-technical-background executives that think only in terms of procurement paperwork and scopes of works.

      • throwaway2037 7 days ago |
        These comments:

            > He didn’t believe it was possible to have this kind of performance in a “legacy monolithic” (his words) application.
        
            > I think it’s so heavily ingrained in most cloud/web developers that this is the only option that they will not even entertain the thought that it can be done another way.
        
        One thing that I need to remind myself of periodically: The amount of work that a modern 1U server can do in 2024 is astonishing.
        • HPsquared 7 days ago |
          It's nice to think about the amount of work done by game engines, for instance. Factorio is a nice example, or anything graphics-heavy.
        • sgarland 7 days ago |
          Hell, the amount of work that an OLD 1U can do is absurd. I have 3x Dell R620s (circa-2012), and when equipped with NVMe drives, they match the newest RDS instances, and blow Aurora out of the water.

          I’ve tested this repeatedly, at multiple companies, with Postgres and MySQL. Everyone thinks Aurora must be faster because AWS is pushing it so hard; in fact, it’s quite slow. Hard to get around physics. My drives are mounted via Ceph over Infiniband, and have latency measured in microseconds. Aurora (and RDS for that matter) has to traverse much longer physical distances to talk to its drives.

      • mschuster91 7 days ago |
        I mostly agree with you, but at least using Docker is something one should be doing even if one is on bare metal.

        Pure bare metal IME only leads to people ssh'ing in to hotfix something and forgetting to deploy it. Exclusively using Docker images prevents that. Also, it makes firewall management much, much easier, as you can control each container's network connectivity (including egress) on its own; on a bare-metal setup it involves loads of work with network namespaces and fighting the OS-provided systemd unit files.
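
        As a sketch of the egress point (network and image names are made up): a container attached only to an --internal network has no route to the outside, and you grant connectivity per container:

            docker network create --internal backend   # no outbound internet from here
            docker network create frontend
            docker run -d --name api --network backend my-api-image
            # Only the proxy gets a path to the outside world:
            docker run -d --name proxy --network frontend -p 443:443 my-proxy-image
            docker network connect backend proxy       # proxy can now reach the api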

    • mrweasel 7 days ago |
      I wouldn't throw "cloud shit" in the same bucket as Kubernetes. Professional cloud services are mostly great, but super expensive.

      Kubernetes is interesting, because it basically takes everything you know and sort of pushes it down the stack (or up, depending on your viewpoint). To some extent I get the impression that the idea was: wouldn't it be great if we took all the infrastructure stuff and just merged the whole thing into one tool, which you can configure using a completely unsuitable markup language? The answer is that "No, that would in fact not be great".

      For me the issue is that Kubernetes is overused. You can not really argue that it's not useful or doesn't have its place, but that place is much, much smaller than the Internet wants us to believe. It's one of the few services where I feel like "Call us" would be an appropriate sales method.

      The article is correct, you probably don't need Kubernetes. It's an amazing piece of technology, but it's not for the majority of us. It is and should be viewed as a niche product.

      • nixpulvis 7 days ago |
        What do you think its niche is?
        • mrweasel 7 days ago |
          Very large "single product" services, the focus being on "very large". It would also be relevant in the cases where your product is made up of a large number of micro-services, though if you have more than 50 services to support your main product I'd be interested in knowing why you have that many services. There might be completely sane reasons, but it is a special case.

          Mostly a question of scale to me. I'd guess that the majority (80-90%) of people running Kubernetes don't have large enough scale that it makes sense to take on the extra complexity. Most Kubernetes installations I've seen run on VMs, three for the control plane and 2-5 worker nodes, and I don't think the extra layer is a good trade-off for a "better" deployment tool.

          If you do use Kubernetes as a deployment tool, then I can certainly understand that. It is a reasonably well-known, and somewhat standardised interface and there's not a lot of good alternatives for VMs and bare metal. Personally I'd just much rather see better deployment tools being developed, rather than just picking Kubernetes because Helm charts are a thing.

          You'd also need to have a rather dynamic workload, in the sense that some of your services need a lot of capacity at one point in time, while others need it at another time. If you have constant load, then why?

          It's like Oracle's Exadata servers, it's a product that has its place, but the list of potential buyers isn't that long.

    • kortilla 7 days ago |
      All of these anecdotes seem to come from people who don’t bother to try to learn kubernetes.

      > YAML files, and then spend a day fixing them by copy-pasting increasingly-convoluted things on stackexchange.

        This is terrible behavior. It's not any different from yanking out pam modules because you're getting SSH auth failures caused by a bad permission on an SSH key.

      > If I get to tens of millions of users, maybe I’ll worry about it then.

      K8s isn’t there for 10s of millions of users. It’s there so you’re not dependent on some bespoke VM state. It also allows you to do code review on infra changes like port numbers being exposed, etc.

      Separately, your VM likely isn’t coming from any standard build pipeline so now a vulnerability patch is a login to the machine and an update, which hopefully leaves it in the same state as VMs created new…

      Oh, and assuming you don’t want to take downtime on every update, you’ll want a few replicas and load balancing across them (or active/passive HA at a minimum). Good luck representing that as reviewable code as well if you are running VMs.

      The people that don’t understand the value prop of infra as code orchestration systems like k8s tend to work in environments where “maintenance downtime” is acceptable and there are only one or two people that actually adjust the configurations.

      • secondcoming 7 days ago |
        Just because you're using VMs doesn't mean you're now dealing with state.

        It's 100% possible to have stateless VMs running in an auto-scaling instance group (in GCP speak, I forget what AWS calls them)
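
        (AWS calls them Auto Scaling groups.) A minimal sketch in gcloud terms, with made-up names and sizes:

            # Immutable template + managed instance group = stateless, replaceable VMs.
            gcloud compute instance-templates create web-template \
              --machine-type=e2-standard-4 \
              --image-family=debian-12 --image-project=debian-cloud
            gcloud compute instance-groups managed create web-group \
              --template=web-template --size=3 --zone=europe-west1-b
            gcloud compute instance-groups managed set-autoscaling web-group \
              --zone=europe-west1-b --min-num-replicas=3 --max-num-replicas=10 \
              --target-cpu-utilization=0.6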

        • everfrustrated 7 days ago |
          In the beginning AWS didn't even support state on their VMs! All VMs were ephemeral with no state persistence when terminated. They later introduced EBS to allow for the more classic enterprise IT use cases.
        • kortilla 7 days ago |
          Once you have the tools to manage all of that, you effectively have kubernetes. Container vs VM is largely irrelevant to what the op is complaining about when it comes to k8s.

          People that don’t like k8s tend to be fine with docker. It’s usually that they don’t like declarative state or thinking in selectors and other abstractions.

          • pdimitar 4 days ago |
            Quite the contrary, I support declarative configuration and code-reviewable infrastructure changes but k8s is just too much for me.

            I paired with one of our platform engineers several months ago. For a simple app that listens on Kafka, stores stuff in PostgreSQL and only has one exposed port... and that needed at least 8 YAML files. Ingress, service ports and whatever other things k8s feels should be described. I forgot almost all of them the same day.

            I don't doubt that doing it every day will have me get used to it and even find it intuitive, I suppose. But it's absolutely not coming natural to me.

            I'd vastly prefer just a single config block with a declarative DSL in it, a la nginx or Caddy, and describe all these service artifacts in one place. (Or similar to a systemd service file.)

            Too many files. Fold stuff in much less of them and I'll probably become an enthusiastic k8s supporter.

      • tombert 7 days ago |
        Sure, because Kubernetes is convoluted and not fun and is stupidly bureaucratic. I might learn to enjoy being kicked in the balls if I practiced enough but after the first time I don't think I'd like to continue.

        > This is terrible behavior. Its not any different from yanking out pam modules because you’re getting SSH auth failures caused by a bad permission on an SSH key.

        Sure, I agree, maybe they should make the entire process less awful then and easier to understand. If they're providing a framework to do distributed systems "correctly" then don't make it easy for someone whose heart really isn't into it to screw it up.

        > K8s isn’t there for 10s of millions of users. It’s there so you’re not dependent on some bespoke VM state. It also allows you to do code review on infra changes like port numbers being exposed, etc.

        That's true of basically any container stuff or orchestration stuff, but sure.

          Kubernetes just strikes me as a "tool to make it look like I'm doing a lot of work". I have similar complaints with pretty much all Java before Java ~17 or so.

        I'm not convinced that something like k8s has to be as complicated as it is.

        • kortilla 7 days ago |
          > Sure, because Kubernetes is convoluted and not fun and is stupidly bureaucratic.

          Describe what you think bureaucratic means in a tool.

          > I might learn to enjoy being kicked in the balls if I practiced enough

          This is the same thing people say who don’t want to learn command line tools “because they aren’t intuitive enough”. It’s a low brow dismissal holding you back.

          • tombert 6 days ago |
            When I say “bureaucratic”, I mean having to edit multiple files for something that doesn’t seem like it should be very complicated.
      • xorcist 6 days ago |
        > It’s there so you’re not dependent on some bespoke VM state. It also allows you to do code review on infra changes like port numbers being exposed

        That's simply not true.

        Every Kubernetes cluster I have seen and used gives a lot more leeway for the runtime state to change than a basic Ansible/Salt/Puppet configuration, just due to the sheer number of components involved. Everything from Terraform to Istio and ArgoCD changes in its own little unique way, with its own possibilities for state changes.

        Following GitOps in the Kubernetes ecosystem is something that requires discipline.

        > environments where “maintenance downtime” is acceptable and there are only one or two people that actually adjust the configurations

        Yes, because before Kubernetes that was how all IT was done? A complete clown show, amirite?

    • PittleyDunkin 7 days ago |
      Isn't using VMs or servers just cloud shit in a different form? How does that fix anything? Orchestrating software (or hardware) is a pain in the ass whether it's in your closet or someone else's.
    • andyjohnson0 7 days ago |
      Kubernetes was developed by Google, and it seems to me it's a classic example of a technology that can work well in a resource-rich (people, money) environment. But in an average business it becomes a time/money sink quite quickly.

      Way too many businesses cargo-cult themselves into thinking that they need FAANG-class infra, even though they haven't got the same scale or the same level of resourcing. Devs and ops people love it because they get to cosplay and also get the right words on their CV.

      If you're not Google-scale then, as you say, a few VMs or physical boxes are all you need for most systems. But it's not sexy, so the business needs people who don't mind that.

      • xorcist 6 days ago |
        K8s is absolutely not Google-scale. Not even close. The scheduler is not built for it.
    • Woeps 7 days ago |
      I don't mind K8s, and it can be useful if you just stick to the core essentials. But I do agree that there seems to be the sentiment: "Everybody has to like it!"

      And I mostly think that this is because of our collective bias that everything Big Tech does is good. While often it just depends. Just because Google does X or Y doesn't mean it will work for everybody else.

    • wordofx 7 days ago |
      I’m confused. Do you hate cloud or the container shit? Cloud is awesome. Containers are dumb as hell for 99% of the applications hacker news kids work on.
      • tombert 7 days ago |
        I don't know what "cloud" means. I don't really like Kubernetes, and I don't really like the AWS and GCP systems for doing everything. The fact that WGU offers a bachelors degree in Cloud Computing with focuses on AWS or Azure should be telling. [1]

        Containers don't bother me that much on their own. I just feel like with k8s and its ilk I end up spending so much time futzing with weird YAML and trying to remember differences between stateful sets and services and fighting with stuff because I missed the thing that mounts a disk so my app will break in three hours.

        [1] https://www.wgu.edu/online-it-degrees/cloud-computing-bachel...

    • chromanoid 7 days ago |
      IMO the actual experience you want is something as simple as those old-school PHP-based "serverless" web hosts. You just upload your stuff, maybe configure a database URL, and that's it.

      Regarding AWS, function.zip Lambda + DynamoDB + S3 + SQS is basically this at "enterprise-grade".
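
      The function.zip part really is about this small (the function name, runtime and IAM role ARN below are placeholders):

          zip function.zip app.py
          aws lambda create-function --function-name my-fn \
            --runtime python3.12 --handler app.handler \
            --role arn:aws:iam::123456789012:role/my-lambda-role \
            --zip-file fileb://function.zip
          # Subsequent deploys are a single call:
          aws lambda update-function-code --function-name my-fn \
            --zip-file fileb://function.zip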

      Now you have the in-between left, where you want enterprise-grade (availability, scaling and fault tolerance) but with a catch (like lower costs, more control, data sovereignty, or things that are not as serverless as you want, e.g. a search engine etc.). In these cases you will run into many trade-off decisions that may lead you to K8s, cloud-specific stacks, or simple VM ClickOps / ShellOps setups.

      As long as you can still be on the pets rather than cattle end of the problem range, K8s is not what you want. But as soon as you want cattle, reproducible throw-away online environments etc., "just have a VM or a physical server" will become a "build your own private cloud" solution that may or may not turn into a bad private cloud / K8s.

    • antihero 7 days ago |
      I think a good compromise is building your applications in a containerised manner so that you can simply run them with docker-compose, but then if it turns out you need some heavy scale, it's merely a case of switching the underlying infrastructure.

      That said, my experience has been fairly different. Running microk8s on a VPS with some containers and an ingress just seems like a nice way to spin them up as pods and manage them. It really doesn't feel that complicated.

      Once you integrate with cloud providers and you get more complex systems, sure, it gets more complex.

      I much prefer the container paradigm for separating parts of apps out and managing dependencies and reproducibility. Having a bunch of raw servers with all the bullshit and drift and mutability that comes along with it feels far more like a headache to me than the abstraction of k8s or docker.

      If you aren't deploying stuff with GitOps/IaC in 2024 I fear for anyone that has to manage or maintain it. 100% reproducibility and declarative infrastructure are wonderful things I could never come back from.

      • pdimitar 6 days ago |
        I love the idea of declarative infrastructure but I'm never considering a bunch of YAML files as a good practice.

        Stuff like k9s / microk8s / k3s are crutches and workarounds, and I hope we all see it.

        If they figure out how to use an actual programming language, or just start using a much smaller number of files than they currently do, then I'd be the first to learn k8s.

        Before that, nope.

        I love the idea but the implementation makes me want to slit my wrists.

    • gigatexal 7 days ago |
      The yaml and the k8s model can be grokked; it just takes a willingness to.

      That being said, there are abstractions: terraform, pulumi, others.

      But my go to is always that most companies will never get to the point where k8s is required — most companies never get to that scale. A well maintained docker compose setup gets you a long way.

    • Deutschlandds 7 days ago |
      I find it always interesting and weird to read such diametrically opposed takes on Kubernetes.

      1. Cloud for me is a lot better than what we had before: before, I had to create a ticket for our internal IT department, eat huge cross-charges (like $500 for a server instead of $50), wait a few weeks, and then get lectured that installing basic default tools on that SUSE-based server would take a week and add additional cross-charges on top.

      Their reasoning? Oh, we do backups and upgrades...

      With cloud, I click a server into existence in a minute for less money, I upgrade it myself, and I have snapshots as a basic backup workflow which is actually reliable and works.

      Then k8s came along, and let me be clear: my k8s setup is big, so it's definitely worth it, but thanks to my experience with a bigger k8s setup, my very small ones are also working very, very well. I get, out of the box, HA, network policies, snapshotting through my selected storage setup, infrastructure as code, etc.

      Instead of having shitty shell scripts, an Ansible setup and co, I only write a little bit of YAML, check it into my git system and roll it out with Ansible. Absolute no-brainer.

      And the auto-healing solved real issues: out of memory? Just restart the pod. Out of disk? Just recreate it. Logging and metrics just work out of the box thanks to the Prometheus-based monitoring stack.

      Starting with one server, yeah, why not, but you know you are not doing it right if you are the only one who can set it up and recover it. But if you don't have anyone with expertise, I would also not just start with k8s.

      If my startup were a pure application + DB thingy, I would go with any app platform out there. But we have real servers because we do stuff with data and sometimes need performance, and performance is expensive if you run it in the cloud.

      • gizzlon 7 days ago |
        > shitty shell scripts, ansible setup and co, i only write a little bit of yaml

        Why are the shell scripts shitty but the yaml is not? When I look at those yaml files I always throw up just a little :P

        Also, have you tried Cloud Run?

        • Deutschlandds 7 days ago |
          YAML is declarative: you tell k8s what you want and how it should look.

          For shell scripts, try proper error handling. You start doing catch hooks, you have issues checking the error codes of different tools, and debugging is hard too.

          In one infra project we switched from shell scripts to Go just to have a lot more control over / stability of our scripts.
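
          To illustrate the point: just getting a shell script to fail loudly takes boilerplate you have to remember every single time (a minimal sketch; some-tool is a placeholder):

              #!/usr/bin/env bash
              set -euo pipefail                            # exit on errors, unset vars, broken pipes
              trap 'echo "error on line $LINENO" >&2' ERR  # at least say where it blew up
              # ...and you still end up checking exit codes by hand:
              if ! out=$(some-tool --flag 2>&1); then
                echo "some-tool failed: $out" >&2
                exit 1
              fi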

          • gizzlon 7 days ago |
            YAML is not declarative, it's a format. Well, according to Wikipedia it's a "data serialization language". IMO it's also a bad choice for this, and those files become unreadable and hard to work with.

            Agree that shell scripts are also hard to work with, especially if you did not write them yourself. I guess it's a combo of the language features of, say, bash, and the fact that no one who writes them really knows bash. I mean, at all. Including me.

            Declarative is nice, but it also has pros and cons. And there are of course many ways to achieve it, if that's a priority.

            Usually, what you really want is: low time to replicate AND no data loss if a server blows up. But this also has to extend to, say, the k8s cluster. And, again, there are many ways to achieve this.

            The article does not call for Ansible setups and shell scripts though.

            Cloud Run uses YAML btw. One of the things I personally don't like about it.

            • Deutschlandds 5 days ago |
              Yes, sorry, I was not precise, but k8s is declarative, and in that particular case I find it very fitting. I don't love it, but it's direct.

              What I like about that declarative setup: the other side, the executor, can actually be something reasonable to build and reuse. This strategy or architecture feels a lot better than the classical approach, especially because it's so often the same thing.

    • Havoc 7 days ago |
      “let me install everything myself” doesn’t generalise well and gets messy even if you IaC it.

      There is a reason k8s gang keeps going on about “cattle not pets”. The starting assumptions and goals are fundamentally different vs “give me a physical server”

      Both have their place I think, so it's not really that one is right and the other is wrong.

      • immibis 7 days ago |
        And people tend to vastly underestimate the power of a single server. These days you can get a terabyte of RAM, 96 cores and dual 10G Ethernet for a low-to-mid 4-digit price (used). Do you need the cloud? Some do, but often your highest conceivable scale fits on one or two servers. Stack Exchange famously ran this way for a long time (until recently when they've been bought by large investors and they're going full cloud and AIshit).
        • Havoc 7 days ago |
          Scaling wasn’t really what I was getting at there per se. Said generalise but that’s admittedly quite fuzzy. Meant it more in the abstraction layer and standardisation sense.

          Installing stuff straight on the server is very messy, especially if it's lots of different providers with their own dependencies. So you need to do some form of containers or VMs to isolate them. At which point you need some sort of tooling around that. And deal with failures etc. Before you know it you've reinvented k8s, except with less standardization and more duct tape.

          So I think there is a strong case for a k8s cluster, but being mindful to keep it as simple as possible within that paradigm. I.e. k8s, but just the basic building blocks.

      • tombert 7 days ago |
        Everyone says this, but do we actually have data to back that up?

        I feel like I spend so much time working around CloudSQL for postgres support in GCP at work, to a point where I'm not actually sure I'm saving a ton of time over running and managing it myself. That's probably not true, I'm sure there are edge cases that I'm not accounting for, but I'm a little tired of everyone acting like AWS and GCP and Azure are the "set it and forget it" thing that they advertise.

        • Havoc 7 days ago |
          Not sure how you’d even measure that meaningfully.

          My comment above was more k8s vs classic server rather than thinking about cloud k8s in particular.

          I do agree that cloud stuff is a huge time sink. I've learned to look at it in terms of how close it is to the FOSS-like world. Things that follow normal protocols and standards, say it speaks Postgres or is a docker image, then cloud is ok. Things that are cloud-vendor specific or a custom product…run for the hills. Not only is it lock-in, but that's also where the pain you describe is. The engineering around it just becomes so much more granular and fragile.

    • zelphirkalt 7 days ago |
      I am sure the consultants and others peddling Kubernetes like it, since they get the paycheck for bringing it in, or cement their position: they are the ones with the expert knowledge of the system, if anyone is at all, after something has been migrated to Kubernetes.
    • bananapub 7 days ago |
      > I have set up about a dozen rack mount servers in my life,

      yes, k8s and co are silly for trivially tiny systems like this.

    • randomtoast 7 days ago |
      Kubernetes adds an extra layer of abstraction on top of existing ones. When the stack of abstractions grows too large, it can start to feel sluggish and unwieldy.

      What if we had something like Kubernetes but at the hardware level? Imagine a single Linux installation running across multiple servers, where resources are seamlessly pooled and managed. In htop, you could visualize all CPU cores, with each core labeled by its corresponding node.

      Now, consider starting a container with Podman: the container would execute on the CPU cores of one node. If you start another container, it could run on the cores of a different node. This approach would essentially transform Linux into an operating system capable of spanning a distributed cluster of nodes.

      To achieve this, the operating system wouldn’t need to be entirely reinvented—it could simply be Linux, enhanced with the necessary kernel modifications to enable such distributed functionality. This could provide the simplicity and efficiency of a unified OS while leveraging the power of a distributed system. Or maybe it's a pipe dream.

    • CursedSilicon 7 days ago |
      Are you hiring? (only somewhat sarcastic)

      I did a ~15-month stint at AWS. Originally I was signed on to be a support engineer for their Linux teams, which sounded great! Absolutely within my skillset and a great way to get to play with new technologies and features at a scale I hadn't before (Ansible, etc.).

      After going through all the hoops I get sidelined into the "Containers" team and have to learn Kubernetes, ECS, Fargate etc all effectively from scratch

      It's all miserable. All of it. Massive Rube Goldberg machines of complexity for the sake of complexity that you need a team to decipher, let alone maintain.

      Unfortunately it feels like all the sysadmin jobs have been replaced either by "DevOps" or Cloud Engineers. There's little market left for people who just want to keep boxes inside a datacenter (or on prem) humming along, and the places that do want that all seem to be wedded to MSPs, unfortunately.

    • BrandoElFollito 6 days ago |
      Installing a docker container (I am referring to your "cloud" point) and maintaining it is orders of magnitude easier than managing a service on a VM. I have been doing IT stuff since 1994 (pro and personal) and I have seen it all (just an expression, I actually saw 2% of it all :))

      I have about 30 services at home, and they do not require maintenance. They just update on their own (Home Assistant has failed once in the last 10 years), there is no dependency hell, and I only need to maintain one OS.

      I like to code, and the idea of having "service as code", or rather "service as yaml", is great (though I hate yaml).
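
      For a sense of what that "service as yaml" setup can look like, here is a minimal compose sketch, assuming something like Watchtower handles the unattended image updates (the comment doesn't say which update mechanism is actually used); names and paths are illustrative:

      ```yaml
      # Hypothetical docker-compose.yml: one self-updating home service.
      # Watchtower polls for newer images and restarts containers, which is
      # what keeps the maintenance effort near zero.
      services:
        homeassistant:
          image: ghcr.io/home-assistant/home-assistant:stable
          volumes:
            - ./config:/config          # state stays on the host, easy to back up
          restart: unless-stopped

        watchtower:
          image: containrrr/watchtower
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
          command: --interval 86400     # check for image updates once a day
          restart: unless-stopped
      ```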

    • osigurdson 6 days ago |
      Kubernetes allows me to treat hardware more like software and like it very much for this reason.
  • coding123 8 days ago |
    Cloud Run and K8S are not in the same space. One is meant to make the infra generic; Cloud Run ONLY works on GCP.
  • lmm 8 days ago |
    So how does this person link up the different parts that go into deploying a service? Like, yes, you can have a managed database, and you can have a managed application deployment (via cloud run), and you can have a pub-sub messaging queue, and you can have a domain name. But where's the part where you tie these all together and say that service A is these and service B is those? Just manually?
  • hi_hi 8 days ago |
    I'm running kubernetes (actually k3s with helm, but that counts, right?) on a ludicrously old and underpowered Ubuntu thin client that's about 10 years old:

    - https://www.parkytowers.me.uk/thin/hp/t620/

    I didn't _need_ to, and the setup was a learning curve that had me crying into my whisky some nights, but it's been rock-solidly running my various media server and development services for the past few years with no issues.

    Sure, it's basically a fancy wrapper around a bunch of docker containers, and I use hardly any of the features k8s brings to the party, but your cold hard logic won't win over the warm and fuzzy feelings I get knowing I did something stupid and it works!

  • seabrookmx 8 days ago |
    We dabbled with Cloud Run and Cloud Functions (which as of v2 are just a thin layer over Cloud Run anyways).

    While they worked fine for HTTP workloads, we wanted to use them to consume from Pub/Sub and unfortunately the "EventArc" integrations are all HTTP-push based. This means there's no back pressure, so if you want the subscription to buffer incoming requests while your daemons work away there's no graceful way to do this. The push subscription will just ramp up attempting to DoS your Cloud Run service.

    GKE is more initial overhead (helm and all that junk) but using a vanilla, horizontally scaled Kubernetes deployment with a _pull_ subscription solves this problem completely.

    For us, Cloud Run is just a bit too opinionated and GKE Autopilot seems to be the sweet spot.
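
    For reference, the pull-based setup is just an ordinary Deployment whose pods consume the subscription at their own pace, so the queue itself provides the back pressure. Roughly a sketch like this, where the name, image, subscription and sizes are all hypothetical:

    ```yaml
    # Sketch: horizontally scaled workers pulling from a Pub/Sub subscription.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pubsub-worker
    spec:
      replicas: 3                      # scale out as throughput needs grow
      selector:
        matchLabels:
          app: pubsub-worker
      template:
        metadata:
          labels:
            app: pubsub-worker
        spec:
          containers:
            - name: worker
              image: example.com/pubsub-worker:latest          # hypothetical image
              env:
                - name: PUBSUB_SUBSCRIPTION
                  value: projects/my-project/subscriptions/my-sub   # hypothetical
              resources:
                requests:
                  cpu: 250m
                  memory: 256Mi
    ```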

    • gizzlon 7 days ago |
      > The push subscription will just ramp up attempting to DoS your Cloud Run service

      Interesting. My assumption would be that Cloud Run should quickly* spin up more containers to handle the spike and then spin them down again. So there would be no need for back pressure? Guess it depends on the scale? How big of a spike are we talking about? :)

      *Let's say a few seconds

      • seabrookmx 7 days ago |
        You typically have it configured with a cap on instances for cost reasons.

        Even if you don't, if you have other services (like a database) downstream, you might not want it to scale infinitely as then you're simply DoS'ing the DB instead of the Cloud Run service.

        Backpressure is really important for the resiliency of any distributed system, IMO.
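
        For anyone curious, the cap is roughly a one-line setting in the Knative-style YAML that Cloud Run accepts (the maxScale annotation, or --max-instances on the CLI); the name, image and value below are illustrative, not recommendations:

        ```yaml
        # Sketch: capping Cloud Run scale-out so a push-driven spike can't
        # fan out into downstream services like the database.
        apiVersion: serving.knative.dev/v1
        kind: Service
        metadata:
          name: my-service                                     # hypothetical name
        spec:
          template:
            metadata:
              annotations:
                autoscaling.knative.dev/maxScale: "10"         # hard cap on instances
            spec:
              containers:
                - image: gcr.io/my-project/my-service          # hypothetical image
        ```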

  • wetpaste 8 days ago |
    RE: slow autoscaling

    Maybe the cloud companies could do something here by always keeping a small subset of machines online and ready to join the cluster. Provided there is some compromise in what the configuration is for the end user. I guess it doesn't solve image pulling. Pre-warming nodes is an annoying problem to solve.

    Best solution I've been able to come up with is: Spegel (lightweight p2p image caching) + Karpenter (dynamic node autoscaling) + pods with low priority to hold onto some extra nodes. It's not perfect though
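
    The low-priority-pods part of that, for anyone who hasn't seen the pattern, is roughly a negative PriorityClass plus a Deployment of placeholder pods that real workloads preempt instantly, leaving the autoscaler to backfill nodes behind the scenes. A sketch with hypothetical names and sizes:

    ```yaml
    # Sketch of the "warm capacity" trick: balloon pods that only exist to
    # hold a resource request and get evicted the moment real work arrives.
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: overprovisioning
    value: -10                           # lower than any real workload
    globalDefault: false
    description: "Placeholder pods that any real workload may preempt."
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: warm-capacity
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: warm-capacity
      template:
        metadata:
          labels:
            app: warm-capacity
        spec:
          priorityClassName: overprovisioning
          containers:
            - name: pause
              image: registry.k8s.io/pause:3.9   # does nothing, just holds the request
              resources:
                requests:
                  cpu: "1"
                  memory: 2Gi
    ```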

    • p_l 7 days ago |
      1. Do some capacity planning

      2. Apply appropriate changes to application resources (like parameters for spreading pods around; see the sketch after this list)

      3. Add descheduler[1] or similar tool to force redistribution of pods

      4. Configure your cluster autoscaling params according to values from step (1) and have it autoscale before nodes are too heavily loaded.
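
      A sketch of what step 2 can look like, using topology spread constraints to keep replicas spread across nodes; the label keys, image and sizes are illustrative:

      ```yaml
      # Sketch: spread replicas across nodes so node loss or scaling events
      # degrade the app gracefully instead of taking out all replicas at once.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web
      spec:
        replicas: 6
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            topologySpreadConstraints:
              - maxSkew: 1
                topologyKey: kubernetes.io/hostname   # spread across nodes
                whenUnsatisfiable: ScheduleAnyway
                labelSelector:
                  matchLabels:
                    app: web
            containers:
              - name: web
                image: example.com/web:latest          # hypothetical image
                resources:
                  requests:
                    cpu: 500m
                    memory: 512Mi
      ```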

  • PohaJalebi 8 days ago |
    I like how the author says that Kubernetes has a vendor lock in, yet suggests a GCP managed service as their preferred alternative.
    • dpeckett 7 days ago |
      That probably runs on Kubernetes (or Borg) under the hood.
      • p_l 7 days ago |
        Cloud Run explicitly uses KNative APIs... which are kubernetes objects.
  • ribadeo 8 days ago |
    One can avoid container orchestration by avoiding the trend of containerizing your app. It wastes system resources, provides half-baked replicas of OS services, and reduces overall security while simultaneously making networking a total PITA.

    Your cloud provider is already divvying up a racked server into your VPS's, via a hypervisor, then you install an OS on your pretend computer.

    While i can see how containerized apps provide a streamlined devops solution for rare hard to configure software that needs to run on Acorn OS 0.2.3 only, it should never be the deployment solution for a public facing production web service.

    Horses for courses.

  • siliconc0w 8 days ago |
    The downside to Cloud Run is you don't get a disk. If I could get a persistent disk attached at runtime to each instance, it'd be a lot more compelling.
  • philbo 8 days ago |
    We migrated to Cloud Run at work last year and there were some gotchas that other people might want to be aware of:

    1. Long-running TCP connections. By default, Cloud Run terminates inbound TCP connections after 5 minutes. If you're doing anything that uses a long-running connection (e.g. a websocket), you'll want to change that setting, otherwise you will have weird bugs in production that nobody can reproduce locally. The upper limit on connections is 1 hour, so you will need some kind of reconnection logic on clients if you're running longer than that (see the sketch after these notes).

    Ref: https://cloud.google.com/run/docs/configuring/request-timeou...

    2. First/second generation. Cloud Run has 2 separate execution environments that come with tradeoffs. First generation emulates Linux (imperfectly) and has faster cold starts. Second generation runs on actual Linux and has faster CPU and faster network throughput. If you don't specify a choice, it defaults to first generation.

    Ref: https://cloud.google.com/run/docs/about-execution-environmen...

    3. Autoscaling. Cloud Run autoscales at 60% CPU and you can't change that parameter. You'll want to monitor your instance count closely to make sure you're not scaling too much or too little. For us it turned out to be more useful to restrict scaling on request count, which you can control in settings.

    Ref: https://cloud.google.com/run/docs/about-instance-autoscaling
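
    As a rough illustration of points 1 and 2, both settings live on the service spec in the Knative-style YAML that Cloud Run accepts; the name, image and values here are examples, not recommendations:

    ```yaml
    # Sketch: longer request timeout for long-lived connections (gotcha 1)
    # and the second-generation execution environment (gotcha 2).
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: my-service                                           # hypothetical name
    spec:
      template:
        metadata:
          annotations:
            run.googleapis.com/execution-environment: gen2       # real Linux, faster CPU/network
        spec:
          timeoutSeconds: 3600        # the 1-hour cap; clients still need reconnection logic
          containers:
            - image: gcr.io/my-project/my-service                # hypothetical image
    ```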

  • rcleveng 8 days ago |
    To me the most important quote in the whole writeup is this "Our new stack is boring.". When you are creating a solution to a problem, boring is good. Strive to be boring. Only be exciting when you must.
  • thefz 8 days ago |
    5 years ago everyone and their uncle would swear that kubernetes was the new way of doing things and by not jumping on it you were missing out and potentially harming your career.
  • mitjam 8 days ago |
    I believe CloudRun is based on KNative which runs on Kubernetes. Thus, you’re still running on Kubernetes, it’s just abstracted away from you.
    • nosefrog 7 days ago |
      Nope.
      • p_l 7 days ago |
        It's not as much mentioned in the docs these days, and Google might have built a more optimized version, but it's still using knative APIs and originally was essentially KNative deployed on GKE just managed by Google.
  • figmert 7 days ago |
    This article kinda reads like "I didn't need HA, and you probably don't either". HA isn't necessary, but if you want a reliable system that will be online without being woken up at 3am (assuming you even have alerts at that point), you're better off with HA.

    Similarly, you don't need Kubernetes, but if you want something that makes developers' lives easier, gives you a single API, and has many, many integrations and lots of tooling, then you're better off with K8s. Sure, you can go with VMs, but now you have to scale and manage your application at a per-VM level instead of per container. You have to think about a lot of cloud-specific services, network policies, IAM, I don't know what else, scaling.

    I guess what I'm saying, you always have the option of writing in Assembly, but why would you when you can have a higher level language that abstracts most of it away. Yes, the maintenance burden on the devops/platform team is higher, but it's so much easier for users of the platform to use said platform.

  • madjam002 7 days ago |
    I don't get these recent anti-Kubernetes posts, yes if you're deploying a simple app then there are alternatives which are easier, but as your app starts to get more complex then suddenly you'll be wishing you had the Kubernetes API.

    I'd use Kubernetes even if I was spinning up a single VM and installing k3s on it. It's a universal deployment target.

    Spinning up a cluster isn't the easiest thing, but I don't understand how a lot of the complaints around this come from sysadmin-type people who replace Kubernetes with spinning up VMs instead. The main complexity I've found in managing a cluster is the boring sysadmin stuff: PKI, firewall rules, scripts to automate. Kubernetes itself is pretty rock solid. A lot of cluster failure modes still leave your app working. Etcd can be a pain, but if you want you can even replace it with a SQL database for moderate-sized clusters if that's more your thing (it's easier to manage an HA control plane then too, if you've already got an HA SQL server).

    Or yes just use a managed Kubernetes cluster.

    • noname44 7 days ago |
      lol, even if you have complex apps there are always easier solutions than Kubernetes. It is evident that you have never run such an app and are just talking about it. Otherwise, you would know the issues you would encounter with every update due to breaking changes. Not to mention that you need a high level of expertise and a dedicated team, which costs far more than running an app on Fargate. Recommending a managed Kubernetes cluster is nonsense, as it goes against the whole purpose of Kubernetes itself.
      • madjam002 7 days ago |
        I've been running apps on Kubernetes clusters for the past 6 years and the only thing that really comes to mind that was a breaking change was when the ingress class resource type was introduced. Everything else has been incremental. Maybe I'm forgetting something.

        What's wrong with recommending a managed cluster? I wouldn't use one but it is certainly an option for teams that don't want to spin up a cluster from scratch, although it comes with its own set of tradeoffs.

        My project at the moment is definitely easier thanks to Kubernetes, as pods are spun up dynamically. I've migrated to a different cloud provider, and since then to a mix of dedicated servers and autoscaled VMs, all of which was easy due to the common deployment target rather than building on top of a cloud-provider-specific service.

        • p_l 7 days ago |
          There was a breaking change around 1.18, which was spread over a few releases to make migration easier. It followed a similar pattern to graduating beta APIs to stable for things like Ingress; IIRC they just covered all the core APIs or so? Don't have time to look it up right now.

          Generally the only issue was forgetting to update whatever you use to set up the resources, because the apiserver auto-updated the formats; worst case, you could just grab them with kubectl get ... -o yaml/json and trim the read-only fields.

      • mplewis 7 days ago |
        This is obvious FUD from a throwaway account. 1.x Kubernetes breakage has rarely affected me. I’m a team of one and k8s has added a lot of value by allowing me to build and run reliable, auto-managed applications that I’m confident I can transfer across clouds.
  • retinaros 7 days ago |
    The hell of k8s for me is the libs at version 0.12536 and all the frequent updates and compatibility versions you need to maintain between them. Why would anyone want to do that in prod? It feels like at some point the tech needs to mature into something more stable, and it's not coming.
  • Imustaskforhelp 7 days ago |
    Interesting. I was reading the author's comments in the Hacker News post about https://www.gitpod.io/blog/we-are-leaving-kubernetes

    I was also curious about it there, and had posted some questions, though now I am not sure if all of those were interesting.

    But yeah, I also agree with this sentiment; personally I would much rather optimize my code in the hope that it can run on a single machine (a VPS like Hetzner) instead of facing the dread of Kubernetes.

    Though I also want to see Kubernetes, so maybe some day I will experiment with it for the sake of experimentation, with something like kubectl or another minimalist single-computer approach, to learn new things.

    I also feel like I can't benchmark things unless it's done through Kubernetes like Anton Putra does, so I am not sure.

  • ptman 7 days ago |
    So what is the self-hosted solution for running something like cloud run on your own hardware? Multiple servers and optional redundancy so not dokku.
    • josegonzalez 7 days ago |
      Dokku runs on multiple servers by abstracting other schedulers (kubernetes, nomad). You don't really need to deal with the scheduler side (other than setting it up on the new servers in your cluster, which is a single command we document).

      Disclaimer: I am the Dokku maintainer

  • jusonchan81 7 days ago |
    I have to admit that I'm inspired to get rid of k8s in a couple of situations based on all the recent posts. There are some clusters that are not required and cost way too much to run compared to a single VM.
  • Panik 7 days ago |
    Nice solution. Just use vendor lock-in, so you can't use your own shit later.
  • torginus 7 days ago |
    By the way how much malice can we attribute to cloud vendors?

    >Moreover, Kubernetes’s slow autoscaling meant I had to over-provision services to ensure availability, paying for unused resources rather than scaling based on demand.

    A typical Linux instance on AWS starts up in about 8 seconds from asking it to start to getting a command line, so let's double that. You could start up and shut down instances in about 15 seconds.

    Why the hell do you need to overprovision instances then, or leave empty ones running?

    I don't see any other reasonable explanation than that it's in the best interest of cloud vendors not to have short-lived instances for load balancing, and to make you consume as much CPU time as possible, even if you don't need it.

    • p_l 7 days ago |
      TL;DR someone didn't learn to plan capacity.
  • rnts08 7 days ago |
    The worst thing about the whole kubernetes-cult and the like is that IaC and CM were supposed to help us reduce configuration, to make it less prone to failure and easier to manage.

    But the truth is that we ran service-based architectures, network meshes and containers with bash just fine before cloud-everything, usually with less effort than it takes to do literally anything today. Sure, you had to know how to set up network bonding and how to tune your systems.

    Very, very few people and businesses _need_ kubernetes or its cousins. Most just need a decent system administrator.

    • boingo 7 days ago |
      I agree with you, but can offer a counterpoint. A system administrator might set up a server in a certain way, and if they leave, the replacement has a hard time discovering what they did.

      With IaC, all decisions are templated in the code and the replacement has full insight into the state of the machine.

    • p_l 7 days ago |
      As a (hopefully) decent system administrator... it turns out I find running a minimal kubernetes setup (k3s is great there) more sensible and cheaper than managing the server the classic way.
    • pas 7 days ago |
      Good for you, my bash on balls was unfortunately not able to solve service based meshtainer architecture :(

      I remember how kernel module params for ALG/bonding/teaming had to be figured out for each fancy NIC/driver, it was fun, but definitely not great. Of course this is mostly solved if you pay the cloud premium.

      Most businesses need a product guy/gal, a website, then a developer, then later one VM or Heroku or something. And maybe if there's at least a few customers, yeah, it makes sense to think about ops (as in business ops), and then eventually sure, engineering needs to scale to solve the challenges, and that might mean getting a sysadmin.

  • buro9 7 days ago |
    I still just use VPS devices and some colocated hardware.

    I'm constantly told "I am not the typical customer, most do use K8s".

    Side projects and volunteer things I run include 470 websites serving about 320K registered users and about 500K-1M monthly guest users... on about 15 VPS devices or small servers.

    It's just classic dynamic websites, HTML is the output of a server (no single page web apps), lots of caching for guest users, reasonable complexity (load balancers ahead of web front ends ahead of API back ends ahead of databases).

    Things that look like microservices exist... it's just API calls, and they come back through the front door, it's easy to reason about.

    Monitoring is mostly Prometheus node exporter, it turns out that CPU + Memory + Disk IOPS + Network IOPS is +90% of what you need... some HTTP logs or profiles are the last 10%.
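
    For a sense of scale, the Prometheus side of that can be as small as a single scrape job pointed at node_exporter on each box; a sketch with hypothetical hostnames:

    ```yaml
    # Sketch of a minimal prometheus.yml fragment: one job scraping
    # node_exporter (default port 9100) on every VPS.
    scrape_configs:
      - job_name: node
        static_configs:
          - targets:
              - web-1.example.net:9100
              - web-2.example.net:9100
              - db-1.example.net:9100
    ```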

    It's just simple... this is run in my spare time, effort is under an hour per month. (and no it's not monetised, not everything has to be)

  • lakomen 7 days ago |
    Preach, however all fucking job ads want proficiency in k8s. So screwed if you do, screwed if you don't
  • vergessenmir 7 days ago |
    It's baffling to me that realising your application's deployment requirements don't fit k8s gets read as k8s not being fit for purpose. Note that the alternative isn't a return to managing VMs, which we all agree none of us ever wants to return to, but a cloud service which hides the concerns that k8s makes explicit.

    It's a bit short-sighted (granted, not intentionally) to talk about k8s not being fit for purpose without specifying the workloads it's not suitable for and at what scale of operations. Oftentimes when these discussions come up, a stateless web app is used as the justification for illustrating the shortfalls of k8s as a solution.

    Personally, I'm not a big fan of k8s because it can be complex, but the alternatives tend to be more complex. I can guarantee your solution over time will converge on operational knowledge trapped in the minds of your team members, runbooks, and custom scripts that express an abstraction k8s already implements perfectly well.

  • netdevphoenix 7 days ago |
    There seems to be a trend of people ditching Kube, and I wonder why. There are currently two trending articles about this on this website.
  • daitangio 7 days ago |
    I created [misterio](https://github.com/daitangio/misterio) to solve exactly the issue described in the article.

    So yes, if you have a simple setup you can use a "web" of docker servers.

    But if your needs are just a bit more complex, or you have a mix of different technologies (Ruby on Rails, Python Flask, Java Spring Boot) and you want to standardize the communication, the security, the performance tracking and so on, K8s is the way.

    The problem is the microservice architecture: the complexity is pushed to the infrastructure by the microservices, and K8s is a super-generic way to solve that complexity.

    It is like unpacking (in an exploding-kittens mode) an application server onto your intranet.

    • jdsleppy 7 days ago |
      Nice, that is my current approach on my latest project (referring to what your linked repo does). Inlining it here for others: copy a .env and a docker-compose.yml, run `docker compose "$@"` on the instance via ssh. That lets me deploy, run one-off commands, and tail logs easily.
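
      A sketch of the compose half of that workflow, assuming the image tag and secrets come from the copied .env; the names and registry are hypothetical:

      ```yaml
      # Hypothetical docker-compose.yml: the tag is pinned via .env, so a
      # "deploy" is just re-running `docker compose up -d` over ssh.
      services:
        app:
          image: registry.example.com/app:${APP_VERSION}   # interpolated from .env
          env_file: .env
          ports:
            - "80:8000"
          restart: unless-stopped
      ```
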
  • stuaxo 7 days ago |
    For a lot of things it's not needed.

    There is a huge tail of small websites where if it comes down someone can fix it and that's fine.

  • isoprophlex 7 days ago |
    > Complexity Overload

    Wait... so you're saying, I can rent a couple of beefy boxes instead, and live a life where I don't need to know what TolerateDuringExecutionButNotDuringScheduling does?!

  • anal_reactor 7 days ago |
    At my company we're running EKS Fargate. We spend very little time actually managing clusters, and it really boils down to "just running some containers". I don't understand why the hate, the solution really works for my use case.
  • openplatypus 7 days ago |
    You can start using Kubernetes with one YAML file and docker image.

    If you want to skip devops work of setting up cluster, go K8s managed. All major cloud providers give you that option.

    So, you replaced an open-source, standard cloud solution such as K8s with a proprietary cloud platform by Google. I mean, good for you, but for many of us this sounds backwards.

    As a bootstrapped founder, going with K8s was the best decision ever: deployment, scaling, custom resources, jobs, cron, etc, all that stuff comes included and it costs writing a YAML file.
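
    For readers who haven't seen it, that "one YAML file" starting point is roughly a Deployment plus a Service in a single manifest; a minimal sketch with hypothetical names, image and ports:

    ```yaml
    # Sketch: the smallest useful k8s setup, one file, one image.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
            - name: app
              image: registry.example.com/app:1.0.0   # hypothetical image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: app
    spec:
      selector:
        app: app
      ports:
        - port: 80
          targetPort: 8080
    ```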

    Do you have to learn something to use K8s? Yes, like with everything.

    Is it difficult? No more difficult than any other solution out there.

  • rqtwteye 7 days ago |
    I think Kubernetes config is a perfect target for AI. Conceptually k8s is not that complex but setting up and managing all YAML files with tons of boilerplate is very hard if you don’t do it every day. An AI should be the perfect tool to analyze the files and make changes.
    • Havoc 7 days ago |
      Busy using a fair bit of A.I. to produce yaml manifests. It still hallucinates a fair bit and doesn’t connect the dots well between the various parts if you colour outside the lines even slightly.

      It works, but feels substantially weaker than on coding tasks. Not sure whether that’s a lack of training material or the lack of the execution flow that code has.

    • miyuru 7 days ago |
      Hard no.

      There are already people using k8s who do not understand how it works, or even how their app works inside k8s.

      Adding AI to it is just asking for a disaster down the line.

  • m0llusk 7 days ago |
    So this is an ad for a Google product? What happens when Google gets bored and drops it?
    • p_l 7 days ago |
      Well, since it's cloud run...

      ... you spin up knative on a kubernetes cluster and copy the actually-k8s manifests over :P

  • nixpulvis 7 days ago |
    Never once has anyone come close to the developer experience of Heroku and RoR circa 2014.
  • rbanffy 7 days ago |
    I have concluded that most (perhaps all) cloud offerings of Kubernetes clusters are a prime example of "enshittification". You take something that should be easy and trivial, then add layers of complexity and vendor-specific components, slightly different configuration options, lots of plug-ins you probably won't ever need (but you need to know which ones you will, and how to decide which is which), and so on. The result is that, when you are finished, you've made something that runs only on the specific provider you selected.
  • sn9 7 days ago |
    OP I'm on firefox 131 on Ubuntu and your site is basically unreadable.

    Dark background and the text is the faintest of shades lighter.

    I don't mean this in a nitpicky way. Normally I'm fine with most sites.

    I have to highlight the text to read it.

    • beaugunderson 6 days ago |
      Chrome on macOS is the same, bailed when I couldn't read the text.
  • fhke 7 days ago |
    >The high cost arises from needing to provision a bare-bones cluster with redundant management nodes.

    ?!

    How else are you managing your infrastructure besides having redundancy in the control plane? Even if it’s a chef or puppet server, or even sysadmin Dave running scripts on his laptop, you should still have redundancy.

    > Moreover, Kubernetes’s slow autoscaling meant I had to over-provision services to ensure availability, paying for unused resources rather than scaling based on demand.

    ??????

    Slow compared to what exactly? Anecdotally, k8s with karpenter is significantly faster to scale than auto scaling groups.

  • prakashn27 6 days ago |
    Serverless is the best thing that happened for cloud.

    People argue that it is expensive for zillions of users, but I have never reached that scale.