There is a surprisingly easy way to address this issue: use (ridiculously cheap) Hetzner metal machines as nodes. The ones with NVMe storage offer excellent performance for databases and often have generous amounts of RAM. I'd go as far as to say you'd be better off investing in two or more beefy bare metal machines for a master-replica(s) setup rather than running the db on k8s.
If you don't want to be bothered with the setup, you can use one of many modern packages such as Pigsty: https://pigsty.cc/ (not affiliated but a huge fan).
We just pin the database pods to specific nodes and use a LocalPathProvisioner, or distributed solutions like JuiceFS, OpenEBS, etc.
This is the guide I wrote for our customers: https://syself.com/docs/hetzner/apalla/how-to-guides/storage...
We don’t have such bursty requirements fortunately so I have not needed to automate this.
It took minutes to set up a cluster and I love having a UI to see what is happening.
I wish there were more products like this as I suspect there will be a trend towards more self-managed Kubernetes clusters given how expensive the cloud is becoming.
Last time I tried was admittedly in 2022, but in testing which distro to go with, Bottlerocket lost because we couldn't set up local builds...
My one big experience with it was the recent bug where (as I recall) an attempt to harden the system by marking memory pages as no-execute caused virtual-runtime languages like Java to basically break entirely when running on a node with that version of Bottlerocket.
It was fixed pretty quickly, but it did feel like a weird thing to slip through...
In one evening I had a cluster working.
It works pretty well. I had one small problem where the auto-update wouldn't run on ARM nodes, which stalled the single node I had running at that point (the control-plane taint blocked the update pod from running on it).
https://github.com/syself/cluster-api-provider-hetzner
works rock solid
I have a side-question pertaining to cost-cutting with Kubernetes. I've been musing over the idea of setting up Kubernetes clusters similar to these ones but mixing on-premises nodes with nodes from the cloud provider. The setup would be something like:
- vCPUs for bursty workloads,
- bare metal nodes for the performance-oriented workloads required as base-loads,
- on-premises nodes for spiky performance-oriented workloads, and dirt-cheap on-demand scaling.
What I believe will be the primary unknown is egress costs.
Has anyone ever toyed around with the idea?
The comment was making fun of the wishful thinking and the realities of networking.
It was a funny comment :-(
https://tailscale.com/kb/1236/kubernetes-operator
They've even improved it, so you can now actually resolve the services etc. via the tailnet DNS
https://tailscale.com/learn/managing-access-to-kubernetes-wi...
I haven't tried that second part though, only read about it.
(Setting up a k8s cluster over software VPN was kinda annoying the last time I tried it manually, but super easy with the tailscale integration)
you can't just slap an overlay on and expect everything to work in a reliable and performant manner. yes, it will work for your initial tests, but then shit gets real when you find that the route from datacenter a to datacenter b is asymmetric and/or shifts between providers, altering site to site performance on a regular basis.
the concept of bursting into on-prem is the most offensive bit about the original comment. when your site traffic is at its highest, you're going to add an extra network hop and proxy into the mix with a subset of your traffic getting shipped off to another datacenter over internet quality links.
I'm sorry, you said absolutely nothing. You just sounded like you were confused and for a moment thought you were posting on 4chan.
b) You should be architecting your platform to accommodate these very common networking scenarios, i.e. having edge caching, because slow backends can be caused by a range of non-networking issues as well.
c) Many cloud providers (even large ones like AWS) are hosted in or have special peering relationships with third party DCs e.g. [1]. So there are no "internet quality links" if you host your equipment in one of the major DCs.
>All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic.
>Inclusive monthly traffic for servers with 10G uplink is 20TB. There is no bandwidth limitation. We will charge € 1/TB for overusage.
So it sounds like it depends. I have used them for (I'm guessing) 20 years and have never had a network problem with them or a surprise charge. Of course I mostly worked in the low double digit terabytes. But have had servers with them that handled millions of requests per day with zero problems.
The 10GBit uplink is something you need to explicitly request, and presumably it is more limited because if you go through the trouble of requesting it, you likely intend to saturate it fairly consistently, and that server's traffic usage is much more likely to be an outlier.
[1]: https://lowendtalk.com/discussion/180504/hetzner-traffic-use...
It sounds like a good tradeoff. The monthly cost of a small vCPU is equivalent to a few TB of bandwidth.
Sidero Omni have done this: https://omni.siderolabs.com
They run a Wireguard network between the nodes so you can have a mix of on-premise and cloud within one cluster. Works really well but unfortunately is a commercial product with a pricing model that is a little inflexible.
But at least it shows it's technically possible so maybe open source options exist.
The sibling comment's recommendation, Nebula, does something similar with a slightly different approach.
Interesting.
A quick search shows that some people already toyed with the idea of rolling out something similar.
Of course you could always move the data-science compute workloads to the cluster, but my gut says that bringing the data closer to the people that need it would be the ideal.
You get what you pay for, and all that.
Any free hosting service will be overwhelmed by spammers and fraudsters. Cheap services the same but less so, and the more expensive they are the less they will be used for scams and spams.
By that point I had already moved to a different provider of course.
I use OVH because the cost reduction really adds up for my workloads (remote video editing / a custom rendering farm at scale, with much cheaper OVH S3 suiting my temporary but very numerous assets and high egress requirements), but otherwise I miss AWS, and I now appreciate just how much better their support and attention to detail are.
Said experience being that the very highly paid support team ghosted us when we asked questions based on their docs (AWS WorkSpaces stuff), probably after finding that there were no answers and that we had chosen the service on the promise of a feature that apparently didn't exist.
Can you say more? Their Cloud instances, for example, are less than half the cost of OVH's, and less than a fifth of the cost of a comparable AWS EC2 instance.
But I do agree, it is much cheaper.
But let’s also be honest, if you’re THAT bootstrapped, you probably have no business running kubernetes to begin with. If the company has a short runway, it doesn’t make sense to work on a complex architecture from the start. Focus on shipping something and getting revenue.
K8s is my tool of choice when I am that bootstrapped, because a single server with k3s thrown on it will cost me maybe 80 EUR a month and hold all environments plus CI/CD plus various self-hosted business components (with backups sent over to a separate provider, just in case), and I'll be free to build my project instead of messing with server setup or worrying about cloud bills.
Wouldn't recommend any of these outside of personal use though.
This is demonstrably false.
The only other time I have received better support was from Aussie ISPs. Back in the day when you called Internode the guy who answered the phone was a bona-fide network engineer and would go as far as getting a shell on the DSLAM to check out what is going on. To me that is peak support, live debugging of the problem!
Similarly, I called Aussie Broadband to do my first NBN setup and explained I did "BYO" modem because I was going to initiate the PPPoE session with my Linux router, and they said no problem. She even offered to send me a cookie-cutter pppd config along with the info to set it up myself. Easily some of the most knowledgeable and "can do" first-layer support I have encountered.
Needless to say when I encounter damn good support I stay even when it costs more.
What do the fine people of HN think about the size/scope/amount of technology of this repo?
It is referenced in the article here: https://github.com/puppetlabs/puppetlabs-kubernetes/compare/...
The general flow was Imager->pre-configured puppet agent->connect to controller->apply changes to make it perform as x
Originally it never really had the capacity to kick off the imaging/instantiation. This meant that it scaled better (shared state is handled better than in Ansible).
However, Ansible shone because, although it was a bastard to get running on more than a couple of hundred hosts with any speed, you could tell it to spin up 100x EC2 (or equivalent) machines and then transform them into whichever role was needed. In Puppet that was impossible to do in one go.
I assume that's changed, but I don't miss Puppet.
There ain't many large European cloud companies, and I would like to understand how they differentiate.
Ionos is another European one. Currently, it looks like their cloud business is stagnating, though.
Bonkers first experience in the last two weeks.
Graphical "Data center designer", no ability to open multiple tabs, instead always rerouting to the main landing page.
Attached 3 IGWs to a box, all public IPs, GUI shows "no active firewall rules".
IGW 1: 100% packet loss over 1 minute.
IGW 2: 85% packet loss over 1 minute.
IGW 3: 95% packet loss over 1 minute.
Turns out "no active Firewall rules" just wasn't the case and explicit whitelisting is absolutely required.
But wait, there's more!
Created a hosted PostgreSQL instance, assigned a private subnet for creation.
SSH into my server, ping the URL of the created Postgres instance: The DB's IP is outside the CIDR range of the assigned subnet and unreachable.
What?
Deleted the instance, created another one, exact same settings. Worked this time around.
Support quality also varies extremely.
Out of 3 encounters, I had a competent person once.
Other two straight out said they have no idea what's going on.
Are there cloud providers you prefer?
I didn't have any of these web UI issues with Hetzner, but iirc OVH is cheaper for domain names, as well as having very reliable and fast DNS servers (measured various query types across some 6 months), and that's why I initially chose them — until my home ISP gave me a burned IP address and I needed an externally hosted server for originating email data (despite it coming from an old and trusted domain that permitlists the IP address) so now I'm with both OVH and Hetzner... Anyway, another thing I like in OVH is that you can edit the raw zone file data and that they support some of the more exotic record types. I don't know how Hetzner compares on domain hosting though
This is a very low usage toy server, can't speak for performance/cost.
Parsing works the same but is based on a simple regex rather than splitting on a hyphen.
euc=eu central; 1=zone/dc; p=production; wkr=worker; 1=node id
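For illustration only, a parse along these lines would pull those fields out of a name like euc1-p-wkr1 (the exact pattern, separators and field names are my assumptions, not the commenter's):

    import re

    # Assumed scheme: <region><zone>-<env>-<role><node id>, e.g. "euc1-p-wkr1".
    NODE_NAME = re.compile(
        r"^(?P<region>[a-z]+)(?P<zone>\d+)-(?P<env>[a-z])-(?P<role>[a-z]+)(?P<node_id>\d+)$"
    )

    def parse_node_name(name: str) -> dict:
        """Split a node name into region, zone, environment, role and node id."""
        match = NODE_NAME.match(name)
        if match is None:
            raise ValueError(f"unrecognized node name: {name!r}")
        return match.groupdict()

    print(parse_node_name("euc1-p-wkr1"))
    # {'region': 'euc', 'zone': '1', 'env': 'p', 'role': 'wkr', 'node_id': '1'}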
End of the day, they are a business!
From my experience, the cloud bill on Hetzner can sometimes be as low as 20% of an equivalent AWS bill. However, this cost advantage comes with significant trade-offs.
On Kubernetes with Hetzner, we managed a Ceph cluster using NVMe storage, MariaDB operators, Cilium for networking, and ArgoCD for deploying Helm charts. We had to handle Kubernetes cluster updates ourselves, which included facing a complete cluster failure at one point. We also encountered various bugs in both Kubernetes and Ceph, many of which were documented in GitHub issues and Ceph trackers. The list of tasks to manage and monitor was endless. Depending on the number of workloads and the overall complexity of the environment, maintaining such a setup can quickly become a full-time job for a DevOps team.
In contrast, using AWS or other major cloud providers allows for a more hands-off setup. With managed services, maintenance often requires significantly less effort, reducing the operational burden on your team.
In essence, with AWS, your DevOps workload is reduced by a significant factor, while on Hetzner, your cloud bill is significantly lower.
Determining which option is more cost-effective requires a thorough TCO (Total Cost of Ownership) analysis. While Hetzner may seem cheaper upfront, the additional hours required for DevOps work can offset those savings.
Sure, but the TL;DR is going to be that if you employ n or more sysadmins, the cost savings will dominate, with 2 < n < 7. So for a given company size, Hetzner will start being cheaper at some point, and it will become more extreme the bigger you go.
Second, if you have a "big" cost, whatever it is (bandwidth, disk space, essentially anything but compute), the cost savings will dominate faster.
Sure, you can get away with legoing some K3S stuff together for a while but one major outage later, and that cost saving might have entirely disappeared.
Then get mad at them because they don't "produce value", and fold it into a developers job with an even higher level of abstraction again. This is what we always do.
Originally it was ansible, and so spinning up a new node or updating all nodes was editing one file (k8s version and ssh node list), and then running one ansible command.
Now I'm using nixos, so updating is just bumping the version number, a hash, and typing "colmena apply".
Even migrating the k8s cluster from ansible to nixos was quite easy, I just swapped one node at a time and it all worked.
People are so afraid of just learning basic Linux sysadmin operations, and yet it makes it way easier to understand and debug the system, so it pays off.
I had to help someone else with their EKS cluster, and in the end debugging the weird EKS AMI was a nightmare and required spending more time than all the time I've had to spend on my own cluster over the last year combined.
From my perspective, using EKS both costs more money, gives you a worse K8s (you can't use beta features, their ami sucks), and also pushes you to have a worse understanding of the system so that you can't understand bugs as easily and when it breaks it's worse.
One day it broke because of something to do with certificates (not that it was easy to determine the underlying problem). There was plenty of information online about which incantations were necessary to get it working again, but instead I nuked it from orbit and rebuilt the cluster. From then on I did this every few weeks.
A real kubernetes operator would have tooling in place to automatically upgrade certs and who knows what else. I imagine a company would have to pay such an operator.
I run BareMetalSavings.com[0], a toy for ballpark-estimating bare-metal/cloud savings, and the companies that have it hardest to move away from the cloud are those who are highly dependent on Kubernetes.
It's great for the devs but I wouldn't want to operate a cluster.
No, it is not.
I'd like to see your breakdowns as well, given that a 2 vCPU / 4 GB configuration (as an example) is priced much higher on AWS than a comparable one on Hetzner.
There's also https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne... to reduce the operational burden that you speak of.
I have a better suggestion, which will save time, energy, money, and human work.
Don't.
Write it yourself. If you can't, don't post.
You'd assume people would use tools to deliver a better, well-composed message; instead, most people use LLMs to decompress their text into an inefficient representation. Why this is I have no idea, but I'd rather have the raw, unfiltered thought from a fellow human than from someone trying to sound fancy and important.
Not to say I still find the 20% claim a little suspect.
Just using your browser's built-in proofreader is enough in 99.9% of the cases.
Using ChatGPT to rewrite your ideas will make them feel formulaic (LLMs have a style and people exposed to them will spot it instantly, like a code smell) and usually needlessly verbose.
Or as ChatGPT would put it:
Precise grammar and spelling are undeniably important, but minor imperfections in English rarely obstruct communication. As the most widely used language in the world, English is highly flexible, and most people navigate small errors without issue. For the majority of cases, a browser’s built-in proofreader is entirely sufficient.
On one hand, tools like ChatGPT can be valuable for refining text and ensuring clarity. On the other hand, frequent reliance on such tools can result in writing that feels formulaic, especially to those familiar with AI-generated styles. Balancing the benefits of polished phrasing with the authenticity of your own voice is often the most effective approach.
Really - your comment on its own is good enough without the LLM. (And if you find an error, you can always edit!)
If we really wanted ChatGPT’s input on a topic (or a rewording of your comment), we can always ask ChatGPT ourselves.
You are much better off having a bunch of smaller file systems exported over NFS; just make sure that you have block-level replication. Single-address-space filesystems are OK and convenient, but most of the time they are not worth the admin cost of getting them reliable at scale. Like a DB, shard your filesystems, especially as you can easily add mapping logic to Kubernetes to make sure you get the right storage to the right image.
Whether it's a good fit for general-purpose storage at a small scale is a harder question. It's not easy to get good performance at small scale, and getting good performance requires a larger-than-you'd-like number of storage nodes.
Yes, it has inline FEC (https://www.ibm.com/docs/en/storage-ceph/7?topic=components-...), but it's a lot of layers to get to a file system.
Personally I'd have a redundant array of storage nodes and be done with it. It's easier to debug a single server than 3 layers of Ceph weirdness.
How many nodes are there, how much traffic does it receive, what are the uptime and latency requirements?
And what's the absolute cost savings? Saving 75% of $100K/mo is very different from saving 75% of $100/mo.
I do think $100k/mo is the tipping point actually; that's $1.2M/yr.
It costs around $400k/yr in engineering salaries to reasonably support a sophisticated bare metal deployment (though such people can generally do that AND provide a lot of value elsewhere in the business, so its actual cost is really lower than this) and about $100k/yr in DC commitments, HW amortisation, and BW, roughly. So you save around $700k a year, which is great, but the benefit becomes much greater when your equivalent cloud spend is even bigger than that.
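Just to re-run that arithmetic (a back-of-the-envelope using the commenter's own rough figures, nothing measured):

    # All inputs are the rough estimates from the comment above.
    cloud_spend = 100_000 * 12   # equivalent cloud bill, $/yr
    engineering = 400_000        # salaries to run the bare metal deployment, $/yr
    datacenter  = 100_000        # DC commitments, HW amortisation, bandwidth, $/yr

    savings = cloud_spend - (engineering + datacenter)
    print(f"${savings:,}/yr saved")  # $700,000/yr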
If you do that in Europe you have to pay them during standby hours.
400k/year seems very low to me.
Everywhere I have worked where we have run clusters in the 100s to 1000s of nodes, we have rarely had a team larger than 4-5 true k8s folks, and even then it's been a split between people who focus on hardware provisioning/networking/etc. and higher-level k8s folks who also take on a large portion of the CI/CD work.
At smaller scale (in the $1M/yr ballpark) I have done all the k8s bare metal ops myself along with all CI/CD and been responsible for a ton of the backend programming too. This is feasible because with distros like Talos etc it doesn't take a lot of manpower once it's setup and upgrades aren't too painful at small scale if you aren't running stateful services.
So tbh no, you just need ideally 2 folks at around ~200k/yr each that are competent and have done it before. The rest of the folks on the on-call rotation are just the rest of your engineers (and if you are at $1m/yr cloud spend you have more than 10 of those).
well, running on bare metal would be even better
I don't think this is true. With Digital Ocean, the worker nodes are the same cost as regular droplets, there's no additional costs involved. This makes Digital Ocean's offering very attractive - free control plane you don't have to worry about, free upgrades, and some extra integrations to things like the load balancer, storage, etc. I can't think of a reason to not go with that over self-managed.
8GB RAM, shared cpu on hetzner is ~$10
Equivalent on digital ocean is $48
If you want a managed experience on Hetzner, you could take a look at https://syself.com
Disclaimer: I'm an employee there
In order to integrate a load balancer provided by Hetzner with our k8s on dedicated servers, we had to implement a super thin operator that does it: https://github.com/Intreecom/robotlb
If anyone is inspired by this article and wants to do the same, feel free to use this project.
The costs of cloud hosting are totally out of control; I would love to see more efforts that let developers move down the stack.
I've been humbly working on https://canine.sh which basically provides a Heroku-like interface to any K8s cluster
I believe that Hetzner data centers in Europe (Germany, Finland) are powered by green energy, but not the locations in the US.
In comparison, 30% of total energy (energy! Not electricity) goes to transport!
As another point of comparison, transport in Sweden in 2022 used 137 TWh [1]. So the same order of magnitude as total datacenter energy use.
And datacenters are powered by electricity which increases the chance that it comes from renewable energy. Conversely, the chance that diesel comes from a renewable source is zero.
So can we please stop talking about data center energy use? It’s a narrative that the media is currently pushing but as so many things it makes no sense. It’s not the thing we should be focusing on if we want to decrease fossil fuel use.
[1]: https://www.energimyndigheten.se/en/energysystem/energy-cons...
But on the other side, to bring down CO2 levels, fast change everywhere is required. As far as I see data center energy consumption continues to grow, specifically with AI.
If I am not mistaken, data centers produce more CO2 than aviation.
And sure, most 'green hosting' is probably 'green washing', yet I would still support and link initiatives such as: https://www.thegreenwebfoundation.org/
If you dive into a detailed breakdown of emissions you'll find that it's a complex hierarchy of categories. You can't just fix "all of transport" or treat it like a "low hanging fruit", just look at how much time it's taken for EV penetration to be in any way significant; look at how much of transport emissions are from aviation or shipping or other components.
Any energy use that's measurable in whole percentage points of global emissions needs addressing. That includes data centers.
To be fair, until China does something about their emissions, the rest of us are just pissing in the ocean.
Everything is intertwined and tightly coupled, such simple statements are rarely accurate.
I would argue that the world wouldn't use as much stuff if China stopped manufacturing it
China and the US are in the same order of magnitude in emissions. So NO that's absolutely not the argument I am making.
> Any energy use that's measurable in whole percentage points of global emissions needs addressing
But it isn't! That's my point. Electricity use is about 20% of total energy use. So if we talk about global emissions, data centers are only about 20% * 2% = 0.4% of total energy use.
And then if we talk about total emissions, it's even lower, because about 40% of electricity is generated from nuclear and renewables.
> just look at how much time it's taken for EV penetration to be in any way significant
Yes so let's focus on that instead of data centers. Data centers are not the problem!
EDIT: Also CPUs and GPUs are still becoming more energy efficient. So I'm a bit skeptical of extrapolations which say that data centers will consume a large percentage of US energy. If the number of CPUs and GPUs doubles each 2 years, but energy efficiency doubles too, then overall energy usage doesn't grow so fast. Especially if old CPUs and GPUs are taken out of the system over time because they become too expensive to operate.
This is like saying I shouldn’t care about pollution from the local auto painting shop because there are strip mines somewhere else. Yes, it’s not the top priority but that doesn’t mean that we shouldn’t be trying to reduce pollution just as we are for every large producer, and with both LLMs and cryptocurrency having potential demand outstripping the existing supply we have every reason to expect continued growth in emissions at a time when we need decline.
Rather than taking this so personally, consider that people on HN talk about it because our choices actually matter here. Very few of us affect heavy industrial policy but all of us can think about how much our applications need to run.
Hetzner is using 100% green hydro and wind power for that, which is as sustainable as any grid-connected company can be.
A lot of EU datacenter providers specifically pick green electricity providers/sources, and pride themselves on it, and use it in advertising their sustainability.
Scaleway in particular are 100% no-CO2 (they have it easy, most of their DCs are in France where it's easy to be fully nuclear+renewable). Hetzner are the same.
Green lignite.
You can see the paperwork here:
- https://cdn.hetzner.com/assets/Uploads/oekostrom-zertifikat-...
- https://cdn.hetzner.com/assets/Oomi-sertifikaatti-tuuli+vesi...
I set up Rook Ceph on a Talos k8s cluster (with VM volumes) and experienced similarly low performance; however, I always thought that was because of the 1G vSwitch (i.e. a networking problem)?! The SSD volumes were quite fast.
Additionally, Hetzner has an IOPS limit of 5000 and a write limit of some amount that does not scale with the size of the volume:
50G has the same limits as 5TB.
For this reason, people are sometimes using different table spaces in postgres for example.
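As a minimal sketch of that idea (assuming a second volume mounted at /mnt/vol2 owned by the postgres OS user; database, table and tablespace names are made up):

    import psycopg2

    # CREATE TABLESPACE cannot run inside a transaction block, hence autocommit.
    conn = psycopg2.connect("dbname=app user=postgres")
    conn.autocommit = True

    with conn.cursor() as cur:
        # A second Hetzner volume mounted at /mnt/vol2 gets its own IOPS/write budget.
        cur.execute("CREATE TABLESPACE vol2 LOCATION '/mnt/vol2/pgdata'")
        # Move a write-heavy table onto it so its I/O counts against that volume's limits.
        cur.execute("ALTER TABLE events SET TABLESPACE vol2")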
Ceph puts another layer on top of cloud volumes that are already Ceph-based, btw, so don't do that.
Whilst I wouldn't run Kubernetes by choice, we've had success moving our custom SSH / Docker Compose deployments over to GitHub Actions with kamal-deploy.org; it's easy to set up and has nice UX tools for monitoring remotely deployed apps [1]
It's the sort of place where people say Transit is cheaper than paid peering. (For eyeball networks at least).
I think carrying traffic from Europe for some images and videos might make sense financially. But there's always bulk CDN's
The vast majority of Hetzner's traffic in europe (and tbh, anyone's traffic) is free peering. Telekom is the one major exception.
All of this makes sense considering the extremely low price.
1) A staging cluster for testing updates is really a must. YOLO-ing prod updates on a Sunday is no one's idea of fun.
2) Application level replication is king, followed by block-level replication (we use OpenEBS/Mayastor). After going through all the Postgres operators we found StackGres to (currently) be the best.
3) The Ansible playbooks are your assets. Once you have them down and well-commented for a given service then re-deploying that service in other cases (or again in the future) becomes straightforward.
4) If you can, I'd recommend a dedicated 10G network to connect your servers. 1G just isn't quite enough for the combined load of prod traffic, plus image pulls, plus inter-service traffic. This also gives a 10x latency improvement over AWS intra-AZ.
5) If you want network redundancy you can create a 1G vSwitch (VLAN) on the 1G ports for internal use. Give each server a loopback IP, then use BGP to distribute routes (bird).
6) MinIO clusters (via the operator) are not that tricky to operate as long as you follow the well trodden path. This provides you with local high-bandwidth, low-latency object storage.
7) The initial investment to do this does take time. I'd put it at 2-4 months of undistracted skilled engineering time.
8) You can still push ancillary/annoying tasks off onto cloud providers (personally I'm a fan of CloudFlare for HTTP load balancing).
[1]: https://lithus.eu
How much is that worth to your company/customer vs a higher monthly bill for the next 5 years?
As a consultancy company, you want to sell that. As a customer, I don't see how that's worth it at all, unless I expect a 10k/month AWS bill.
xkcd comes to mind: https://xkcd.com/1319/
Well I do rather agree, but as a consultancy I'm biased.
But let's do some math. Say it's 4 months (because who has uninterrupted time), a senior rate of $1000/day. 20 days a month, so 80 days, is an $80k outlay. That's assuming you can get the skills (because AWS et al like to hire these kinds of engineers).
Say one wants a 3 year payback, that is $2,200/month savings you need. Which seems highly achievable given some of the cloud spends I've seen, and that I think an 80-90% reduction in cloud spend is a good ballpark.
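Or, as a one-liner you can play with (same numbers as above, nothing new):

    def required_monthly_saving(rate_per_day: float, days: float, payback_months: int) -> float:
        """Monthly cloud-bill reduction needed to recoup the migration outlay."""
        return rate_per_day * days / payback_months

    # The comment's numbers: $1000/day senior rate, ~80 days of work, 3-year payback.
    print(round(required_monthly_saving(1000, 80, 36)))  # ~2222 $/month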
The appeal of a consultancy is that we'll remove the up-front investment, provide the skills, de-risk the whole endeavour, even put engineers within your team, but you'll _only_ save 50%.
The latter option is much more appealing in terms of hiring, risk, and cash-flow. But if your company has the skills, the cash, and the risk tolerance then maybe the former approach is best.
EDIT: I actually think the(/our) consultancy option is a really good idea for startups. Their infrastructure ends up being slightly over-built to start with, but very quickly they end up saving a lot of money, and they also get DevOps staffing without having to hire for it. Moreover, the DevOps resource available to them scales with their compute needs. (also we offer 2x the amount of DevOps days for startups for the first year to help them get up and running).
I think the AWS way made clear sense in the days before the current generation of tooling existed, when we were SSH-ing into our snowflake servers (for example). But now we have tools like Kubernetes/Nomad/OpenShift/etc/etc, the logic just doesn't seem to add up any more.
The main argument against it is generally of the form, "Yes, but we don't want to hire for non-cloud/bare-metal". Which is why I think a consultancy provides a good middle ground here – trading off cost savings against business factors.
Ping me an email (see bio), always happy to chat.
Basically the idea is that you define your infrastructure in a rather short .NET script (e.g. for example postgres + backend + frontend + auth service) and the tooling then lets you either download all the components and launch the whole thing locally, or generate a script of some kind to deploy it to an infrastructure provider (type of script depends on provider). And it provides extensive logging, monitoring, tracing etc out of the box for the majority of the included components with API endpoints and dashboards.
But if you are starting from scratch instead of looking for someone to help you migrate, then yeah, the AWS way has probably higher setup costs than making it portable.
Do you have to ask Hetzner nicely for this? They have a publicly documented 10G uplink option, but that is for external networking and IMHO heavily limited (20TB limit). For internal cluster IO, 20TB could easily become a problem.
[1]: https://docs.hetzner.com/robot/general/pricing/price-list-fo...
Are you willing to share example config for that part?
You'll need a bit of baseline networking knowledge.
It's not rocket science, but it is complex, and building something complex you don't fully understand for production services can be a very bad idea.
Perhaps you could take a look at https://syself.com (Disclaimer: I'm an employee there). We built a platform that gives you production-ready clusters in a few minutes.
I wonder what is the motivation behind manually spinning up a cluster instead of going with more established tooling?
If Hetzner has an issue or glitch once a month, the middle-tier providers have one every 2-3 months, and a place like AWS maybe every 5-6 months. However, prices also follow that observation, so you have to carefully consider on a case-by-case basis whether adding some extra machines and backup and failure scenarios is a better deal.
The major benefit by using basic hosting services is that their pricing is a lot more predictable; you pay for machines and scale as you go. Once you get hooked into all the extra services a provider like AWS provides, you might get some unexpectedly high bills and moving away might be a lot harder. For smaller companies, don't make short-sighted decisions that threaten your ability to survive long-term by choosing the easy solution or "free credits" scheme early on.
There is no right answer here, just trade-offs.
Yes, there is some added value in the level of convenience provided. But maybe with a bit more competition, pricing could be more competitive. A lot more competitive.
Thank you for sharing your experience. I also have my 3 personal servers with Hetzner, plus a couple VM instances in Scaleways (French outfit).
Disclaimer: I’m a Googler, was SRE for ~10 years for GMail, identity, social, apps (gsuites nowadays) and more, managed hundreds of jobs in Borg, one of the 3 founders of the current dev+devops internal platform (and I focused on the releases,prod,capacity side of the platform), dabbled in K8s on my personal time. My opinions, not Google’s.
So, my question is: given the significant complexity that K8s brings (I don’t think anyone disputes this) why are people using it outside medium-large environments? There are simpler and yet flexible & effective job schedulers that are way easier to manage. Nomad is an example.
Unless you have a LOT of machines, with many jobs (I'd say 250+) to manage, K8s' complexity, brittleness and overhead are not justifiable, IMO.
The emergence of tools like Terraform and the many other management layers on top of K8s that try to make it easier but just introduce more complexity and their own abstractions is in itself a sign of that inherent complexity.
I would say that only a few companies in the world need that level of complexity. And then they will need it, for sure. But, for most is like buying a Formula 1 to commute in a city.
One other aspect that I also noticed is that technical teams tend to carry over the mess they had in their previous "legacy" environment and just replicate it in K8s, instead of trying to do an architectural design of the whole system's needs. And K8s' model enables that kind of mess: a "bucket of things".
Those two things combined mean that nowadays every company has soaring cloud costs and is running things they know nothing about but are afraid to touch for fear of breaking something. And an outage is more career-harming than a high bill that Finance will deal with later, so why risk it, right? A whole new IT area has now been coined to deal with this: FinOps :facepalm:
I’m just puzzled by the whole situation, tbh.
K8s has a whole kit of parts which sounds really grand when you are starting out on a new platform, but quickly becomes a pain when you actually start to implement it. I think that's the biggest problem: by the time you've realised that actually you don't need k8s, you've invested so much time into learning the sodding thing that it's difficult to back out.
The other seductive thing is that Helm provides "AWS-like" features (i.e. fancy load balancing rules) that are hard to figure out unless you've dabbled with the underlying tech before (varnish/nginx/etc. are daunting, and so are storage and networking)
this tends to lead to utterly fucking stupid networking systems because unless you know better, that looks normal.
Every time I try to use Nomad, or any of the other "simpler" solutions, I hit a wall - there turns out to be a critical feature that is not available, and which, if I were to retrofit it, would be a hacky one-off badly integrated into the API.
Additionally, I don't get US-style budgets or wages - this means that cloud prices which target such budgets are horrifyingly expensive to me, to the point that Kubernetes pays for itself at the scale of a single server.
Yes, single server. The more I make it fit the proper kubernetes mold, the cheaper it gets, even. If I need to extend something, the CustomResourceDefinition system makes it easy to use a sensible common API.
Was there a cost to learning it? Yes, but honestly not so bad. And with things like k3s deploying small clusters on bare metal became trivial.
And I can easily wrap the Kubernetes API into something simpler for developers to use - create paved paths that reduce what they have to know and provide, and that enforce certain deployment standards. At the lowest cost I have encountered in my life, funnily enough.
Maybe you could give an example of such a feature in Nomad's case?
1. Ingress and Service objects vs. Nomad/Consul Service Discovery + Templating
This one is big, as in a really big thing. The Ingress and Service APIs let me easily and declaratively connect things with multiple implementations involved, and it's all handled cleanly with a type-safe API.
For comparison, Nomad's own documentation tells you to mostly use text templating to generate configuration files for whatever load balancer you decide to use, or to use one of the two they point to that have specific Nomad/Consul integration. And even for those, configuring a specific application's connectivity happens through cumbersome K/V tags for apparently everything except the port name itself.
You might consider it silly, but the Ingress API, with its easy way to route different path prefixes to different services or specify multiple external hosts and TLS, especially given how easily it integrates (regardless of the load balancer used) with Let's Encrypt and other automated solutions, is an ability you're going to have to pry from my cold dead hands.
Similarly, the more pluggable nature of Service objects turns out to be critical when redirecting traffic to the appropriate proxy, or when doing things like exposing some services using one subsystem and others with another (example: servicelb + tailscale).
In comparison, Nomad is like going back to Kubernetes 1.2, if not worse. Sure, I can use service discovery. It's very primitive service discovery where I have to guide the system by hand with custom glue logic. Meanwhile, the very first Kubernetes cluster I set up in production had something like 60 Ingress objects covering 250 domains, which totaled about 1000 host/path -> service rules. And it was a puny two-node cluster.
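To make that concrete, here is roughly what one host with TLS and two path-prefix rules looks like as an Ingress (host, service and secret names are invented, and the Kubernetes Python client is used purely for illustration, not the commenter's actual tooling):

    from kubernetes import client, config

    config.load_kube_config()

    # One external host with TLS; two path prefixes routed to different Services.
    ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": "example", "namespace": "foo"},
        "spec": {
            "tls": [{"hosts": ["app.example.com"], "secretName": "app-example-com-tls"}],
            "rules": [{
                "host": "app.example.com",
                "http": {"paths": [
                    {"path": "/api", "pathType": "Prefix",
                     "backend": {"service": {"name": "api", "port": {"number": 8080}}}},
                    {"path": "/", "pathType": "Prefix",
                     "backend": {"service": {"name": "frontend", "port": {"number": 80}}}},
                ]},
            }],
        },
    }
    client.NetworkingV1Api().create_namespaced_ingress("foo", ingress)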
2. Persistent Storage handling
As far as I could figure out from the Nomad docs, you can at best reuse CSI drivers to mount existing volumes into Docker containers - you can't automate storage handling within Nomad; more or less you're told to manually create the necessary storage, maybe using Terraform, then register it with Nomad.
Compared to this, Kubernetes' PersistentVolumeClaim system is a breeze - I specify what kinds of storage I provide through StorageClasses, then can just throw a PVC into the definitions of whatever I am actually deploying. Setting up a new workload with persistent storage is reduced to me saying "I want 50G generic file storage and 10G database-oriented storage" (two different storage classes with a real performance-per-buck difference between them).
Could I just point to a directory? Sure, but then I'd have to keep track of those directories. OpenEBS-ZFS handles it for me and I can spend time on other tasks.
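As a sketch of what "throw a PVC into the definitions" amounts to (StorageClass names are placeholders for whatever the cluster actually offers, and again the Python client is used purely for illustration):

    from kubernetes import client, config

    config.load_kube_config()

    def pvc(name: str, storage_class: str, size: str) -> dict:
        # A claim against a StorageClass; the provisioner creates the volume on demand.
        return {
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {"name": name, "namespace": "foo"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": storage_class,
                "resources": {"requests": {"storage": size}},
            },
        }

    api = client.CoreV1Api()
    # "50G generic file storage and 10G database-oriented storage", as two claims.
    api.create_namespaced_persistent_volume_claim("foo", pvc("app-files", "generic-file", "50Gi"))
    api.create_namespaced_persistent_volume_claim("foo", pvc("app-db", "database", "10Gi"))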
3. Extensibility, the dark horse of kubernetes.
As far as I know none of the "simpler" alternatives have anything like CustomResourceDefinition, or the very simple API model of Kubernetes that makes it easy to extend. As far as I understand Nomad's plugins are nowhere close to the same level of capability.
The smallest cluster I have currently uses the following "operators" or other components using CRDs: openebs-zfs (storage provisioning), traefik (easy, trackable middleware configuration beyond the unreadable-tags approach), tailscale (also provides alternative Ingress and Service implementations), CloudNativePG (automated Postgres setup with backups, restores, easy access with psql, etc.), cert-manager (Let's Encrypt et al., in more flexible ways than what's embedded in traefik), external-dns (lets me integrate global DNS updates with my service definitions), k3s' helm controller (makes life easier when loading external software sometimes).
There's more, but I kept to things I'm directly interacting with instead of all the CRDs currently deployed. All of them significantly reduce my workload, and all of them have either no alternative under Nomad or very annoying options (stuffing configuration for traefik inside service tags).
And last, some stats from my cluster:
4, soon to be 5 or 6, "tenants" (separate namespaces), without counting system ones or ones that provide services like OpenEBS
Runs 2 VPN services with headscale, 3 SSOs, one big java issue tracker, 1 Git forge (gitea, soon to get another one with gerrit), one nextcloud instance, one dumb webserver (using Caddy). Additionally runs 7 separate postgres instances providing SQL database for aforementioned services, postfix relays connecting cluster services with sendgrid, one vpn relay connecting gitea with VPN, some dashboards, etc.
And because it's Kubernetes, my configuration to set up, for example, a new Postgres instance looks like this:

    local k = import "kube.libsonnet";
    local pg = import "postgres.libsonnet";
    local secret = k.core.v1.secret;

    {
      local app = self,
      local cfg = app.cfg,
      local labels = app.labels,

      labels:: {
        "app.kubernetes.io/name": "gitea-db",
        "app.kubernetes.io/instance": "gitea-db",
        "app.kubernetes.io/component": "gitea"
      },

      dbCluster: pg.cluster.new("gitea-db", storage="20Gi") +
        pg.cluster.metadata.withNamespace("foo") +
        pg.cluster.metadata.withLabels(app.labels) +
        pg.cluster.withInitDb("gitea", "gitea-db") +
        pg.cluster.withBackupBucket("gs://foo-backups/databases/gitea", "gitea-db") +
        pg.cluster.withBackupRetention("30d"),

      secret: secret.new("gitea-db", null) +
        secret.metadata.withNamespace("foo") +
        secret.withStringData({
          username: "gitea",
          password: "FooBarBazQuux",
          "credentials.json": importstr "foo-backup-gcp-key.json"
        })
    }
And this is an older version that I haven't updated (because it still works) - if I were to set up the specific instance it's taken from today, it would take even less writing.

Even something as simple as an oil change really isn't worth doing yourself. First you buy the tools (oil drip pan, filter wrench, funnel, creeper). Then you set aside the time to use them and find your dingy work clothes. You go to the store and buy new oil and a filter. You go home and change the oil. Then, that day or another day, you go to a store that will take your used oil. Versus 20 minutes at an auto mechanic, for about $15 more than the cost of the oil and filter.
Kubernetes is an entire car (and a complex one). It's really not worth doing the maintenance yourself, I promise you. Unless you're just doing it for fun.
A lot of it is finding balance between what to do yourself, what to outsource, and it's not as easy or clean as some people here like to claim.
(This is what I think about when someone says "hey, my monthly bill is cheaper!" and later ends up with unhappy customers when their cluster goes kaput and they can't get it working again for days. Don't ask me how I know...)
My opinion, from the viewpoint of a consultant often involved in Kubernetes, is to get initial help and a persistent help line, but get somebody internally interested enough to ride along and learn.
Consultants and experts in general can save you from a lot of bad up-front decisions and banging your head against the wall for months. It's not trivial to learn your way around technologies or ecosystems, including common dark corners and pitfalls, in a reasonable amount of time while also having to focus on your core business. Accept help but learn to fish and to make a fire.
> Hetzner volumes are, in my experience, too slow for a production database.
That's true, though. To solve that we developed a way to persist the local storage of bare metal servers across reprovisionings. This way it's both faster and cheaper. Now we are adding an automated database deployment layer on top of it.