Back in 2021 (https://news.ycombinator.com/item?id=29648325, https://news.ycombinator.com/item?id=30177907) and 2022 (https://news.ycombinator.com/item?id=32608734) Heroku went from "well this is costing enough that it probably makes sense to divest at some point and save the $X00/mo" to "Heroku is now the biggest systemic risk to uptime that I have", and it felt _very_ high-priority to get off of them and onto someone else.
Two years later, though... the inclination has ebbed. Heroku hasn't shipped anything meaningful for my use case in the past 24 months, but also they have been fairly stable. I'm sure I will migrate off of them onto something else in the fullness of time, but it would take a pretty severe precipitating event.
It's frustrating to see engineering teams in 2024 spending countless hours optimizing their applications to run on what essentially amounts to 2008-era hardware configurations.
It's not hard, and it provides a nice managed service without the full complexity of running K8s.
And haven't used one again since moving off ...
Start from the premise that the majority of developers out there have no experience dealing with concurrency, then consider that changing an application's approach to concurrency/parallelism can occupy several developers for months or possibly years. Then consider that, for a business, the absolute cost of running 2-20x more instances than "necessary" may still be smaller than the investment required to change it. It then makes sense why people so often choose concurrency = 1 (in which case you basically want the smallest instance size capable of running your app performantly), or whatever concurrency setting they've been using for ages, even if there are theoretical cost savings.
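For concreteness, a minimal sketch of the kind of knob being discussed, assuming a hypothetical WORKER_CONCURRENCY environment variable sizing an in-process worker pool (not any particular framework's setting):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

func main() {
	// WORKER_CONCURRENCY is a made-up knob; in practice most apps ship with
	// whatever this has been set to for ages, often 1.
	n, err := strconv.Atoi(os.Getenv("WORKER_CONCURRENCY"))
	if err != nil || n < 1 {
		n = 1
	}

	jobs := make(chan int)
	done := make(chan struct{})
	for i := 0; i < n; i++ {
		go func(id int) {
			for j := range jobs {
				fmt.Printf("worker %d handling job %d\n", id, j)
				time.Sleep(100 * time.Millisecond) // stand-in for real work
			}
			done <- struct{}{}
		}(i)
	}

	// Enqueue some work, then wait for the pool to drain.
	for j := 0; j < 10; j++ {
		jobs <- j
	}
	close(jobs)
	for i := 0; i < n; i++ {
		<-done
	}
}
```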
In terms of response time, that's something you'd need to benchmark for your application - though, given most DBaaSes run in the same major cloud providers as your application, it'll either be the same region or go cross-region via their private backbone, so the latency delta should be manageable. Of course, if your app is particularly latency-sensitive on the DB side, that won't work.
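A rough way to get that number, as a minimal sketch assuming Go's database/sql with the lib/pq driver and a placeholder connection string for the candidate DBaaS:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	// The DSN is a placeholder; point it at the DBaaS instance you're evaluating.
	db, err := sql.Open("postgres", "postgres://user:pass@db.example.com:5432/app?sslmode=require")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Warm up the connection so we measure query round trips, not dial time.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}

	const n = 50
	var total time.Duration
	for i := 0; i < n; i++ {
		start := time.Now()
		if _, err := db.Exec("SELECT 1"); err != nil {
			log.Fatal(err)
		}
		total += time.Since(start)
	}
	fmt.Printf("avg round trip over %d queries: %s\n", n, total/n)
}
```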
Devs running infrastructure is always an interesting experience.
The heck? Like sure, people may call me "too perfect", but 20 minutes of outage per month for a Postgres database or a Redis instance is entirely unacceptable? Crossing out the less professional words there.
We're not particularly ambitious at work, guaranteeing a 99.5% SLA to our customers, but 20 minutes of outage a month already works out to roughly 99.95%. Your overall availability can only be lower than whatever your database alone gives you. We observe that much downtime on a Postgres cluster in a year.
At work you’re committing to 3 h / month.
Still, a promise of 99.95% (i.e. up to ~20 minutes of downtime a month) and actually having 20 minutes of interruptions and downtime every month are wildly different things.
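For reference, the arithmetic behind those figures, as a quick sketch assuming a 30-day month:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Rough downtime budgets for a 30-day month at a few common SLA levels.
	month := 30 * 24 * time.Hour
	for _, sla := range []float64{0.995, 0.999, 0.9995} {
		budget := time.Duration((1 - sla) * float64(month))
		fmt.Printf("%.2f%% -> %s of allowed downtime per month\n", sla*100, budget.Round(time.Minute))
	}
	// 99.50% -> ~3h36m, 99.90% -> ~43m, 99.95% -> ~22m
}
```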
My experiences differ depending on the above. I've mostly used Render or an alternative for side projects now (just due to cost/forgettability). As a daily user of Heroku professionally - it's clear Heroku isn't a priority for Salesforce. Heroku has struggled to maintain any form of product development and, if anything, has become more unreliable over the last year or two.
As an add-on developer, my communication with Heroku has been fantastic. You might assume that's a given, since add-ons are a direct revenue stream and feature expander for them - but my experience with other platforms hasn't been the same (iOS has slow/poor communication and docs, Chrome's extension support is non-existent and often not backwards compatible, etc.). It's kind of re-ignited my love affair with Heroku, like it was pre-Salesforce.
Overall I can't see us moving from Heroku unless costs force us to - it's just too 'easy' to stay. Vendor lock-in is real and I'm okay admitting that.
A few annoyances (like the CLI auto-updating and rebuilding Go etc. when I want to deploy a fast fix), but overall very solid
Also, Render has been useful for running scripts
The vertical DBaaSes are great for the early phases but, generalising, seem to have pricing models tuned to punish any success (such as storage overage fees even when compute usage is low). There are also sneaky tier configurations where the lower tiers don't offer important features most teams need during the prototype/dev phase, forcing dedicated instances even though there's no real volume hitting them.
I'm still a fan of Heroku and would highly recommend it to a brand new startup. But, after a while, you start realizing the limitations of Heroku and you need to move on. The fact that your startup is still around and growing enough that you need to migrate off Heroku should be seen as a sign of success.
We started by moving our Heroku Redis and Postgres to Redis Labs and Crunchy respectively, which were 10x+ better in terms of support, reliability, performance, etc. Then we moved our random add-ons.
We recently moved our background job workers (which were a majority of our Heroku hosting cost) to render.com with ~0 downtime. Render has been great.
We now just have our web servers running on Heroku (which we'll probably move to Render next year too)...
End of an era. Grateful for Heroku and the next generation of startups spawned by its slow decline :)
The equivalent AWS setup is probably similar in price.
Many people remember moving off Heroku, but few seem to realize that the "new" providers are going to have the same period of increased costs, backlash, and settling into just serving the big fish that can't or won't justify moving. So any discussion about how Vercel or Render or whoever is better just feels like it's missing the point.
The one thing I'll say is that a company like Vercel is definitely making a reasonable bet by trying to control the software as much as possible as well as the hardware. I find it unfortunate.
Every alternative seems to be pitching some different thing (the oddest to me is Fly with its edge computing stuff… I legit wonder how many projects at Fly go beyond like 2 machines let alone do all the fancy stuff), meanwhile “charge a bunch of people 100 bucks a month for 20 bucks of compute” seems to be where Heroku really thrived.
I'm not sure why it is so slow. I'd like to blame it on something... Heroku, Rails...
I’m just throwing this out there but that may be something you want to get to the bottom of…
The engineers are making a dozen cost tradeoffs a day. You want to instill cost savings in every decision.
Look how long this team suffered on a legacy platform thanks to a perfectly rational approach.
a go binary in a zip builds and uploads to lambda in 1 second. handle routing and everything else in binary, don’t use aws features. you don’t need em.
lambda, s3, and dynamo all scale to zero with usage based billing.
toss in rds if you really miss sql or dislike scale to zero.
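a minimal sketch of that "routing in the binary" setup, assuming the aws-lambda-go runtime library behind an API Gateway HTTP API and the provided.al2 custom runtime (the routes here are made up):

```go
// main.go - build with: GOOS=linux GOARCH=arm64 go build -o bootstrap && zip fn.zip bootstrap
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// One function handles every route; the switch below is the entire "router",
// so no per-route API Gateway or Lambda configuration is needed.
func handler(ctx context.Context, req events.APIGatewayV2HTTPRequest) (events.APIGatewayV2HTTPResponse, error) {
	switch req.RawPath {
	case "/health":
		return events.APIGatewayV2HTTPResponse{StatusCode: 200, Body: "ok"}, nil
	case "/hello":
		return events.APIGatewayV2HTTPResponse{StatusCode: 200, Body: "hello, " + req.QueryStringParameters["name"]}, nil
	default:
		return events.APIGatewayV2HTTPResponse{StatusCode: 404, Body: "not found"}, nil
	}
}

func main() {
	lambda.Start(handler)
}
```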
once this is too expensive, move to ovh metal, which sells significantly faster compute, ie epyc 4244p ddr5 5200.
more importantly, they sell bandwidth for $0.50/TB instead of $0.10/GB, which is aws price before paas wrapper markup.
the ovh price is after the 1Gbps unmetered you get for free with any server.
most companies will never even need metal, and between lambda and metal is ec2 spot, which might be of use anyway for elasticity.
ovh metal bills monthly, ec2 spot bills by the second. recently i learned ec2 spot in localzones is 2-3x cheaper than standard spot. i only use r5.xlarge in los angeles these days.
ovh metal has an old fashioned but perfectly workable api. aws has a great api.
spend a few days figuring out a decent sdlc, and then freeze it permanently. it will grow insanely robust over time.
i haven’t published my ovh metal workflows. my aws workflows are here[1].
lower overhead means more interesting products coming to market. lower friction definitely doesn’t hurt. viva le renaissance!
For us, Heroku allows us to focus on the product and simply ship features which brings revenue and keeps everyone happy; it may not be sexy right now but it sure as heck is mature and stable with lots of integrations.
Salesforce might eventually end up completely dismantling it but I'm hoping by that time other players can catch up.
my point was that it’s not really needed.
if you’re migrating off, you already have an sdlc that you like and want to preserve from provider degradation over time.
simplify it a bit, and encode it directly into aws or ovh. then it’s permanent, and grows insanely robust over time.
sdlc doesn’t need to be constantly evolving. evolve it for the next greenfield product.
Maybe you’ve figured it out, but the local dev flow seemed pretty hacky/nonexistent. It also got expensive with real traffic
only thing worse than dysfunctional companies is dysfunctional technology.
someday we’ll get the incentives aligned properly, and thrive.
Of course it helps that we've grown very gradually over many years, so we don't need to scale rapidly; we can just over-provision by a few times to handle the spikes we do get, and work out tuning and upgrades each time we brush up against bottlenecks. So I'm sure it wouldn't work for everyone. But I bet there are still a lot of startups that would do well to just lease a dedicated box or two.
If you’re building on Docker for compute, something S3-compatible for object store, and something that is wire-compatible with Postgres or Redis, then you’ve got clear boundaries and industry-standard tech.
Stuff like that you can move fairly easily. The second you embrace something vendor-specific for core logic you’re locked in, which implies doing a vendor change AND a refactor simultaneously.
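As a minimal illustration of that kind of boundary, a sketch using the v1 Go AWS SDK against a hypothetical S3-compatible endpoint; only the endpoint and credentials change between providers, the calling code doesn't:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// The endpoint is the only provider-specific piece; the values here are placeholders.
	sess, err := session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("https://object-store.example.com"), // swap per provider
		S3ForcePathStyle: aws.Bool(true),                                 // many S3-compatible stores need path-style URLs
	})
	if err != nil {
		log.Fatal(err)
	}

	svc := s3.New(sess)
	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(aws.StringValue(b.Name))
	}
}
```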