No real surprise there. As long as the bill for infrastructure remains sane, nobody is going to put in the effort to change out important parts of their arrangement, sticking instead with a "Well, we know this works, and we know how to deal with it..." approach.
If they'd raised the bill somewhat - 50%, 100% - people probably would have stuck with it. But to jack it up an order of magnitude, well, now it's worth putting the engineers on the project to find a cheaper solution (one that may very well be better - virtio vs. what VMWare is using... I certainly prefer virtio for most of my storage and networking needs).
> The tech team also warned management that the quality of VMware's support services and innovation were falling.
I mean, the writing was on the wall: you don't buy out a product and jack the prices 10x if you plan to actually support it. It's pure "value extraction" at that point. Sad, really, because VMWare has a lot of nice features behind it and has been a well-thought-out bit of virtualization software throughout the years.
I had the displeasure of having to update a VMWare install on a laptop recently (VMWare Player had been perfectly fine, but it was discontinued; Workstation is now free for personal use, but you have to register with your full physical address to download it, and I just want to run the VM I use to talk to my car, please...). I can't say I'll be considering them for anything going forward.
It's worth pointing out that VMware haven't been innovating on their core product for many years. The last major feature they added was vTPM, in 2018.
We're almost in 2025, and they still don't have an identity and metadata service available that can attest a VM's identity to third parties, or any way to securely introduce secrets into VMs. AWS had this in 2012, and it's honestly embarrassing that VMware have done nothing about it.
And support has been atrocious since at least 2015. I remember when I had to debug driver issues myself, because the people at VMware put a network card with a broken driver on their "Hardware Compatibility List", and then played dumb through a year of reports that the bloody driver was broken. The hosts kept crashing, or even more fun, silently stopped processing network traffic. And of course nobody at VMware's or Dell's support had any idea what was happening, even though there were abundant reports all over the internet and various forums about this.
I have read a few comments that he wasn't great at it (no personal knowledge). Combined with the Intel stuff...
Inertia? I was a part-time vSphere admin at an MSP from 2015 to 2022, and the solution was overwhelmingly underwhelming, with lots of bugs, rough edges, and some of the worst APIs I've ever seen. There was almost no innovation in that time period, and they even managed to bungle a REST API (which was billed as some sort of revolution, in 2015/2016).
The last thing I want is for a hypervisor to handle some secrets or manage identities.
This is an OS/app-level job, not an HV one.
It works marvelously in AWS, GCP, Azure. It allows for an extremely secure and low maintenance solution to that existential issue.
It's a perfect thing for the platform to handle for you.
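For anyone who hasn't seen it: on AWS, every instance can fetch a signed identity document from the local metadata service, and a third party can verify the signature against AWS's published certificates. A rough sketch of the IMDSv2 flow in Python (this only works from inside an actual EC2 instance; error handling omitted):

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

# IMDSv2 requires a session token first (a PUT with a TTL header).
token_req = urllib.request.Request(
    f"{IMDS}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def imds_get(path):
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    return urllib.request.urlopen(req).read().decode()

# JSON blob describing this instance: account ID, instance ID, region, AMI...
identity = imds_get("/dynamic/instance-identity/document")
# PKCS7 signature over that document; a relying party checks it against
# AWS's regional signing certificate to attest the VM's identity.
signature = imds_get("/dynamic/instance-identity/pkcs7")
print(identity)
```

Temporary IAM credentials are handed out the same way (under /meta-data/iam/security-credentials/), which is the "securely introduce secrets" half of the complaint above.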
Next time you need to do this, you can also choco install vmwareworkstation :) if you have Chocolatey.
https://arstechnica.com/information-technology/2024/10/broad...
It's now free, period - even for commercial use - and even if you want to pay for support, they won't take your money anymore once existing contracts expire. In other words, Workstation is probably going on life support and won't see any significant development going forward.
https://blogs.vmware.com/cloud-foundation/2024/11/11/vmware-...
Cloud is staggeringly expensive compared to your own physical servers. Has been for all but trivial (almost toy) workloads since day 1. And that's before you pay for bandwidth.
I was spending a decent chunk of change monthly on cloud boxes just for my personal hosting projects, and eventually realized I could get a stonking 1U box, colo it at a local data center, pay for the server out of the savings in a year or two, and have radically more capability in the deal.
If you need a "droplet" type VM, with a gig of RAM, a couple gig of disk, and bandwidth, they're not bad. DigitalOcean works well for that, and is way cheaper on bandwidth than other places (1TB per droplet per month, combined pool). So I'll use that for basic proxy nodes and such.
But if you start wanting lots of RAM (I run, among other things, a Matrix homeserver, and some game servers for friends, so RAM use goes up in a hurry), or disk measured in TB, cloud costs start to go vertical, in a hurry. It's really nice having a box with enough RAM I can just toss RAM at VMs, do offsite backup of "everything," etc.
If you're spending more than a few hundred a month on cloud hosting, it's worth evaluating what a physical box would save you.
//EDIT: By "go vertical," I mean "To get a cloud box with similar specs to what I racked up would be half the cost of the entire 1U, per month."
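The back-of-the-envelope math, for anyone curious. A sketch with made-up numbers - plug in your own quotes:

```python
# All numbers are hypothetical; substitute your actual quotes.
cloud_monthly = 600.0    # comparable cloud VMs + storage + egress, $/month
server_upfront = 4000.0  # one-time cost of the 1U box, $
colo_monthly = 100.0     # rack space, power, and bandwidth at the colo, $/month

months = server_upfront / (cloud_monthly - colo_monthly)
print(f"Server pays for itself after {months:.0f} months")  # -> 8 months
```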
Sure, we could use physical boxes. But those go through procurement. The budget has to be approved. Orders are sent to suppliers. Hardware arrives; if it's a colo that's not so bad, but it will be installed according to the colo's timelines. If it's your own DC you may have staff on hand, or it could very likely be a third party, and now you have to work with them, etc. It can easily take months for any non-trivially-sized company. In the meantime, _we need capacity now_ and customers won't wait. I can provision thousands of machines on demand with a simple pull request and they will be online in minutes, as sketched below. And I can do that without exceeding the pre-approved budget, because those machines may not be needed forever; as soon as they are no longer needed, they are destroyed very quickly.
And then a random machine fails somewhere. Do you have staff to detect and diagnose the problem? I don't care how good your monitoring system is; there are some thorny issues that are difficult to identify and fix without highly specialized staff on board. Staff that you are paying for. Me? I don't care. If a VM somewhere is misbehaving, it is automatically nuked. We don't care why it had issues (unless it's a recurring thing). That happens a few times daily when you have a 5-to-6-digit number of machines, and it's either initiated by us when our systems detect health-check failures, or initiated by AWS (either planned or unplanned maintenance).
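In AWS terms, both halves of that are one API call each. A boto3 sketch - the group name, capacity, and instance ID are placeholders, not anything from a real setup:

```python
import boto3

asg = boto3.client("autoscaling")

# "Thousands of machines in minutes": raise the desired capacity of an
# Auto Scaling group and let AWS launch (and later reap) the fleet.
asg.set_desired_capacity(
    AutoScalingGroupName="worker-fleet",  # hypothetical group name
    DesiredCapacity=2000,
    HonorCooldown=False,
)

# "Misbehaving VM gets nuked": mark it unhealthy and the group replaces it.
asg.set_instance_health(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    HealthStatus="Unhealthy",
    ShouldRespectGracePeriod=False,
)
```

In practice the failure is detected by monitoring and the call is made automatically; the point is that "diagnose the box" collapses into one API call.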
Don't think just about how much an individual machine costs. It's all the supporting personnel that matter, with their (expensive) specialized skills. Managing one machine is doable (I have a few servers at home). Managing 50k? You have a beefy team now, with many specialized skills. You are probably running more exotic hardware.
You also need to measure apples to apples. Your "disk measured in TB" is almost certainly a locally attached disk. In the cloud, that's likely to be network-attached storage. That _is_ more expensive (try buying something similar for your home lab), but it gives a lot of flexibility - flexibility that may not be necessary in a homelab, but is certainly needed in larger environments. That's what allows our VMs to be fungible and easily destroyed or recreated, as they themselves don't store any state. That storage is also more resilient (AWS EBS backs up snapshots on S3, with 11 nines of durability, and can automatically retrieve blocks if they go bad).
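(For what it's worth, that snapshot flow is essentially a one-liner. A boto3 sketch with a placeholder volume ID:)

```python
import boto3

ec2 = boto3.client("ec2")
# EBS snapshots land in S3 behind the scenes, and are incremental
# after the first one.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="nightly backup of data volume",
)
print(snap["SnapshotId"])
```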
That said, even for large enterprises, the AWS egress costs are extortionate (more so if you use their NAT gateways). And for workloads that don't change much, it might be a good idea to have a hybrid model with some physical boxes (but please try not to make them pets!).
"Cloud as a workaround for internal corporate dysfunction" is certainly a novel argument for cloud. I'm aware of the OpEx vs CapEx issues at a lot of companies, I just happen to think it's a really stupid reason to spend a lot more money than you otherwise would for some set of capabilities.
> You also need to measure apples to apples. Your "disk measured in TB" is almost certainly a locally attached disk. In the cloud, that's likely to be network-attached storage.
If I want to stuff 2TB of files into somewhere that's not-local, why does it particularly matter to me what the exact technology used for storing them is?
I mean, obviously "cloud" is quite successful, and comes with the ability to be able to say "Not our problem!" when AWS is down for some reason or another. But none of the problems you talk about are new, and all of them were quite well solved 20 years ago by companies running their own hardware. Been there, admin'd that. A four-machine cluster (two web front ends doing the bulk of the compute, two SQL database servers replicating to each other, and some disk storage regularly synced between the two database servers) could handle a staggering amount of traffic when properly tuned. The same is true today, without any of the problems of rotational disk latency. SQL on NVMe solves an awful lot of problems.
But, again, not my money to spend. I just find it baffling that a lot of people today don't even seem to realize that physical servers are still a thing.
Cloud is really expensive, but so is doing it yourself. Plus there's a plethora of regulations coming this way - NIS2, CRA, and so on. If your software is down, it means a lot of lost revenue or extra cost.
If you just need pure compute or bandwidth, there's no point in going to the cloud.
How much time was wasted on customers' on-premise JIRA not sending emails? It was always "I haven't gotten any email in a long time." Ask them to check it and restart it. Or my recent Windows Server 2012 (no R2) end-of-support and migration. At least the customer does pay for the Extended Security Updates.
And "the cloud" is not a magical wand for reliability, either. How many times a year does one of AWS's regions being down (or something with CloudFlare being down) front page HN, because a substantial swath of the internet has stopped working?
I'm not saying cloud is never the right answer. However, I do think that anymore, it's such the "default option" that very few people even consider the possibility of hosting their own hardware somewhere. I'm pretty sure I was the first random individual to come talk to my colo in years, because it sure looks like they spun up a "shared rack" for me.
AWS overcharges for traffic, especially traffic going out.
However, they have repeatedly released instances that are cheaper than their predecessors (often, higher performing too). I don't think that's out of the goodness of their hearts, as it likely allows them to refresh their fleet more often than it would be the case otherwise.
If you are large they will work with you on pretty sweet discounts.
So far, there's been no indication that they will pull the same move. They might, but that would be surprising. VMWare has never been cheap, though.
Doesn't make any sense to me, but I'm not a corporate raider, either. I'd just be happy to help people port their internal tooling over to Xen or KVM...
They are looking to extract the maximum amount of money possible. I'd argue that Broadcom could extract more money with smaller uplifts but I think they are also looking to consolidate their customer base. Some of these crazy numbers may be doing just that - saying that these people aren't wanted as customers anymore.
Is it really though?
For public companies the primary goal is creating the highest possible stock price, or in rarer cases high dividends. Extracting as much money as possible is a common strategy for achieving that but it's not the only one. And arguably the strategy is used way more than it should be. Boards tend to set up bad incentives for the company leadership.
For private companies the goal is whatever the owner wants. That can be profit, but often it's about legacy. Or something entirely different. As far as we know, SpaceX's purpose is indeed to create a self-sustaining Mars colony.
For example, Meta could say "if you don't pay us 100 dollars per year, we will delete your entire account including all of your memories and photos across instagram, threads, and Facebook" and probably make a giant amount of money in the next quarter as people panic over losing their memories to something they considered reliable. However, it would kill the company's long term growth.
Except that's not what they are doing. By all accounts (and their financial guys may disagree with me and that's ok), what they are actually doing is trying to extract more money than what the market will bear. They may succeed for a while, but it comes at the cost of cannibalizing your own business.
[citation needed]
More subtle consideration: over what period of time? A quarter? Year? Decade? Other?
The leadership at Boeing tried to maximize numbers for a while, and where are they now?
Jack Welch, who is/was all about monetary results:
> Regarding shareholder value, Welch said in a Financial Times interview on the global financial crisis of 2008–2009, "On the face of it, shareholder value is the dumbest idea in the world. Shareholder value is a result, not a strategy...your main constituencies are your employees, your customers and your products."[69]
For those that do not know the reference, "Fork Yeah! The Rise and Development of illumos" by Bryan Cantrill at LISA11:
* https://www.youtube.com/watch?v=-zRN7XLCRhc&t=38m24s
And the lead-up is entertaining too.
Reputation tracking for the mostly anonymous relationships we have with businesses these days can be difficult.
https://duckduckgo.com/?t=ffab&q=rent+uk+news&iar=news&ia=ne...
"My landlord's 34% rent rise felt like an eviction" - BBC on MSN.com|6 days ago
"UK households who rent face £200 being added to payments" - Birmingham Mail on MSN.com|12 days ago
"Rents now 'unaffordable' across most of the UK" - PropertyWire|5 days ago - "Monthly rents have been labelled 'unaffordable' in every region of the UK except for the North East, data from analytics company TwentyCi has revealed. The ONS [Office of National Statistics] defines a rental property as affordable if the median rent is 30% or less of the median income of private renting households."
"UK Cities See Rents Surge More Than 40% in Four Years" - Financial News|7 days ago
"Revealed - where rent has risen over 30% in the past year" - lettingagenttoday.co.uk|7 days ago - "The figures show that, across Britain, the average monthly cost of renting has increased by 8.7% over the last 12 months"
"UK tenants hit by highest inflation in September" - The Financial Times|5 days ago
That said, even AT&T is jumping ship: https://arstechnica.com/information-technology/2024/10/broad...
To think that the migration away from VMWare would cost $40-50m but have a "very quick payback" presumably means that AT&T is a gigantic customer paying a ton of money, so if they're not in the top 10% contracts worth keeping then who is?
https://www.ciodive.com/news/broadcom-att-vmware-settlement-...
Say what you like about Sun or Oracle or Novell, their pricing was much more reasonable than CA’s. Plus we were a public university, and CA didn’t seem to believe in education discounts, whereas Oracle gave us a standard education discount of over 90% off list price.
CA was famously the place where mainframe software went to die. When I worked for Oracle, I had some very limited exposure to CA TopSecret and ACF2, which are mainframe security products (RACF competitors) that CA bought, which an Oracle product I was working on integrated with. No idea what the licensing was but I’m sure it wasn’t cheap.
A lot of companies out there (Canonical's OpenStack, RH's OpenShift) are so swamped with customers wanting to migrate off of VMWare that they can't keep up, meaning a lot of companies are going to have to keep paying VMWare while they wait in the migration queue.
When it was time for our contract to renew a few months back, our licensing and support quote was 3x the previous year and Broadcom would not budge, even a little. They said take it or leave it. Well, we left it.
There is a big internal effort now to get us off of VMware and onto Kubernetes and OpenShift. Our whole fleet of VMware is still running but we're on borrowed time as we're on our own if any major technical issue comes up.
We have always been hybrid cloud and I don't see that that would change in the future. Honestly the future will probably be what was always predicted: have a set of "core origin" servers that are on-prem and then a cloud membrane around that.
On-prem might still mean using a vendor for the actual care and feeding of hardware, there's no money in us running our own datacenters.
Also...it's still Red Hat. They're owned by IBM but they're still allowed to operate independently.
But back to the original point: you shouldn't be paying as much for OpenShift as you were for the equivalent VMWare offering. We used OpenShift at my last job and VMWare at the one before it; OpenShift was cheaper than VMWare was before the Broadcom acquisition.
And sorry bud, but the whole "operating independently" thing...I don't buy it. I've worked for too many companies that were owned by someone else and purported to operate independently. It's just a flat-out lie.
I don't doubt it, given that this has been Broadcom's MO from the beginning. But IBM is not Broadcom, and while they've definitely messed things up, they've recognized the value in letting Red Hat remain independent.
You're leaving out bending the customer over the barrel come renewal time once services are migrated and there's lock-in.
This is easily resolved by negotiating a longer contract, and planning for alternative vendors prior to the expiration of said contract. The amount of the potential increase at renewal is capped at the cost of switching (see, for example... all the VMWare customers switching off VMWare because it's significantly cheaper to take the one-time switching costs than to pay 1000x every year).
This is all part of basic Negotiating 101. It sounds like your company isn't any good at it, and they could save a lot of money by getting better negotiators. (Now you know why Legal gets paid $$$ to play solitaire most of the day.)
Oh I see you tried our enterprise software. Welcome!
VMware is bananas. One console runs thousands of VMs across hundreds of physicals in dozens of data centres. VMs can, and do, move around at will thanks to vMotion. Need to upgrade this physical host? Just vMotion its guests somewhere else.
Oh and it’ll replicate all of this for you, live. So if one half of your data hall goes down, it doesn’t matter.* Your users don’t even notice.
And many, many other features that Virtualbox can’t touch.
(*Well, someone like me is having a bad day, but you know.)
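And it's all scriptable, too. A rough sketch of the "vMotion its guests somewhere else" move using VMware's pyVmomi SDK - the hostnames, credentials, and object names here are made up, and a real script would verify certificates and wait on the task:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in prod
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object by name via a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "guest-01")         # hypothetical VM name
target = find(vim.HostSystem, "esxi-02.example")  # hypothetical target host

# Live-migrate the guest by relocating it to the target host.
task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=target))

Disconnect(si)
```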
For some companies, the migration from Oracle to anything else would cost too much and be hard to justify in some places, especially government. It's the same for VMware: it will take many years for some government agencies to replace it with something else.
I'm guessing Broadcom is going the same way. They can't compete against cloud migrations or open source alternatives.
So as you said, they're squeezing locked-in customers as fast as they can, until they've all migrated... or until some decide that keeping it at that price is still a better ROI than migrating.
I don't think a lot of people will start a new VMware data center in the years to come.
And then at the next renewal, when the price goes up 3x, the cost of a one-time migration will look cheap - to the new executive that's in charge at that time.
This[1] from 2022 says "Broadcom's stated strategy is very simple: focus on 600 customers who will struggle to change suppliers, reap vastly lower sales and marketing costs by focusing on that small pool, and trim R&D by not thinking about the needs of other customers – who can be let go if necessary without much harm to the bottom line. The Register offers that summary based on Broadcom's own words, as uttered at a November 2021 Investor Day."
and "Krause said Broadcom is content to have those 100,000 customers "trail" over time."
[1] https://www.theregister.com/2022/05/30/broadcom_strategy_vmw...
I suspect Broadcom's plan is to figure out just how much they can bear and cut a deal along those lines.
That would totally work -- we're neck deep in vmware, and there's no easy way out -- except that there's a kind of network effect at play.
Support is huge for us. In the beginning, we developed expertise in vmware (particularly in regard to our products), and guided our customers to it as well. Customers were happy to follow our recommended/supported option, and we were happy supporting it. (The support ain't cheap!)
But our customers, by-and-large, are smaller and somewhat more nimble, meaning many will be unable or unwilling to do a deal with broadcom, and will insist on switching away. I suspect new customers will lead the way... there's no way we're going to walk away from a deal because the customer refuses to pay for vmware, and we probably can't afford to eat that cost for them. So we're going to start gaining expertise and supporting other vm platforms. I think we'll follow our original pattern: settle on one, gain expertise, start using it in support, then test and dev, and steer our customers to it. At some point we'll realize we know how to get rid of the rest of our internal use of vmware, AND realize we can sell remaining customers support to switch away from vmware themselves. Once that happens, I think it becomes a ball rolling down hill, and things will accelerate quickly.
There are a couple reasons this isn't like Oracle, BTW. For one, with Oracle we're 10,000 ft below the surface with Oracle-provided air tanks. Second (I guess this is the main thing), Oracle is our problem, not our customers'.
A bit of nostalgia: my first VMware product was VMware Express for Linux. It was a stripped down version of Workstation (probably 2.0?) that could only run Windows 95/98:
https://web.archive.org/web/20010124081300/http://www.vmware...
Does anyone remember Win4Lin 9x (based on SCO Merge)?
Then VMware dies because it cannot decrease prices anymore due to lack of volume.
It's just further re-emphasizing that it doesn't matter how good your vendor is now, they will probably eventually get acquired by a company attempting to squeeze every last penny out of you. And their cost-benefit calculation is no longer based on how much unique value the software is providing, but it's instead about how much of a PITA it is to migrate off said software.
Where we are now is on the cusp of starting another of these lock-in rejection cycles, but where the lock-in isn't at the OS or devtools layers, but at the data/analytics layer and with IaaS or PaaS having become unreasonably expensive alternatives to on-prem data centers [for many enterprises].
It'll be interesting to see how things evolve for Snowflake & Databricks (as well as Pega, C3AI, and similar), and whether CIOs start placing bets on their own team creating their own solutions using FOSS tooling on-prem, leaving public cloud as the domain of things like ERP & whatever SaaS business software they license.
VMware's biggest problems remain its lack of product cohesion and its complexity relative to its cost, and while their recent conference suggested they're working to address these concerns, I worry it's about half a decade too late to be of value. Aria should have been a single product suite and appliance when they pivoted to public cloud, and vRealize Automation shouldn't have been so overly complex as to require professional services for deployment and maintenance. NSX seems powerful on the surface, but really only seems to thrive when the network team either relies upon it directly, or gives you a widely trunked subnet to work with and carve out. vSphere and ESXi are excellent stalwarts, but they're not as usable/automation-friendly as they should be in the modern era of IaC. Pre-Broadcom, VMware's strength was its ubiquity and core feature set for VMs relative to its pricing; post-Broadcom, not so much.
As for the alternatives, I'm not really sold on any of them from commercial vendors.
* Nutanix has never turned a yearly profit, doesn't support commodity hardware very well (especially if you want the full feature set), and is already expensive on its surface. The pivot towards cloud-hosted services in lieu of on-prem services suggests it's trying to emulate larger and more successful competitors, rather than focusing on its core business strategy (HCI). It's a shame, because if they went the VMware route (software only on commodity hardware), I think they'd be more successful.
* Virtuozzo was the best commercial competitor to VMware on paper, but I never got to do a proper PoC. They're more focused on consumption-based services and XaaS, which is a plus, but I'm cautious endorsing them further as I have no direct experience with them - though I did suggest a PoC at the time, to explore their IaaS and PaaS suites.
* OpenShift, while a very neat tool (VMs managed the same way as Kubernetes, huzzah!), doesn't really fit modern Enterprise needs either. I applaud any tooling that tries to make infrastructure scalable like containers are, but Enterprise workloads remain "VM first" for the most part, which OpenShift isn't really aiming for. If you're already working mostly with containers, I'd strongly suggest evaluating OpenShift, but most companies aren't, and Red Hat/IBM is very focused on selling VM-heavy customers on this container transition despite most of our workloads not supporting containers at all, and those that are available as containers often have strong warnings against orchestration support with K8s/OpenShift.
* Microsoft's own stack is kind of a known quantity that I'd really only recommend if you're already a huge Microsoft customer, or rely heavily on MSPs/outsourced labor to support it. If you're not already on Microsoft, there's no reason to switch to it.
* Proxmox was considered, but didn't make it into my final proposal due to prior evaluations not finding it Enterprise-suitable.
* Red Hat Cockpit + KVM is surprisingly powerful! It'd be my default recommendation for SMBs, as RHEL's pricing is fairly comparable to what vSphere licenses used to be, and once your engineers are onboarded into the world of Linux, they can convince you to migrate distros to something cheaper without significant disruption.
When looking through free/Open Source projects, I initially narrowed it down to two:
* Apache CloudStack was my 1st choice in my research, and the PoC was great. The initial buildout can be annoying, as it's typical Apache fare (in my experience, anyway): multi-page manuals for a "quick start". But once the initial manager is up and running, the rest moves pretty quickly.
* OpenStack was considered, but its complexity and composition of dozens of individual projects made it untenable to support in our environment.
* OpenNebula was not considered at the time, but I've been looking into it on my downtime and considering my own PoC, since I dislike Proxmox for my homelab.
My instincts tell me that we're a few years out from a "great reshoring" of workloads into more appropriate placements: companies reluctant to move to public cloud will do so for customer-billable workloads because they can have an easier time judging/fixing margins on it and scaling to demand, while Enterprise and stable workloads will likely move back on-prem where sovereignty and security are paramount and cost savings can be achieved through longer hardware lifespans. In that context, nobody does a particularly good job of creating a "Universal Cloud" abstraction layer to free engineers from juggling dozens of APIs, CLIs, codebases, and pipelines for each specific vendor or workload; not even Kubernetes does this well, and it's arguably the best presently-available technology suited to cross-cloud management and orchestration. Broadcom jumped the gun on price increases, because given another year or two of product improvement and shifting landscapes of both technology and geopolitics, they'd essentially have a captive audience to squeeze for higher margins; instead, we have ample time to consider alternatives that are quite comparable to VMware in capabilities, and often substantially cheaper to boot.
What makes something Enterprise-suitable, or not?
In the case of Proxmox, I believe at the time it was its insistence on operating exclusively in a privileged space (root, by default), which was incompatible with corporate security standards and single sign-on. In other words, we couldn't effectively secure its default accounts and operation to our satisfaction. Take into account the dated and overly technical UX (including its defaulting to legacy-compatible options), the weird cluster configs if you want centralized control, and its... unique tagging and ID systems, and ultimately it's a heavier lift to implement than other alternatives, with no real advantage of its own.
I don’t doubt there’s some Wizard out there gladly running it across continents and supporting tens of thousands of VMs and containers, but for a transition project from VMware it just wasn’t remotely competitive in my research.
Great for SMBs and homelabs though, if you don’t want to learn KVM directly or fire up Cockpit.
I've not used ProxMox so I don't know how well it does on that front, but I can believe it. My recent experience is "if you don't check these security boxes we will drop your product".
That said, Dom0s have to run at a high privilege level. Maybe you are saying Proxmox doesn't offer logins with different levels of access.
Even back in 2015, Kubernetes was mature enough to run production workloads. Any issues that did come up were with our own application code, never the orchestrator.
I never found myself thinking that it would be nice to have a closed-source, expensive option to reach for.
What area does VMWare excel so much to justify this pricing power?
VMWare absolutely owns this market in the enterprise. It's been reliable, it's well-supported, there's a huge ecosystem of vendors and integration partners around it, and it's been the no-brainer choice for virtualization for CIOs since about 2010 (or earlier!).
If you are deploying enterprise apps from the 1990-2000s you use vSphere, if you are building your own SaaS product then you use K8s.
My experience is only with VMWare's desktop-virtualization tools, but hands-down they have the best integration features and services, especially for... uh... "retro" small-business computing needs (in my case, it was the only way I could get a VM running Windows Server 2003 to work, which I needed in order to run a Progress-based CRM).
I find it odd that Microsoft's own virtualization/Hyper-V stuff is useless if you're wanting to run older versions of Windows, especially XP/2000/2003 (as Hyper-V was a post-Vista/WS2008 thing); it's not just the lack of drivers, but the lack of absolutely-essential integration features like USB port forwarding and "real" GPU emulation (because Hyper-V's "Enhanced Session mode" doesn't actually show you the local-console desktop: it's all just using a special mode of RDP).
With that said though, I think a lot of customers using VMware don't really need all the features of VMware. They could get away with something simpler and cheaper.
They are the second shittiest company I’ve ever dealt with — only recently outdone by Intel/Altera.
Broadcom bought VMWare knowing they'd trap most customers in existing contracts as whales, and that most would bear the cost increase regardless.
Few casualties, so what - look at them profits.