Broadcom jacked up VMware prices. CIOs panicked. Infrastructure teams scrambled.
What looked like a budget-friendly decision five years ago turned into a replatforming nightmare.
Suddenly, companies were bound to vendor contracts they couldn’t exit, locked into infrastructure that wasn’t built to move.
This isn’t a one-off.
Cloud pricing changes, licensing terms shift, and risk profiles evolve every day. When they do, teams are forced to react with infrastructure that was never designed to shift under pressure.
You don’t get cost control by picking the cheapest provider. You get cost control by designing infrastructure that can move when it’s necessary.
That’s what Red Hat OpenShift enables. And it’s how teams are starting to rethink cost optimization not as a spreadsheet exercise, but as a strategy.
Most teams don’t get locked into a cloud provider because of price. They get locked in because their systems aren’t built to move.
It’s easy to forget how different cloud environments really are. On paper, AWS, Azure, and GCP all offer equivalents: Lambda, Azure Functions, Cloud Run. Template-driven automation like CloudFormation and ARM. Storage tiers and monitoring tools. But in practice, the naming conventions, tooling, and underlying infrastructure all behave differently.
The moment you start building inside any one of these environments, you’re not just adopting the tools. You’re marrying the methodology.
That’s where the cost traps start.
We’ve seen this firsthand: several of our clients and prospects want to move off VMware after the Broadcom acquisition.
The problem is that by the time some of these companies decide to move, they’re out of time. They have two months before their contract renews, and the shift would take months of replatforming and phased workload migration. So they stay put and pay the price.
It’s not just about licensing either. One small company saw a 30x increase in AWS usage when a junior employee pushed something live with AWS keys attached to it. They were left with a $300,000 bill. AWS gave them a $70,000 credit. They launched a GoFundMe to stay afloat.
Even with dashboards and alarms in place, most teams aren’t architected to identify cost increases and respond quickly.
They think cost optimization is about line-item control: right-sizing compute, shutting down idle resources. But when a contract changes, or a breach hits, or pricing spikes overnight, those savings disappear fast.
And unless your infrastructure can shift with it, you’re stuck.
Cloud cost calculators give you a snapshot. But infrastructure is a moving target. So are pricing, performance, and risk. If your architecture is built on fixed assumptions, you're at a disadvantage.
Workloads that seem optimized fall apart fast when pricing shifts or licensing terms change.
Developers understand their code, but they don’t usually know what it consumes. As a result, infrastructure often gets over- or under-provisioned.
And without visibility into usage, no one notices until the bill arrives. Tools like Densify regularly flag EC2 instances that are oversized by 30%, but only after the money has been spent.
You need to monitor the total cost and usage of your entire environment.
Most companies hyper-focus on using the cloud for storage as a cost-saving technique because it is cheaper than on-prem. But the real cost driver is compute. Running a VM in AWS or Azure is far more expensive than running it on premises.
Cost optimization means knowing which components to run in the cloud and which to run elsewhere. It also means having well-architected accounts, which can become exponentially more complex as the organization grows.
That complexity can make cost control nearly impossible for large organizations, no matter what tools they use.
The smart play isn’t picking between AWS, Azure, or on-prem. It’s being able to move between all of them at any time.
Every cloud environment is different (not just in the services it offers, but in how those services behave). The naming conventions are different. The tooling is different. Even the way security policies or firewall rules are managed can be totally divergent. So when teams try to migrate between environments, they hit a wall fast.
For example, Terraform doesn’t just magically move workloads between clouds. You have to write separate modules and manage separate providers, and even then, resources rarely map 1:1 across providers.
That’s the trap. You start using the native services in a specific cloud because it’s convenient. But convenience becomes dependence. You’re not just building on AWS. You’re building for AWS. Same with Azure. Same with any environment.
Over time, that adds up to hardwired infrastructure. You get spaghetti-junctioned into the environment.
Moving means rewriting automation, migrating data, and untangling services. It’s not impossible, but it’s expensive and complicated. And in most cases, if the migration wasn’t planned from day one, it doesn’t happen. The effort doesn’t justify the cost.
So the team stays locked in.
But with true infrastructure agility, that all changes. And you build this with Red Hat OpenShift.
OpenShift is the same everywhere. It is effectively a “cloud in a box”. It gives you networking, software-defined storage, service discovery, and functions-as-a-service—all inside a Kubernetes-native environment. Automation that works in one OpenShift cluster works in another.
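To make that portability concrete, here’s a minimal sketch using the official Kubernetes Python client: the same Deployment is applied, unchanged, to OpenShift clusters running on two different providers, and the only thing that varies is the kubeconfig context. The context names, image, and namespace are hypothetical placeholders.

```python
# Minimal sketch: one Deployment definition, applied unchanged to OpenShift
# clusters on different providers. Only the kubeconfig context changes.
# Context names, image, and namespace are hypothetical placeholders.
from kubernetes import client, config

CONTEXTS = ["openshift-on-aws", "openshift-on-azure"]  # hypothetical contexts

def build_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="billing-api",  # hypothetical workload
        image="registry.example.com/billing-api:1.4.2",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "256Mi"},
        ),
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="billing-api"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "billing-api"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "billing-api"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

for ctx in CONTEXTS:
    # Point the client at a different cluster; the automation itself is identical.
    config.load_kube_config(context=ctx)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="cost-demo", body=build_deployment()
    )
    print(f"Deployed billing-api via context {ctx}")
```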
So instead of asking, “Which provider should we choose?” teams start asking, “Where should we run this right now?”
Remember that portability isn’t something you bolt on later. You have to design for it from day one or you won’t move at all.
You need the right tool stack to build an agile infrastructure:
Dynatrace is best known for application performance monitoring, but it goes well beyond that. You can build custom widgets to track cost projections or trigger alerts when usage spikes. It detects anomalies, CVEs, SQL injection attempts, and code-level vulnerabilities, and it can trigger automation when thresholds are breached.
Ansible is the Swiss Army knife of automation. If it has an API, Ansible can talk to it. You can respond to a Dynatrace alert by submitting a job that resizes JVM parameters, changes load balancer configurations, or tweaks operating system settings.
Terraform handles the infrastructure changes driven by that logic from Ansible or GitOps: resizing VMs, scaling clusters, spinning up or tearing down resources across clouds.
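On the detection side, a usage or cost metric can be polled from the Dynatrace Metrics v2 API and checked against a threshold before anything downstream fires. Here’s a rough sketch; the environment URL, API token, metric selector, and threshold are placeholders, not recommendations.

```python
# Sketch: poll a Dynatrace metric and flag a spike that should kick off the
# automation downstream. The environment URL, API token, metric selector, and
# threshold are placeholders; adapt them to your own tenant.
import os
import requests

DT_ENV = os.environ["DT_ENV_URL"]           # e.g. https://abc12345.live.dynatrace.com
DT_TOKEN = os.environ["DT_API_TOKEN"]       # token with the metrics.read scope
METRIC_SELECTOR = "builtin:host.cpu.usage"  # illustrative; use the cost/usage metric you track
THRESHOLD = 80.0                            # illustrative threshold

resp = requests.get(
    f"{DT_ENV}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"metricSelector": METRIC_SELECTOR, "from": "now-1h", "resolution": "5m"},
    timeout=30,
)
resp.raise_for_status()

for result in resp.json().get("result", []):
    for series in result.get("data", []):
        values = [v for v in series.get("values", []) if v is not None]
        if values and max(values) > THRESHOLD:
            # In a real setup this would notify or trigger the remediation pipeline.
            print(f"Spike on {series.get('dimensions')}: max={max(values):.1f}")
```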
These tools don’t work in isolation. They’re part of a loop.
Suppose EC2 pricing spikes. Dynatrace flags the breached cost threshold. That triggers a GitOps pipeline to resize or move workloads. Ansible makes the config changes. Terraform adjusts the infrastructure. Then Dynatrace runs synthetic checks to validate the fix. If everything looks good, the alert clears.
Together, these tools create a feedback loop that lets teams react automatically to cost, performance, or security changes.
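To make the remediation half of that loop concrete, here’s a hedged sketch of a handler that could sit behind a problem-notification webhook. The alert payload shape, playbook path, Terraform directory, and health-check URL are assumptions for illustration; in practice this logic would usually live in a GitOps pipeline rather than a standalone script.

```python
# Sketch of the remediation side of the loop: a cost alert arrives (for
# example from a monitoring webhook), Ansible applies the config change,
# Terraform adjusts the infrastructure, and a synthetic check validates the
# result. Playbook, directory, and URL names are hypothetical.
import subprocess
import requests

def remediate(alert: dict) -> bool:
    if alert.get("severity") != "COST_SPIKE":  # assumed custom alert field
        return False

    # 1. Configuration change (e.g. resize JVM heap, adjust load balancer weights).
    subprocess.run(
        ["ansible-playbook", "playbooks/resize-workload.yml",
         "-e", f"target={alert['workload']}"],
        check=True,
    )

    # 2. Infrastructure change (e.g. scale the cluster in the cheaper location).
    subprocess.run(
        ["terraform", "-chdir=infra/openshift", "apply", "-auto-approve"],
        check=True,
    )

    # 3. Simple synthetic check; Dynatrace synthetic monitors would normally do this.
    health = requests.get("https://billing-api.example.com/healthz", timeout=10)
    return health.status_code == 200

# Example payload a monitoring webhook might deliver (shape is assumed).
print(remediate({"severity": "COST_SPIKE", "workload": "billing-api"}))
```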
This isn’t theoretical.
The tech is all there. What’s missing is the executive buy-in and the technical expertise to architect infrastructure with this kind of flexibility.
When you're agile, you get leverage.
You stop asking, “What’s the cheapest provider today?” and start asking, “Where should we run this right now?”
Most teams don’t plan for infrastructure agility. They start from the wrong place. They look at AWS, Azure, or on-prem and assume each environment needs its own design. They build for the environment they’re in, not the outcome they want.
But the real advantage comes from starting with something consistent, like OpenShift. If you build inside OpenShift, it’s the same everywhere. The automation, workloads, and services don’t change. Only the location. You’re not rewriting Terraform. You’re not rebuilding infrastructure. You’re just making a decision: stay or move.
The biggest myth in cloud cost strategy is that you need to marry yourself to a cloud provider to be efficient.
It’s somewhat true. If you’re using Lambda functions in AWS, that’s cheaper than running a VM 24/7.
But that only holds if you’re optimizing for one-off tasks. If everything lives inside OpenShift, you’re sharing capacity across jobs, services, and applications. You get predictability, portability, and you get a system that scales.
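A back-of-the-envelope comparison shows the shape of that trade-off. The prices below are illustrative placeholders rather than current list prices; the point is where the crossover sits, not the exact dollar amounts.

```python
# Back-of-the-envelope comparison: pay-per-invocation functions vs. an
# always-on instance. All prices are illustrative placeholders; check the
# current price list for your region before drawing real conclusions.
GB_SECOND_PRICE = 0.0000166667   # $/GB-second (illustrative)
REQUEST_PRICE   = 0.20 / 1e6     # $/request (illustrative)
VM_HOURLY_PRICE = 0.0208         # $/hour for a small always-on instance (illustrative)

def functions_monthly_cost(invocations: int, duration_s: float, memory_gb: float) -> float:
    compute = invocations * duration_s * memory_gb * GB_SECOND_PRICE
    requests_ = invocations * REQUEST_PRICE
    return compute + requests_

def vm_monthly_cost(hours: float = 730) -> float:
    return hours * VM_HOURLY_PRICE

# Bursty, one-off workload: 100k invocations/month, 200 ms each, 512 MB.
print(f"bursty functions:   ${functions_monthly_cost(100_000, 0.2, 0.5):.2f}")
# Steady workload: 30M invocations/month, 200 ms each, 512 MB.
print(f"steady functions:   ${functions_monthly_cost(30_000_000, 0.2, 0.5):.2f}")
print(f"always-on instance: ${vm_monthly_cost():.2f}")
```

Run it and the bursty workload costs pennies as functions, while the steady one costs several times more than shared, always-on capacity, which is exactly why the serverless advantage only holds for one-off tasks.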
Teams get stuck because they build around what’s directly in front of them.
That might mean using every native AWS service. It might mean picking a monitoring tool and going all-in on its proprietary setup. But once you start doing that—once you marry yourself to the implementation instead of the standard—you lose the ability to move.
I’ve seen it happen at every level. Infrastructure, security, application code. It doesn’t matter. Someone chooses tooling that seems faster or more performant at the time. But when it’s time to switch platforms or scale across environments, they realize they’re hardwired into something that doesn’t translate. And now they’re rewriting everything just to get unstuck.
That’s why architectural freedom has to be intentional.
With OpenShift, you’re building on Kubernetes. That’s the standard. Instead of relying on provider-specific services (like AWS-native load balancers), you use OpenShift-native constructs that behave the same no matter where you’re deployed.
The automation you write for one cluster works in the next. Backup and recovery works the same. Storage, networking, services: it’s all predictable.
Is there still a transition cost? Sure. You still need to containerize. You still need to deploy OpenShift in each environment.
Once you’ve made that investment, you’re free to scale and shift based on business needs, not on what’s cheaper on paper.
---
Derrick Sutherland - Chief Architect at Shadow-Soft
Derrick is a T-shaped technologist who can think broadly and deeply simultaneously. He holds a master's degree in cyber security, develops applications in multiple languages, and is a Kubernetes expert.