In 2013, I was a junior product manager at IBM sitting in a conference room watching Ben Golub pitch Docker to a table of senior decision-makers. The energy in that room was unlike anything I'd experienced in enterprise tech. Docker had a razor-sharp answer to a problem we'd been wrestling with for years: "works on my machine" was the bane of every engineering organization. Docker killed it with an elegant, developer-friendly solution. Everyone at that table could see it. The demo was clean. The value was obvious. The adoption curve was going to be vertical.
And they were right about all of that. Docker exploded. Within two years, it had been downloaded millions of times, raised over $180 million, and was valued at a billion dollars. "Dockerize" became a verb. Every major tech company (Google, Microsoft, Amazon, and IBM) was building Docker support into its platforms. It was, by almost any measure, the sharpest product entry into the infrastructure market since I had started in the space a decade earlier.
So why is Docker today a developer productivity tool worth only a fraction of the value the container revolution it created went on to generate? Why did Kubernetes, which launched a year later and was significantly harder to use, end up capturing the orchestration market that Docker should have owned?
The answer is something I've been thinking about for years, and it's something every infrastructure founder needs to understand: Docker built a spike. It didn't build a platform.
If you've raised capital, pitched at a demo day, or read any startup playbook written in the last fifteen years, you've heard this advice: "Build a wedge." Find a narrow, acute pain point. Solve it brilliantly. Use that initial traction to expand into adjacent markets.
It's excellent advice. I've watched it work firsthand at multiple companies across my career. The problem isn't the advice - it's how founders execute on it.
Too many founders hear "build a wedge" and build what I'd call a spike instead. A spike is sharp. It penetrates. It gets attention, traction, and investor interest. But a spike has no wood behind the tip. There's no shaft, no mass, no structure to drive it deeper or expand the opening it created.
A true wedge is different. A wedge has a sharp leading edge (just as sharp as a spike) but it has a body behind it. As the tip penetrates, the body follows, widening the opening and creating space for expansion. The tip gets you in but the body behind it is what lets you stay and grow.
The difference between a spike and a wedge isn't visible at the point of entry. In years one through three, they look identical. The spike company and the wedge company both have sharp products, rapid adoption, and excited investors. The difference only becomes visible when you try to expand, when you need to add capabilities, enter adjacent markets, or scale beyond your initial use case. That's when the spike company discovers it has to rebuild its foundation, and the wedge company discovers it can simply extend what already exists.
Docker's container runtime was a genuine breakthrough. The Dockerfile abstraction was elegant. The developer experience was years ahead of anything else in the market. When Ben Golub walked into that room at IBM, he had a product that solved a real, visceral problem in a way that made everyone immediately want to adopt it.
But running containers in production requires a lot more than a runtime. You need orchestration: how do you manage hundreds of containers across a cluster? You need networking: how do containers discover and talk to each other? You need persistent storage, security isolation, monitoring, self-healing. The container runtime was the spike. All of those additional capabilities are the wood behind it.
Docker tried to expand into all of these areas. Docker Swarm was their orchestration answer. Docker Enterprise was their enterprise play. They acquired companies like Tutum (container management) and Unikernel Systems (lightweight virtualization). But every expansion felt bolted on rather than built in. Swarm was an extension of the Docker CLI, not an orchestration platform designed from first principles. Enterprise features were layered on top of a runtime architecture that wasn't designed to support them.
Meanwhile, Google had been running containers internally at massive scale for over a decade through a system called Borg. When Google engineers Joe Beda and Craig McLuckie (later fellow VMware acquirees) open-sourced Kubernetes in 2014, it wasn't a container runtime trying to be a platform; it was a platform designed for container orchestration from the ground up. Kubernetes was harder to set up. The learning curve was steeper. In 2014 and 2015, Docker looked like it was winning by every metric that mattered.
By 2017, the orchestration war was over. Kubernetes won so decisively that Docker itself integrated Kubernetes support, effectively conceding that Swarm couldn't compete. By 2019, Docker had sold its entire enterprise business to Mirantis, gone through significant layoffs, and retreated to developer tooling. The company that created the container revolution ended up capturing a small fraction of the value it generated.
Docker didn't fail because the spike wasn't sharp enough. The spike was brilliant. Docker failed to capture the full market opportunity because the architecture behind the spike wasn't designed to expand. Every new capability required building something new rather than extending something that already existed. Docker won the initial battle but lost the war.
Now contrast that with Cloudflare.
Cloudflare launched in 2010 with a wedge that was just as sharp as Docker's: make any website faster and more secure by putting it behind Cloudflare's network. CDN and DDoS protection, capabilities that previously required expensive Akamai contracts, available to anyone for free. Change your DNS nameservers, and you're protected. Simple, compelling, immediate value.
The early traction was massive. By 2014, Cloudflare was handling roughly 5% of all internet requests. From the outside, it looked like a CDN company with a great free tier.
But here's what was happening underneath: Matthew Prince and the Cloudflare team weren't building CDN infrastructure. They were building a global, programmable edge computing platform that happened to start with CDN. Every data center wasn't just a caching node; it was a full compute-capable edge server running a unified software stack. The Anycast architecture that routed traffic to the nearest node wasn't a CDN optimization; it was a platform capability that every future product would inherit.
This is the difference between a spike and a wedge. Cloudflare's CDN competitors, like Limelight Networks and MaxCDN, saw themselves building content delivery infrastructure. They optimized their architectures for caching and serving static content. They built spikes, and very good ones.
Cloudflare saw itself building a platform, and invested accordingly, even when that investment didn't show in the product surface during the early years.
Then the compounding started.
In 2017, Cloudflare launched Workers, serverless compute at the edge. This was only possible because the edge nodes were already programmable compute platforms. In 2018, they added Workers KV for distributed storage. Then Cloudflare Access for Zero Trust security. Then R2 for object storage with zero egress fees. Then D1 for distributed databases. Then AI inference at the edge. Then email routing, image optimization, video streaming, developer deployment tools.
Each new product was cheaper and faster to build than the last because it leveraged the platform that already existed. And each new product made every other product more valuable. A customer using Workers and R2 together gets more value than either product alone.
Cloudflare IPO'd in 2019 at roughly $4.4 billion. Their market cap has since grown to $30 billion or more, with annual revenue exceeding $1.5 billion. Net revenue retention stays above 115% because customers expand their platform usage over time.
Meanwhile, Limelight Networks, which had a sharp CDN spike and at one point a market cap above $5 billion, merged with Edgecast, the CDN business of Yahoo (formerly Verizon Media), rebranded as Edgio, and filed for Chapter 11 bankruptcy in 2024.
Same starting wedge. Fundamentally different architecture behind it. Dramatically different outcomes.
I don't want to be unfair to founders who build spikes. In many cases, it's the rational short-term choice.
Building a spike is fast. You can get to market in months. It reduces near-term risk because you're only solving one problem with one architecture. It's what investors often reward: rapid traction, clear metrics, a focused narrative. And in the infrastructure space specifically, you can get to that first spike even faster by running on someone else's platform. Stand up some containers on AWS, build a beautiful developer experience on top, and you're live.
I've seen this pattern play out repeatedly across my career, watching startup competitors while I was at IBM, at SaltStack, at VMware, and at Broadcom. The competitor with the sharp spike gets the early press coverage, the fast traction, the excited investor conversations. In years one through three, the spike company looks like it's winning.
The problem is what happens in years three through seven. The spike company wants to add a second product, enter an adjacent market, or serve larger customers and discovers that every expansion requires significant architectural work. The features they need to add don't compose with what already exists. The infrastructure choices they made for speed now constrain their options. The technical debt they deferred is now blocking the roadmap.
This is how you end up with companies that achieved genuine product-market fit but can't monetize at the scale they need as they grow. They got the penetration they wanted, but there's not enough wood behind the tip to widen the opening. They're stuck not because the market opportunity isn't there, but because their architecture can't reach it fast enough.
I've watched companies spend 12 to 18 months rebuilding their foundation after initial traction. That's 12 to 18 months where competitors are catching up, customers are getting impatient, and investors are asking why growth has stalled. The short-term speed advantage of the spike becomes a long-term velocity penalty.
This brings me to ContextOS, and to a decision my co-founder Tom Hatch and I made early on that has defined our approach.
Tom created SaltStack, a configuration management platform that was adopted by over 20% of Fortune 500 companies before VMware acquired it. I spent 22 years in enterprise infrastructure, from IBM to SaltStack to VMware to Broadcom. Between us, we've built and shipped infrastructure platforms at every scale, and we've seen from the inside what happens when the architecture can't support the ambition.
We could have built a spike. We could have stood up a developer experience platform on AWS, nailed the "push code, get infrastructure" use case, and been live in months. The early traction would have looked great. The pitch deck would have been clean.
Instead, we built a platform.
ContextOS has a sharp leading edge: our initial compute offering gives developers the simplicity of pushing code and getting production-ready infrastructure with zero-trust security, auto-scaling, and transparent pricing by default. The wedge is every bit as sharp as a spike would be.
But behind that leading edge is a platform architecture: the CTX/ICC construct, the Virtual File System, the Zero Trust Bridge, and the Distributed State Machine, all designed from first principles as a unified system where every component makes every other component more capable.
This means that when we add GPU support, it's not a separate product bolted onto the compute platform. It's just a new resource type on the existing platform that inherits all the security, orchestration, and state management that already exist. When we add multi-datacenter capability, it isn't a re-architecture - the distributed state machine was designed for it. When we add new services, they're just software drivers that compose with everything already built.
We took more time to build the foundation. That was a deliberate choice, and it was not the easy choice. But it means our expansion velocity will accelerate with each new capability rather than slowing down. Each new feature adds wood behind the wedge rather than requiring us to forge a new spike.
If you're building an infrastructure company (or investing in one), here's the question I'd ask: when this company wants to add its second major capability, does the existing architecture accelerate that expansion or constrain it?
If the answer is "we'll need to do significant re-architecture," you're looking at a spike. It might be a brilliant spike. The traction might be real. But the expansion will be slower and more expensive than it looks from the outside.
If the answer is "the new capability composes with and extends what we've already built," you're looking at a wedge with a platform behind it. The early traction might look similar, or even a little slower, but the long-term trajectory will compound.
Docker and Cloudflare both entered their markets with sharp, compelling initial products. The difference was what was behind the tip of the spear. That difference took years to become visible but when it did, it determined everything.
We're building ContextOS to be the wedge, not the spike. The leading edge is sharp. The platform behind it is what will let us drive it home.
Alex Peay is Co-Founder and COO of ContextOS, an infrastructure automation platform that makes distributed computing as simple as running code on a single machine. Previously, he led product at IBM Tivoli, Domo, SaltStack, VMware, and Broadcom, and served 22 years in the U.S. Army and Space Command.