Build versus buy has been a topic of conversation across the software ecosystem these last few months. Something that was once in limited supply (software) is now abundantly available. Once rate-limited by the number of engineers, software is now being produced by AI coding agents at a staggering rate, as evidenced by the revenue curves of the companies playing in this space (Claude Code, Cursor, Lovable, etc.). Vibe coding was the first instantiation of this trend: AI made anyone a developer. We now have software factories.
What does all of this mean for enterprise software? The public markets have metabolized it as a net negative. For decades, much of the value in software lay in its scarcity. Teams bought software because they didn’t want to spend their finite engineering resources building something themselves. Even the ascent of cloud-hosted business models for open source companies (think MongoDB with Atlas, Databricks, Elastic, etc.) worked precisely because enterprises didn’t want to spend finite engineering time self-hosting an open source project. Trading money for time has always been at the heart of software purchases; what’s happening now is a radical compression of the time it takes to generate software.
That said, the build versus buy discussion is a lot more nuanced than the simple assumption that people will vibe-code their CRM. Here’s our take:
Lessons from Architecture
I think it’s very easy to apply one-size-fits-all, abstract thinking to the future of software: to extrapolate from early data points on the efficacy of these coding models and generalize from there. While these models are incredibly powerful and will get better over time, that doesn’t mean the value of all software will be commoditized to zero. Yes, certain software contract renewal conversations will have the undertone of “I could vibe code this in a week,” but the exception has always been the rule in the startup business. Not all software is created equal, and the software products built with a certain element of taste, coherence, and self-improving refinement systems will be all the more important.
It’s similar to architecture. A key input into buildings, concrete, is a commodity. And there are a number of commodity buildings. Some are functional and utilitarian: they do their job housing people and storing inventory, and they collect rent, but they don’t necessarily glimmer with greatness. And then there’s the architecture that people travel to see: 432 Park Ave, the Chrysler Building, One World Trade. What makes these buildings special isn’t an inflated budget; there are a number of expensive buildings that are forgettable. The secret sauce is that the exceptional ones solve a hard constraint in a way that couldn’t be reverse-engineered by throwing more steel at the problem.
Take 432 Park, for example. At 1,396 feet, it was the tallest residential building in the Western Hemisphere when completed, balanced on a footprint smaller than a tennis court. The tower is roughly twice the height of neighboring buildings with similar exposure to high winds, and it has a slenderness ratio (the relationship between a building’s width and height) of 1:15. Making a building like this durable in the “production” of real life is incredibly difficult and nontrivial. But the engineering team behind 432 Park pulled it off with a number of interesting techniques: using two structural skeletons instead of one, placing giant tuned-mass-damper pendulums at the top of the building to cancel out the sway that happens in high winds, and even using engineered concrete with three times the strength of normal concrete. None of these techniques is unique (e.g., Taipei 101 also uses a pendulum damper), but the combination of them is what lets 432 Park deviate from the normal building. The edge is in the transformation of those inputs, flowing out of a proprietary point of view, not in the basic inputs themselves.
What if code is the concrete of this new era? Normal code may be a commodity, just as normal concrete is, but that doesn’t mean basic code can build the 432 Park Aves of software products. And even beyond edits made to the code itself, much of the real defensibility lies in the transformations: how building blocks of code are recombined, remixed, and interwoven to create a magical product. The reality is that this same pattern held pre-AI as well. A number of databases use the same “commodity” ingredients: B-trees, hash tables, bloom filters, and write-ahead logs, among other things. Many distributed systems use Paxos or Raft, multi-version concurrency control, query planners, and other well-known techniques. The grocery stores of available ingredients are the computer science textbooks. What makes great software infrastructure special is similar to what makes great Michelin-star dishes special: the proprietary transformations of the ingredients, with a set of non-obvious tradeoffs underlying them, that turn commodity inputs into non-commodity outputs. For example:
- Figma: enabling multiplayer design was a key product property that catapulted Figma to where it is today, and it was a very intentional product choice. But even in implementing multiplayer, Figma decided to use CRDTs (conflict-free replicated data types, a class of data structures designed to stay consistent when multiple users edit simultaneously) as opposed to OTs (operational transformation, the multiplayer technique that powers Google Docs). The ingredient was out there in the computer science research world; Figma’s specialty was in deeply understanding 1) why CRDTs were better for them than OTs and 2) which elements of CRDTs to use and which to discard (a minimal CRDT sketch follows this list).
- Snowflake: “separation of storage and compute” defined the zeitgeist of enterprise software conversations in September 2020 when Snowflake went public, and for good reason. The ingredients had been there prior to Snowflake’s founding in 2012: columnar storage, cloud object storage, and MPP query engines, among others. Snowflake’s architectural bet was a defining property of its transformation of existing ingredients, expressed in how the pieces were recombined, not in any of the pieces themselves. And making it work took years of execution: a micro-partition format for laying data out in S3, the metadata service that tracked it (see the pruning sketch after this list), the virtual warehouse abstraction, and even the zero-copy cloning and time-travel features that the separation enabled. Any one of these pieces is reproducible, but the systems design to compose them coherently (such that the whole architecture was faster, cheaper, and simpler than the bundled incumbents) was the transformation that housed Snowflake’s initial defensibility.
- Kafka: the write-ahead log (WAL) is not a new concept in databases and storage systems. The essential idea is to write changes to a log before they’re applied to the main data store, with the log serving as a persistent, sequential record of every operation. If the system crashes, the log can be replayed to restore a consistent state; this guarantees both transaction durability (changes aren’t lost once they’re logged) and consistency (the system can always replay incomplete operations to return to a consistent state). Kafka’s bet was transforming that technical ingredient, the WAL, into the product itself, not just a property of a database system. Before Kafka, if you wanted durable, replayable event streams, you’d cobble together message queues (RabbitMQ), log files, and custom replication. Kafka’s insight was that a distributed, partitioned, replicated log, exposed directly as the API, was a cleaner foundation than any of those stitched-together alternatives. Every ingredient (append-only storage, hash-based partitioning, leader-follower replication) had been in the systems literature for decades. What was new was the inversion: take the primitive databases had been hiding and make the log itself the developer-facing abstraction (see the log sketch after this list).
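To make the Figma point concrete, here’s a minimal sketch of one of the simplest CRDTs, a last-writer-wins map. This is purely illustrative and not Figma’s actual implementation (Figma has written that it borrows ideas from CRDTs rather than adopting them wholesale); the keys, timestamps, and replica IDs are invented for the example. The property to notice is that merges commute, so replicas converge no matter the order in which they sync:

```python
# A minimal last-writer-wins (LWW) map, one of the simplest CRDTs.
# Hypothetical illustration only, not Figma's production design.
from dataclasses import dataclass, field

@dataclass
class LWWMap:
    # key -> (timestamp, replica_id, value); merges are commutative,
    # associative, and idempotent, so replicas converge in any merge order.
    state: dict = field(default_factory=dict)

    def set(self, key, value, timestamp, replica_id):
        entry = (timestamp, replica_id, value)
        # Keep the entry with the highest (timestamp, replica_id); the
        # replica_id tiebreak resolves concurrent writes deterministically.
        if key not in self.state or entry[:2] > self.state[key][:2]:
            self.state[key] = entry

    def merge(self, other: "LWWMap"):
        for key, (ts, rid, value) in other.state.items():
            self.set(key, value, ts, rid)

# Two replicas edit concurrently, then sync in opposite orders...
a, b = LWWMap(), LWWMap()
a.set("fill", "red", timestamp=1, replica_id="a")
b.set("fill", "blue", timestamp=2, replica_id="b")
a.merge(b); b.merge(a)
assert a.state == b.state  # ...and still converge: the CRDT property
```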
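And to illustrate the Snowflake bullet, here’s a toy sketch of the metadata-pruning idea that a micro-partition layout enables: the engine tracks per-partition min/max statistics and can skip files using metadata alone. The class and paths here are hypothetical simplifications, not Snowflake’s internals:

```python
# Toy sketch of pruning with per-partition min/max metadata.
from dataclasses import dataclass

@dataclass
class MicroPartition:
    # In the real system these would be immutable files in object storage;
    # here we just carry the per-column stats a metadata service tracks.
    path: str
    col_min: int
    col_max: int

def prune(partitions, lo, hi):
    """Return only partitions that could contain rows with lo <= col <= hi.
    Pruning happens against metadata alone: no partition file is ever read."""
    return [p for p in partitions if p.col_max >= lo and p.col_min <= hi]

parts = [
    MicroPartition("s3://bucket/p0", 0, 99),
    MicroPartition("s3://bucket/p1", 100, 199),
    MicroPartition("s3://bucket/p2", 200, 299),
]
# A predicate like WHERE col BETWEEN 120 AND 150 touches one file, not three.
assert [p.path for p in prune(parts, 120, 150)] == ["s3://bucket/p1"]
```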
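Finally, for the Kafka bullet, a minimal sketch of the “log as the API” inversion: an append-only sequence of records addressed by offset, where crash recovery and a new consumer catching up are the same replay operation. This illustrates the abstraction only; real Kafka adds partitioning, replication, and durable storage:

```python
# Minimal append-only log exposed directly as the interface.
class Log:
    def __init__(self):
        self.records = []  # append-only: records are never mutated or deleted

    def append(self, record) -> int:
        """A real log fsyncs to disk before acknowledging; here we just
        append and return the record's offset."""
        self.records.append(record)
        return len(self.records) - 1

    def read(self, offset: int):
        """Replay everything from `offset`: crash recovery and a new
        consumer catching up are the same operation."""
        return self.records[offset:]

log = Log()
log.append({"event": "order_created", "id": 1})
log.append({"event": "order_paid", "id": 1})
# A consumer that saved offset 1 resumes exactly where it left off.
assert log.read(1) == [{"event": "order_paid", "id": 1}]
```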
That said, there are instances where enterprises build their own 432 Park software internally, often because the transformation itself is their competitive edge. Goldman Sachs’ SecDB (Securities Database) is a canonical example. Started in 1993, SecDB was built to handle the rapid growth of non-standard options that existing systems couldn’t accommodate. The ingredients weren’t new: object-oriented databases, real-time pricing engines, and position-keeping systems existed in various forms across Wall Street. What Goldman built was the proprietary transformation: a single unified system where every position, trade, and risk across the entire firm flowed through the same data model, computing over 20 billion prices daily. The systems design to compose these pieces coherently (such that risk calculation happened at the speed of the market, rather than at the speed of overnight batch jobs) was the transformation that produced Goldman’s edge. When Lehman collapsed in 2008 and competitors spent weeks rolling up thousands of spreadsheets to calculate their exposure, Goldman could do it almost instantly. The transformation was so specific to the firm’s needs that Goldman even built its own proprietary programming language, Slang, to develop against it. SecDB is the 432 Park of internal bank software: commodity ingredients, proprietary point of view, built in-house because the transformation itself was the alpha.
Transformations That Don’t Scale With Compute
These examples drive the point home: many of the underlying technical building blocks existed years before product inception; the true technology was the transformation of those blocks, the recombination of existing ingredients with a clear point of view on why this precise combination is worth more than the sum of its parts. The question then becomes: which class of transformations stays defensible in the age of AI? At a time when code is seemingly infinite and the marginal cost of intelligence is near zero, what are the properties of transformations that turn something n-of-many (code) into an n-of-1 product? I think of these as transformations that don’t scale with compute: even if a customer (or a competitor) had infinite GPUs to power coding agents, they still couldn’t reach the product quality of a truly n-of-1 product.
What are some attributes of transformations that don’t scale with compute? Here are some hypotheses:
- Craft: or, in biological terms, selection pressure. Craft in software is the selection mechanism that decides which of the infinite possible recombinations to pursue. Out of the numerous possible ways to build a product, which will result in the best work of art? It’s why the Figma team chose CRDTs over OT, and why Thierry and Benoit from Snowflake, after decades at Oracle, knew that decoupling storage and compute was the right nonconsensus bet. When AI commoditizes the painting and generates an explosion of optionality (much of which is noise), true product ingenuity lies in the sculpting: eliminating what is unnecessary to create the best possible system. This discernment is earned. It’s hard to codify taste or make it legible to an algorithm or machine learning system. A developer loving a certain workflow, the feeling that an interface is the gold standard, the sense that a software creator just gets it: none of this exists in training data.
- Coherence: this is the discipline AI coding agents struggle with most. They’re great at producing a single good component; where they falter is in producing ten thousand components that all implicitly agree on what kind of system is being built. Think about how Snowflake’s metadata service, virtual warehouse abstraction, and zero-copy cloning all fit together seamlessly. Coherence is the operationalization of taste: it requires a theory of the product that persists across thousands of decisions, evolving intentionally as the product matures. AI agents are designed to be helpful; coherence is structurally about being unhelpful to the requests that would break the product’s underlying philosophy.
- Refinement: in simple terms, does every customer interaction make the product measurably better for the next customer? This matters because it means the improvement loop can’t be replicated without owning the distribution. Whether it’s Ramp’s categorization and savings-recommendation models learning from every transaction across its customer base or Databricks’s optimizer being shaped by every new query that pushes the edge, proprietary customer learnings are one of the few things you can’t get anywhere else. The same goes for fraud attempts on Stripe circa 2015 shaping the defenses that protect a new merchant in 2026, or Postgres correctness fixes accumulated over 35 years being encoded in the database’s current behavior. These products carry their own history as a structural feature (a stylized sketch of such a loop follows this list).
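Here is a stylized sketch of what such a refinement loop looks like mechanically. It is deliberately a toy: pooled merchant-category counts standing in for the proprietary models named above, with invented names throughout. The structural point is that one customer’s confirmation becomes shared state that improves the next customer’s prediction:

```python
# Toy refinement loop: each interaction updates shared state that
# improves the next prediction. Hypothetical, not any vendor's model.
from collections import Counter, defaultdict

class Categorizer:
    def __init__(self):
        # merchant -> counts of confirmed categories, pooled across customers
        self.history = defaultdict(Counter)

    def predict(self, merchant: str) -> str:
        counts = self.history[merchant]
        return counts.most_common(1)[0][0] if counts else "uncategorized"

    def confirm(self, merchant: str, category: str):
        """Each confirmation is a labeled example; the product literally
        carries its usage history as a structural feature."""
        self.history[merchant][category] += 1

model = Categorizer()
model.confirm("ACME CLOUD", "infrastructure")       # customer A's correction...
assert model.predict("ACME CLOUD") == "infrastructure"  # ...helps customer B
```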
N-of-1 Software in a World of Infinite Code
What does this mean for the future of software and the build vs buy dilemma? Commodity software is at risk, and enterprises looking to optimize spend and resource allocation are right to consider what they can bring in-house. Spending money on a tool “because we’ve always spent on it” was never an optimal way to buy software in the first place, and the best software creators don’t want that to be the prime reason for adoption. Which is why the transformations-that-don’t-scale-with-compute mental model is such an interesting one: what element of quality, performance, and efficacy can’t be replicated by applying dozens of coding agents to the problem surface? Which proprietary self-improvement loops can a buyer not afford to ignore? In many ways, people are the ultimate moat: what combination of time spent on a problem, experience across different environments, and proprietary learnings feeds into a collective point of view on a software system that exists nowhere else in the market? If you had the choice between building a software product in-house or buying it from people who’ve been thinking about the problem for longer than the industry has even been aware of the technology, the choice is pretty obvious. This is where I see software going: even in a world of infinite code, the best product artists will build n-of-1 software.