Build, Buy, or Prompt: AI Adds a Third Option to Your Software Decision Framework

By Stephen Ledwith May 5, 2026

A little over a year ago, I wrote that the build vs. buy debate was the wrong frame — that the real question was how to orchestrate the right blend of built, bought, and integrated capabilities to deliver value faster. I still believe that. Composability and platform thinking haven’t gone anywhere.

But something has changed that makes me want to revisit the framework: AI has added a third option that didn’t exist in any practical form when I wrote that piece. And it’s forcing decisions that teams thought were already settled.

I’m calling it prompt — shorthand for the growing category of AI-native capabilities you can integrate, fine-tune, or build on top of that sit somewhere between buying a SaaS product and building from scratch. It changes the calculus in ways that matter.


What “Prompt” Means in Practice

When I say prompt as a third option, I mean the range of approaches that involve integrating large language models or AI agents into your product or workflow — without building the underlying model yourself and without buying a pre-packaged SaaS solution.

This includes:

  • API-first AI integration: calling a model API (like Claude, GPT-4, or Gemini) to add intelligence to an existing workflow
  • Retrieval-augmented generation (RAG): grounding a model’s responses in your own data to create internal knowledge tools
  • Fine-tuned models: customizing a base model on your proprietary data for specific tasks
  • Agentic workflows: building systems where AI agents complete multi-step tasks autonomously within your product
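To make the RAG item concrete, here is a minimal sketch of the retrieval-plus-prompt pattern. It uses a toy in-memory document store and bag-of-words cosine similarity so it runs offline; a production system would use embeddings and a vector database, and the documents, function names, and prompt wording are all illustrative assumptions, not a specific product's API.

```python
import math
import re
from collections import Counter

# Toy document store standing in for your internal docs (hypothetical content).
DOCS = [
    "We chose Postgres over DynamoDB in 2024 for relational reporting needs.",
    "Support escalations route to the on-call engineer via PagerDuty.",
    "The billing service is rate-limited to 100 requests per second.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a real system would use embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved context, with citations."""
    context = "\n".join(f"[{i+1}] {d}" for i, d in enumerate(retrieve(query, k=2)))
    return (
        "Answer using only the context below. Cite sources by number.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("Why did we pick Postgres?"))
```

The point of the pattern: the model never answers from its own memory alone — the prompt carries your documents in with it, which is what makes answers citable and grounded.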

What makes this distinct from traditional build or buy is the economics and the capability curve. You can integrate sophisticated AI capabilities faster than you can build them and cheaper than a full SaaS contract — but you still own the integration, the data pipeline, the prompting strategy, and the output quality.

It’s a new kind of ownership. And most decision frameworks aren’t accounting for it yet.


Where the Old Framework Still Holds

Before getting into what’s changed, let me be clear about what hasn’t. The core questions from the original framework still apply:

  • Is this core to your unique value proposition? If yes, lean toward owning it. If no, lean toward buying.
  • What’s the total cost of ownership? Don’t just count implementation — count maintenance, security, uptime, and iteration.
  • How fast do you need to move? Time to value is still a real competitive factor.
  • What are your control and compliance requirements? Some data simply can’t leave your environment.

These don’t go away with AI. If anything, AI makes them more important — because the failure modes of a poorly designed AI integration can be harder to detect and more damaging than a buggy feature.


Where AI Changes the Framework

1. The “Build” Option Got Much Faster

One of the historical arguments for buying was speed. Building took time, and the market didn’t wait. That gap has narrowed significantly.

With AI-assisted development, teams are shipping non-trivial features in days that used to take weeks. The build option is more competitive on time-to-value than it’s been in years. This doesn’t mean build is always right — the TCO argument still stands, and ownership complexity is real — but it does mean that “we don’t have time to build it” is a weaker argument than it used to be.

Before defaulting to buy on timeline grounds, pressure-test whether an AI-augmented build could actually close the gap.

2. The “Buy” Evaluation Changed

When you’re evaluating a SaaS product today, one of the most important questions on the table is: how deeply has this vendor integrated AI, and does that integration actually serve your use case?

The SaaS market is moving fast, and there’s a wide gap between vendors who have genuinely rebuilt their products around AI capabilities and those who have bolted on a chatbot and called it “AI-powered.” The evaluation criteria need to reflect that.

Questions I now ask in every vendor evaluation:

  • What specifically does AI do in this product, and where does human review still happen?
  • How does the vendor handle data privacy when feeding your data to an underlying model?
  • What’s the explainability story — when the AI makes a recommendation, can you understand why?
  • How will AI capabilities evolve in their roadmap, and what’s the pricing model as usage scales?

The last one matters more than people realize. AI inference costs are real, and some vendors are hiding them today in flat-rate pricing that won’t hold up at scale.
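A back-of-envelope calculation shows why flat-rate pricing can mask inference costs. Every number below is a hypothetical placeholder, not any vendor's real pricing — the shape of the arithmetic is the point.

```python
# When does per-token inference cost outgrow a flat-rate plan?
# All numbers are hypothetical placeholders, not real vendor pricing.

FLAT_RATE_PER_SEAT = 30.00   # $/user/month charged to the customer
COST_PER_1K_TOKENS = 0.01    # blended input+output $/1K tokens (assumed)
TOKENS_PER_REQUEST = 2_000   # prompt + completion (assumed)

def monthly_inference_cost(requests_per_user: int, users: int) -> float:
    """Underlying model cost the vendor absorbs each month."""
    tokens = requests_per_user * users * TOKENS_PER_REQUEST
    return tokens / 1_000 * COST_PER_1K_TOKENS

def flat_rate_revenue(users: int) -> float:
    """What a flat-rate contract brings in each month."""
    return users * FLAT_RATE_PER_SEAT

# At 100 users making 2,000 AI requests each per month, inference cost
# already exceeds flat-rate revenue — the pricing won't hold at scale.
print(monthly_inference_cost(2_000, 100), flat_rate_revenue(100))
```

When usage grows linearly but pricing is flat, the vendor's margin erodes linearly too — which is exactly when repricing or throttling shows up in your renewal.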

3. The “Prompt” Option Is Often Faster Than Both

Here’s the scenario I keep seeing: a team is evaluating whether to buy a mid-market SaaS tool or build a custom feature. The evaluation drags on for six weeks. Meanwhile, an engineer on the team quietly wires up an LLM API call, runs the data through a well-crafted prompt, and produces 80% of the capability in a weekend.

This isn’t hypothetical. I’ve seen it happen repeatedly in the past year across client organizations.

The prompt option is often underweighted in formal decision processes because it doesn’t fit neatly into the build or buy category — it lacks the internal ownership feel of a real build and the vendor accountability of a real buy. But when the use case is well-defined, the data is accessible, and the requirements don’t demand perfection, AI integration can be the fastest and cheapest path to value.
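The "weekend wiring" version of the prompt option often looks like the sketch below: a well-defined task routed through one carefully written prompt, with the model output validated before anything acts on it. The model call here is a stub so the sketch runs offline — in practice you would swap in your provider's API client; the prompt wording, categories, and function names are illustrative assumptions.

```python
# Minimal sketch of the "prompt option": one defined task, one crafted prompt.
# call_model is a stub standing in for a real LLM API call.

TRIAGE_PROMPT = """You are a support triage assistant.
Classify the ticket into exactly one category: billing, bug, or how-to.
Respond with only the category name.

Ticket: {ticket}"""

def call_model(prompt: str) -> str:
    """Stub: crude keyword heuristic so the sketch runs offline."""
    text = prompt.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "bug"
    return "how-to"

def triage(ticket: str) -> str:
    category = call_model(TRIAGE_PROMPT.format(ticket=ticket)).strip().lower()
    # Never trust free-form model output: validate before acting on it.
    if category not in {"billing", "bug", "how-to"}:
        raise ValueError(f"unexpected model output: {category!r}")
    return category

print(triage("I was charged twice on my last invoice"))
```

The validation step is the part weekend prototypes usually skip — and the part that separates a demo from something you can put in a workflow.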


An Updated Framework: Build, Buy, or Prompt

Here’s how I’d update the decision framework for 2026:

Step 1: Is this core to your value proposition?

  • Yes → Build or Prompt. If it’s your moat, you want to own it. The question is whether you’re building from scratch or building on top of AI capabilities.
  • No → Buy or Prompt. Commodity functionality is still best purchased. But AI-native integration may get you there faster and cheaper than a full SaaS contract.

Step 2: What are your data and compliance constraints?

  • Sensitive data, strict compliance → Build or self-hosted Prompt (models deployed in your own environment). Eliminate any option that requires sending regulated data to a third-party model.
  • Standard data, reasonable compliance → All options remain open.

Step 3: What’s the timeline pressure?

  • Days → Prompt first — it’s often the fastest path to a working prototype
  • Weeks → Evaluate all three; AI-augmented build may compete with buy on timeline
  • Months → Standard build or buy evaluation; timeline isn’t the deciding factor
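Steps 1 through 3 can be sketched as a quick triage function. This is only an encoding of the decision logic above under simplifying assumptions — the function name, parameters, and the way the steps compose are my own shorthand, not a substitute for the full evaluation.

```python
# Steps 1-3 of the framework as a quick triage function (a sketch, not a rule).

def candidate_options(core_to_value_prop: bool,
                      regulated_data: bool,
                      timeline: str) -> set[str]:
    """Return the options worth evaluating. timeline: 'days'|'weeks'|'months'."""
    # Step 1: core capabilities lean toward ownership.
    options = {"build", "prompt"} if core_to_value_prop else {"buy", "prompt"}
    # Step 2: regulated data rules out third-party models; prompt survives
    # only as a self-hosted deployment.
    if regulated_data:
        options.discard("buy")
        if "prompt" in options:
            options.remove("prompt")
            options.add("self-hosted prompt")
    # Step 3: extreme timeline pressure pushes prompt-style options forward.
    if timeline == "days":
        fast = {o for o in options if "prompt" in o}
        options = fast or options
    return options

print(candidate_options(core_to_value_prop=False,
                        regulated_data=True,
                        timeline="weeks"))
```

Steps 4 and 5 — maintenance and failure modes — resist this kind of encoding, which is itself a useful signal: they are judgment calls, not filters.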

Step 4: What does the maintenance picture look like?

This is where “prompt” gets more complex than it first appears. An AI integration isn’t static. Models get updated, prompts drift in quality, output formats change. Someone has to own that. Factor in:

  • Who monitors output quality over time?
  • What happens when the underlying model changes behavior?
  • How do you version and test prompts the same way you’d version and test code?

If you don’t have clear answers, the apparent simplicity of a “quick AI integration” can become a maintenance debt problem. It’s not insurmountable — but it’s real.
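Treating prompts like code can be as simple as pinning the template, fingerprinting it so silent edits are detectable, and running a small regression suite on each change. The sketch below stubs the model call so it runs offline; the template text, case list, and helper names are illustrative assumptions, not a particular testing framework.

```python
import hashlib

# Version prompts like code: pin the template, fingerprint it, and run a
# small regression suite. fake_model is a stub; a real API call goes there.

PROMPT_V2 = """Summarize the ticket below in one sentence.
Do not include customer names.

Ticket: {ticket}"""

def prompt_fingerprint(template: str) -> str:
    """Stable hash so deployments can detect silent template changes."""
    return hashlib.sha256(template.encode()).hexdigest()[:12]

def fake_model(prompt: str) -> str:
    """Stub: returns a fixed-shape summary so the sketch runs offline."""
    return "Customer reports a billing discrepancy."

# Each regression case pairs an input with a predicate the output must pass.
REGRESSION_CASES = [
    ("Jane Doe was double-charged", lambda out: "Jane" not in out),
    ("Invoice #42 is wrong",        lambda out: out.endswith(".")),
]

def run_suite(template: str) -> bool:
    """Run every case; a prompt change ships only if all predicates pass."""
    return all(check(fake_model(template.format(ticket=ticket)))
               for ticket, check in REGRESSION_CASES)

print(prompt_fingerprint(PROMPT_V2), run_suite(PROMPT_V2))
```

The fingerprint answers "did the prompt change?"; the suite answers "did behavior change?" — and because model updates can flip the second without the first, the suite needs to run on a schedule, not just on edits.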

Step 5: What’s the worst-case failure mode?

Every option has a failure mode. Build fails when scope creep and maintenance costs spiral. Buy fails when the vendor doesn’t deliver or locks you in. Prompt fails when AI output quality degrades silently, the model behaves unexpectedly, or you discover a use case the model handles poorly after you’ve built a workflow around it.

Understand the failure modes before you commit. They’re different for each path, and your risk tolerance should inform the decision.


Practical Example: Internal Knowledge Base

You want to give your support team better access to internal documentation, past decisions, and product context. The old decision might have been: buy a knowledge management SaaS, or build a basic wiki.

Today the options look like this:

  • Buy: Notion, Confluence, Guru — solid products with search, but no deep intelligence about your specific context
  • Build: Custom knowledge base — full control, but significant development investment
  • Prompt: RAG system on top of your existing documentation — a model that can answer “why did we make this architecture decision in 2024?” with citations, trained on your actual data

For many teams right now, the prompt option wins on this specific use case — it’s faster to deploy than a custom build, cheaper than a new SaaS contract, and substantially more useful than a searchable wiki because it reasons over your content rather than just indexing it.


Final Thought

I ended the original article with: You’re not building or buying. You’re orchestrating.

I’d update that now: You’re not building, buying, or even just orchestrating. You’re deciding how much intelligence to bake into every layer of what you build.

That’s a more complex decision than it was two years ago. It requires new evaluation criteria, new risk awareness, and a clearer understanding of where AI adds real value versus where it adds complexity without a proportional return.

The teams getting this right aren’t treating “add AI” as a default answer. They’re asking the same rigorous questions they always asked — and updating the framework to account for what’s now possible.

“Every software decision is ultimately a decision about where you invest your team’s attention. AI changes the menu of options. It doesn’t change the importance of choosing carefully.” — Stephen Ledwith


This article is a follow-on to Why ‘Build vs. Buy’ Isn’t the Right Question Anymore. For questions about applying this framework in your organization, let’s connect.