SaaS at a Junction Point: What we learned building AI in 2025

2025 has been an eventful year for most businesses. Tariff hikes, market volatility, renewed bubble talk—and, inevitably, everything AI.

This year, we worked across mortgage, retail, real estate, and marketing—but the common thread wasn’t the industry, it was the economics. We built workflow automation for marketing agencies that lifted productivity by 12%. We deployed AI agents that helped retailers cut inventory costs while increasing turn rates. We consolidated fragmented data and built agents that supported investors in acquisition decisions. We even built systems that autonomously read county meeting minutes so an electrical services firm could surface new sales leads at scale.

Taken together, these projects point to a conclusion that’s hard to avoid: 2025 marks a structural shift in software economics—one not seen since object-oriented programming displaced Assembly. Not because the technology itself is comparable, but because the cost of learning has collapsed. The line is drawn: what once moved slowly now moves fast; what felt stable is already becoming past. The old order—years of generalized software built before real validation—fades quickly when tailored systems can be tested, proven, or discarded in weeks. Let’s start with a story, then the lessons.

For the times, they are a-changin’

A tale of two eras

In 2018, another startup shared our co-working space. They built a SaaS platform for campus energy management—lighting controlled by traffic, usage measurement, peak prediction.

They wrote everything on a whiteboard: quarterly revenue, features in progress, customer logos. We passed by the board every day.

It took them five years to build the product. By 2020, it was deployed at four universities and five industrial campuses, growing ~25% YoY since 2018. Over that time, the engineering team grew from three people to thirty-five, two CTOs were hired, and millions were raised—roughly half spent on software development.

It was a good SaaS outcome for that era.

Now fast-forward to May 2025. We attended a software conference with 15,000 attendees and 107 exhibitors. Roughly half had “AI” in their name or a “.ai” domain.

One session stood out. The speaker walked on stage wearing a string of lightbulbs, opened his MacBook, and live-built a working system using Cursor. In five minutes, he connected MCP servers, ChatGPT, and open-source React libraries to control the lights via rules, chat input, and UI interaction.

It was not production-ready. It did not replicate five years of engineering.

But it demonstrated something more important: The cost of validating an idea has collapsed, even though the cost of production software has not.

That distinction matters.

The rise of bespoke software

The SaaS model was built on a simple economic premise: invest heavily upfront, amortize costs across many users, and grow through scale.

That model relied on bespoke software being slow, expensive, and risky.

AI changed that.

Today, narrow, purpose-built systems can be designed, built, and deployed in days. They may not scale broadly—but they don’t need to. They only need to solve a specific job well enough for a small group of users.

This is not theoretical. It shows up directly in our work.

When we started in 2019, most of our engagements involved building models and AI features for software companies. In 2025, our largest customer base consists of end users—marketing agencies, retailers, professional services, and mortgage brokers—who previously relied on off-the-shelf SaaS.

Our work shifted from multi-year model-building and tuning engagements to delivering AI agents and workflow automation in weeks. This reflects a growing demand for hyper-specific solutions that generalized products cannot economically prioritize.

This is also why no-code and low-code agencies are growing rapidly. We’ve seen agencies grow from $3–4M to over $15M in two years, helping businesses build custom solutions faster and cheaper than traditional development.

This does not mean bespoke software replaces SaaS. It means SaaS no longer wins by default.

Product management is now the dominant constraint

In August, we hosted a dinner with seven founders who had exited previous companies. One sold her last startup for over $500M in 2022. Of the four now building again, three were actively prototyping on Lovable.

When asked why they weren’t hiring offshore teams and following traditional build-then-scale playbooks, the answer was blunt:

“For a few hundred dollars, I can prototype, connect SEO and ads, and see who signs up. Most importantly, I don’t have to wait.”

These prototypes are often consumer-facing, not enterprise-grade—but the implication for product management is unavoidable.

Traditional PM processes are slow, costly, and frequently wrong. Product manager quality varies widely, and many teams now rely on secondhand signals—analytics dashboards, internal debates, or even AI-written PRDs—without deeply understanding the customer problem. When those decisions are wrong, the consequences are severe.

In today’s environment, SaaS products face pressure from all directions: bespoke systems, competitors moving faster, and users stitching together their own solutions. At the same time, traditional SaaS carries heavy baggage—legacy codebases, large teams, and high marginal costs for change. A bad product decision is no longer just a missed opportunity; it can require months of rework, significant capital, and often erodes hard-won user trust.

This exposes the real risks in SaaS:

  • Will users adopt it quickly?
  • Will they keep using it after novelty fades?
  • Can we afford to maintain and evolve it?

AI meaningfully reduces the third risk. It amplifies the first two.

As a result, product management quality—not process—has become the dominant constraint. Teams with vague problem definitions, slow learning loops, or decision-making detached from real users are structurally disadvantaged, especially as small, focused teams use vibe coding to test product-market fit faster and users increasingly build their own alternatives.

Competition is no longer product-to-product

In Competing Against Luck, Clayton Christensen reframed competition around jobs to be done. A product does not only compete with similar products; it competes with any solution that accomplishes the same job in a given context. That framing is no longer academic—it is operationally decisive today.

Historically, SaaS competed within clear categories: IDEs against IDEs, CRMs against CRMs. Today, that boundary has collapsed. Users can assemble their own solutions: they stitch together tools they already pay for, layer in plugins and low-code or no-code platforms, and increasingly vibe-code bespoke systems that fit their workflows precisely. An IDE no longer competes only with Cursor or VS Code; it competes with command-line tools, AI agents, and improvised workflows that may be clunky, but are often “good enough.”

This changes the core user decision. The question is no longer “Which product is better?” but:

Is this SaaS sufficiently better than the solution I can assemble myself to justify the cost, the learning curve, and the loss of flexibility?

That question anchors willingness to pay downward. When users can combine familiar tools, tolerate some friction, and avoid onboarding yet another system, many will do exactly that.

To survive, SaaS teams must ask hard questions:

  • What is the precise job to be done?
  • Do we solve it completely?
  • Do we do it materially better, cheaper, or faster than a bespoke alternative?
  • Is the experience more reliable and pleasant than a stitched-together workflow?

Enterprise SaaS can sometimes avoid this reckoning through contracts and lock-in. Outside those protected zones, however, SaaS must justify itself on real value alone—functional and experiential—or risk being replaced by systems users build themselves.

AI coding changes engineering failure modes

As Martin Fowler recently observed, AI tools can produce correct results while introducing unnecessary complexity. What should be small changes often become bloated modifications.

His conclusion was apt: AI coding tools resemble very capable interns—fast, enthusiastic, and occasionally brilliant, but disastrous without supervision.

This does not eliminate the need for experienced engineers. It increases the importance of architectural judgment, code deletion, and review discipline.

For mature SaaS products with clear, high-usage features, AI tools can significantly boost productivity. For unvalidated products, they’re ideal for testing hypotheses quickly—but the code should be disposable.

For investors: AI branding is not a moat

At the May conference, nearly half of the 107 exhibitors branded themselves as AI companies. In practice, most were integrating LLM APIs.

Through our work with private equity, the companies that retained strong valuations shared common traits: disciplined product management, strong engineering culture, and financial rigor. These traits created sticky users and durable profitability—even under macro pressure.

By contrast, many rebranded “AI companies” with weak fundamentals retained little residual value.

We recently spoke with a SaaS platform that claimed its moat was a “chat with your data” feature. After playing with it for a day, we found it was built with standard components—AWS Athena MCP, RAG, Pinecone, Claude, ChatGPT—replicable by two engineers in under a month for roughly $45K.
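
To make the point concrete, here is a minimal sketch of the kind of retrieval-augmented “chat with your data” loop those components add up to. It assumes a Pinecone index already populated with embedded rows (original text stored in a “text” metadata field) and uses OpenAI for embeddings and generation; the index name, field names, and model choices are illustrative placeholders, not details of the product we reviewed.

```python
# Minimal "chat with your data" sketch: embed the question, retrieve similar
# rows from a vector index, and let an LLM answer from that context.
# Index name, namespace, and the "text" metadata field are assumptions.
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("analytics-rows")  # hypothetical index

def chat_with_data(question: str) -> str:
    # 1. Embed the user's question.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Retrieve the most similar indexed rows.
    matches = index.query(vector=embedding, top_k=5, include_metadata=True).matches
    context = "\n".join(m.metadata["text"] for m in matches)

    # 3. Ask the model to answer strictly from the retrieved context.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(chat_with_data("Which region had the highest return rate last quarter?"))
```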

AI compresses technical differentiation faster than capital markets reprice it. As a result, technology alone is a weak moat unless paired with real user lock-in.

Lessons from HuggingFace: when to add AI

In October, Hugging Face published lessons from training SmolLM, their family of small language models. Their reasoning about when not to train a model applies equally to SaaS AI features:

  • “We have compute” is a resource, not a strategy
  • “Everyone else is doing it” is peer pressure

AI features must be justified by the job they perform—not by board pressure or branding.

Before adding AI, teams should ask:

  • What job is being done?
  • How is it solved today?
  • What is the user willing to pay?
  • Will AI make it meaningfully faster, cheaper, or better?
  • What does it cost to build and maintain?

Leaders, founders, and executives, in contrast, should consider whether and how AI will impact their domain. Where that impact is visible, measurable, and defensible, there may be good reasons to embrace it beyond the hype. The dot-com era demonstrated this clearly: businesses that got into tech for its own sake crashed when the bubble popped, while businesses that understood the long-term outlook for the technology and how it would reshape existing workflows thrived (see PayPal).

Without clear answers or defensible positions on the future, AI features become liabilities.

Before AI comes data

Every AI agent and workflow automation system we built this year depended on one thing: clean, consolidated data.

AI agents read data to make decisions. Automation uses data to trigger actions. Fragmented data increases hallucination risk, latency, and cost.

For one retail client, we consolidated advertising, ecommerce, POS, and CRM data into Databricks before building any AI features. Without this foundation, autonomous systems would not have been viable.
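
As a rough sketch of what that consolidation step can look like on Databricks, the PySpark snippet below joins hypothetical advertising, ecommerce, POS, and CRM tables into a single customer-level Delta table. The table names, columns, and join keys are placeholders, not the client’s actual schema.

```python
# Sketch of consolidating fragmented sources into one Delta table on Databricks.
# Table names, columns, and join keys are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

ads = spark.table("raw.ad_spend")            # assumes click-level attribution with a customer_id
orders = spark.table("raw.ecommerce_orders")
pos = spark.table("raw.pos_transactions")
crm = spark.table("raw.crm_contacts")

# Normalize in-store and online sales into one purchase history per customer.
purchases = (
    orders.select("customer_id", "order_date", "order_total")
    .unionByName(
        pos.select(
            F.col("customer_id"),
            F.col("txn_date").alias("order_date"),
            F.col("txn_total").alias("order_total"),
        )
    )
)

customer_360 = (
    crm.join(
        purchases.groupBy("customer_id").agg(
            F.sum("order_total").alias("lifetime_value"),
            F.max("order_date").alias("last_purchase"),
        ),
        "customer_id",
        "left",
    )
    .join(
        ads.groupBy("customer_id").agg(F.sum("spend").alias("ad_spend")),
        "customer_id",
        "left",
    )
)

# Agents and automations read from this single, governed table.
customer_360.write.format("delta").mode("overwrite").saveAsTable("gold.customer_360")
```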

Assistive AI can tolerate fragmented data. Decision-making AI cannot.

Phrased a little more assertively, a mediocre model with superior data will outperform a state-of-the-art model with limited data.

Closing

2025 places SaaS at a real junction point. As Martin Fowler observed, the shift is comparable in impact—not mechanics—to the transition from Assembly to OOP.

Some fundamentals remain unchanged:

  • Every product must do a job
  • Value must exceed cost
  • Users decide what survives

What has changed is how fast we learn, how cheaply we experiment, and how unforgivingly the market now exposes weak assumptions.

That reality will shape SaaS well beyond 2025.

2026 comes fast. The companies winning aren't those moving fastest with AI—they're the ones clear on what problem they're solving, what data they actually need, and how to build the right solution (whether that's consolidation, an AI agent, workflow automation, or something custom). If you're thinking through that journey and want to talk through what's actually worth building, let's talk.

With that, we wish you a Merry Christmas and a Happy New Year.