    April 30, 2024 · updated May 9, 2026 · 5 min read

    The 'we use [foundation model X] under the hood' admission · by Thomas Jankowski, aided by AI
    Six places, six failure modes · TJ x AI

    Every AI-product investor deck since early 2023 carries some version of the same slide. The slide names the foundation model the product runs on (Claude, GPT-4o, Gemini, Llama, sometimes a mix of two) and explains that the founders chose this particular model for its latency, accuracy, or cost. The slide is honest in a way earlier startup decks were not: the founders are saying out loud what they are actually building on.

    The slide is also a tell. If the model is the foundation model, the moat is not the model. The moat has to be somewhere else in the deck. The careful reader's job is to find it in one of six places, or determine that it is not in the deck at all, and pass.

    Here are the six places to look. For each: the claim being made, the version that holds up, and the version that does not.

    The distribution moat. The product reaches a buyer that competing products structurally cannot reach, because the founders have a pre-existing relationship, channel partnership, OEM agreement, or platform-distribution position that competitors would need years to replicate. Holds up: "we have a signed partnership with the three largest distribution channels in our category and our competitors have none." Does not hold up: "we are running paid search and content marketing." The distribution-moat slide should name the specific distribution mechanism. If it does not, there is no distribution moat.

    The data moat. The product has access to proprietary training or fine-tuning data, or a feedback loop that produces proprietary data, that competitors cannot acquire at any reasonable cost or time horizon. Holds up: "we have eight years of structured data from our prior business that we are now using as the eval-and-fine-tune corpus for the AI product, and the data is exclusive to us." Does not hold up: "we will accumulate user data over time and improve from it." Every product accumulates user data over time. That is not a moat; it is a default. The data moat has to be specific, structured, and not commodity-acquirable.

    The workflow-integration moat. The product is embedded into the buyer's operating workflow at a depth that creates switching cost. The buyer has built dependencies, configured the product to their internal taxonomy, trained their team on the UI, integrated the product to internal data sources. Holds up: "the average customer has integrated us to seven internal systems and trained nineteen users, and replacing us is a six-month implementation project." Does not hold up: "our customers love our product." Customer love is a leading indicator. Workflow integration is a measurable lock-in. The deck should show the integration depth in numbers.

    The evaluation-and-judgement moat. The product applies domain-specific evaluation, judgement, or workflow logic on top of the foundation model that requires expert input the founders have access to and competitors do not. Holds up: "our co-founder is a fellow at the leading academic center for this domain and has built the eval framework that defines product quality in this category, with twelve published papers anchoring it." Does not hold up: "we have prompt-engineered the model carefully." Prompts can be replicated. Domain-expert evaluation frameworks accumulated over a decade cannot. The evaluation moat is defensible only when the underlying expert input is genuinely scarce.

    The compliance-and-regulatory moat. The product has compliance certifications, regulatory approvals, or audit-class licensing that competitors cannot acquire quickly. Holds up: "we are HITRUST-certified and SOC2 Type II audited, our buyer's procurement requires both, and the certifications took us eighteen months to acquire." Does not hold up: "we are SOC2 Type I." Type I is a six-month exercise. Type II is a year-plus and signals a more committed compliance posture. The strength of the compliance moat scales with the certification's depth and with how closely it matches the buyer's procurement requirements. If the buyer does not require the cert, the cert is not a moat.

    The vertical-integration moat. The product has captured a portion of the value chain (data acquisition, model serving, fine-tuning, application layer) such that the unit economics differ from horizontal competitors. Holds up: "we have built our own inference stack on owned hardware for the latency-sensitive part of the workflow, and our marginal serving cost is 40 percent below what we would pay an API provider, which lets us price below where horizontal competitors can sustain." Does not hold up: "we will eventually move to our own infrastructure." Eventually is not a moat. Today is a moat or it is not.

    The careful reader's posture on the deck is to find which of the six the founders are claiming, evaluate whether the claim is the kind that holds up, and price the round against the strength of the answer. Decks with a strong answer in two or three of the six categories are typically defensible. Decks with no clear answer in any of them are pricing against the foundation-model moat that is not theirs to charge for.

    The market through 2024 was forgiving on this. The market through 2025 has been less forgiving. The market through 2026 is going to be substantially less forgiving than that, because the foundation models are commodifying faster than the application layer is consolidating, and the application-layer products that did not build a moat in one of the six categories are running into the structural problem that any other application-layer team can re-create the same product on the same foundation model in three to six months.

    The "we use [foundation model X] under the hood" admission is fine when the deck has a real answer to the moat question. It is a signal worth attending to when the deck does not.

    The reader's job is to find the answer or pass. The careful version of the answer is in one of the six places above. If it is not there, it is probably not anywhere.

    —TJ