Encode institutional knowledge as layers, each owned by the role that produces it, and let the model compose from all of them at generation time.
I made a FigJam board over winter break. Four layer nodes, all connecting down into a database icon at the bottom. It was the first time I felt like this AI project had a clear direction.
You can’t shape-find in this work at the pace the field moves. The AI space in late 2025 and early 2026 was running at a speed that burned through any thought I tried to have inside it — every day another model release, another paper, another architecture pattern that partially did what we were trying to build. The Moonbird research was at peak momentum. None of it left room to step back and look at the big picture.
Winter break was the first week since the research started when I wasn’t moving at that pace. The original hypothesis had been to encode the knowledge in custom metadata layers inside the Figma files themselves. The RAG pivot moved the whole thing out — a retrievable, queryable knowledge base that any tool could pull from. The decision had been made mid-stride and I’d kept running.
The project was Moonbird — the approach applied specifically to Oracle Health. Domain Foundation is how you apply it anywhere else.
The core of the methodology is simple enough to state in a sentence: encode institutional knowledge as layers, each owned by the role that produces it, and let the model compose from all of them at generation time.
Four layers is what worked inside the design organization I came from. A base layer of universal design principles, owned by the design system team. A domain layer of industry-specific reasoning, owned by strategists and health providers. A component layer of per-artifact intent metadata — when to use, when not to — owned by component authors. A role layer describing user types and their constraints, owned by researchers.
The ownership matters more than the count. A different org’s shape might want five layers, or three. The load-bearing claim is that governance distributes: the people who produce the knowledge also own the layer that encodes it, and no single team becomes the bottleneck.
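The distributed-governance claim can be made concrete with a minimal data model. The layer names and owning roles below come straight from the essay; everything else — the `Layer` class, the field names, the `add_entry` helper — is an illustrative sketch, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One slice of institutional knowledge, owned by the role that produces it."""
    name: str
    owner: str                                  # the role that governs this layer
    entries: list[str] = field(default_factory=list)

# The four layers that worked inside one design org; another org's shape
# might want five, or three. The count matters less than the ownership.
layers = [
    Layer("base", owner="design system team"),         # universal design principles
    Layer("domain", owner="strategists & providers"),  # industry-specific reasoning
    Layer("component", owner="component authors"),     # per-artifact intent: when to use, when not
    Layer("role", owner="researchers"),                # user types and their constraints
]

def add_entry(layers: list[Layer], layer_name: str, author_role: str, text: str) -> None:
    """Governance distributes: only the owning role writes to its layer."""
    layer = next(l for l in layers if l.name == layer_name)
    if author_role != layer.owner:
        raise PermissionError(f"{author_role!r} does not own layer {layer_name!r}")
    layer.entries.append(text)
```

The point of the ownership check is the load-bearing claim itself: no central team gatekeeps every layer, so no single team becomes the bottleneck.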
This is the design group’s portion of the context problem. Other institutional knowledge — customer feedback, regulatory correspondence, the pile of decisions made in hallways and never written down — lives in other systems and needs its own ingestion story.
The technical shape — vector database, MCP server, LLM — isn’t the interesting part. What goes into the database is. Institutional knowledge, not documentation.
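"Compose from all of them at generation time" has a simple mechanical reading: retrieve the most relevant entries from every layer and assemble them into one context block for the model. The sketch below is a toy under loud assumptions — keyword overlap stands in for vector similarity, and a real system would sit behind the vector database and MCP server the essay mentions.

```python
def score(query: str, entry: str) -> int:
    """Toy relevance: shared-word count. A real system would use embeddings."""
    return len(set(query.lower().split()) & set(entry.lower().split()))

def compose_context(layers: dict[str, list[str]], query: str, per_layer: int = 2) -> str:
    """Pull the top entries from every layer, so no layer gets skipped.

    Composing across all layers is the point: generated artifacts start
    already aligned to the org's rules instead of retrofitting compliance.
    """
    sections = []
    for name, entries in layers.items():
        top = sorted(entries, key=lambda e: score(query, e), reverse=True)[:per_layer]
        if top:
            sections.append(f"## {name}\n" + "\n".join(f"- {e}" for e in top))
    return "\n\n".join(sections)
```

The output is the institutional-knowledge half of the prompt; the designer's request is the other half.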
Once the knowledge is reachable, a lot opens up. Generation is the obvious example — a designer starts with a prompt and gets back an artifact already aligned to the org’s rules instead of starting from scratch and retrofitting compliance later. A clinical workflow that already respects the deployment context, the component library, and the accessibility constraints for the user roles who’ll actually use it.
Validation is the next one. A PM sketches a workflow over the weekend and runs it past the model to see if it survives. The model checks it against the encoded rules and brings it to a state worth debating. By the time a designer sees it, the question isn’t “does this match the system” — it’s “does this workflow belong, and if so, where does it go from here.”
Decision memory is the one most organizations don’t realize they need. A team argues about a pattern choice they’re sure they’ve had before. Instead of asking the senior designer who’s been there eight years, they ask the model, and it surfaces the last time this came up: what was proposed, what shipped, what got rejected, why. The reasoning behind past decisions stops walking out the door every time someone leaves.
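Decision memory only works if past decisions are stored with their reasoning attached — what was proposed, what shipped, what got rejected, why — and retrieved by relevance to the new argument. A minimal sketch, again with keyword overlap standing in for real retrieval; the `Decision` shape and `recall` helper are hypothetical names, not the system's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    topic: str       # what the argument was about
    proposed: str    # what was proposed
    outcome: str     # "shipped" or "rejected"
    rationale: str   # the why — the part that usually walks out the door

def recall(log: list[Decision], question: str) -> Optional[Decision]:
    """Surface the past decision whose topic best matches the question."""
    def overlap(d: Decision) -> int:
        return len(set(question.lower().split()) & set(d.topic.lower().split()))
    best = max(log, key=overlap, default=None)
    return best if best and overlap(best) > 0 else None
```

The senior designer with eight years of context becomes queryable, instead of being the only copy.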
My component design team killed a feature for privacy reasons. A design strategist caught a real edge case — the kind of specific, rare scenario that sits outside anything AI could recognize. The full story is its own case study; the part that matters here is what the AI couldn’t do. It had the clinical literature. It had the workflow. What it couldn’t do was reason about the actual room the feature would land in — the specific human context a strategist sees because they’ve sat with it.
Domain Foundation exists so the why can be captured in a form a model can use. The judgment of experienced people doesn’t walk out the door when they do.
Design teams whose expertise is encoded — a retrievable knowledge base tied to the generation and validation work — do work that doesn’t look like everyone else’s. Design teams whose expertise lives only in people’s heads produce outputs a PM with a good prompt can already reproduce.
The methodology is hard to demo. Half the value is in what an organization chooses to put in the knowledge base, and the choosing is the expertise. The structure is just the container.
The people I find myself talking to now are the ones who’ve already arrived at the same realization: default models produce default output.