Three axes, not one label
Every knowledge tool has a type field. Procedure. Meeting note. Policy. FAQ entry. You tag things, filters appear, and the taxonomy grows.
The problem is not that type labels are useless. The problem is that one label is trying to do the work of three different things simultaneously, and doing all three badly.
What a type label actually collapses
When someone tags a knowledge artifact “procedure,” they are simultaneously answering three distinct questions without realising it:
- Where did this come from? Was it observed directly, extracted from a source, synthesised from multiple sources, or revised from an earlier version?
- What kind of claim is it? A sequence of steps? A stated fact? A belief the team holds? An unverified speculation?
- How much epistemic work went into it? Is it raw captured text, or has it been verified and curated over time?
A single type label conflates all three. The system has no way to know whether the “procedure” was observed yesterday and never verified, or synthesised from a hundred consistent support transcripts over two years. It cannot tell whether “fact” means verified against primary sources or “someone said this once.” It cannot weight two answers differently based on how much evidence backs them.
That information is not lost because it is hard to capture. It is lost because most systems were not designed to ask the question.
Axis 1: provenance origin
The first axis is where the artifact came from, relative to the real-world events it describes.
| Value | Meaning |
|---|---|
| observed | Directly captured from an event: a transcript, a raw import, a sensor log |
| extracted | Derived from source text by deterministic process or targeted extraction |
| synthesised | Constructed from multiple sources by reasoning or summarisation |
| revised | An updated version of an existing knowledge artifact |
This maps to W3C PROV-O properties (hadPrimarySource, wasDerivedFrom, wasRevisionOf), formalised precisely because the origin of a fact determines how you cite it, how much you trust it, and when it expires.
An observed artifact is a record of what happened. A synthesised artifact is a claim about what it meant. The difference matters every time someone asks a question where the answer depends on distinguishing “we recorded this” from “we concluded this.” The evidence vs. claims distinction goes deeper than provenance alone, but provenance origin is where the structural split begins.
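To make the axis concrete, here is a minimal sketch of the provenance values and one possible alignment with the PROV-O properties mentioned above. The enum is illustrative, and the alignment is an interpretation, not something the PROV-O specification itself prescribes:

```python
from enum import Enum

class Provenance(Enum):
    OBSERVED = "observed"        # captured directly from an event
    EXTRACTED = "extracted"      # pulled from a single source text
    SYNTHESISED = "synthesised"  # constructed from multiple sources
    REVISED = "revised"          # updated version of an earlier artifact

# Assumed mapping onto W3C PROV-O properties; both extracted and
# synthesised artifacts are "derived", they differ in source count.
PROV_O_PROPERTY = {
    Provenance.OBSERVED: "prov:hadPrimarySource",
    Provenance.EXTRACTED: "prov:wasDerivedFrom",
    Provenance.SYNTHESISED: "prov:wasDerivedFrom",
    Provenance.REVISED: "prov:wasRevisionOf",
}

print(PROV_O_PROPERTY[Provenance.REVISED])  # prov:wasRevisionOf
```

Recording the PROV-O property alongside the enum value keeps the metadata exportable to standard provenance tooling without locking the internal model to the ontology.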
Axis 2: assertion mode
The second axis is what kind of claim the artifact makes.
| Value | Meaning |
|---|---|
| factual | Claimed to be true: “the return period is 30 days” |
| procedural | A sequence of steps claimed to achieve a goal |
| quoted | Attributed to a specific source, not independently endorsed |
| belief | Held as likely true but not verified: “we think this affects macOS 14 only” |
| hypothesis | Explicitly speculative; requires validation before acting on it |
This distinction changes how a system should present an answer. A belief returned without that label is a confidence problem: the reader treats it as a verified fact. A hypothesis presented as factual can lead to decisions made on unvalidated assumptions.
Large-scale knowledge graphs solve this implicitly through source credibility scores. The limitation of that approach is that it buries the distinction inside a numerical weight the retrieval system uses but the reader never sees. Making assertion mode explicit gives the reader the information too.
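One way to surface the distinction is to let the assertion mode drive a hedging prefix on the rendered answer. This is a sketch under assumed prefixes, not a prescribed presentation layer:

```python
from enum import Enum

class AssertionMode(Enum):
    FACTUAL = "factual"
    PROCEDURAL = "procedural"
    QUOTED = "quoted"
    BELIEF = "belief"
    HYPOTHESIS = "hypothesis"

# Hedging prefixes the reader sees; wording here is illustrative.
PREFIX = {
    AssertionMode.FACTUAL: "",
    AssertionMode.PROCEDURAL: "Steps: ",
    AssertionMode.QUOTED: "According to the cited source: ",
    AssertionMode.BELIEF: "The team currently believes: ",
    AssertionMode.HYPOTHESIS: "Unvalidated hypothesis: ",
}

def present(answer: str, mode: AssertionMode) -> str:
    """Surface the assertion mode to the reader, not just to the ranker."""
    return PREFIX[mode] + answer

print(present("this affects macOS 14 only", AssertionMode.BELIEF))
# The team currently believes: this affects macOS 14 only
```

The point is that the label travels all the way to the reader instead of being flattened into a hidden credibility score.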
Axis 3: synthesis depth
The third axis is how much epistemic work has been done on the artifact.
| Depth | Description | Example |
|---|---|---|
| 0 | Raw capture, unprocessed | Transcript chunk, imported PDF page |
| 1 | Extracted claim, directly attributable to a single source | Problem/solution pair from one transcript |
| 2 | Synthesised from multiple sources, single topic | Procedure consolidated from five similar transcripts |
| 3 | Cross-topic synthesis, organisational position | Product area knowledge summary |
| 4 | Published artifact, fully curated | Approved help article, formal policy document |
This maps to the Zettelkasten transformation chain and to GraphRAG’s Document → TextUnit → Entity → Community Report hierarchy. The higher the depth, the more deliberate the epistemic work, and the more the artifact represents an organisational belief rather than raw captured data.
Synthesis depth drives several retrieval decisions. A depth-0 chunk should probably not be served directly as an answer. A depth-4 article from a trusted source should be weighted higher than a depth-1 extraction from a single conversation. The depth is also a signal for enrichment: low-depth content with a large vocabulary gap between raw text and likely user queries benefits most from pre-processing to improve retrieval; high-depth curated articles usually do not.
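The retrieval decisions above can be sketched as a depth-aware scoring function. The weight table and the minimum servable depth are assumptions for illustration, not recommended values:

```python
# Assumed multipliers per synthesis depth (0 = raw capture, 4 = published).
DEPTH_WEIGHT = {0: 0.2, 1: 0.6, 2: 0.8, 3: 0.9, 4: 1.0}

def rank_score(similarity: float, depth: int, servable_min_depth: int = 1) -> float:
    """Down-weight shallow artifacts; refuse to serve raw chunks as answers."""
    if depth < servable_min_depth:
        return 0.0  # depth-0 chunks can support an answer but are not the answer
    return similarity * DEPTH_WEIGHT[depth]

# A depth-4 article beats a depth-1 extraction at equal similarity:
print(rank_score(0.8, 4))            # 0.8
print(round(rank_score(0.8, 1), 2))  # 0.48
print(rank_score(0.8, 0))            # 0.0
```

In practice the multipliers would be tuned against evaluation data; the structural point is that depth is a retrieval input, not just a display field.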
Why three axes instead of a richer taxonomy
The reason you need three separate fields rather than an extended type taxonomy is that the axes are genuinely independent. Consider two artifacts:
- A fact manually entered from a verified primary source: provenance observed, assertion mode factual, synthesis depth 1.
- An AI-generated cross-topic summary of product area knowledge: provenance synthesised, assertion mode factual, synthesis depth 3.
Both are factual. Neither is procedural. But they behave completely differently in the system: different retrieval weights, different citation styles, different invalidation logic. “Factual” alone tells you nothing useful about either of them. You need all three axes.
The same logic applies in the other direction. A belief artifact can reach synthesis depth 4 if someone has spent two years curating an organisational position that is still held with uncertainty. A hypothesis can be synthesised from multiple sources and still remain unverified. No combination of type labels captures this. The three axes do.
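The independence of the axes is easy to show in code: two artifacts share an assertion mode yet diverge on every downstream decision. The record shape and the citation policy below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Axes:
    provenance: str       # observed | extracted | synthesised | revised
    assertion_mode: str   # factual | procedural | quoted | belief | hypothesis
    synthesis_depth: int  # 0-4

# Both "factual", yet they behave differently everywhere else:
manual_fact = Axes("observed", "factual", 1)
ai_summary = Axes("synthesised", "factual", 3)

def citation_style(a: Axes) -> str:
    """Hypothetical policy: observed facts cite their primary source;
    synthesised ones cite the set of inputs they were built from."""
    return "primary source" if a.provenance == "observed" else "input sources"

print(citation_style(manual_fact))  # primary source
print(citation_style(ai_summary))   # input sources
```

Any single type label would collapse these two records into one bucket; three fields keep the distinctions addressable.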
Start here
If you are designing a knowledge system or evaluating an existing one, the practical question is: which of these three axes does your current metadata model track?
Most tools track none explicitly. They have a type field that conflates all three, or they have no structured metadata at all and rely on full-text search to compensate.
The minimum viable starting point is three fields on every knowledge artifact, with sensible defaults:
- `provenance_type`: `observed` for ingested content, `synthesised` for AI-generated output
- `assertion_mode`: `factual` for most artifacts; `belief` for anything the team holds with uncertainty; `hypothesis` for anything flagged for validation
- `synthesis_depth`: 0 for raw imports; 1 to 2 for extracted content; 3 to 4 for curated or published artifacts
These defaults will be wrong some of the time. A human review gate corrects the high-stakes cases. What you gain immediately is the ability to weight results differently, cite sources with the right level of confidence, and surface the difference between a verified fact and a team belief before the reader acts on it.
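A minimal sketch of those defaults, assuming a handful of illustrative ingestion categories (the category names are not from any particular system):

```python
def default_axes(source_category: str) -> dict:
    """Day-one defaults; a human review gate corrects the high-stakes cases."""
    defaults = {
        "raw_import": {
            "provenance_type": "observed",
            "assertion_mode": "factual",
            "synthesis_depth": 0,
        },
        "ai_generated": {
            "provenance_type": "synthesised",
            "assertion_mode": "factual",
            "synthesis_depth": 2,
        },
        "published_article": {
            "provenance_type": "revised",
            "assertion_mode": "factual",
            "synthesis_depth": 4,
        },
    }
    try:
        return dict(defaults[source_category])
    except KeyError:
        raise ValueError(f"unknown source category: {source_category}")

print(default_axes("raw_import"))
```

A review workflow would then override `assertion_mode` to `belief` or `hypothesis` where the defaults guessed wrong.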
The classification does not have to be perfect on day one. It has to exist.
The three axes also map directly onto decisions as first-class entities: a decision carries its provenance (who made it, from which source), an assertion mode (factual, belief, or hypothesis), and a synthesis depth that grows as the decision is revisited and refined.
Next up in this series: why automated topic clustering works well for discovery and reliably fails as an authoritative classification system. Read “The self-managing taxonomy is a myth.”