The handbook is your knowledge base's first draft
Open a GitLab page and you can read more about how GitLab operates than some employees know about their own company. The handbook runs to over 2,000 pages. It covers everything from travel expenses to the reasoning behind internal process changes. Anyone can read it. Anyone can propose a change. Nothing is decided in private and then filed for compliance reasons.
At Voys, where I spent years before starting Klai, we built an open handbook from day one. Every process, decision, and policy lives there. Internal oracle, external reference, onboarding kit, accountability record, all the same document.
These are rare companies. Not because they figured out a secret. Because they decided early that writing things down is the work, not an afterthought.
Why some handbooks work and most wikis do not
The usual narrative is that wikis fail because people are lazy. That is not what I see. People at handbook companies are not more disciplined than people at wiki companies. They are working in a system that makes writing the path of least resistance, not the path of most effort.
Wikis are publishing tools. You write something, you hit save, and then you hope it stays accurate. The editing flow is heavy. The review flow is heavier. The consequence: you only write what you are confident will stay true for months.
A handbook is edited the way code is edited. Small changes, many of them, merged continuously. A process changed today? Update the page today, reference the change, move on. The expectation is not that every page is perfect. The expectation is that every page is current.
That is the difference. Wikis ask for finished thought. Handbooks accept in-progress thought. One of those scales, the other does not.
The handbook is not the knowledge base
A handbook is deliberately authored explicit knowledge. A knowledge base is what your organisation actually stores across every tool. They are not the same thing, and conflating them is where most knowledge projects stall.
Your knowledge base includes the handbook, plus: support ticket history, decision records, Slack threads, emails, customer conversations, product docs, internal wikis, shared drives. Nobody reads all of that. Nobody can. That is the problem AI-augmented retrieval is supposed to solve.
But a knowledge system without a handbook is an AI that has to invent structure from scratch. It sees fifty variations of how your team explains the same policy and does its best to average them. That average is wrong more often than the handbook would be right.
The handbook is not a replacement for the knowledge base. It is the spine. The rest of the content hangs off it.
What the handbook actually captures
Three things that almost never make it into any other system:
The reasoning, not just the rule. “Our return period is 30 days” is a fact. “Our return period is 30 days because enterprise procurement cycles typically need 3 weeks for internal sign-off, and we want the buffer” is knowledge. The second one survives the policy change. The first one does not.
The defaults. Most companies have unwritten defaults: how fast we reply to customers, what counts as a quick decision versus a strategic one, how we handle a disagreement. These live in the heads of senior people until they leave. A handbook turns defaults into text that new colleagues can read in a week instead of absorb over a year.
The conscious trade-offs. Every company has decisions that look strange without context. Why do we not use that obvious SaaS tool? Why is our pricing structured that way? Why do we not sell to this segment? A good handbook says “we considered X, chose Y, here is why” in three sentences. A bad knowledge base forces every new employee to re-debate the same questions. That is exactly the gap a decision record with rationale closes, and the handbook is where those records belong.
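The "considered X, chose Y, here is why" shape is small enough to model in a few fields. A minimal sketch of such a record — field names are illustrative, not a Klai or GitLab schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One conscious trade-off, captured with its reasoning."""
    title: str               # e.g. "Return period"
    decision: str            # what was chosen
    rationale: str           # why -- the part that survives the policy change
    alternatives: list[str]  # what was considered and rejected
    decided_on: date
    status: str = "active"   # or "superseded" when the trade-off is revisited

record = DecisionRecord(
    title="Return period",
    decision="30-day return period",
    rationale="Enterprise procurement cycles typically need 3 weeks "
              "for internal sign-off, and we want the buffer.",
    alternatives=["14 days", "60 days"],
    decided_on=date(2024, 1, 15),
)
```

The point of the structure is the `rationale` field: when the decision changes, the record is marked superseded rather than deleted, so the reasoning stays readable.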
Where AI actually helps
I want to be direct about what AI adds and where it does not.
AI helps with retrieval. A handbook of 2,000 pages is not something you read front-to-back. You ask a question, the system finds the relevant passage, you get an answer. The handbook stays authoritative. The search gets better.
AI helps with extraction. When the handbook is silent on a topic but the answer lives in a Slack thread from six months ago, a retrieval system can surface it with the handbook context so you know where it fits. The handbook’s taxonomy becomes the organising principle for everything else.
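The ranking idea behind both points can be shown in a toy sketch. Word overlap stands in for embeddings here, and the source names and boost factor are invented; a real pipeline differs in the scoring, not in the shape:

```python
def score(query: str, passage: str) -> float:
    """Toy relevance: fraction of query words found in the passage.
    A production system would use embeddings; the ranking logic is the same."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def retrieve(query: str, corpus: list[dict], handbook_boost: float = 1.5) -> dict:
    """Rank passages, boosting handbook pages: they are written as
    answers to questions, so they deserve to win ties."""
    return max(
        corpus,
        key=lambda doc: score(query, doc["text"])
        * (handbook_boost if doc["source"] == "handbook" else 1.0),
    )

corpus = [
    {"source": "slack", "text": "i think returns are 30 days? not sure"},
    {"source": "handbook", "text": "Our return period is 30 days because "
     "enterprise procurement cycles need three weeks for sign-off."},
]
best = retrieve("what is the return period", corpus)
# the handbook passage outranks the half-remembered Slack thread
```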
AI does not write your handbook for you. It cannot. The handbook is opinionated. It encodes what your company believes, why it believes it, and what you would give up to hold those beliefs. Those are human calls. A model cannot derive them from usage data, and it should not try.
The split is the same 90/10 principle from the first post: AI does the retrieval, the summarisation, the extraction. Humans write the source material. Getting that split right is what makes knowledge tools useful instead of noisy.
The objection everyone raises
“Our company moves too fast for a handbook. We would spend all our time writing and none of it building.”
I have heard this at every company I have worked with, and it is almost always wrong in the same way. The writing time is not additional time. It is time that is already being spent: re-explained in Slack threads, repeated in every onboarding call, rehashed in meetings where the same decision gets re-made because nobody remembers the first one.
A team of twenty people saying something three times a week costs more than writing it down once. The handbook is the compound interest of clear thinking. It just feels expensive because the cost is upfront and the benefit is later.
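The back-of-envelope version of that claim, with illustrative numbers (the minutes and frequencies below are assumptions, not data from any company):

```python
# Repeated explanation vs. writing it down once.
# All numbers are illustrative assumptions.
explanations_per_week = 3
minutes_per_explanation = 10
weeks_per_year = 48

# Cost of never writing it down: one recurring question, answered live.
yearly_repeat_cost = explanations_per_week * minutes_per_explanation * weeks_per_year
# 3 * 10 * 48 = 1440 minutes, or 24 hours a year, for a single question

# Cost of the handbook habit: write once, keep it current with small edits.
write_once = 60          # one hour to write the page properly
maintain_per_year = 120  # two hours of small, continuous edits

handbook_cost = write_once + maintain_per_year  # 180 minutes
```

Even with generous maintenance time, the written answer costs roughly an eighth of the repeated one — and the gap widens with every person who asks.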
There is a real version of this objection: early-stage teams where the thing you are building is changing weekly. For those, a full handbook is overkill. But even there, a decision log and an onboarding doc already cover most of the value. The principle scales down.
Where Klai fits
Klai is built assuming a handbook exists, or is being written, or is about to be. The retrieval pipeline works best when the handbook is the highest-quality content in the index. Handbook pages score higher on relevance because they are written as answers to questions, not as artefacts of a process.
When the handbook is incomplete, Klai surfaces the gaps: questions your team asks that the handbook does not answer yet. That is the editorial signal: not “write more”, but “write this specifically, here is how often people need it, here is where it fits in your existing structure”.
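Surfacing those gaps can be as simple as counting questions the handbook failed to answer. A sketch, with an invented threshold and a stand-in predicate for “the handbook answered this”:

```python
from collections import Counter

def handbook_gaps(question_log, answered_by_handbook, min_count=3):
    """Return frequently asked questions the handbook does not answer,
    most frequent first -- the 'write this specifically' signal."""
    counts = Counter(q for q in question_log if not answered_by_handbook(q))
    return [(q, n) for q, n in counts.most_common() if n >= min_count]

log = ["how do we handle refunds?"] * 5 + ["what is the travel policy?"] * 2
gaps = handbook_gaps(log, answered_by_handbook=lambda q: "travel" in q)
# the refund question surfaces as a gap; the travel question is covered
```

The frequency count is what turns “write more” into an editorial priority: the most-asked unanswered question is the next page to write.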
The self-improving loop only works when there is a handbook to improve. Without it, the system has no spine to grow from.
Start here
If your company does not have a handbook, do not start by trying to write one. Start smaller: pick the three questions your team answers most often and write down the answer once. Link to it. Next time someone asks, send the link instead of typing the answer. That is the handbook habit. Everything else is scaling that pattern.
If your company has a handbook that has drifted, do not start by auditing every page. Start by asking three recent joiners what they wish had been documented when they started. That list is more accurate than any audit.
The handbook is not a product. It is a commitment to write the thing down. Once that commitment exists, AI-augmented knowledge tools become force multipliers. Without it, they are a louder version of the same confusion.
Next up in this series: why decisions need their own data model, and why rationale is the one field AI still cannot write for you. Read decisions deserve their own data model.