Most project documents look structured, but collapse as soon as someone asks hard questions. Teams mix decisions, assumptions, and wishes into a single narrative, so every stakeholder interprets the text differently. AI makes this worse by producing plausible wording faster than anyone can check it. The result is clean slides built on vague intent, hidden trade-offs, and choices that were never actually made. When legal, IT, security, works councils, or finance review the material, they expose contradictions, missing ownership, and commitments that cannot survive scrutiny. The real problem is not a lack of content but a lack of explicit, defensible choices. The system exists to remove that ambiguity and force clarity before anything breaks.
The system turns messy, multi-stakeholder project material into a backbone of explicit, testable statements. Instead of blending decisions, assumptions, and intentions into paragraphs that require interpretation, it forces each point into a single concrete sentence that can be accepted, rejected, or corrected without argument. The backbone exposes trade-offs, narrows options, and makes ownership visible. It replaces narrative fog with a structured set of decisions that can survive scrutiny from legal, IT, security, works councils, and finance. The result is not a polished story but a defensible foundation that every other project document has to respect.
The system works by eliminating ambiguity at every step. It starts from a minimal project context and moves directly into a list of decisions. Every item is a single atomic sentence with no subclauses, no embedded choices, and no implied meaning. AI is used only to generate cheap raw material; nothing produced by the model is accepted without being attacked against real incentives, pressures, and political realities. Any sentence that cannot survive skeptical review is corrected with the smallest viable change or removed entirely. Internal consistency is mandatory: no contradictions, no double ownership, no incompatible promises. The system produces clarity by forcing explicit reasoning, not by generating more text.
The project context is a short label that keeps the work inside a single, concrete project universe without smuggling in narrative, politics, or hidden assumptions. It names what is being done, with what, and where, in just a few words. The context is intentionally thin; it is not a story, a goal statement, or a description of how the situation evolved. Its only job is to stop the model from drifting into generic advice or unrelated domains while leaving all real pressures and trade-offs to be captured explicitly in the decision list.
<aside> <img src="/icons/document_gray.svg" alt="/icons/document_gray.svg" width="40px" />
Project context examples
Decision fragments are the raw, unfiltered inputs collected from stakeholders before any formal decision-making begins. They are intentionally rough, incomplete, and often contradictory statements that reveal implicit commitments, hidden assumptions, local pressures, and the kinds of choices people believe the project should make. A simple project topic taxonomy is used to structure this input: a list of project areas or domains is created (manually or with ChatGPT's support), and each area serves as a header under which stakeholders write whatever appears to be a decision from their perspective, without worrying about correctness, feasibility, or consistency. These fragments are not commitments and are not treated as a draft decision list; they are only material for mining angles, tensions, and failure patterns. The Decision Clarity System later uses these fragments as input when generating candidate decisions, but every fragment is attacked, stripped, or discarded during correction. Their sole purpose is to surface the messy realities that people rarely articulate clearly, so the final decision backbone does not drift into generic solutions or artificial coherence.
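The header-plus-fragments shape described above can be sketched as a plain data structure. Everything below is a hypothetical illustration, assuming invented area names, authors, and fragment texts; none of it comes from the method itself:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Fragment:
    """A raw stakeholder statement; never treated as a draft decision."""
    author: str
    text: str

# Topic areas act as headers under which stakeholders dump fragments.
# Area names and contents here are invented examples.
taxonomy: defaultdict[str, list[Fragment]] = defaultdict(list)
taxonomy["Data & Privacy"].append(Fragment("legal", "No production data leaves the EU."))
taxonomy["Rollout"].append(Fragment("ops", "Pilot only one plant first."))
taxonomy["Rollout"].append(Fragment("sales", "Launch everywhere before Q3."))  # contradictions are expected

# Fragments under one header are mined together for tensions, never merged.
for area, fragments in taxonomy.items():
    print(area, [f.text for f in fragments])
```

Note that the two Rollout fragments contradict each other and both stay in the structure; the tension itself is the input the system mines later.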
<aside> <img src="/icons/robot_gray.svg" alt="/icons/robot_gray.svg" width="40px" />
Prompt
<aside> <img src="/icons/robot_gray.svg" alt="/icons/robot_gray.svg" width="40px" />
Prompt
Decisions are the project’s real commitments. They cut options, allocate resources, accept downsides, or rule out attractive alternatives, and they must survive scrutiny from legal, IT, security, works councils, finance, and other reviewers. Each decision is a single atomic sentence that follows the complete trade-off structure: it states the choice being made, explains the concrete operational effect it provides, names the appealing alternative that is rejected, and makes the accepted downside explicit. These decisions force the project to confront the real pressures it faces rather than hide behind broad intent or optimistic language. By making each decision explicit and defensible, the system ensures that choices can withstand challenge and that all stakeholders understand exactly what is being committed to and why.
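The four-part trade-off structure above can be sketched as a record type. The field names, the sentence template, and the sample decision are illustrative assumptions, not the system's own schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    """One atomic decision in the complete trade-off structure.
    Field names are an illustrative mapping of the four parts."""
    choice: str                 # the choice being made
    operational_effect: str     # the concrete operational effect it provides
    rejected_alternative: str   # the appealing alternative that is ruled out
    accepted_downside: str      # the downside knowingly taken on
    owner: str                  # a single accountable owner, never shared

    def as_sentence(self) -> str:
        """Render the decision as one sentence carrying all four parts."""
        return (f"{self.choice}, which {self.operational_effect}, "
                f"instead of {self.rejected_alternative}, "
                f"accepting {self.accepted_downside}.")

# A hypothetical example decision; every detail is invented for illustration.
example = Decision(
    choice="Run the pilot only at the Hamburg plant",
    operational_effect="limits integration work to one ERP instance",
    rejected_alternative="a simultaneous multi-site launch",
    accepted_downside="slower visible progress for other sites",
    owner="plant IT lead",
)
print(example.as_sentence())
```

The point of the frozen record is that a decision with a missing field simply cannot be constructed, which is the structural analogue of refusing sentences that hide a downside or blur ownership.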
<aside> <img src="/icons/robot_gray.svg" alt="/icons/robot_gray.svg" width="40px" />
Prompt
<aside> <img src="/icons/robot_gray.svg" alt="/icons/robot_gray.svg" width="40px" />
Prompt
The following are general instructions for item generation. Acknowledge them only; do not answer yet.
<aside> <img src="/icons/robot_gray.svg" alt="/icons/robot_gray.svg" width="40px" />
Prompt
AI is used only to generate rough candidate sentences. The model receives the project context and any corrected decisions created so far. Its job is to produce a wide range of plausible options quickly, not to be accurate or defensible. Every AI sentence is then attacked against the project context and the existing decisions. Anything that assumes freedoms the project does not have, contradicts earlier commitments, hides trade-offs, or blurs ownership is deleted or corrected with the smallest viable change. AI provides speed and variation; the method provides judgment.
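The division of labor described above can be sketched as a loop. The function names and the toy stand-ins are assumptions for illustration: in practice `generate` is the model producing cheap raw material and `attack` is human judgment, not code:

```python
from typing import Callable, Optional

def build_backbone(
    context: str,
    generate: Callable[[str, list[str]], list[str]],        # AI: cheap raw candidates
    attack: Callable[[str, str, list[str]], Optional[str]],  # judgment: corrected sentence or None
    rounds: int = 2,
) -> list[str]:
    """Accumulate only candidates that survive attack against the
    project context and all previously accepted decisions."""
    accepted: list[str] = []
    for _ in range(rounds):
        for candidate in generate(context, accepted):
            survivor = attack(candidate, context, accepted)
            if survivor is not None and survivor not in accepted:
                accepted.append(survivor)
    return accepted

# Toy stand-ins so the loop runs; real use replaces both with the model and a reviewer.
def toy_generate(context: str, accepted: list[str]) -> list[str]:
    return ["We own the rollout.", "Everything will be seamless."]

def toy_attack(candidate: str, context: str, accepted: list[str]) -> Optional[str]:
    # Reject anything that hides a trade-off behind optimistic language.
    return None if "seamless" in candidate else candidate

print(build_backbone("CRM pilot", toy_generate, toy_attack))
```

Passing the accepted list back into `generate` on every round is the mechanism that lets later candidates be checked against earlier commitments instead of being produced in isolation.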
Items are tested by attacking them, not by polishing them. Each sentence is reviewed against the project context and all previously accepted decisions to determine whether it withstands real-world pressure. The test is simple: does it assume room for maneuver the project does not have, misstate ownership, hide a downside, promise something unrealistic, or conflict with existing commitments? If it fails, it is removed or corrected with the smallest viable change that makes it explicit and defensible. Corrections do not add narrative or soften language; they expose what the AI avoided. The goal is not elegance but survivability under scrutiny from people whose job is to challenge it.
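The remove-or-minimally-correct rule can be sketched as a small review record. The verdict names and fields are illustrative assumptions; the verdicts themselves come from human review, not from code:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Verdict(Enum):
    PASS = auto()     # survives attack unchanged
    CORRECT = auto()  # fails, but a smallest viable correction exists
    REMOVE = auto()   # fails with no minimal fix; dropped entirely

@dataclass
class Review:
    item: str
    verdict: Verdict
    correction: Optional[str] = None  # required only when verdict is CORRECT

def apply_reviews(reviews: list[Review]) -> list[str]:
    """Keep survivors, substitute minimal corrections, drop the rest."""
    backbone: list[str] = []
    for r in reviews:
        if r.verdict is Verdict.PASS:
            backbone.append(r.item)
        elif r.verdict is Verdict.CORRECT:
            if r.correction is None:
                raise ValueError("CORRECT verdict needs a correction")
            backbone.append(r.correction)
        # REMOVE: the item never reaches the backbone
    return backbone
```

There is deliberately no verdict for "rewrite from scratch": an item either passes, takes the smallest viable change, or leaves the backbone.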
<aside> <img src="/icons/document_gray.svg" alt="/icons/document_gray.svg" width="40px" />
</aside>