For most of human history, the value of a solution came partly from its generalisability. A better technique for preserving food, a more reliable way to navigate, a more effective teaching method — these things spread because they worked for more than one person. Shared solutions accumulated into shared knowledge.
AI is quietly disrupting that logic. It is now possible — and increasingly easy — to build something that addresses your problem exactly: your language, your constraints, your particular set of circumstances. The barrier between having a problem and having a working solution has collapsed for millions of people who previously had no access to that kind of tailored response.
This is genuinely remarkable. But it raises a question that we think is worth sitting with: what happens to collective understanding when solutions become hyper-personal? When a nurse builds a tool for her specific patient cohort and never shares it. When a farmer trains a model on his specific land and takes that knowledge with him. When the gap between problem and solution closes — but closes privately.
Cototyping starts from the assumption that both the gains and the losses here are real, and that understanding them requires documentation. We're building an archive of personal-scale AI solutions — not to celebrate them uncritically, but to ask what they reveal about human need, problem-solving, and the relationship between individual ingenuity and collective knowledge.
When solutions are built for one, the knowledge they represent tends not to travel.
Hyper-personal solutions reveal something that generalised ones obscure.
As this community grows, so does the value of what it collectively documents. We take that seriously — and we've made specific commitments about how it's used.
You own what you share. We will never sell individual submissions or personal data to third parties.
Any research or analysis derived from this archive will be published freely — not behind a paywall or used commercially without community consent.
Submissions can be shared publicly, anonymously, or kept private for research purposes only. Granular controls, always.
As patterns emerge from the data, we'll publish what we find and invite the community into how we interpret it — not present conclusions without context.
A single submission tells you about one person's problem. A thousand submissions, structured the same way, start to show you something else: the shape of need that formal systems don't reach.
That's what we're building toward. Not a showcase. Not a marketplace. An archive — rigorous enough to be useful to researchers and policy-makers, human enough to be worth reading for its own sake.
The Alexandrian structure we use for submissions isn't accidental. Christopher Alexander's insight was that a pattern becomes legible only when you document the problem alongside the solution — the forces in tension, not just their resolution. We think the same is true here.
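That pattern form can be made concrete as a structured record. The sketch below is purely illustrative — the field names (`problem`, `forces`, `solution`, `visibility`) are assumptions for the sake of example, not Cototyping's actual submission schema:

```python
from dataclasses import dataclass, field

# Hypothetical Alexandrian-style submission record.
# Field names are illustrative assumptions, not the project's real schema.
@dataclass
class Submission:
    problem: str                      # the situation as the builder experienced it
    solution: str                     # how the tool resolves the problem
    forces: list[str] = field(default_factory=list)  # constraints held in tension
    visibility: str = "private"       # e.g. "public", "anonymous", or "private"

# Example: a nurse's personal-scale tool, documented as problem + forces + resolution.
record = Submission(
    problem="Rostering a small night-shift patient cohort by hand",
    solution="A locally built scheduling assistant tuned to this ward",
    forces=["limited staff", "patient-specific needs", "no off-the-shelf fit"],
)
```

The point of the structure is that the forces travel with the solution: a reader who shares some of the same constraints can tell at a glance whether the pattern applies to them. The `visibility` field mirrors the granular sharing controls described above.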