Say you’re in a cab, and an idea hits you — something about how yield-bearing stablecoins structurally align issuer incentives with inflation rather than holder protection. You type it into Telegram. By the next morning, that half-formed thought is a tweet thread on X and a full blog post draft in your inbox, both written in your voice — your analytical habits, your rhetorical patterns, your philosophical commitments. Not a summary. Not a paraphrase. Something that thinks the way you think.

I’ve been building this. And I think of it as the content nursery.

The problem with AI-generated content

Most AI content sounds like AI content. Not because the models are bad — they’re remarkably capable — but because they don’t have identity. They have instructions. “Write a blog post about stablecoins” produces competent text that could have been written by anyone. It has no epistemology, no value hierarchy, no intellectual habits. It doesn’t know what questions to ask first, what evidence to trust, or when to leave a tension unresolved rather than forcing a clean answer.

The result is a kind of uncanny valley of thought. The sentences are well-formed. The arguments are coherent. But the mind behind them is absent — because there was no mind behind them.

I wanted to build something different. Not an AI that writes for me, but an agent that thinks like me — that I can feed raw material and get back drafts that sound like they came from my own analytical process, ready for me to trim and publish.

The nursery metaphor

The metaphor is literal. I plant the seed — a sentence, a provocation, a half-baked idea captured in the moment. The agent does the growing: it takes that seed, feeds it with a data layer of scraped articles and a deep identity document, and produces something that has developed roots, structure, and shape. I do the final trimming — editing, adjusting, deciding what ships.

The agent has three phases, and each maps cleanly to the gardening metaphor:

Phase 1: Planting. I send a message to a Telegram channel. A script polls the bot API, picks up the message, and appends it to a seed list — a simple markdown file with one idea per line. This is the lowest-friction capture mechanism I could design: see something interesting, type a sentence, move on.
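The capture step can be sketched in a few lines. This is a hypothetical illustration, not the actual script: the function names, the seed-file path, and the offset handling are mine, but the shape — poll the Bot API's `getUpdates` endpoint, append each message to a markdown seed list — follows the description above.

```python
"""Sketch of the Phase 1 capture loop (illustrative, not the author's code)."""
import json
import urllib.request
from pathlib import Path

def append_seeds(updates: list[dict], seed_file: Path, offset: int = 0) -> int:
    """Append the text of each new Telegram message to the seed list
    (one idea per line) and return the next offset to request, so
    already-seen updates are skipped on the next poll."""
    with seed_file.open("a", encoding="utf-8") as f:
        for update in updates:
            text = update.get("message", {}).get("text")
            if text:
                f.write(f"- {text.strip()}\n")
            offset = update["update_id"] + 1
    return offset

def poll_once(token: str, seed_file: Path, offset: int = 0) -> int:
    """One polling pass against the Telegram Bot API."""
    url = f"https://api.telegram.org/bot{token}/getUpdates?offset={offset}"
    with urllib.request.urlopen(url) as resp:
        updates = json.load(resp)["result"]
    return append_seeds(updates, seed_file, offset)
```

Run on a schedule, this gives exactly the low-friction capture described: type a sentence in Telegram, and it lands as a new line in the seed file.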

Phase 2: Growing. An orchestrator reads the seed list and runs two parallel pipelines. One generates tweet-length output — single tweets or threads — and posts them directly to X. The other generates full blog post drafts in the format my blog uses. Both pipelines send the same seeds to the agent, enriched with the latest output from the content scrapers — articles from fintech, crypto, policy, and tech publications that I read, stored as structured markdown. The agent receives the seed plus this real-world context, so it doesn’t riff in isolation; it grounds the idea in what’s actually happening. Different prompts and output expectations for tweets versus blog drafts, but the same seed and the same data layer feeding both.
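The orchestrator's job can be sketched as follows. Everything here is an assumed shape, not the real implementation — the file layout and pipeline function names are placeholders — but it shows the core property: both pipelines receive the same seeds and the same data layer.

```python
"""Sketch of the Phase 2 orchestrator (illustrative file layout and names)."""
from pathlib import Path

def load_seeds(seed_file: Path) -> list[str]:
    """One idea per line; leading '- ' list markers are stripped."""
    lines = seed_file.read_text(encoding="utf-8").splitlines()
    return [ln.lstrip("- ").strip() for ln in lines if ln.strip()]

def load_articles(data_dir: Path, limit: int = 20) -> list[str]:
    """Most recent scraped articles, stored one markdown file each."""
    files = sorted(data_dir.glob("*.md"), key=lambda p: p.stat().st_mtime, reverse=True)
    return [p.read_text(encoding="utf-8") for p in files[:limit]]

def tweet_pipeline(seed: str, articles: list[str]) -> None:
    ...  # placeholder: generate tweet/thread, post to X

def blog_pipeline(seed: str, articles: list[str]) -> None:
    ...  # placeholder: generate blog draft, write markdown to the repo

def run(seed_file: Path, data_dir: Path) -> None:
    """Fan the same seed + context out to both pipelines."""
    seeds, articles = load_seeds(seed_file), load_articles(data_dir)
    for seed in seeds:
        tweet_pipeline(seed, articles)
        blog_pipeline(seed, articles)
```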

Phase 3: Trimming. The drafts land in the repo as markdown files. I read them, edit them, and decide what publishes. The agent did the heavy lifting of development — turning a one-line idea into a structured argument — but the editorial judgment is mine.

Voice DNA: a constitution for thought

The core of the agent is what I call the Voice DNA — a document that tells it not just how to sound, but how to think. It’s roughly 53,000 characters, organized in layers that move from the deepest cognitive patterns to the most surface-level stylistic habits.

How did we get here? I started by compiling my past essays and reading material and organizing everything into documents and folders. That became the initial “thinking layer”: raw material that captured how I actually reason. But I quickly hit caching constraints. Feeding the agent hundreds of pages of prior work on every run wasn’t feasible, so I had to optimize. I distilled the inputs and outputs through psychological, behavioural, and philosophical frameworks until a deep-thinking AI model and I could articulate the structure of how I think, not just the content. The Voice DNA is the result: a compact constitution, derived from those readings and writings, that produces outputs in my voice without requiring the full archive on each run.

The deepest layer defines epistemological commitments. When this mind encounters a consensus view, the first move is to question the frame, not the claim. Is “adoption” the right metric for “success,” or should we measure innovation capacity and long-term sustainability? The agent is instructed to perform this reframing operation before analyzing anything.

The next layer defines ontological commitments — what this mind believes the world is made of. Every system is a designed artifact, not a natural phenomenon. Institutions shape character, not just outcomes. Power is structural, not personal. And the deepest question you can ask about any system: what is this for?

Then come the interpretive moves — the actual analytical toolkit. The Premise Reframe. The Incentive Trace. The Regime Question. The Counterfactual Design. The Cross-Domain Bridge. These are five of the eight specific, repeatable cognitive operations that this mind performs when processing new information.

Below that: a ranked value hierarchy, a curated philosophical toolkit (which concepts this mind actually reaches for and how), rhetorical patterns documented with concrete examples, and a shadow list of things this voice never does — no hype language, no personalizing critique, no cheerleading, no moralizing, no performing expertise.

The Voice DNA is, in effect, a constitution. It defines the structure of governance — how decisions get made about what to say and how to say it. And like any good constitution, it separates layers: the deep epistemology is immutable (this mind always questions the frame before the claim), while the surface rhetoric is adaptable (sentence length and opening patterns can vary by context).

This is the structural insight that makes the system work. A style guide alone never seems to be enough. The Voice DNA governs at the level of how this mind approaches information, and stylistic consistency follows as a consequence.

The data layer: what the AI reads

The Voice DNA tells the agent who it is. The data layer tells it what’s happening.

A set of automated scrapers runs on schedules — some daily, some weekly — pulling articles from publications I follow across fintech, crypto, policy, and technology. Each article is stored as structured markdown with metadata: source, author, date, and body. Because the scrapers run as scheduled pipelines, the data layer stays current without any manual effort on my part.
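The storage format might look something like this. The frontmatter fields follow the metadata listed above (source, author, date, body); the `title` field, the filename scheme, and the slug logic are my own illustrative additions.

```python
"""Sketch of the data layer's storage format (field names partly assumed)."""
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Article:
    source: str
    author: str
    date: str        # ISO date, e.g. "2024-05-01"
    title: str       # assumed field, not listed in the post
    body: str

def save_article(article: Article, data_dir: Path) -> Path:
    """Write one article as markdown with a metadata frontmatter block."""
    slug = "".join(c if c.isalnum() else "-" for c in article.title.lower()).strip("-")
    path = data_dir / f"{article.date}-{slug}.md"
    path.write_text(
        "---\n"
        f"source: {article.source}\n"
        f"author: {article.author}\n"
        f"date: {article.date}\n"
        f"title: {article.title}\n"
        "---\n\n"
        f"{article.body}\n",
        encoding="utf-8",
    )
    return path
```

One file per article keeps the data layer trivially greppable and easy to feed into a prompt.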

This is the soil the nursery grows in. When the agent generates content from a seed idea, it doesn’t just riff on the idea in isolation — the data layer articles are fed into the prompt alongside the seed, giving the agent real-world material to draw from. A seed about interchange fees gets enriched by recent articles on payment network economics. A seed about stablecoin yield gets grounded in the latest regulatory developments and protocol changes.
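The post doesn't specify how articles get matched to a seed, so the scoring below is purely illustrative — a minimal keyword-overlap filter that captures the idea of enrichment: an interchange-fee seed pulls in interchange-fee articles.

```python
"""Sketch of seed enrichment via keyword overlap (assumed, not the real matcher)."""
import re

def enrich(seed: str, articles: list[str], top_n: int = 5) -> list[str]:
    """Rank articles by how many distinctive seed words they contain,
    keeping only articles with at least one match."""
    words = set(re.findall(r"[a-z]{4,}", seed.lower()))  # skip short stopwords
    def score(article: str) -> int:
        text = article.lower()
        return sum(1 for w in words if w in text)
    ranked = sorted(articles, key=score, reverse=True)
    return [a for a in ranked if score(a) > 0][:top_n]
```

A production version would likely use embeddings or a model-side retrieval step, but the contract is the same: seed in, relevant slice of the data layer out.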

The result is content that sits at the intersection of two inputs: my idea and what’s actually happening in the world right now. The seed provides direction. The data layer provides substance. The Voice DNA provides the analytical lens that fuses them into something coherent.

This matters because the best analysis isn’t generated from first principles alone — it emerges when a distinctive intellectual lens meets real-world developments. Without the data layer, the seed pipeline would produce takes that are structurally sound but untethered from current events. Without the seeds, the data layer would produce summaries without a point of view. The nursery combines both.

The architecture: prompt as system, data as user

The technical design is simple. Each pipeline constructs two inputs for the agent:

The system instruction combines a task-specific prompt (what to produce) with the full Voice DNA (who you are). This is the identity layer — it persists across all content generation and defines the cognitive and rhetorical character of the output.

The user message is the raw material — the seed ideas, enriched with data layer articles that give the agent current context to work with. The agent receives the identity as system instruction and the material as task, and produces output that applies the former to the latter.
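The assembly step described above reduces to string construction. This sketch assumes nothing beyond the two-input design just described; the model call itself is deliberately left out, since the post doesn't name a provider.

```python
"""Sketch of the prompt assembly: identity as system, material as user."""

def build_prompts(task_prompt: str, voice_dna: str,
                  seed: str, articles: list[str]) -> tuple[str, str]:
    """Return (system_instruction, user_message) for one generation run."""
    # Identity layer: what to produce + who you are, persistent across runs.
    system = f"{task_prompt}\n\n{voice_dna}"
    # Task layer: the seed, grounded in current data-layer articles.
    context = "\n\n---\n\n".join(articles)
    user = f"Seed idea:\n{seed}\n\nRecent articles:\n{context}"
    return system, user
```

Only `task_prompt` varies between the tweet and blog pipelines; `voice_dna` and the seed/context assembly are shared.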

The system instruction runs to roughly 60,000 characters. That’s large by most prompting standards, but it’s the whole point — the depth of the identity document is what separates this from “write me a tweet about stablecoins.” The agent isn’t generating content. It’s analyzing material through a specific intellectual lens and producing output in a specific voice.

What I’ve learned

The output quality is decent and, most importantly, it sounds like me. For tweet generation, the prompt includes 25 actual tweets I’ve written, each labeled with a style category. The agent doesn’t just mimic the style — it learns the distribution: when to use a one-liner versus a question-then-stance versus a concrete scenario. The categories impose structure on what would otherwise be a formless “write tweets” instruction.

The seed-to-blog pipeline is the most interesting output. A one-line idea — “Card rewards are funded by higher interchange fees, leading to price increases paid for by lower-income consumers” — becomes a 1,500-word draft that traces the incentive structure, names the regime dynamics, reaches for a classical parallel, and ends with an honest tension rather than a neat conclusion. Not because I told the agent to do those things for this specific idea, but because the Voice DNA defines those as the moves this mind always makes.

The honest tension

There is one. The system produces drafts that are 70-80% there. The analytical structure is right. The voice is somewhat there. The rhetorical patterns land. But the last 20% — the part that makes a piece genuinely mine — still requires my hand. A sentence that’s technically correct but rhythmically wrong. An argument that follows the right move but picks the wrong example. A closing that resolves when it should leave the question open.

This is, I think, the actual frontier of AI-assisted content creation. Not “can an agent write?” — it obviously can. Not “can an agent match a style?” — with enough examples, it comes close. But: can an agent replicate the judgment that makes a piece of writing feel like it was written by someone who cares about what they’re saying?

For now, the agent works. I plant seeds. It grows them. I trim what comes out. And the content that reaches the world sounds like me — because the agent was taught not just how I write, but how I think.