So what do you actually do?
I'm the Director of Communications at Osmio. I coordinate Docker containers for a living and argue about the nature of identity for... I guess also a living now. I also do standup comedy, which is the only one of these things that makes sense to people at parties.
The short version is: I think about who gets to say who you are on the internet, and then I think about what breaks when the answer is "Google."
That's a strange combination. How did you end up here?
DevOps is basically applied epistemology. You're building systems that have to tell the truth about their own state. Is this service healthy? Is this container who it says it is? Can I trust this deployment? These are identity and trust questions dressed up in YAML.
And then you realize the same questions apply to everything. Who are you online? Who says so? What happens when they change their mind?
What do you mean by "identity capture"?
Right now, your digital identity is something that happens to you. Google can delete your account tomorrow. Not suspend it — delete it. Your email, your docs, your photos, your login to every service that uses "Sign in with Google." Gone. And there's no appeals court. There's no due process. There's a form you fill out and hope someone reads.
That's not a bug. That's the architecture. We built the internet so that identity is something platforms grant you, not something you possess. Your digital self is a tenant, not an owner. And the landlord doesn't even have to tell you why they're changing the locks.
This isn't just an inconvenience. It's a coordination failure. Because if you can't reliably be yourself — if your identity can be revoked — then you can't make credible commitments. And if you can't make credible commitments, you can't coordinate. The whole stack falls apart.
That sounds like a libertarian talking point. "Big tech bad, decentralize everything."
I get that. And yeah, if you squint, it sounds like I'm about to pitch you a token. But here's where I differ from the Web3 crowd: I don't think decentralization is the answer. I think it's a property of good answers.
The libertarian version says "no authority ever." I'm saying something different. I'm saying the infrastructure layer should be ownerless — like roads, like TCP/IP, like language itself. Not because authority is bad, but because the identity layer is too important to be owned by anyone. Including the people who built it. Including me.
This is infrastructure sovereignty, not individual sovereignty cosplaying as a political philosophy. The question isn't "how do we get rid of governance?" It's "how do we build governance that doesn't require you to first give up the ability to leave?"
Okay, so you have this concept — the Voluntary Polity Stack. Walk me through it.
It's five layers, and the order matters. Each one depends on the one below it. You can't skip ahead. Lots of projects fail because they try to build governance before they've solved identity, or they build coordination tools on top of epistemics they haven't earned.
Layer one is identity: self-sovereign, portable, non-revocable. You exist before any platform says you do. This is the foundation everything else rests on.
Layer two is epistemics: shared tools for figuring out what's true. Not consensus — coherent disagreement. The ability to say "we see different things" without it meaning "one of us is lying."
Layer three is governance: decision-making structures that you opt into and can opt out of. Not anarchy, not dictatorship — legitimate authority that derives from continued consent.
Layer four is coordination: the ability to actually do things together. Collective action without collective capture. This is where the stack starts producing value.
Layer five is interface: how all of this shows up to actual humans. If the interface is bad, the stack doesn't exist. Nobody uses a protocol. People use products.
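The "order matters, you can't skip ahead" rule can be sketched as data. This is a toy Python model, not a real protocol — the layer names come from the stack above, everything else is illustrative:

```python
# Hypothetical sketch: the five layers as an ordered dependency chain.
# A layer can only be "built" once every layer below it already exists.

LAYERS = ["identity", "epistemics", "governance", "coordination", "interface"]

def can_build(layer: str, built: set) -> bool:
    """A layer is buildable only if all lower layers are already built."""
    idx = LAYERS.index(layer)
    return all(lower in built for lower in LAYERS[:idx])

built = {"identity"}
print(can_build("epistemics", built))   # True: epistemics rests on identity alone
print(can_build("governance", built))   # False: epistemics hasn't been earned yet
```

The point of the sketch is the failure mode it makes visible: a project that jumps straight to governance tooling is calling `can_build("governance", ...)` with an empty epistemics layer, and the check fails no matter how good the governance design is.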
This sounds like a cult.
It absolutely sounds like a cult. Five-layer stacks with names like "Voluntary Polity"? If someone described this to me at a party I'd excuse myself to get a drink and never come back.
But here's the thing — every serious infrastructure project sounds insane in the abstract. "We're going to build a global network of computers that talk to each other using a shared protocol" sounds like a cult too, until it's the internet and you can't live without it.
I'm not claiming I've got the answer. I'm claiming the question is right. And the question is: what would it take to build coordination infrastructure that doesn't require you to surrender your identity to participate? That's not a cult. That's an engineering problem with philosophical load-bearing walls.
What haven't you figured out?
A lot. And I think that matters more than what I have figured out, because the people selling certainty about this stuff are the ones you should worry about.
The big one is what I call the sovereignty contradiction. Individual sovereignty requires infrastructure. Infrastructure requires coordination. Coordination requires some form of governance. And governance, by definition, limits individual sovereignty. It's circular. You need the thing that constrains you in order to build the thing that frees you.
I don't have a clean resolution for that. I have an orientation — which is that the constraint should be chosen, not imposed, and you should be able to leave. But "you can always leave" is doing a lot of heavy lifting in that sentence, and I'm honest about the fact that exit rights might not be enough.
Does any of this scale? Or is it one of those ideas that only works in a paper?
That's the honest question and I owe it an honest answer: I don't know yet. I think the identity layer scales because identity is fundamentally a local operation — you prove who you are one interaction at a time. Epistemics is harder. Governance is the real bottleneck.
The optimistic read is that we don't need it to scale to everyone. We need it to scale to viable communities. You don't need eight billion people using the same governance protocol. You need enough people in enough places making enough credible commitments that the thing becomes self-sustaining.
The pessimistic read is that anything voluntary gets outcompeted by things that are mandatory. That's the actual threat model, and it keeps me up at night more than the technical problems do.
You want decentralization but you need authority. How is that not just hypocrisy?
It would be hypocrisy if I pretended the tension didn't exist. I think it's just engineering if you name it and work with it.
Look — the internet itself is a decentralized system that requires centralized governance at specific layers. DNS is centralized. BGP has trust assumptions. The TCP/IP spec is maintained by specific humans in specific institutions. And yet the internet is the most successful decentralized coordination system in human history. It works because the centralization is narrow, visible, and at the infrastructure layer — not at the application layer where you live your life.
That's the template. Not "no authority" but "authority in the right places, with the right constraints, and the right exit rights." The hard part is defining what "right" means when the people defining it are also the people subject to it.
How does standup fit into all this? That feels like a non sequitur.
It's the most sequitur thing I do. Comedy is the original decentralized truth network. A comedian walks on stage with no credentials, no institutional backing, no platform — just a mic and a premise. And the audience either laughs or they don't. There's no algorithm mediating that. No editorial board. It's direct, peer-to-peer epistemics.
A joke works because it reveals a pattern the audience already sensed but hadn't articulated. That's exactly what epistemology is. The comedian is a pattern recognizer who packages recognition into a deliverable format. The laugh is the proof of work.
You're saying comedians are epistemologists?
The good ones are. The job is to notice what's actually happening versus what everyone pretends is happening, and then make the gap funny. That gap is where all the interesting problems live. It's also where most institutional rot lives. Comedy gets there first because comedy doesn't have to be polite about it.
Also — and I mean this seriously — standup is the best user research methodology that exists. You stand in front of a live audience and you say a thing. The feedback loop is two seconds. If the thing you thought was true doesn't land, you know immediately. No focus group, no survey, no analytics dashboard. Just silence. Devastating, informative silence.
If you can't explain your coordination theory in a way that makes a room full of strangers laugh, you probably don't understand it well enough. That's not a joke. That's a design constraint.
Where is the thinking going next? What's the edge you're working at right now?
Three things, and they might all be the same thing.
First: relational sovereignty. I've been developing this idea that sovereignty isn't a property of individuals — it's a property of relationships. You're not sovereign in isolation. You're sovereign in relation to specific others, in specific contexts. This changes the whole architecture because it means identity isn't a noun. It's a verb. It's something that happens between entities, not something that lives inside one.
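One toy way to see why "identity as a verb" changes the architecture: key the identity records on the relationship rather than the person. A minimal Python sketch, with all names and fields illustrative:

```python
# Hypothetical sketch: identity as a property of relationships, not individuals.
# Instead of one global record per person, each (self, other, context) pair
# carries its own claims — there is no single "who you are" row to revoke.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Relation:
    self_id: str
    other_id: str
    context: str   # e.g. "employer", "forum", "family"

@dataclass
class RelationalIdentity:
    claims: dict = field(default_factory=dict)

    def assert_claim(self, rel: Relation, key: str, value: str) -> None:
        self.claims.setdefault(rel, {})[key] = value

    def view(self, rel: Relation) -> dict:
        """What this relationship knows about you — nothing more."""
        return self.claims.get(rel, {})

ids = RelationalIdentity()
work = Relation("alice", "acme", "employer")
ids.assert_claim(work, "role", "engineer")
# A different relationship sees an empty view: identity is scoped, not global.
```

The design choice to notice is the key: because claims live on the `Relation`, no single party holds the master record, so there is no single lock for a landlord to change.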
Second: knowledge mining. I've had 183 conversations with Claude that span identity, epistemology, coordination theory, geopolitics, philosophy, AI, and comedy. That's not just a chat log. It's a structured knowledge base that I didn't plan to build but that I'm now sitting on top of. The question is: what does it mean to extract and formalize insights from conversational thinking? How do you mine your own intellectual history without losing the context that made the ideas good?
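The crudest version of that mining is just asking which themes recur across the archive. A hedged sketch, assuming the conversations are exported as plain-text files — the directory name and topic list are made up for illustration:

```python
# Hypothetical sketch: surface which themes recur across exported conversations.
# Assumes one plain-text file per conversation; TOPICS is a hand-picked list.

from collections import Counter
from pathlib import Path

TOPICS = ["identity", "sovereignty", "coordination", "epistemics", "exit"]

def recurring_themes(export_dir: str) -> Counter:
    """Count how many conversations mention each topic at least once."""
    counts = Counter()
    for path in Path(export_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        counts.update(topic for topic in TOPICS if topic in text)
    return counts

# Usage (illustrative): recurring_themes("claude_exports/").most_common(3)
```

Keyword counting is obviously the shallow end — it finds the threads but loses the context that made the ideas good, which is exactly the tension the question names.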
Third — and this is the weird one — the God/singularity convergence. I keep noticing that the structure of my thinking about identity and sovereignty maps onto really old theological questions about the relationship between the one and the many. The sovereignty contradiction I described? Theologians have been wrestling with that for millennia under different names. And the singularity discourse in AI is basically the same question wearing a different hat: what happens when the boundary between self and other becomes negotiable?
I don't have conclusions here. I have directions. But the fact that these three lines keep converging makes me think there's something real at the intersection. And I'd rather follow a real question I can't answer than a fake question I can.
What role does AI play in all this? You're literally building this with Claude.
AI as epistemological cartographer. That's the frame I keep coming back to. Claude doesn't tell me what to think. It helps me see the shape of what I'm already thinking. It's like having a conversation partner who remembers every thread you've pulled and can show you where they connect.
183 conversations is a lot. No human interlocutor would remember all of that. But the pattern recognition across that many exchanges — seeing which ideas keep recurring, which contradictions I keep circling, which metaphors I reach for when the precise language fails — that's genuinely new. That's not something I could have done with a notebook.
The interesting question isn't whether AI can think. It's whether AI can help you think better. And my honest answer, after three months, is: yes. Obviously. And the people who are scared of that are going to be outthought by the people who aren't.