Research

Building products with AI is one thing. Understanding how human-AI collaboration actually works—what patterns succeed, what fails, why—requires a different kind of rigour.

These papers emerge from building real products. Not lab experiments or simulations, but the messy reality of shipping code with AI assistants. N=1 studies with all their limitations, but also with something lab research lacks: ecological validity.

I publish preprints because I believe in open inquiry. The ideas here are working hypotheses—some will be wrong, all need scrutiny. If you see flaws, say so. If you want to replicate or extend this work, I want to hear from you.

Working Papers

Three papers exploring human-AI collaboration—from individual development practices to the future of work.

December 2025
AI Development · DevOps · Methodology

Economic DORA: Practice-Level Analysis of DevOps Metrics in AI-Assisted Solo Development

Measuring What Matters

57 days. 276 commits. One production app built from zero with Claude. Traditional DevOps metrics tell you what happened—Economic DORA reveals why, adding token economics as a first-class dimension.

The PRISM framework extends DORA with AI economics. Token cost as leading indicator. Practice-level granularity. Open dataset and replication protocol included.
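The paper's own definitions aren't reproduced here, but the core idea of pairing a classic DORA signal with token economics can be sketched. The following is an illustration only; the field names, metric names, and per-token rates are all hypothetical, not taken from the PRISM framework:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    tokens_in: int    # prompt tokens spent producing this change
    tokens_out: int   # completion tokens
    deployed: bool    # did the change reach production?
    reverted: bool    # was it later rolled back?

def economic_summary(commits, usd_per_mtok_in=3.0, usd_per_mtok_out=15.0):
    """Illustrative rollup: a classic DORA signal (change-failure rate)
    alongside token cost per deployed change. Rates are placeholders."""
    deployed = [c for c in commits if c.deployed]
    failures = [c for c in deployed if c.reverted]
    cost = sum(c.tokens_in * usd_per_mtok_in / 1e6 +
               c.tokens_out * usd_per_mtok_out / 1e6 for c in commits)
    return {
        "change_failure_rate": len(failures) / len(deployed) if deployed else 0.0,
        "cost_per_deployed_change_usd": cost / len(deployed) if deployed else float("inf"),
    }
```

The point of treating cost this way is that token spend accrues on every attempt, deployed or not, so it can move before the lagging delivery metrics do.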

Read the preprint
December 2025
Human-AI Collaboration · Claude

Dr StrangeDev (or How I Learned to Stop Worrying and Trust the Method)

Constraining Claude for Better Outcomes

54.3% failure rate in November. Confident wrongness at 2,500 tokens per incident. The qualitative story behind Economic DORA's numbers—what actually goes wrong when you let AI execute without constraints.

ADRs force investigation before execution. Problem Agreement catches misunderstanding before code. Evergreen Rules create persistent memory. The structure that constrains Claude is maintained by Claude.

Read the preprint
February 2026
Future of Work · AI Agents · Labour Economics

The Self-Checkout Supervisor Thesis: AI Agents, Portfolio Employment, and the Future of Work

Beyond Automation Anxiety

What if AI agents don't just automate jobs—but enable entirely new employment models? The capital barrier between employer and individual has collapsed. This changes the game theory of work.

Portfolio employment: individuals supervising AI-augmented roles across multiple employers. Critical evaluation of inequality risks and the pipeline problem. Seven testable hypotheses.

Read the preprint

The Through-Line

These papers share a common thread: taking AI seriously enough to study it rigorously, while staying honest about what we don't yet know.

Economic DORA and Dr StrangeDev examine human-AI collaboration at the individual level—what actually works when you're building things with AI assistants. The Self-Checkout Supervisor Thesis zooms out to ask what these patterns might mean for how we organise work.

All of them are working papers. All of them have limitations. All of them invite scrutiny.

Collaborate

I'm actively seeking collaborators to pressure-test and extend this work.

Replication Partners

Economic DORA includes a detailed replication protocol designed for N=20 validation. If you're building with AI assistants and willing to track your process, I want to work with you.

Critical Review

These ideas need pressure-testing. If you see methodological flaws, questionable assumptions, or alternative explanations I've missed—that's valuable. Don't be polite about it.

Extension & Application

Interested in applying these frameworks to your context? Different domains, team sizes, and AI tools would all generate valuable data.

The goal isn't to be right. It's to understand how human-AI collaboration actually works, so we can do it better.

Get in touch

All papers are open access preprints. Citation and replication encouraged.