Vol. I · February 2026 · New York
cooc.ing
cooc.ing is thinking. Thinking is cooc.ing.
Human minds were not built for the world they now inhabit. The decisions we face — in careers, organizations, markets, systems — are too fast, too complex, and too interconnected for the cognitive tools we evolved with.
Understanding that gap is not a reason for despair. It’s the whole project.
cooc.ing explores the terrain between how human minds actually work and how complex systems actually behave. It draws from complexity science, cognitive science, decision science, and the study of natural and artificial intelligence — not as separate fields, but as one continuous inquiry into how we think, judge, and navigate.
This is where that thinking gets documented.
To cooc is to stop waiting for the perfect conditions and start moving.
The cognitive sciences have a name for why this works: embodied cognition. Understanding doesn’t form cleanly in the abstract mind and then flow down to the hands. It forms in contact with the problem. You engage, you get feedback, you update. The clarity comes from the doing, not before it.
For a long time, the accepted sequence was: Analyze. Plan. Execute. That worked when problems moved slowly enough for the analysis to finish. They don't anymore. Complex systems — including the social and professional ones we navigate every day — are adaptive. By the time the plan is done, the system has moved.
So cooc.ing runs the other sequence. Engage before you fully understand. Watch what breaks. Update the model. The failed assumption gives you more signal than the clean hypothesis ever could.
That’s the practice. That’s what this place is built on.
These are the four fields cooc.ing draws from. Not as separate disciplines — they bleed into each other constantly — but as coordinates. Each one illuminates a different face of the same question: how does intelligence, human or machine, navigate a world it can never fully model?
01
Complexity Science
How systems of many interacting parts give rise to behaviors no single part intended. Emergence, non-linearity, feedback loops, tipping points — the structural grammar of the situations we find ourselves in.
02
Cognitive Science
How the human mind perceives, reasons, remembers, and constructs meaning. What we’re remarkably good at, what we’re consistently wrong about, and why the gap between belief and evidence is so persistent.
03
Natural & Artificial Intelligence
What does it mean to be intelligent? What do human cognition and machine learning share, and where do they diverge? This coordinate traces the common ground — and what each system teaches us about the other.
04
Decision Science
Judgment under uncertainty. Heuristics, biases, and the architecture of choice. How individuals, teams, and institutions make consequential decisions — and what understanding the process reveals about improving it.
None of these fields are studied for their own sake. The question driving all of it is applied: what does this mean for how we act?
The four coordinates converge on a single question that practitioners feel acutely right now: what happens to human judgment, expertise, and decision-making when AI is woven into every layer of the work?
That’s the immediate territory. The first essays move between cognitive science and career navigation, between complexity theory and how intelligent systems actually behave in the wild.
Vector 01
What does it mean to build expertise, navigate a career, and make consequential decisions when AI is restructuring the cognitive landscape? This vector traces the human side: biases, heuristics, and the challenge of staying calibrated when the signals keep changing.
Vector 02
Frontier AI systems are not just tools — they are complex adaptive systems with emergent behaviors. This vector explores what complexity science and cognitive science reveal about how these systems work, where they mirror human reasoning, and where they diverge.
Next: The Human Mind as a Predictive Engine — from Helmholtz to Friston, Bayesian inference, and the free energy principle.
cooc.ing is a think tank that starts with one person’s thinking and grows through many.
Right now, it’s a place where one practitioner writes honestly about what they’re learning while working at the intersection of AI, cognitive science, complexity, and the live question of how human judgment holds up under pressure.
Over time, it becomes a commons: a place where researchers, builders, and practitioners from different disciplines bring their own experience of how minds — human and artificial — navigate problems that resist clean answers.
No noise. No performative certainty. Just the documented reality of thinking carefully in motion.
If you’re working through something at this intersection and want to contribute, the door is open.