Why Causalica
Most real problems are not prediction problems. They are counterfactual problems:
What would happen if we change something—compared to if we don’t?
When someone asks:
- “Should we expand this program?”
- “Does this policy reduce harm?”
- “Will this intervention improve outcomes?”
- “Is this relationship causal or just correlated?”
…they’re asking a question about effects, not just patterns.
What this site is for
Causalica exists to make causal thinking usable. Not as a buzzword, not as a list of methods, but as a way to move from:
question → design → evidence → decision
Over time, this site will contain three complementary things:
1) A living textbook
The textbook is the structured backbone: foundations, definitions, examples, and worked intuition.
- Textbook: https://textbook.causalica.com
“Living” means it will evolve: new chapters, improved explanations, and better examples as my understanding improves and as feedback comes in.
2) Practical notes
Short essays on the things that actually trip people up:
- framing the causal question
- selecting a comparison group
- dealing with selection, measurement, and time
- interpreting estimates without overclaiming
- checking robustness without performing theater
My goal is that each note takes 5–10 minutes to read and leaves you with something you can apply immediately.
3) Research engineering
Good causal work often fails for non-mathematical reasons:
- messy data pipelines
- unclear specifications
- analyses that can’t be reproduced
- “final” figures that require manual steps
I care a lot about making the workflow as solid as the argument: clean structure, versioned outputs, and code that survives time.
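To make that last point concrete, here is a minimal, hypothetical sketch (the function, parameters, and file layout are mine for illustration, not part of any Causalica tooling) of an analysis step that stamps its output with a hash of its inputs, so a “final” figure is traceable to exactly what produced it and never depends on an undocumented manual step:

```python
import hashlib
import json
from pathlib import Path

def run_analysis(params: dict, data: bytes, out_dir: str = "outputs") -> Path:
    """Write results under a name derived from the parameters and the raw
    data, so every output file is traceable to what produced it."""
    stamp = hashlib.sha256(
        json.dumps(params, sort_keys=True).encode() + data
    ).hexdigest()[:12]
    result = {"params": params, "n_bytes": len(data)}  # placeholder computation
    out = Path(out_dir) / f"result_{stamp}.json"
    out.parent.mkdir(exist_ok=True)
    out.write_text(json.dumps(result, indent=2))
    return out

path = run_analysis({"spec": "adjust_for_x", "seed": 0}, b"raw data bytes")
print(path)  # deterministic name, e.g. outputs/result_<hash>.json
```

Rerunning with the same inputs reproduces the same file name; changing any parameter or any byte of the data produces a new one, so stale outputs cannot silently masquerade as current ones.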
What I believe about causal inference
Here are a few principles that guide how I think.
Design beats algorithms
The most important step is often not the model; it’s the design:
- What’s the intervention?
- What’s the counterfactual?
- What assumptions make identification possible?
- What would convince a skeptic?
A fancy estimator cannot rescue a broken comparison.
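A small simulation makes the point concrete. All numbers below are invented for illustration: when treatment shares a cause with the outcome, a naive comparison of means is badly biased no matter how carefully it is computed, while randomizing the treatment fixes the comparison itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 1.0

# Hypothetical confounder: sicker units are more likely to be treated
# and have worse outcomes regardless of treatment.
confounder = rng.normal(size=n)
treated = (confounder + rng.normal(size=n) > 0).astype(float)
outcome = true_effect * treated - 2.0 * confounder + rng.normal(size=n)

# Broken comparison: treatment and outcome share a common cause,
# so the difference in means is far from the true effect.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Better design: randomized treatment removes the confounding.
randomized = rng.integers(0, 2, size=n).astype(float)
outcome_rct = true_effect * randomized - 2.0 * confounder + rng.normal(size=n)
rct = outcome_rct[randomized == 1].mean() - outcome_rct[randomized == 0].mean()

print(f"true effect:         {true_effect:.2f}")
print(f"naive (confounded):  {naive:.2f}")  # far from the true effect
print(f"randomized design:   {rct:.2f}")    # close to the true effect
```

No estimator applied to the confounded comparison alone can recover the true effect here; the fix is in the design, not the fitting.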
Uncertainty is a feature, not a defect
If assumptions are weak, uncertainty should show up clearly.
A good analysis is often one that says:
- what is likely true
- what is uncertain
- what would change the conclusion
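As a sketch of what “uncertainty showing up clearly” can look like in practice (synthetic data, illustrative only), here is a bootstrap interval reported alongside the point estimate rather than hidden behind it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outcomes for two groups: a small true difference
# buried in a lot of noise.
treated = rng.normal(loc=1.2, scale=2.0, size=200)
control = rng.normal(loc=1.0, scale=2.0, size=220)

point = treated.mean() - control.mean()

# Nonparametric bootstrap: resample each group with replacement
# and recompute the estimate many times.
boots = np.empty(5000)
for b in range(boots.size):
    t = rng.choice(treated, size=treated.size, replace=True)
    c = rng.choice(control, size=control.size, replace=True)
    boots[b] = t.mean() - c.mean()

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"estimate: {point:.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
# With noisy data and a small true difference, the interval may well
# include zero — reporting that interval is the honest conclusion.
```

Stating the interval, and what effect size would be needed to change the decision, is more useful than a bare point estimate.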
Robustness should be honest
Robustness is not “run 20 specs until one looks good.”
Robustness is asking:
- Which threats matter?
- How large would bias need to be to flip the conclusion?
- Do alternative reasonable choices meaningfully change the estimate?
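One way to ask that last question is to run a small, pre-declared set of reasonable specifications and report all of them, rather than searching over many until one looks good. A hypothetical sketch with simulated data (variable names and the set of specifications are mine for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical dataset: one covariate x that drives both treatment d
# and outcome y; the true treatment effect is 0.8.
x = rng.normal(size=n)
d = (x + rng.normal(size=n) > 0).astype(float)
y = 0.8 * d + 1.5 * x + rng.normal(size=n)

def ols(features, target):
    """Least-squares coefficients; intercept first, then features in order."""
    X = np.column_stack([np.ones(len(target))] + features)
    return np.linalg.lstsq(X, target, rcond=None)[0]

# A small, pre-declared set of reasonable specifications —
# not a search for the most favorable one.
specs = {
    "no adjustment":        ols([d], y)[1],
    "adjust for x":         ols([d, x], y)[1],
    "adjust for x and x^2": ols([d, x, x**2], y)[1],
}
for name, est in specs.items():
    print(f"{name:>22}: {est:.2f}")
# If the adjusted estimates cluster near each other while the
# unadjusted one is the outlier, that identifies which threat
# (confounding by x) actually matters for the conclusion.
```

The point of reporting the whole set is to show how far the estimate moves across defensible choices, which is a direct answer to “would reasonable alternatives change the conclusion?”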
Where to start
If you’re new to this site:
- Start with the Start here page
- Read the textbook: https://textbook.causalica.com
- Browse the Writing section
If you want this site to be useful, the best thing you can do is tell me:
- what you’re trying to decide
- what data you have
- what part of the causal workflow is most confusing
That feedback will shape what I write next.