Physics and Psychohistory

I think of mathematical models as falling into two broad classes. The first is the model of convenience, which uses heuristic arguments and simple reasoning to create and justify equations that roughly fit a system. These often yield surprisingly good predictions for their simplicity, but they say nothing about the nature of the reality underlying the systems they describe. The second type of model makes strong assertions about underlying mechanisms and, in doing so, posits something concrete and deep about the nature of the system it is trying to explain. Models of this second type are often found in physics.

Lotka-Volterra Equations

Here's an example of the first type of model. The Lotka-Volterra equations are a system of differential equations we create to describe the growth and decline of populations. We justify them with the intuition that a population grows exponentially in times of plenty and eventually plateaus due to constraints in its environment. This gives rise to the familiar logistic curve, and it's that curve we use to make predictions. Among other things, these equations predict that per-capita growth will asymptotically approach zero as the population reaches the "carrying capacity" of the environment. In practice, this doesn't quite hold. The model provides a decent approximation some of the time, capturing the fact that growth is initially fast and then slows, but it is empirically wrong for most population systems. We explain these discrepancies away by noting that real systems are more complicated, that we are missing variables, that it is a simple model, and so on. The key point is that these are models of convenience arising from "economy of thought," as the physicist and philosopher Ernst Mach put it. We do not believe the model captures some deep truth about the universe; it is a mental crutch that helps us make sense of why populations grow in the general way they do.
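The logistic model described here can be sketched in a few lines. Everything in this snippet (the growth rate r, the carrying capacity K, the initial population) is an illustrative choice, not data:

```python
# Logistic growth sketch: dN/dt = r * N * (1 - N/K).
# r = 0.5 and K = 1000 are illustrative values, not fitted parameters.

def logistic_step(n, r=0.5, k=1000.0, dt=0.01):
    """One Euler step of dN/dt = r*N*(1 - N/K)."""
    return n + r * n * (1 - n / k) * dt

n = 10.0
per_capita = []
for _ in range(3000):
    # per-capita growth rate is r*(1 - N/K): near r when N is small,
    # near zero as N approaches the carrying capacity K
    per_capita.append(0.5 * (1 - n / 1000.0))
    n = logistic_step(n)

print(round(per_capita[0], 3), round(per_capita[-1], 5), round(n, 1))
```

Running it shows exactly the prediction described above: early per-capita growth sits near r, and it decays toward zero as the population saturates at K.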

General Relativity

Here's an example of the second type of model. Einstein's general theory of relativity goes beyond giving us equations for making predictions; it asserts that reality itself is a certain way. Specifically, it asserts that mass and energy (encoded in the energy-momentum tensor) curve spacetime, and that these changes in the geometry of spacetime manifest as the visible effects of gravitation. It goes beyond Newton's equations, which are closer to the first type of modelling, in explaining why gravitation exists in the first place. Another example is the ideal gas laws. Their derivation assumes the existence of atoms and reasons about their behavior on a mechanical level. The laws are not entirely correct because of the simplifying assumptions they make, but they do reflect some underlying reality (atoms exist, or so we believe).
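That assertion can be written down concretely. The Einstein field equations put the geometry of spacetime on the left and its matter-energy content on the right:

```latex
% Geometry (Einstein tensor) = matter-energy content (energy-momentum tensor)
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu}
\;=\; \frac{8 \pi G}{c^4} \, T_{\mu\nu}
```

The equation itself is the ontological claim: curvature on one side, energy-momentum on the other, with nothing merely heuristic connecting them.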

Radioactive Decay

Here's an example of mathematical modelling that is neither obviously a model of "convenience" nor obviously one that reflects some "deep natural truth." Consider the humble uranium atom. In high school, differential equations are typically introduced in the context of radioactive decay, since it is intuitive that the rate of change in the number of atoms depends on the number of atoms: for a fixed probability of decay per atom, the more atoms there are, the more atoms decay. But quantum mechanics makes it difficult to reason about whether probability is deeply embedded in reality, or whether there is an underlying structure we simply don't understand. The fact that physicists have ruled out local "hidden variables" governing quantum mechanical behavior (which at this point we all agree is inherently probabilistic, whatever that means) doesn't mean there isn't some deeper mathematical structure (e.g. M-theory) at play, causing the randomness we observe in quantum mechanics (and in atomic decay). This has always been troublesome for my intuition, because it's not clear whether I should think of QM as a model of convenience, like the Lotka-Volterra equations, or as a model that reflects the underlying nature of reality, like GR.
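The decay intuition above is easy to demonstrate: give each atom a fixed, independent probability of decaying per time step, and the population as a whole traces the familiar exponential curve. The decay probability, atom count, and step count below are illustrative values, not physical constants:

```python
import random

# Many atoms, each with a fixed independent chance of decaying per step,
# collectively follow the deterministic exponential decay law.
# lam, n0, and steps are illustrative values.

random.seed(0)
lam = 0.01       # per-atom decay probability per time step
n0 = 10_000      # initial number of atoms
steps = 50

n = n0
for _ in range(steps):
    # count how many of the surviving atoms decay this step
    decayed = sum(1 for _ in range(n) if random.random() < lam)
    n -= decayed

# Deterministic prediction: N = N0 * (1 - lam)^steps,
# the discrete analogue of N(t) = N0 * exp(-lambda * t).
expected = n0 * (1 - lam) ** steps
print(n, round(expected))
```

The simulated count lands within a fraction of a percent of the deterministic law, which is precisely what makes it hard to say whether the probabilistic description is a convenience or the real story.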


The reason I mentioned the ideal gas laws earlier is that in Asimov's classic Foundation series, the protagonist develops a statistical theory of macroeconomics called "Psychohistory." Using advanced mathematical statistics alongside psychology, he makes predictions about societal behavior, in a time when the human population exceeds one quintillion and is spread across the "galactic empire." After reading some Taleb, and perhaps after taking some psychology courses and seeing how crude our methods for understanding the human mind are, it's easy to believe that something as complex as the human psyche and its behavioral consequences will never be understood well, let alone the emergent behavior of many such minds interacting.

On the other hand, cognitive psychology -- and particularly the work of Kahneman and Tversky -- has yielded many experimental and theoretical results that are exceedingly reproducible. The reason I'm not a fan of psychology at large is that most of its results don't replicate. That is, much of what is published is wrong: mere figments of chance, produced by researchers with near-zero understanding of the statistics they abuse. So the fact that this body of work (mostly collected in Thinking, Fast and Slow) has been so thoroughly reproduced and shown robust to replication is, to me, astonishing. This was driven home in my introductory economics course last year, where the professors ran several live experiments with our class of a few hundred people: a real-time prediction market, auctions, and savings and investment decisions made with real money. Again and again, the results were almost exactly in line with those from an entirely different group of participants at a different place and time!

This list of common behaviors and biases makes little use of grand theories from biology or neuroscience; it humbly seeks to make predictions based on empirical evidence, without philosophizing -- much in the style of Taleb's "convex tinkering." It reminds me of the assumptions we make when deriving the famous pV = nRT equation and the other ideal gas laws. We use those assumptions (which reflect some deep truth about individual atoms' behavior) and statistical reasoning to determine probability distributions over the outcomes of fluid systems, and we are approximately right, much of the time! If we have solid, reproducible results at the level of small groups, it doesn't matter that the bodies in question are sentient. We've already done much of the hard work in finding the laws that govern the behavior of the unit (the small group of people, for example in an auction or prediction market). What remains is to apply probabilistic reasoning, in the framework of statistics, to make predictions about many such units (e.g. a country or a religion). Perhaps the reason we can't do this yet is that seven billion people are not enough for such behavior to manifest robustly, and we really do need closer to a quintillion. Or perhaps there are behaviors that emerge only at scale (e.g. that differ greatly between a hundred and a million people), rendering the psychological results of Kahneman and Tversky useless. Or perhaps we need a new kind of statistics to reason about such a thing, in the same way that Boltzmann had to make strides in probability theory itself to give the laws of thermodynamics a statistical foundation (the greatest scientific result ever published, I should add -- alongside evolution, also a theory that is statistical in nature).
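The statistical-mechanics analogy can be made concrete with a toy simulation: treat each person's "decision" as noisy, and watch the aggregate become predictable as groups grow. The choice probability (0.6) and group sizes here are illustrative, not experimental data:

```python
import random

# Each individual independently makes a noisy binary choice; individual
# groups fluctuate, but large aggregates are tightly predictable.
# bias = 0.6 and the group sizes are illustrative assumptions.

random.seed(1)

def group_average(size, bias=0.6):
    """Average choice of one group; each member picks 1 with probability bias."""
    return sum(1 for _ in range(size) if random.random() < bias) / size

small = [group_average(10) for _ in range(2000)]    # many small groups
large = [group_average(2000) for _ in range(300)]   # fewer, much larger groups

def spread(xs):
    """Standard deviation of a list of group averages."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

# Small groups fluctuate wildly; large groups barely deviate from 0.6.
print(round(spread(small), 3), round(spread(large), 4))
```

This is just the law of large numbers, of course -- the hard part, as the paragraph above argues, is finding the real laws governing the unit, not the aggregation step.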

I therefore hold two contradictory beliefs. One is that much of the economist's work is doomed to failure because human behavior is too complex to predict, even probabilistically. The other is that we do have empirical results that predict such behavior, which means better statisticians than those currently living may be able to generalize them to larger groups, and then use them to make accurate predictions about the behavior of a few million people in response to some policy or conflict. There is more to be fleshed out in trying to ascertain what a real theory of psychohistory would look like, and how it would be notably less "theoretical" and much more statistical than current approaches to economics or social science. It would probably have to come from someone trained as a mathematical statistician who understands human behavior on a practical level (think bar fights and comedy clubs) as well as a formal one (Kahneman's work).
