Science is continually evolving; we are finding out more about ourselves and the universe every day. Some of this evolution builds on what is already known, layering and iterating on a theory. Other evolution may wipe out an entire theory when it is shown not to ‘fit’. Undoubtedly there are theories, applicable to our daily lives, that we take for granted as true but that simply aren’t. Yet once a core ‘truth’ has been accepted as ‘what it is’, layers get built on top of it. What happens as we keep developing layer upon layer of theories and claims that are actually false or, worse, not falsifiable? The layers only ever keep piling on. To further complicate matters, terminology and definitions of core truths can differ between fields and sectors, but that is a matter for another time. This layering conditions the way we think; it sets the paradigm for what we know and intuit every day, right or wrong.
This layering has been going on forever and looks set to continue forever more, if only because we love a good story. Falsifiability should keep pushing us in the right direction. However, given how omnipresent some systems have become, proving one false might have implications that reach further than we can manage. Recall that sub-prime loans (and the many layered mechanisms on top of them) were seen as a good way to give people access to a nice home and stimulate the economy; then information caught up and the 2007–08 Global Financial Crisis struck. It seems, then, that as this layering happens ever faster, the focus needs to shift (but not drop entirely) from preventing such failures to minimising the collateral consequences of these inevitable events.
The implications are so large because institutions with far-reaching interests may have fundamental flaws in their systems, the falsification of which sets off a sweeping chain reaction. So in trying to prevent some of the existential threats facing us today (climate, AI, nuclear, economic meltdown), which in part stem from this false layering, we need to allow iteration to occur without socialising the downside nationally and globally.
Given all of this, it seems smaller might provide some answers. Smaller does not socialise critical consequences; it does not bring us all down with its system. It also allows for more abrupt changes of direction, a nimbler approach to rectifying a problem once it’s identified. With rectification within reach, there is also less temptation to put effort into covering up the elephant in the room. We can turn the rudder of the barge rather than that of the supertanker, because the supertanker knows about as much as the barge does about the exact direction to take: not much (or maybe less, if you consider the elephant). Many smaller nodes shoot off in different directions, none quite right, but each iteration gets closer. In Skin in the Game, Nassim Taleb calls this the ‘bias-variance trade-off’. He explains:
“…you often get better results making some type of ‘errors’, as when you aim slightly away from the target when shooting. I have shown in Antifragile that making some types of errors is the most rational thing to do, as, when the errors are of little costs, it leads to gains and discoveries.”
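To make that trade-off a little more concrete, here is a minimal sketch in Python. It is not from the book, and the target, node count and noise levels are invented purely for illustration; it shows one reading of the idea, namely that accepting a small deliberate bias (‘aiming slightly away’) can still lower total error when it comes with a large reduction in variance, which is roughly what many small, cheaply-erring nodes buy over one big shot.

```python
import random

random.seed(42)

TARGET = 100.0   # the 'true' value everyone is trying to hit (illustrative)
TRIALS = 10_000  # Monte Carlo repetitions

def attempt(bias, noise_sd):
    """One shot at the target: systematically off by `bias`,
    plus random scatter with standard deviation `noise_sd`."""
    return TARGET + bias + random.gauss(0, noise_sd)

def mse_single(bias, noise_sd):
    """Average squared miss for a lone estimator."""
    return sum((attempt(bias, noise_sd) - TARGET) ** 2
               for _ in range(TRIALS)) / TRIALS

def mse_ensemble(n_nodes, bias, noise_sd):
    """Average squared miss when `n_nodes` independent attempts are averaged:
    their shared bias remains, but their scatter largely cancels out."""
    total = 0.0
    for _ in range(TRIALS):
        avg = sum(attempt(bias, noise_sd) for _ in range(n_nodes)) / n_nodes
        total += (avg - TARGET) ** 2
    return total / TRIALS

# One big, unbiased but widely scattering shot (the supertanker):
# expected error is bias^2 + variance = 0 + 100.
print(f"single unbiased shot, MSE ~ {mse_single(0.0, 10.0):.1f}")

# Twenty-five small nodes, each deliberately aiming slightly off (the barges):
# expected error is 2^2 + 10^2 / 25 = 8.
print(f"25 slightly biased nodes, MSE ~ {mse_ensemble(25, 2.0, 10.0):.1f}")
```

The arithmetic behind the printed numbers is just the standard decomposition, mean squared error = bias² + variance: the single shot lands around 100, while the slightly biased but variance-dampened nodes land around 8.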
If the smaller fail, and fail they will as a matter of evolving, the damage and collateral are contained to that locale, and discoveries are made. To achieve some of the benefits of the large, the smaller need to collaborate and build accountable partnerships, focus on leveraging technology and, above all, seek to improve their community. To me at least, this seems more ‘win-win’ than the ‘winner take all’ (whether up or down) that currently exists.