

Two Visions of Intelligent Institutions
How much do people matter? Agent-positive and agent-negative views of building smarter institutions
How do we get more and better things? From college admissions to the process of science to where we should be spending our money, this selfish question has driven a great deal of research toward understanding how we ensure our societies are better able to deal with problems. At the outset, there are a lot of ways the question can be answered, but fundamentally they come down to two core perspectives: an agent-positive perspective and an agent-negative perspective.
The agent-positive vision of institutional improvement holds that we get better things by having smarter people. Whatever problem lies ahead of us, if we simply think about it the right way, we will come to our senses and find a better solution. Behind many organizational and institutional failures is a lack of creativity - they didn’t hire the right people, they didn’t think about the problem the right way, they couldn’t see the smoke in the room.
This sort of view is typified in the ideas of folks who focus on talent searches in venture capital circles, in the weight many visions of evolutionary psychology give to causal inference in individual problem-solving, and in the attention some people pay to IQ and other metrics for rating an individual’s performance and talent. I would place Dominic Cummings’ failed hiring push two years ago for “weirdos and misfits” in this category, and more recent efforts by Elon Musk to purge Twitter of what he viewed as deadwood and hire new talent more on board with his institutional vision. In the words of Steven Pinker, a complex idea “does not arise from the retention of copying errors. It arises because some person knuckles down, racks his brain, musters his ingenuity, and composes or writes or paints or invents something.”
We can contrast this with the agent-negative vision of institutional improvement. The agent-negative vision largely writes human factors out of the equation. Making institutions smarter means, quite literally, making the institutions themselves smarter. While it’s nice to have smart people, they are one small part of the equation of what makes a group smart. Collective thinking itself is something of a brute-force process, and even the smartest people cannot overcome an unintelligent institution or organization. Instead, we need to focus on structures, incentives, and the flow of information, to make sure that information moves in the right way through the network.
In contrast with the agent-positive view, you can find this type of thinking more commonly among organizational scientists (who, compared to VC firms, generally focus on the success of mature rather than developing organizations), in cultural evolutionary approaches which focus on copying error and mutation rather than on human causal inference, and among a lot of HR folks whose focus on DEI (diversity, equity, and inclusion) practices leads them to believe that viewpoint diversity comes with increasing the compositional diversity of an institution.
When thinking about this approach, I think about a talk by science writer Matt Ridley on the origins of innovation, titled “When Ideas Have Sex”. In contrast to Pinker, he notes, “We've gone beyond the capacity of the human mind to an extraordinary degree…that's one of the reasons that I'm not interested in the debate about IQ, about whether some groups have higher IQs than other groups. What's relevant to a society is how well people are communicating their ideas and how well they're cooperating, not how clever their individuals are. So we've created something called the collective brain. We're just the nodes in the network. We're the neurons in this brain. It's the interchange of ideas, the meeting and mating of ideas between them, that is causing technological progress, incrementally, bit by bit.”
When you consider that the end goal of both of these approaches is exactly the same, it is actually rather shocking just how much the two views are treated as exclusive of one another. The divide between causal-inference evolutionary psychology folks and random-mutation-and-selection cultural evolution folks is annoyingly large. I did not come up with this divide myself simply to prove a point. The “we need to hire measurably smarter people” and the “we need to hire more different people” perspectives are clearly at odds with one another. And the quotes by Steven Pinker and Matt Ridley, although not literally referencing each other, may as well be. Some approaches, like those in I-O psychology, attempt to merge the science of the individual mind with that of the institution, with the unfortunate result that researchers nevertheless fall on one side of the research emphasis or the other.
It’s clear both are necessary. We are deluding ourselves by refusing to have it both ways. It’s clear humans have an extreme capacity for causal inference, and it’s also clear that components of social learning have led to the propagation of some ideas which are better than others, even when causal inference is lacking. We have both dumb tinkerers who are worth 0.3 smart tinkerers and smart tinkerers who are worth three dumb tinkerers. You can put a smart person in a dumb institution and they won’t have very much impact on the world - a lot of talent gets wasted that way. That much is clear. But what would benefit us all is having smart people in smart places. Despite disagreements over what a “smart person” is, there shouldn’t be disagreement over the idea that what we need is more effective institutional structures to scaffold the talent each person has.
Outside of the applied realm (e.g. where hiring practices and HR come into conflict on the issue for more obvious reasons), it strikes me that a lot of the scientific issue, besides falling on one side of the emphasis divide for career purposes or on some ideological divide for or against individual metrics, is that the problem is difficult to model. If you are trying to model an adaptive organization, how do you give your agents knowledge of the problem they are living in? In something like an NK landscape, the knowledge of what the “right” solutions are is unknown, or only opaquely known, even to the deific Researcher themself. What needs to happen is that we get back to, and take seriously, the opposite point of emphasis: causal inference, cause and effect. We build models of random brute-force solutions where agents are clueless as to what works and what doesn’t. They know pay-offs, but not why what they did led to the pay-off in their case or why their previous action did not work. We have spent a lot of time in cultural evolution creating lab experiments where individuals do not need to understand their task to complete it* so that we can understand how our behavior can better conform to our simplistic models, but almost zero time creating models where agents need to understand their tasks to complete them, which would better conform to how human behavior works on an intuitive level. At some loss to our ability to scale up our simulations, we need to create models of structured problems which more accurately reflect the natural cause-and-effect nature of the real world. Lines of research might then develop which rise to the level of: how much understanding should a person have to be in this specific role, or how much can we offload from this person so they can work on another task?
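To make that modeling gap concrete, here is a minimal sketch of the kind of agent these models typically assume - a blind tinkerer on an NK landscape who observes only pay-offs, never the cause-and-effect structure behind them. Everything in it (parameter values, function names, the hill-climbing rule) is an illustrative assumption of mine, not any specific published model.

```python
# Minimal NK-landscape sketch (illustrative assumptions throughout):
# the agent sees a pay-off for each whole solution but never learns
# which traits caused it, so all it can do is mutate and keep whatever
# happens to score higher.
import random

N, K = 12, 3          # N binary traits; each trait's contribution depends on K others
random.seed(1)

# Contribution table per trait: maps the (1 + K)-bit local pattern to a random value.
tables = [
    {pattern: random.random() for pattern in range(2 ** (K + 1))}
    for _ in range(N)
]
# Which K other traits each trait interacts with (hidden from the agent).
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

def fitness(solution):
    """Mean contribution across traits; the agent sees only this number."""
    total = 0.0
    for i in range(N):
        bits = [solution[i]] + [solution[j] for j in neighbors[i]]
        pattern = int("".join(map(str, bits)), 2)
        total += tables[i][pattern]
    return total / N

def blind_tinkerer(steps=500):
    """Random-mutation hill climbing: keep a change if the pay-off improves,
    with no model of *why* it improved."""
    current = [random.randint(0, 1) for _ in range(N)]
    best = fitness(current)
    for _ in range(steps):
        candidate = current[:]
        flip = random.randrange(N)
        candidate[flip] = 1 - candidate[flip]   # tinker with one trait at random
        score = fitness(candidate)
        if score > best:                        # pay-off is the only feedback
            current, best = candidate, score
    return best

print(round(blind_tinkerer(), 3))
```

The agent-positive alternative would have to give this agent some partial, possibly mistaken, model of `neighbors` and `tables` - and deciding how to represent that causal knowledge is exactly where the modeling gets hard.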
In the past, before research on this issue was limited by one’s own framework of emphasis, some people had this figured out. It is no longer any wonder to me why Herbert Simon’s Administrative Behavior, a book which essentially outlined the problem above, spawned a plethora of diverse concepts relevant to both the individual and the institution in this regard, including bounded rationality, satisficing, span of control, and organizational learning. In his own words, “It should be perfectly apparent that almost no decision made in an organization is the task of a single individual. Even though the final responsibility for taking a particular action rests with some definite person, we shall always find, in studying the manner in which this decision was reached, that its various components can be traced through the formal and informal channels of communication to many individuals.”
*See:
Derex, M., Bonnefon, J.-F., Boyd, R., & Mesoudi, A. (2019). Causal understanding is not necessary for the improvement of culturally evolving technology. Nature Human Behaviour, 3(5), 446-452.
Harris, J. A., Boyd, R., & Wood, B. M. (2021). The role of causal knowledge in the evolution of traditional technology. Current Biology, 31(8), 1798-1803.