Today’s economy is increasingly powered by speculation and by the raising and allocation of capital to ideas rather than to concrete, manufactured assets. While there are potentially many causes for this shift, its effects on social wellbeing are controversial. Some claim that the speculative economy has produced unprecedented growth and innovation, while others allege that it has mainly delivered massive wealth gains to the already rich while blithely risking the security of the many who live at already marginal means.
I argue that speculation in the economy is perversely mirrored by an aversion to risk in other ‘markets’ like healthcare and education. It is curious that the attitudes funding modern-day American capitalism, i.e., risk tolerance, big ideas, anti-authoritarianism, utopianism (see this talk, for example), do not pervade healthcare or education, which are generally conservative and heavily reliant on institutional authority. Of course, there are historical and social factors that account for these divergent paths, which go far beyond any analysis I might be able to provide here. What these institutions and the economy do have in common, however, is a reliance on insider-based decision-making. In these massive social contracts with the public, the interests of the many are largely managed by the decisions of the few. These relatively small collectives, responsible for managing the public good or servicing the public, can be referred to as trusts. Such institutions are not without their problems, the most significant of which is corruption. The most common paradigm of corruption is when the few conspire to reap undue benefits at the expense of their fiduciary obligations to the many. This happens in economic ventures, in healthcare and science, and in education.
While there is a place, perhaps increasingly so, for anti-trust law, I suspect that it never will and never can be enough, especially when one considers that the prosperity of a nation may depend on its ability to smooth corruption into the fabric of legislation and social mores. I propose that instead of (or in addition to) coding anti-trust into legislation, the strategies of successful and sustainable capitalism will depend on anti-trust as a decision-making strategy. What does this mean, and how might it be operationalized? Roughly speaking, in standard business operations, the few are responsible for making decisions that will affect the choices and actions of the many. This paradigm is aptly encapsulated in Steve Jobs’s mantra and ethos: “People don’t know what they want until you show it to them.” Of course, there are good reasons for this, both historical and current. The historical reason: expertise; the current one: efficiency. In past decades and centuries, access to education was the purview of the elite. Expertise was needed for making decisions that the uneducated layman could not possibly comprehend, for example, how to build a house or compose a piece of music. The genius of capitalism was to relentlessly create markets for specialization that were ultimately better at processing information, making decisions, and coordinating labor and production than central state bureaucracies. This, for example, is thought to explain why the Dutch were able to establish mercantile and colonial superiority in the 16th and 17th centuries while holding off much bigger rival powers such as France and England.
Nowadays, however, the public has far more access to education and to methods of production and collaboration than ever before. The expertise that was once entrusted to a few experts is now either outsourced to computer programs or distributed among an increasingly dispersed network of specialists. In the institutions of finance, healthcare, and education, we may be seeing the death of the expert and the rise of the specialist. And what is the ultimate paradigm of a hyper-focused specialist? A machine, of course. At the turn of this century, prominent futurists such as Ray Kurzweil and Yuval Noah Harari prognosticated a near future in which machine learning vastly supersedes distributed human intelligence, to either utopian or fascist ends. But what if we were to choose, to the extent that choosing is possible, a very human alternative?
The early promise of the internet, which has been massively triumphant in its realization, was to allow users to access nearly unlimited pools of information. The modern internet, or Web 2.0, also allows users to participate in shaping the information-accessing and social experiences of other users. From this technology has arisen an entirely new class of expert: the many. In its current form, commonly called “crowdsourcing,” the many are entrusted to make decisions in the fields of public policy, navigation, engineering, design, and journalism, to name a few examples. While there is a long history of anti-populist sentiment against truly democratic decision-making, its success and utility cannot be denied. Of course, like any other technology, crowdsourcing has a niche in the institutional ecosystem and will not supplant all other forms of decision-making. It will never, for example, replace the hyper-specialization of machine programs or small networks of human specialists, which enjoy massive energy and time efficiencies. Crowdsourcing must also be approached with deliberate attention to what information is being solicited and to what ends. Humans are notoriously susceptible to peer influence, but if the information volunteered by the many is relatively autonomous (e.g., driving speeds, routes), then it may become a fairly unbiased, broadly representative, and ecologically valid dataset that can, and should, be skillfully exploited for decision-making. Whatever its flaws, this option is surely something that all institutions can develop as a means of progressing towards a more sustainable and successful (in broadly human metrics of well-being) ecology. Businesses are already taking the lead here, with the ubiquitous web-based review becoming a key factor shaping how and what goods and services are delivered. However, even here things may not be going far enough.
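To make the point about autonomously volunteered data concrete, here is a minimal sketch, with entirely hypothetical numbers, of why such a dataset can be exploited robustly: when most reports are independent, a simple robust aggregator like the median resists the outliers and bad actors that drag a naive average around.

```python
from statistics import mean, median

# Hypothetical crowdsourced speed reports (km/h) for one road segment.
# Most drivers report independently; a few outliers (sensor errors,
# deliberate misreports) are mixed in.
reports = [48, 52, 50, 49, 51, 47, 50, 120, 3]

# The mean is pulled around by the two outliers...
naive_estimate = mean(reports)

# ...while the median stays at the consensus of the independent reports.
robust_estimate = median(reports)

print(round(naive_estimate, 1))  # dragged above 52 by the misreports
print(robust_estimate)           # 50
```

The same logic scales up: the larger and more independent the pool of contributors, the less any single biased report can move the aggregate, which is precisely what makes the crowd a usable ‘expert’.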
In fact, as mentioned earlier, all institutions tend to sway towards extreme cultures either of risk-taking (which accumulates wealth and generates innovation at the expense of security, especially damaging for those with less affluence) or of risk-aversion (which preserves traditions and social order at the expense of equal access to opportunity, innovation, and even agreed-upon change). In the hands of a few experts, or worse, specialists, these cultures will not change; they will probably become even more polarized. Obviously, the virtual and anonymous crowd can be massively susceptible to polarization and influence as well. These problems will have to be seriously addressed; perhaps this crisis will be the central problem of education in the coming decades. However, attending to the input of the many and harnessing that expertise into decision-making appears to be the best way to capture the broadest swath of ‘skin in the game’, so to speak. If there exist ways to compensate this form of distributed labor, then a form of distributed welfare could actually sustain corporate growth! Perhaps more meaningful data could be gathered as individuals begin to rediscover the value of their labor, reflected not necessarily in the proximate production of goods and services but in the certainty of ownership and democratic participation in larger, world-making industries. By giving a stake to those who share in the success or failure of a given venture, be that education, healthcare, law, or even venture capital, institutions may pivot away from an insular culture that is resistant to change and embrace a deeper understanding of their beneficiaries.