
In his essay, Dr. David Krakauer argues that we are facing a growing schism in the ways we have come to know the world. The fissure lies between two epistemes, or knowledge-systems, that structure our approach to science and our interactions with natural and psychosocial phenomena. These approaches are most generally described as Understanding and Prediction. Pragmatically speaking, understanding is the endeavor to describe the components, structure, mechanisms and purposes of a given phenomenon in terms of accessible, relevant concepts. For example, we may understand how a computer works by examining and then describing its hardware using models, illustrations or analogies. Prediction, on the other hand, does not concern itself with such projects but rather with anticipating the antecedents and consequences (i.e., the before and after) of a given phenomenon. For example, we can reasonably predict what a computer program will output once we have analysed a series of inputs and corresponding outputs. Understanding and prediction are distinct operations that require different sets of cognitive resources.
Understanding can involve an extremely varied and idiosyncratic set of cognitive processes that we may describe in terms of creativity, story-telling, selective attention, and even prejudice. At its most elementary level, however, understanding literally requires one to have a position from which to stand in relation to the phenomenon being studied. The common example of an elephant in a dark room easily demonstrates that ‘understanding’ the phenomenon crucially depends on where you stand in relation to it. The object may be perceived as soft and thin (an ear), hard and thick (a trunk), or long and stringy (a tail). Of course, as scientific technologies and theories evolve, so too does our ability to take a broader stance or position in relation to the object. The elephant can be seen as an organic whole—a mammal of the order Proboscidea. This understanding, of course, presumes a specifically human positionality—or frame of reference—toward the elephant. A tick may understand the elephant as warm food. We may also wish to add the qualifier that one’s position—or philosophical horizon—should be ‘conscious’, however that is defined. Can we say that grass has a position in relation to the elephant? Perhaps, but such forms of consciousness would appear wildly exotic to us. Understanding is also approximate. We superimpose a form or ‘idea’ onto the phenomenon, which invariably reduces its complex features into something that we find accessible, appealing and even aesthetic. The elephant becomes literally or symbolically associated with power, regality, wisdom, age, memory. We are drawn further towards ideas that are simple, symmetric, and anthropomorphic. Although these ideas do not account for the phenomenon in its totality, we find these approximations appealing and utterly relevant to our own knowledge-projects. Quite simply, we desire to ‘know’ things because they may fit into or enhance our personal narrative.
When it comes to knowledge, relevance is often more important than complete accuracy and precision. Furthermore, in making understanding relevant (e.g., whether we are modelling cosmological events or diagnosing a patient’s illness), we are much more easily able to position this understanding with respect to our other knowledge-projects. In doing so, we create grander, synthesizing narratives which give us an appreciation of the relationship of parts to wholes and a greater sense of our relevance in such a complex world. This would be difficult, if not impossible, to accomplish if we were to get bogged down in the minutiae of details, which may reveal contradictions, incompletions, ambiguities and unknowables that render the phenomenon recalcitrant to any knowledge-project.
Prediction requires a much smaller set of cognitive operations. It is quite programmatic: observation of phenomenal events as they occur in the field → memorization of when events occur → probabilistic calculation of an event or events occurring or co-occurring given certain field conditions. Prediction produces information about the probability of occurrences or co-occurrences between events or phenomena in the world. While this way of producing information constitutes a knowledge-system, it does not itself constitute a knowledge-project. Again, such projects require a narrative understanding of the relationship between ourselves and certain conditions in the world. In its simplicity, prediction can be extremely effective. With a large enough data set and enough computational power to memorize the occurrences of all events and analyse patterns of occurrences and co-occurrences, computers can far outperform humans in extremely complicated predictive tasks such as playing chess or detecting tumors. While such computations cannot be said to resemble understanding per se, we must be wary that predictive decisions based on understanding (i.e., narrative simplification) are prone to many types of biases, such as preferring simplicity, aesthetics and familiarity. Most computers and AI are entirely predictive and lack understanding; therefore they tend to be unfettered by such biases. Indeed, there is some evidence that children excel at associative learning (e.g., learning phonetic associations and syntactic rules) more easily than adults, partly because they do not have a pre-established pattern (i.e., understanding) that interferes with this new learning. In a purely predictive task, a $100 computer program can easily outperform a grandmaster in chess. But what computer can, through reasoning on its own, provide a compelling answer to why people play chess?
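The observe → memorize → calculate loop described above can be made concrete in a few lines of code. The following is a minimal sketch, not anything proposed in the essay itself: a purely frequency-based predictor that memorizes which events follow which conditions and outputs relative frequencies, with no model of why the events occur. The class name, method names, and the weather events are all hypothetical illustrations.

```python
from collections import Counter, defaultdict

class FrequencyPredictor:
    """A purely predictive 'knowledge-system': it memorizes co-occurrences
    and computes probabilities, without any understanding of mechanism."""

    def __init__(self):
        # condition -> Counter of events observed under that condition
        self.counts = defaultdict(Counter)

    def observe(self, condition, event):
        # Memorization step: record that `event` occurred under `condition`.
        self.counts[condition][event] += 1

    def probability(self, condition, event):
        # Probabilistic calculation: relative frequency of co-occurrence.
        total = sum(self.counts[condition].values())
        return self.counts[condition][event] / total if total else 0.0

    def predict(self, condition):
        # Anticipate the most likely event given the field conditions.
        if not self.counts[condition]:
            return None
        return self.counts[condition].most_common(1)[0][0]

# Usage with hypothetical observations:
p = FrequencyPredictor()
for cond, ev in [("dark clouds", "rain"), ("dark clouds", "rain"),
                 ("dark clouds", "sun"), ("clear sky", "sun")]:
    p.observe(cond, ev)

print(p.predict("dark clouds"))                  # most frequent outcome
print(p.probability("dark clouds", "rain"))      # relative frequency
```

Note that the predictor never asks what rain *is*; it has no position from which to stand, only a table of counts, which is precisely the distinction the paragraph above draws.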
Thus, Dr. Krakauer asserts that as computational power increases, our human capacity to understand cannot hope to keep pace with the corresponding growth in technology’s predictive capabilities. There is reasonable concern about the place of understanding in the modern world. As higher-order tasks (e.g., policy making, urban planning, diagnostics, architecture, bioengineering, financial trading) come to depend on extremely sophisticated and accurate predictive capabilities, human judgment and understanding will rightfully be regarded as increasingly liable to error. Prediction, unaided and unfettered by understanding, will—and should—inform these operations in the 21st century. But then we must ask: why and how is the human position relevant? In what way does knowledge-making—or pursuing a knowledge-project (simplified and anthropomorphized though it may be)—add value to the work that needs to be done in creating a better world?
There are whole genres of dystopian fiction in which a hellish reality awaits us as we cede to machines the responsibility to determine, protect and govern a supposedly better future for our species. This is particularly well portrayed in the anime Blame! Here, the future is decided by machines that humans originally created to self-replicate autonomously and serve the directive of designing and building a better environment for humans. Ironically, these machines are so incredibly efficient that humans cannot adapt. The machines called “builders” continue to engineer monstrously large projects long after most of the human race has perished. A security system (the “Safeguard”), originally designed as a fail-safe against hacking, now effectively terminates all humans who lack the ‘net terminal gene’ required to deactivate its protocol. Ironically, a system designed entirely free of human understanding also leaves humans utterly displaced and disenfranchised—literally without a position from which to stand. How will we live in the modern world? Our understanding is rife with errors, and machines do so many of our tasks better than we can hope to. Yet, without a place for us to establish our knowledge-projects, what is the point of improving the world at all? Where will we find our place in a utopia that cannot afford our messiness?