Imagine a game with thousands of pieces, hundreds of victory conditions, and rules that are partially known. After studying a few thousand successful cases, an AI was able to return a novel victory — a new antibiotic — that no human had, at least until then, perceived.
Even after the antibiotic was discovered, humans could not articulate precisely why it worked. The AI did not just process data more quickly than humanly possible; it also detected aspects of reality humans have not detected, or perhaps cannot detect.
For millennia, humanity has occupied itself with the exploration of reality and the quest for knowledge. The process has been based on the conviction that, with diligence and focus, applying human reason to problems can yield measurable results. When mysteries loomed — the changing of seasons, the movements of the planets, the spread of disease — humanity was able to identify the right questions, collect the necessary data, and reason its way to an explanation. Over time, knowledge acquired through this process created new possibilities for action, yielding new questions to which reason could be applied.
Humanity has traditionally assigned what it does not comprehend to one of two categories: either a challenge for future application of reason or an aspect of the divine, not subject to processes and explanations vouchsafed to our direct understanding.
Only rarely will we give AI specific instructions about how exactly to achieve the goals we assign it. Much more frequently, we will present AI with ambiguous goals and ask: “How, based on your conclusion, should we proceed?”
Defense establishments and commanders face evolutions no less profound. When multiple militaries adopt strategies and tactics shaped by machines that perceive patterns human soldiers and strategists cannot, the balance of power will be altered and potentially become more difficult to calculate.
Technology that was initially believed to be an instrument for the transcendence of national differences and the dispersal of objective truth may, in time, become the method by which civilizations and individuals diverge into different and mutually unintelligible realities.
Although some of the world’s best engineers had already tackled the problem, DeepMind’s AI program further optimized cooling, reducing energy expenditures by an additional 40% — a massive improvement over human performance.
On what basis could that sacrifice be overridden? Would the override be justified? Will humans always know what calculations AI has made? Will humans be able to detect unwelcome (AI) choices or reverse unwelcome choices in time? If we are unable to fathom the logic of each individual decision, should we implement its recommendations on faith alone? If we do not, do we risk interrupting performance superior to our own? Even if we can fathom the logic, price, and impact of specific alternatives, what if our opponent is equally reliant on AI?
The fact that the current human-machine partnership requires both a definable problem and a measurable goal is reason not to fear all-knowing, all-controlling machines; such inventions remain the stuff of science fiction.
But there is a difference between choosing from a range of options and taking an action — in this case, making a purchase; in other cases, adopting a political or philosophical position or ideology — without ever knowing what the initial range of possibilities or implications was, entrusting a machine to preemptively shape the options.
Although AI can draw conclusions, make predictions, and make decisions, it does not possess self-awareness — in other words, the ability to reflect on its role in the world. It does not have intention, motivation, morality, or emotion; even without these attributes, it is likely to develop different and unintended means of achieving assigned objectives.
Throughout history, human beings have struggled to fully comprehend aspects of our experience and lived environments. Every society has, in its own way, inquired into the nature of reality: How can it be understood? Predicted? Shaped? Moderated? As it has wrestled with these questions, every society has reached its own particular set of accommodations with the world. At the center of these accommodations has been a concept of the human mind’s relationship with reality — its ability to know its surroundings, to be fulfilled by knowledge, and, at the same time, to be inherently limited by it.
The allegory likens humanity to a group of prisoners chained to the wall of a cave. Seeing shadows cast on the wall of the cave from the sunlit mouth, the prisoners believe them to be reality. The philosopher, Socrates held, is akin to the prisoner who breaks free, ascends to level ground, and perceives reality in the full light of day.
The promised reward for individuals who followed the “correct” faith and adhered to this path toward wisdom was admission to an afterlife, a plane of existence held to be more real and meaningful than observable reality. In the Middle Ages, humanity, at least in the West, sought to know God first and the world second. The world was only to be known through God; theology filtered and ordered individuals’ experiences of the natural phenomena before them. When early modern thinkers and scientists such as Galileo began to explore the world directly, altering their explanations in light of scientific observation, they were chastised and persecuted for daring to omit theology as an intermediary.
During the medieval epoch, scholasticism became the primary guide for the enduring quest to comprehend perceived reality, venerating the relationship between faith, reason, and the church — with the church remaining the arbiter of the legitimacy of beliefs and (at least in theory) of political leaders.
The most advanced societies and learned minds in Europe were suddenly confronted with a new aspect of reality: societies with different gods, diverging histories, and, in many cases, their own independently developed forms of economic achievement and social complexity. For the Western mind, trained in the conviction of its own centrality, these independently organized societies posed profound philosophical challenges. Separate cultures with distinct foundations and no knowledge of Christian scripture had developed parallel existences, with no apparent knowledge of (or interest in) European civilization, which the West had assumed was self-evidently the pinnacle of human achievement.
Copernicus’s vision of a heliocentric system, Newton’s laws of motion, van Leeuwenhoek’s cataloging of a living microscopic world — these and other developments led to the general sentiment that new layers of reality were being unveiled. The outcome was incongruence: societies remained united in their monotheism but were divided by competing interpretations and explorations of reality. They needed a concept — indeed, a philosophy — to guide their quest to understand the world and their role in it.
The philosophers of the Enlightenment answered the call, declaring reason — the power to understand, think, and judge — both the method and purpose for interacting with the environment. The relationship between humanity’s first question (the nature of reality) and second question (its role in reality) became self-reinforcing: if reason begat consciousness, then the more humans reasoned, the more they fulfilled their purpose. Perceiving and elaborating on the world was the most important project in which they were or would ever be engaged. The age of reason was born.
A student of traditionalists and a correspondent with pure rationalists, Kant regretfully found himself agreeing with neither, instead seeking to bridge the gap between traditional claims and his era’s newfound confidence in the power of the human mind. In his Critique, Kant proposed that “reason should take on anew the most difficult of all its tasks, namely, that of self-knowledge.” Reason, Kant argued, should be applied to understand its own limitations.
According to Kant’s account, human reason had the capacity to know reality deeply, albeit through an inevitably imperfect lens. Human cognition and experience filter, structure, and distort all that we know, even when we attempt to reason “purely” by logic alone. Objective reality in the strictest sense — what Kant called the thing-in-itself — is ever-present but inherently beyond our direct knowledge. Kant posited a realm of noumena, or “things as they are understood by pure thought,” existing independent of experience or filtration through human concepts. However, Kant argued that because the human mind relies on conceptual thinking and lived experience, it could never achieve the degree of pure thought required to know this inner essence of things. At best, we might suppose that our minds reflect such a realm. We may maintain beliefs about what lies beyond and within, but this does not constitute true knowledge of it.
In the generations after Kant, the quest to know the thing-in-itself took two forms: ever more precise observation of reality and ever more extensive cataloging of knowledge. Vast new fields of phenomena seemed knowable, capable of being discovered and cataloged through the application of reason. In turn, it was believed, such comprehensive catalogs could unveil lessons and principles that could be applied to the most pressing scientific, economic, social, and political questions of the day. The most sweeping effort in this regard was the Encyclopédie, edited by the French philosopher Denis Diderot. In 28 volumes, some 75,000 entries, and 18,000 pages, the Encyclopédie collected the diverse findings and observations of great thinkers in numerous disciplines, compiling their discoveries and deductions and linking the resulting facts and principles.
Developing quantum mechanics to describe this substratum of physical reality, Heisenberg and Bohr challenged long-standing assumptions about the nature of knowledge. Heisenberg emphasized the impossibility of assessing both the position and momentum of a particle accurately and simultaneously. This “uncertainty principle” (as it came to be known) implied that a complete, accurate picture of reality might not be available at any given time. Further, Heisenberg argued that physical reality did not have independent inherent form but was created by the process of observation: “I believe that one can formulate the emergence of the classical ‘path’ of a particle succinctly… the ‘path’ comes into being only because we observe it.”
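For reference, the relation Heisenberg described is conventionally written as follows (this is the standard textbook form, not a formula quoted here):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where \Delta x is the uncertainty in a particle’s position, \Delta p the uncertainty in its momentum, and \hbar the reduced Planck constant: the more precisely one quantity is fixed, the less precisely the other can be known.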
The quest to define and catalog all things, each with its own sharply delineated boundaries, was mistaken, he held. Instead, one should seek to define “This and similar things” and achieve familiarity with the resulting concepts, even if they had “blurred” or “indistinct” edges. Later, in the late 20th century and the early 21st, this thinking informed theories of AI and ML. Such theories posited that AI’s potential lay partly in its ability to scan large data sets to learn types and patterns and then to make sense of reality by identifying networks of similarities and likeness with what the AI already knew. Even if AI would never know something in the way a human mind could, an accumulation of matches with the patterns of reality could approximate and sometimes exceed the performance of human perception and reason.
The Enlightenment world — with its optimism regarding human reason despite its consciousness of the pitfalls of flawed human logic — has long been our world. Scientific revolutions, especially in the 20th century, have advanced both technology and philosophy, but the central Enlightenment premise of a knowable world being unearthed, step-by-step, by human minds has persisted. Throughout three centuries of discovery and exploration, humans have interpreted the world as Kant predicted they would: according to the structure of their own minds. But as humans began to approach the limits of their cognitive capacity, they became willing to enlist machines — computers — to augment their thinking in order to transcend those limitations.
Speed is partly to blame, as is inundation. For all its many wondrous achievements, digitization has rendered human thought both less contextual and less conceptual. Digital natives do not feel the need, at least not urgently, to develop concepts that, for most of history, have compensated for the limitations of collective memory. They can (and do) ask search engines whatever they want to know, whether trivial, conceptual, or somewhere in between. Search engines, in turn, use AI to respond to their queries. In the process, humans delegate aspects of their thinking to technology. But information is not self-explanatory; it is context-dependent. To be useful — or at least meaningful — it must be understood through the lenses of culture and history.
When information is contextualized, it becomes knowledge. When knowledge compels convictions, it becomes wisdom. Yet the internet inundates users with the opinions of thousands, even millions, of other users, depriving them of the solitude required for sustained reflection that, historically, has led to the development of convictions. As solitude diminishes, so, too, does fortitude — not only to develop convictions but also to be faithful to them, particularly when they require the traversing of novel, and thus often lonely, roads. Only convictions — in combination with wisdom — enable people to access and explore new horizons.
Turing suggested setting aside the problem of machine intelligence entirely. What mattered was not the mechanism but the manifestation of intelligence. Because the inner lives of other beings remain unknowable, our sole means of measuring intelligence should be external behavior.
Having operated for decades on the basis of precisely defined code, computers produced analyses that were similarly limited in their rigidity and static nature. Traditional programs could organize volumes of data and execute complex computations but could not identify images of simple objects or adapt to imprecise inputs. The imprecise and conceptual nature of human thought proved to be a stubborn impediment in the development of AI.
AIs are imprecise, dynamic, emergent, and capable of “learning.” AI “learns” by consuming data, then drawing observations and conclusions based on the data. While previous systems required exact inputs and outputs, AIs with imprecise functions require neither.
Unlike classical algorithms, which consist of steps for producing precise results, ML algorithms consist of steps for improving upon imprecise results.
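A minimal sketch of the contrast, in Python (illustrative only; the function names and data are invented for this example): a classical algorithm computes its answer from an exact rule, while a learning-style algorithm starts from an imprecise guess and repeatedly improves it against the data.

```python
# Classical algorithm: precise steps yield an exact result.
def exact_mean(values):
    return sum(values) / len(values)

# Learning-style algorithm: begin with an imprecise estimate and
# improve it step by step by measuring error against observed data.
def learned_mean(values, steps=1000, learning_rate=0.01):
    estimate = 0.0                             # initial guess (imprecise)
    for _ in range(steps):
        for v in values:
            error = estimate - v               # how wrong is the current estimate?
            estimate -= learning_rate * error  # nudge the estimate toward the data
    return estimate                            # approximate, improved result

data = [2.0, 4.0, 6.0, 8.0]
print(exact_mean(data))    # 5.0 exactly
print(learned_mean(data))  # approximately 5.0
```

Both arrive near the same answer, but the second arrives there by successive improvement rather than by direct calculation.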
In reality, however, designing a machine and rendering it capable of useful activity — even with the advent of modern computing — has proved devilishly difficult. A central challenge, it turns out, is how — and what — to teach it.
Early attempts to create practically useful AIs explicitly encoded human expertise — via collections of rules and facts — into computer systems. But much of the world is not organized discretely or readily reducible to simple rules or symbolic representations.
In practice, the approach of formulating abstract models and then attempting to match them with highly variable inputs thereby proved virtually unworkable.
In the 1990s, a set of renegade researchers set aside many of the earlier era’s assumptions, shifting their focus to ML. While ML dated to the 1950s, new advances enabled practical applications. The methods that have worked best in practice extract patterns from large datasets using neural networks. In philosophical terms, AI’s pioneers had turned from the early Enlightenment’s focus on reducing the world to mechanistic rules to constructing approximations of reality. To identify an image of a cat, they realized, a machine had to “learn” a range of visual representations of cats by observing the animal in various contexts. To enable ML, what mattered was not the ideal of a thing but the overlap among its various representations.
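A toy sketch of this “overlap among representations” idea, in Python (the feature vectors and labels are invented for illustration): rather than applying a rule that defines “cat,” the system compares a new example against stored examples and asks which group it most resembles.

```python
import numpy as np

def cosine_similarity(a, b):
    """Degree of overlap between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature vectors standing in for learned image representations.
known_examples = {
    "cat": [np.array([0.9, 0.1, 0.2]), np.array([0.8, 0.2, 0.1])],
    "dog": [np.array([0.1, 0.9, 0.3]), np.array([0.2, 0.8, 0.4])],
}

def classify(new_vector):
    # Label the new example by its average similarity to each known group.
    scores = {
        label: np.mean([cosine_similarity(new_vector, ex) for ex in examples])
        for label, examples in known_examples.items()
    }
    return max(scores, key=scores.get)

print(classify(np.array([0.85, 0.15, 0.15])))  # -> "cat"
```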
Not only do humans not understand the many connections the AI revealed between a compound’s properties and its antibiotic capabilities, but even more fundamentally, the properties themselves are not amenable to being expressed as rules. An ML algorithm that improves a model based on underlying data, however, is able to recognize relationships that have eluded humans.
Like a classical algorithm, an ML algorithm consists of a sequence of precise steps. But those steps do not directly produce a specific outcome, as they do in a classical algorithm. Rather, modern AI algorithms measure the quality of outcomes and provide means for improving those outcomes, enabling them to be learned rather than directly specified.
But neural network training is resource-intensive. The process requires substantial computing power and complex algorithms to analyze and adjust to large amounts of data. Unlike humans, most AIs cannot train and execute simultaneously. Rather, they divide their effort into two phases: training and inference. During the training phase, the AI’s quality measurement and improvement algorithms evaluate and amend its model to obtain quality results. Then, in the inference phase, the AI does not reach conclusions by reasoning as humans reason; it reaches conclusions by applying the model it developed during training.
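A minimal sketch of the two phases just described, using a one-weight linear model in Python (the data, names, and numbers are invented for illustration): during training, a quality measure — a loss — is repeatedly evaluated and the model amended; during inference, the frozen model is simply applied to new inputs.

```python
import numpy as np

# Invented training data: outputs are roughly three times the inputs.
inputs = np.array([1.0, 2.0, 3.0, 4.0])
targets = np.array([3.1, 5.9, 9.2, 11.8])

# --- Training phase: measure the quality of outcomes, then improve the model ---
weight = 0.0                                  # the model's single learnable parameter
learning_rate = 0.01
for _ in range(500):
    predictions = weight * inputs
    errors = predictions - targets
    loss = np.mean(errors ** 2)               # quality measure (lower is better)
    gradient = np.mean(2 * errors * inputs)   # direction in which to amend the model
    weight -= learning_rate * gradient        # the improvement step

# --- Inference phase: the learned model is now static; it is simply applied ---
def infer(x):
    return weight * x

print(round(weight, 2))   # close to 3.0 -- learned from data, never directly specified
print(infer(5.0))         # a prediction for an input never seen during training
```

Once training ends, the weight no longer changes; inference consists only of applying it.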
Because the application of AI varies with the tasks it performs, so, too, must the techniques developers use to create that AI. This is a fundamental challenge of deploying ML: different goals and functions require different training techniques.
When marketers want to identify their customer base, or when fraud analysts seek potential inconsistencies among reams of transactions, unsupervised learning allows AIs to identify patterns or anomalies without having any information regarding outcomes. In unsupervised learning, the training data contains only inputs. Programmers then task the learning algorithm with producing groupings based on a specified measure of similarity.
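A small sketch of this idea under stated assumptions (the data are invented, and a simple k-means-style procedure stands in for whatever clustering method a real system might use): the algorithm receives unlabeled inputs and produces groupings based only on a distance-based measure of similarity.

```python
import numpy as np

def cluster(points, k, iterations=100):
    """Group unlabeled points into k clusters by distance (the similarity measure)."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest center.
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new_centers = []
        for i in range(k):
            members = points[labels == i]
            # Keep the old center if no points were assigned to it.
            new_centers.append(members.mean(axis=0) if len(members) else centers[i])
        centers = np.array(new_centers)
    return labels

# Invented "transactions" with no outcome labels: two natural groups.
data = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                 [8.0, 8.2], [7.9, 8.1], [8.2, 7.9]])
print(cluster(data, k=2))  # e.g., [0 0 0 1 1 1] (or the reverse labeling)
```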
The applications of these so-called generators are staggering. If generators are successfully applied to writing or coding, an author could simply create an outline and leave the generator to fill in the details. Or an advertiser or filmmaker could supply a generator with a few images or a storyboard, then leave it to the AI to synthesize an ad or commercial.
Unlike earlier generations of AI, in which people distilled a society’s understanding of reality in a program’s code, contemporary ML AIs largely model reality on their own. While developers may examine the results generated by their AIs, the AIs do not “explain” how or what they learned in human terms. Much as with humans, one cannot really know what has been learned and why.
AI’s brittleness is a reflection of the shallowness of what it learns. Associations between aspects of inputs and outputs learned through supervised or reinforcement learning are very different from human understanding, with its many degrees of conceptualization and experience. The brittleness is also a reflection of AI’s lack of self-awareness. An AI is not sentient. It does not know what it doesn’t know. Accordingly, it cannot identify and avoid what to humans might be obvious blunders. This inability of AI to catch otherwise obvious errors on its own underscores the importance of developing testing that allows humans to identify the limits of an AI’s capacities, to review its proposed courses of action, and to predict when an AI is likely to fail.
Since ML will drive AI for the foreseeable future, humans will remain unaware of what an AI is learning and how it knows what it has learned. While this may be disconcerting, it should not be: human learning is often similarly opaque. Humans, too, often act on the basis of intuition, unable to articulate what they have learned or how they learned it. To cope with this opacity, societies have developed myriad professional certification programs, regulations, and laws.
Most AIs, though, train in a phase distinct from the operational phase: their learned models — the parameters of their neural networks — are static when they exit training. Because an AI’s evolution halts after training, humans can assess its capacities without fear that it will develop unexpected, undesired behaviors after it completes its tests. Auditing training datasets provides another quality-control check.
As of this writing, AI is constrained by its code in three ways. First, the code sets the parameters of the AI’s possible actions. Second, AI is constrained by its objective function, which defines and assigns what it is to optimize. Finally, AI can only process inputs that it is designed to recognize and analyze.
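A schematic illustration of these three constraints (the scenario, names, and weights below are hypothetical, chosen only to make the point concrete): the code enumerates the actions available, the objective function defines what counts as “better,” and the input format limits what the system can even consider.

```python
# Hypothetical routing agent, sketched only to illustrate the three constraints.

POSSIBLE_ACTIONS = ["route_a", "route_b", "route_c"]   # 1. the code fixes the action space

def objective(travel_minutes, toll_cost):
    # 2. the objective function defines what the system optimizes;
    #    the 0.5 weighting is a developer's choice, not a discovered truth.
    return -(travel_minutes + 0.5 * toll_cost)

def choose_action(observations):
    # 3. only inputs the system was designed to recognize are considered;
    #    anything absent from `observations` simply does not exist for it.
    scored = {
        action: objective(obs["minutes"], obs["toll"])
        for action, obs in observations.items()
        if action in POSSIBLE_ACTIONS
    }
    return max(scored, key=scored.get)

observations = {
    "route_a": {"minutes": 30, "toll": 0},
    "route_b": {"minutes": 22, "toll": 10},
    "route_c": {"minutes": 25, "toll": 8},
}
print(choose_action(observations))  # "route_b": the objective trades minutes against tolls
```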
It is reasonable to expect that, over time, AI will progress at least as fast as computing power has, yielding a millionfold increase in fifteen to twenty years. Such progress will allow the creation of neural networks that, in scale, are equal to the human brain — networks with roughly as many weights as the brain has synapses. As of this writing, GPT-3 has about 10^11 such weights. But recently, the Beijing Academy of Artificial Intelligence announced a generative language model with ten times as many weights as GPT-3. This is still 10^4 times fewer than estimates of the human brain’s synapses.
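As a rough check of the magnitudes cited above (simple arithmetic on the figures in the text, not additional data):

```latex
10^{11} \times 10 = 10^{12} \quad \text{(the newer model's approximate number of weights)}
```
```latex
2^{20} = 1{,}048{,}576 \approx 10^{6} \quad \text{(a millionfold increase over twenty years corresponds to roughly one doubling per year)}
```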
Whether for AI or AGI, human developers will continue to play an important role in its creation and operation. The algorithms, training data, and objectives for ML are determined by the people developing and training the AI, and thus they reflect those people’s values, motivations, goals, and judgment.
Will people, relying on automatic translation, exert less effort trying to understand other cultures and nations, increasing their natural tendency to see the world through the lens of their own culture? Or might people become more intrigued by other cultures? Can automatic translation somehow reflect differing cultural histories and sensibilities?
Although in principle most network platforms are content-agnostic, in some situations their community standards become as influential as national laws. Content that a network platform and its AI permit or favor may rapidly gain prominence; content they diminish or sometimes even outright prohibit may be relegated to obscurity.
Having more participants increases the chances that a transaction will occur and that its valuation will be “accurate,” given that the transaction reflects a larger number of individual negotiations between buyers and sellers. Once a stock exchange has gathered a critical mass of users in a given market, it tends to become the first stop for new buyers and sellers — leaving little incentive or opportunity for another exchange to compete by offering precisely the same service.
To a large extent, AI is judged by the utility of its results, not the process used to reach those results. This signals a shift in priorities from earlier eras, when each step in a mental or mechanical process was either experienced by a human being (a thought, a conversation, an administrative process) or could be paused, inspected, and repeated by human beings.
We cannot avoid answering these questions because our communications, as now constructed, can no longer operate without AI-assisted networks.
But what of the content that is never viewed by the public? When the prominence or diffusion of a message is so curtailed that its existence is, in effect, negated, we have reached a state of censorship.
In Beijing, Washington, and some European capitals, concern has been expressed about the implications of conducting broad aspects of national economic and social life on network platforms facilitated by AI designed in other, potentially rival, countries.
Yet Europe continues to face disadvantages for the initial scaling of new network platforms because of its need to serve many languages and national regulatory apparatuses in order to reach its combined market. By contrast, national network platforms in the US and China are able to start at a continental scale, allowing their companies to better afford the investment needed in order to continue scaling in other languages.
Historic global powers such as France and Germany have prized independence and the freedom to maneuver in their technology policy. However, peripheral European states with recent and direct experience of foreign threats — such as post-Soviet Baltic and Central European states — have shown greater readiness to identify with a US-led “technosphere.”
That AI-enabled network platforms created by one society may function and evolve within another society and become inextricable from that country’s economy and national political discourse marks a fundamental departure from prior eras. Previously, sources of information and communication were typically local and national in scope — and maintained no independent ability to learn.
Creators and operators may come to better understand network platforms’ objectives and limits but remain unlikely to intuit probable governmental concerns or broader philosophical objections in advance.
Strategists need to consider the lessons of prior eras. They should not assume that total victory is possible in each commercial and technological contest. Instead, they should recognize that prevailing requires a definition of success that a society can sustain over time. This, in turn, requires answering the kinds of questions that eluded political leaders and strategic planners during the Cold War era: What margin of superiority will be required? At what point does superiority cease to be meaningful in terms of performance? What degree of inferiority would remain meaningful in a crisis in which each side used its capabilities to the fullest?
For as long as history has been recorded, security has been the minimum objective of an organized society. Cultures have differed in their values, and political units have differed in their interests and aspirations, but no society that could not defend itself — either alone or in alignment with other societies — has endured.
With each augmentation of power, major powers have taken one another’s measure — assessing which side would prevail in a conflict, what risks and losses such a victory would entail, what would justify them, and how the entry of another power and its arsenal would affect the outcome. The capacities, objectives, and strategies of varied nations were set, at least theoretically, in an equilibrium, or a balance of power.
Three empires witnessed the collapse of their institutions. Even the victors were depleted for decades and suffered a permanent diminution of their international roles. A combination of diplomatic inflexibility, advanced military technology, and hair-trigger mobilization plans had produced a vicious circle, making global war not only possible but also unavoidable. Casualties were so enormous that the need to justify them made compromise impossible.
This insight presaged the central paradox of Cold War strategy: that the dominant weapons technology of the era was never used. The destructiveness of weapons remained out of proportion to achievable objectives other than pure survival.
At its core, nuclear deterrence was a psychological strategy of negative objectives. It aimed to persuade an opponent not to act by means of a threatened counteraction. This dynamic depended both on a state’s physical capacities and on an intangible quality: the potential aggressor’s state of mind and its opponent’s ability to shape it. Viewed through the lens of deterrence, seeming weakness could have the same consequences as an actual deficiency; a bluff taken seriously could prove a more useful deterrent than a bona fide threat that was ignored. Unique among security strategies (at least until now), nuclear deterrence rests on a series of untestable abstractions: the deterring power could not prove how or by what margin something had been prevented.
The Cold War hegemons expended tremendous resources on expanding their nuclear capabilities at the same time as their arsenals grew increasingly remote from the day-to-day conduct of strategy. The possession of these arsenals did not deter nonnuclear states — China, Vietnam, Afghanistan — from challenging the superpowers, nor did it stop Central and Eastern Europeans from demanding autonomy from Moscow.
Since then, every nuclear power confronting a nonnuclear opponent has reached the same conclusion, even when facing defeat at the hands of its nonnuclear foe.
In quest of security, humanity had produced an ultimate weapon and elaborate strategic doctrines to accompany it. The result was a pervading anxiety that such weaponry might one day be used. Arms control was a concept intended to mitigate this dilemma.
At the same time, a handful of nations acquired their own modest nuclear arsenals, calculating that they only needed an arsenal sufficient to inflict devastation — not achieve victory — in order to deter attacks.
Throughout history, a nation’s political influence has tended to be roughly correlative to its military power and strategic capabilities — its ability, even if exerted primarily through implicit threats, to inflict damage on other societies. Yet an equilibrium based on a calculus of power is not static or self-maintaining; instead, it relies first on a consensus regarding the constituent elements of power and the legitimate bounds of their use. Likewise, maintaining equilibrium requires congruent assessments among all members of the system — especially rivals — regarding states’ relative capabilities and intentions as well as of the consequences of aggression. Finally, the preservation of equilibrium requires an actual, and recognized, balance. When a participant in the system enhances its power disproportionately over others, the system will attempt to adjust — either through the organization of countervailing force or the accommodation of a new reality. When the calculation of equilibrium becomes uncertain, or when nations arrive at fundamentally different calculations of relative power, the risk of conflict through miscalculation reaches its height.
Conventional and nuclear weapons exist in physical space, where their deployments can be perceived and their capabilities at least roughly calculated. By contrast, cyber weapons derive an important part of their utility from their opacity; their disclosure may effectively degrade some of their capabilities. Their intrusions exploit previously undisclosed flaws in software, obtaining access to a network or system without the authorized user’s permission or knowledge. In the case of distributed denial-of-service (DDoS) attacks, a swarm of seemingly valid information requests may be used to overwhelm systems and make them unavailable for their intended use.
Cyber arms-control negotiators (which do not yet exist) will need to solve the paradox that discussion of a cyber weapon’s capability may be one and the same with its forfeiture (permitting the adversary to patch a vulnerability) or its proliferation (permitting the adversary to copy the code or method of intrusion).
No major cyber actor, governmental or nongovernmental, has disclosed the full range of its capabilities or activities — not even to deter actions by others. Strategy and doctrine are evolving uncertainly in a shadow realm, even as new capabilities are emerging.
Historically, countries planning for battle have been able to understand, if imperfectly, their adversaries’ doctrines, tactics, and strategic psychology. This has permitted the development of adversarial strategies and tactics as well as a symbolic language of demonstrative military actions, such as intercepting a jet nearing a border or sailing a vessel through a contested waterway.
These issues must be considered and understood before intelligent systems are sent to confront one another. They acquire additional urgency because the strategic use of cyber and AI capabilities implies a broader field for strategic contests. They will extend beyond historic battlefields to, in a sense, anywhere that is connected to a digital network.
In these arenas, it is imperative to ensure an appropriate role for human judgment in overseeing and directing the use of force. Such limitations will have only limited meaning if they are adopted only unilaterally — by one nation or a small group of nations. Governments of technologically advanced countries should explore the challenges of mutual restraint supported by enforceable verification.
A wide range of actors and institutions participate in shaping technology with strategic implications. Not all will regard their missions as inherently compatible with national objectives as defined by the federal government. A process of mutual education between industry, academia, and government can help bridge this gap and ensure that key principles of AI’s strategic implications are understood in a common conceptual framework. Few eras have faced a strategic and technological challenge so complex and with so little consensus about either the nature of the challenge or even the vocabulary necessary for discussing it.
The unresolved challenge of the nuclear age was that humanity developed a technology for which strategists could find no viable operational doctrine. The dilemma of the AI age will be different: its defining technology will be widely acquired, mastered, and employed. The achievement of mutual strategic restraint — or even achieving a common definition of restraint — will be more difficult than ever before, both conceptually and practically.
The paradox of an international system is that every power is driven to act — indeed must act — to maximize its own security. Yet to avoid a constant series of crises, each must accept some sense of responsibility for the maintenance of general peace. And this process involves a recognition of limits. The military planner or security official will think (not incorrectly) in terms of worst-case scenarios and prioritize the acquisition of capabilities to meet them. The statesman (who may be one and the same) is obliged to consider how these capabilities will be used and what the world will look like afterward.
For many decades, memories of a smoldering Hiroshima and Nagasaki forced recognition of nuclear affairs as a unique and grave endeavor. As former secretary of state George Shultz told Congress in 2018, “I fear people have lost that sense of dread.”
Leading cyber and AI powers should endeavor to define their doctrines and limits (even if not all aspects of them are publicly announced) and identify points of correspondence between their doctrines and those of rival powers.
Reason not only revolutionized the sciences, it also altered our social lives, our arts, and our faith. Under its scrutiny, the hierarchy of feudalism fell, and democracy, the idea that reasoning people should direct their own governance, rose. Now AI will again test the principles upon which our self-understanding rests.
Optimizing the distribution of resources and increasing the accuracy of decision making is good for society, but for the individual, meaning is more often derived from autonomy and the ability to explain outcomes on the basis of some set of actions and principles. Explanations supply meaning and permit purpose; the public recognition and explicit application of moral principle supply justice. But an algorithm does not offer reasons grounded in human experience to explain its conclusions to the general public. Some people, particularly those who understand AI, may find this world intelligible. But others, greater in number, may not understand why AI does what it does, diminishing their sense of autonomy and their ability to ascribe meaning to the world.
Those who experience dislocation, even if short-term, may derive little consolation from knowing that it is a temporary aspect of a transition that will increase a society’s overall quality of life and economic productivity. Some may find themselves freed from drudgery to focus on the more fulfilling elements of their work. Others may find their skills no longer cutting edge or even necessary.
These tensions — between reasoned explanations and opaque decision making, between individuals and large systems, between people with technical knowledge and authority and people without — are not new. What is new is that another intelligence, one that is not human and often inexplicable in terms of human reason, is its source. What is also new is the pervasiveness and scale of this new intelligence. Those who lack knowledge of AI or authority over it may be particularly tempted to reject it. Frustrated by its seeming usurpation of their autonomy or fearful of its additional effects, some may seek to minimize their use of AI and disconnect from social media or other AI-mediated network platforms, shunning its use (at least knowingly) in their daily lives.
Algorithms promote what seizes attention in response to the human desire for stimulation — and what seizes attention is often the dramatic, the surprising, and the emotional. Whether an individual can find space in this environment for careful thought is one matter. Another is that the now-dominant forms of communication are not conducive to tempered reasoning.
Pre-AI algorithms were good at delivering “addictive” content to humans. AI is excellent at it.
AI’s dynamism and capacity for emergent — in other words, unexpected — actions and solutions distinguish it from prior technologies. Unregulated and unmonitored, AI could diverge from our expectations and, consequently, our intentions. The decision to confine, partner with, or defer to it will not be made by humans alone.
Much as the statesmen of 1914 failed to recognize that the old logic of military mobilization, combined with new technology, would pull Europe into war, so deploying AI without careful consideration may have grave consequences.
Frequently, existing principles will not apply. In the age of faith, courts determined guilt during ordeals in which the accused faced trial by combat and God was believed to dictate victory. In the age of reason, humanity assigned guilt according to the precepts of reason, determining culpability and meting out punishment consistent with notions such as causality and intention. But AIs do not operate by human reason, nor do they have human motivation, intent, or self-reflection.
As part of addressing such questions, we should seek ways to make AI auditable — that is, to make its processes and conclusions both checkable and correctable.
Imperfection is one of the most enduring aspects of human experience, especially of leadership. Often, policy makers are distracted by parochial concerns. Sometimes, they act on the basis of faulty assumptions. Other times, they act out of pure emotion. Still other times, ideology warps their vision.
As Kant observed in the preface to his Critique of Pure Reason: “Human reason has the peculiar fate in one species of its cognitions that it is burdened with questions which it cannot dismiss, since they are given to it as problems by the nature of reason itself, but which it also cannot answer, since they transcend every capacity of human reason.”