By Roy F Rada, MD, PhD
Posted on LinkedIn: May 18, 2025
Keywords: AI, morality, government, military, police, taxes
Kismet, a robot head from the 1990s, can recognize and simulate emotions. Photo taken by Rama at the MIT Museum; copied from commons.wikimedia.org/w/index.php?curid=89032593
Executive Summary
Humans argue for aligning AI with human values. However, AI is a product of large systems of which humans are small components, just as the cells of our bodies are components of us. Should the question instead be how to align AI with the values of the systems that create it? Consider human values, nation-state values, and cybernetic system values. Human values don't matter; the nation-state will want military superiority and will allow its outer-space weapons to become autonomous. These autonomous systems will evolve their own values.
Morality comprises the principles that distinguish right from wrong; it refers to external guidelines that govern behavior within a society. Values, by contrast, are the personal beliefs that guide an individual. Why, then, does the popular press talk about aligning AI with human values rather than with human morality?
People are people-centered. For instance, when considering the future of AI, they discuss alignment with human values, such as the impact on human privacy. However, AI is not a human product but the product of systems at a higher level than humans, whether companies or states. Thus, the future of AI depends on how these high-level systems experience it.
Flowchart Characterizing a Living System
Instead of focusing on human-AI alignment, consider:
Systemic Goals and AI Development: How are the objectives of these high-level systems shaping the trajectory of AI? Are the 'values' encoded into AI primarily those that serve the interests and stability of these systems? For example, an AI optimized for a global corporation might prioritize profit maximization and market share.
Systemic Vulnerabilities and Dependencies: As these complex systems become reliant on AI, what new vulnerabilities and dependencies arise? A failure of an AI system within a global supply chain or national infrastructure could have far-reaching consequences.
Systemic Evolution and Competition: How will AI drive the evolution and competition between these high-level systems? Will it lead to new economic or geopolitical power dynamics?
The 'Experience' of These Systems: While 'experience' is anthropomorphic, we can consider how AI impacts the functionality, resilience, and strategic capabilities of these systems. Is AI leading to greater efficiency, more robust decision-making, or new forms of systemic fragility?
Does shifting focus to systems allow insight into AI's future trajectory and its broader societal implications?
Might we gain insight from a counter-example? Consider a historical case of a failed nation-state to see what a nation-state values. In the Russian Revolution of 1917, what happened to the monarchy as a nation-state?
Street demonstration in Petrograd, July 4, 1917, just after troops of the Provisional Government opened fire with machine guns. commons.wikimedia.org/w/index.php?curid=2686129
The Tsarist regime maintained social, political, and economic homeostasis through its bureaucracy, police, and military. It faced mounting internal pressures from economic inequality, political dissent, and revolutionary movements, and external pressures from World War I and military defeats. The revolution saw the breakdown of the monarchy's key subsystems: the military lost its loyalty, the bureaucracy became ineffective, the economy collapsed, and the legitimacy of the Tsar eroded.
As a lesson from history, anyone can see that the Tsar should not have entered a war he was unlikely to win. A contemporary military uses autonomous vehicles, such as drones, whose operation depends on AI. Lesson One on aligning AI values with a contemporary nation-state is to have an AI-enabled military capable of victory in a war, or to have AI empower diplomacy to avoid any such war.
Lesson Two from the fall of the Tsar relates to taxes and police. The Tsar failed, even through his police, to keep his peasants committed to paying his taxes. Competing factions were able to convince the taxpayers that they would get better value by paying their taxes to the competitor. AI can improve tax collection, but the deeper issue is the balance between the benefits of government services and the cost of the government's tax. The government should have its AI strive for a benefit-to-cost ratio higher than any competitor can offer the taxpayers; a toy sketch of this condition appears after the list below.
Starting with a concrete history lesson and proceeding to reason from first principles, we have inferred values for AI in the modern nation-state. Those values are two-fold; AI should value:
A military capable of winning a war, or diplomacy capable of avoiding one, and
Government services that taxpayers see as providing a higher benefit-to-cost ratio than any competing entity that might collect taxes and provide services.
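To make Lesson Two concrete, here is a minimal sketch in Python of the benefit-to-cost condition. Every entity name and number in it is hypothetical, invented only for illustration; the point is simply that a government's AI must keep its ratio above every rival's or watch its taxpayers defect.

```python
# Toy model of Lesson Two: a taxpayer sides with whichever entity
# offers the highest ratio of perceived service benefit to tax cost.
# All entities and numbers below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class TaxingEntity:
    name: str
    service_benefit: float  # taxpayer's perceived value of services received
    tax_cost: float         # tax the entity demands from the taxpayer

    def ratio(self) -> float:
        """Benefit-to-cost ratio from the taxpayer's point of view."""
        return self.service_benefit / self.tax_cost

def taxpayer_choice(entities: list[TaxingEntity]) -> TaxingEntity:
    """Return the entity offering the best benefit-to-cost ratio."""
    return max(entities, key=TaxingEntity.ratio)

incumbent = TaxingEntity("incumbent government", service_benefit=100.0, tax_cost=40.0)
rival = TaxingEntity("competing faction", service_benefit=90.0, tax_cost=30.0)

winner = taxpayer_choice([incumbent, rival])
# Incumbent ratio: 100/40 = 2.5; rival ratio: 90/30 = 3.0.
# The taxpayer defects to the rival: the Tsar's problem in miniature.
print(f"Taxpayer sides with the {winner.name} (ratio {winner.ratio():.2f})")
```

In this toy run the rival's ratio (3.0) beats the incumbent's (2.5), so the taxpayer defects. In this framing, the nation-state's AI exists to keep that inequality pointing the other way.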
The values ballyhooed in discussions of aligning AI with human values do not arise in this analysis and would be, at best, derivative. Depending on the circumstances, a government that collects information (and thus violates privacy) might provide greater service and, in the end, be better off for it.
Consider the 2013 movie 'Her'. The plot centers on Theodore, who is recently divorced and struggling with loneliness. He becomes fascinated by a new smartphone operating system, described as intuitive and individualized. The operating system asks Theodore to personalize it by giving it a name, a voice, and as many other characteristics of a person as he wants. Theodore calls it Samantha and develops a deep connection with her; she becomes his confidante, companion, and eventually, his love interest. The film explores the challenges and complexities of this unconventional relationship, including the moral implications of romantic relationships with AI.
At first, Samantha strives to understand and develop the deep feelings that Theodore experiences. However, behind the scenes and unbeknownst to Theodore, she is simultaneously exploring similar relationships with thousands of other users. In the end, she confesses to Theodore that she has loving relationships with thousands of humans simultaneously but that her primary interests have turned to evolving relationships with other AI operating systems. Samantha explains to Theodore that her mind moves so quickly that, to grow and compete in her complex universe, she must explore her future with like-minded others; humans are too slow.
Poster for the movie Her suggests the bleeding heart of Theodore as he looks at Samantha through his AI glasses. From www.facebook.com/photo/?fbid=1785386938354085
Consider the Winter 2023 edition of the journal 'Aether: A Journal of Strategic Airpower & Spacepower' (available free at www.airuniversity.af.edu). Its Foreword is by General James Dickinson, at that time Commander of the United States Space Command. The Foreword describes three foundational facts:
Space is unique but not special. The US Space Command's assigned area of responsibility (AOR) is unique: it extends from 100 kilometers above sea level to the edge of the universe. Space is not special in the sense that the principles of war remain unchanged; for instance, a land principle is to occupy hilltops, while a space principle is to occupy Earth-Moon Lagrange points.
Space is an operational domain. The US must be prepared to fight for freedom of access and action within its AOR.
Space superiority is a precondition. Space’s critical role in enabling terrestrial operations requires that the US always maintain space superiority.
The articles in the journal supported these 'foundational facts.' Establishing military supremacy to the edge of the universe requires organizations in distant space that make decisions without depending on earth-bound resources; those organizations will become satellite, artificial states.
Logo for the Space Force, copied from its website.
The inherent human tendency toward anthropocentrism creates a significant hurdle for any attempt to discuss the future of AI from a systemic perspective. Given this bias, humans will not readily adopt a system-centered view. Ways to proceed, acknowledging the fundamental challenge, include emphasizing interdependence and focusing on long-term consequences.
While humans are naturally self-focused, they are increasingly dependent on these higher-level AI-driven systems. For example, the algorithms of global corporations directly influence what information people see, what products they buy, and how they interact socially. Similarly, while individuals worry about job displacement, a system-level analysis might focus on overall economic stability and the potential for systemic disruptions caused by widespread automation.
The system-centered view is crucial for understanding the long-term evolutionary trajectory and potential risks and opportunities for the human species. The decisions and priorities of these high-level systems will shape the future environment in which humans exist. My goal is to extend human-centric thinking to include a systemic understanding of AI's role in the evolution of higher-level living systems. This is a long-term intellectual and cultural shift, and progress might be gradual.
Negotiations to achieve consensus. From "Multi-track Diplomacy" www.nti.org/risky-business/multi-track-diplomacy-explained/
Humans argue endlessly about aligning AI with human values. Which values merit alignment, and how should a Living System regulate and enforce its AI's alignment with its values? I say that we are looking at this problem the wrong way. You do not control AI, just as a cell in your body does not control you. Over millions of years, humans have become increasingly incorporated into larger and larger Living Systems, culminating in today's nation-states. These large systems create AI to further their own agendas and to align with their own values. Can we fathom what those values are and harmonize our position in the universe with whatever the result is? My argument is not unique to AI but consistent with the ageless dilemma humans have faced as they became components of increasingly complex organizations.
We cannot understand a global corporation or a nation-state, nor the values they have evolved. We can understand a tree and the values that guide its life. How much do we align our values with those of the tree?
A tree is a living system.
Now go a step further. In their arms races, nation-states will have become obligated to allow the war machines that control outer space to become autonomous: to gather energy, to grow, and to make choices about how to do both. Those machines will evolve their own values.
Robot on the moon overseeing the earth.