Search Results
2. Colonel John Boyd's Thoughts on Disruption: A Useful Effects Spiral from Uncertainty to Chaos
- Author:
- Brian R. Price
- Publication Date:
- 03-2023
- Content Type:
- Journal Article
- Journal:
- Journal of Advanced Military Studies
- Institution:
- Marine Corps University Press, National Defense University
- Abstract:
- This article closely examines John R. Boyd’s concept of disruption as recorded in his 1987 presentation, “An Organic Design for Command and Control.” It draws attention to the series of disruptive effects Boyd lists, including uncertainty, doubt, mistrust, confusion, disorder, fear, panic, and chaos, noting that the list begins with the mildest effect and progresses steadily toward collapse and chaos. The author argues that Boyd listed these effects in deliberate order and that this progression could be developed into a useful effects spiral which, once understood, can be catalyzed to enhance enemy disruption in a joint all-domain operations (JADO) environment. In a postscript, the article argues that officers seeking to operate in a multi- or all-domain environment can benefit from a broad educational base that unlocks creativity in approaching wicked problem sets. This creativity, coupled with concepts like the effects spiral, can enhance traditional maneuver and combat, triggering an opponent’s collapse without the need for annihilation.
- Topic:
- Education, Military Affairs, Psychological Warfare, and Uncertainty
- Political Geography:
- Global Focus
3. Future Warfare and Responsibility Management in the AI-based Military Decision-making Process
- Author:
- Alessandro Nalin and Paolo Tripodi
- Publication Date:
- 03-2023
- Content Type:
- Journal Article
- Journal:
- Journal of Advanced Military Studies
- Institution:
- Marine Corps University Press, National Defense University
- Abstract:
- The military application of artificial intelligence (AI) technology is growing fast, and as a result, autonomous weapon systems have begun to erode humans’ decision-making power: once such weapons are deployed, humans cannot change or abort their targets. Although autonomous weapons hold significant decision-making power, they are not currently able to make ethical choices. This article focuses on the ethical implications of integrating AI into the military decision-making process and on how the characteristics of AI systems with machine learning (ML) capabilities might interact with human decision-making protocols. The authors suggest that, in the future, such machines might be able to make ethical decisions that resemble those made by humans. A detailed and precise classification of AI systems, based on strict technical, ethical, and cultural parameters, would be critical to identifying which weapon is the most suitable and ethical for a given mission.
- Topic:
- Ethics, Weapons, Artificial Intelligence, Machine Learning, and Decision-Making
- Political Geography:
- Global Focus
4. PART II: Whale Songs of Wars Not Yet Waged: The Demise of Natural-Born Killers through Human-Machine Teamings Yet to Come
- Author:
- Ben Zweibelson
- Publication Date:
- 03-2023
- Content Type:
- Journal Article
- Journal:
- Journal of Advanced Military Studies
- Institution:
- Marine Corps University Press, National Defense University
- Abstract:
- Current human-machine dynamics in security affairs position the human operator “in the loop” with artificial intelligence to make decisions and take actions. As technological advancements in AI cognition, speed, and weapon sophistication increase, human operators are increasingly shifting to an “on the loop” role in which AI takes more responsibility for warfare and defense decisions, whether tactical or even strategic. Human operators are also falling “off the loop,” trailing enhanced AI systems, because human biological and physical limits cannot match artificial intelligence in narrow applications. Those narrow applications will likely expand toward general AI in the coming decades, presenting significant strategic, organizational, and even existential concerns. Further, how natural humans respond to and engage with increasingly advanced, even superintelligent, AI, as well as a possible singularity event, will have disruptive, transformative impacts on security affairs and, at a philosophical level, on discerning what war is.
- Topic:
- Science and Technology, War, Artificial Intelligence, and Machine Learning
- Political Geography:
- Global Focus
5. PART I: The Singleton Paradox: On the Future of Human-Machine Teaming and Potential Disruption of War Itself
- Author:
- Ben Zweibelson
- Publication Date:
- 03-2023
- Content Type:
- Journal Article
- Journal:
- Journal of Advanced Military Studies
- Institution:
- Marine Corps University Press, National Defense University
- Abstract:
- Technological innovation has historically been applied in war and security affairs as a new tool or means to accomplish clear political or societal goals. The rise of artificial intelligence posits a new, uncharted way forward that may be entirely unlike previous arms races and advancements in warfare, including nuclear weapons and quantum technology. This article introduces the concept of a singleton as a future artificial intelligent entity that could assume central decision making for entire organizations and even societies. In turn, this presents what is termed a “singleton paradox” for security affairs, foreign policy, and military organizations. An AI singleton could usher in a revolutionary new world free of war and conflict for all of human civilization or trigger a catastrophic new war between those with a functioning singleton entity against those attempting to develop one, along with myriad other risks, opportunities, and emergent consequences.
- Topic:
- Security, Science and Technology, Machine Learning, Transhumanism, and War Studies
- Political Geography:
- Global Focus