Articles of Interest - Week 11/18 - 11/24
- Walker Robinson
- Nov 24, 2024
- 6 min read
Henry A. Kissinger, Eric Schmidt, and Craig Mundie, Foreign Affairs. November 18th, 2024.
This article explores AI's potential impact on international relations, military strategy, and the global order, arguing that AI will fundamentally reshape warfare and geopolitical power structures as we know them. AI will alter the global theatre by introducing unprecedented challenges to traditional concepts like deterrence, military advantage, and strategic decision-making. The authors suggest that AI could either enhance objective decision-making in military and diplomatic contexts (while preserving essential human judgment) or, if adoption rates vary drastically among nations, lead to a destabilizing imbalance in global power dynamics. The article examines how AI might transform warfare by enabling unprecedented precision, speed, and autonomous capabilities while potentially removing human casualties as the primary metric of conflict. Additionally, the authors suggest that AI could usher in a restructuring of the Westphalian international order, one in which corporate entities or other social organizations emerge to rival traditional nation-states. The article ends by discussing a potential "crisis of confidence" that could accompany AI's use for geopolitical problem-solving: if removing the human from the equation leads to greater peacebuilding, we will likely face the harsh realization that we were the problem all along. The line "To know that a solution has always existed but has never been conceived by us would be crushing to human pride" introduces a new and interesting line of thought, but one that is not explored very deeply in this article (best saved for an opinion piece, I think).
The final lines of the opening paragraph, stating, "AI in war will illuminate the best and worst expressions of humanity. It will serve as the means both to wage war and to end it," perfectly encapsulate just how complex AI's impact will be in the world of geopolitics and national security. I found this article both compelling and highly concerning. Given the opaque and unpredictable way AI could develop, I worry about the emergence of an "AI Security Dilemma," which could prove even more destabilizing than the nuclear arms race. The authors correctly identify how the inability to accurately gauge adversaries' AI capabilities could drive nations toward a dangerous acceleration of AI deployment at the expense of safety considerations, a topic discussed frequently in my other pieces. Their suggestion that AI could erode the Westphalian system is particularly concerning, as it implies a future where "traditional" state-based security frameworks become obsolete in the face of emerging corporate-technological power centers. However, the piece somewhat understates the nearer-term threats that could emerge during the transition period, when states might leverage AI capabilities to exploit advantages in gray-zone conflicts. The authors' speculation about AI's potential to optimize global governance, while interesting, underestimates, I think, how competing national interests and existing power structures would resist the "rationalization" of international relations; that resistance could produce a more fragmented and volatile global order rather than a more stable one. AI's impact on geopolitics and conflict over the next few decades is tough to predict, which is exactly what makes it so dangerous. While AI offers many fantastic opportunities for the world, those benefits will never see the light of day if we do not first address these challenges.
Anna Tong and Michael Martina, Reuters. November 19th, 2024.
The article discusses a proposal from the U.S.-China Economic and Security Review Commission that advocates a "Manhattan Project-scale" effort to develop artificial general intelligence (AGI). The proposal pushes for the U.S. government to emphasize heavy investment and public-private partnerships to achieve AGI, a response to growing technological competition with China. The commission addresses several other aspects of U.S.-China relations as well, including a recommendation to eliminate the de minimis trade exemption that allows Chinese goods valued under $800 to enter the U.S. with minimal screening. The proposal also highlights concerns about infrastructure bottlenecks in AI development, particularly data center capacity, and aligns with similar calls from industry leaders like OpenAI for increased government involvement in AI advancement.
While I understand the urgency and the desire to reach AGI before an adversary does, I worry that a focus on rapid development will backfire, leading to safety oversights and increased risk. What's really interesting about this proposal is how it shows the U.S. starting to treat AGI development the way it once treated nuclear weapons: as a make-or-break technology that could completely reshape global power. The parallel becomes even more concerning when we consider that the rush to develop nuclear capabilities produced significant consequences, many of which we are still dealing with today. Unlike nuclear weapons, however, AGI's capabilities and threats are highly unpredictable. The consequences of cutting corners or prioritizing speed over safety could be far more erratic, and potentially irreversible, than anything we've seen in the past.
Patrick Kulp, Tech Brew. November 19th, 2024.
California's Diablo Canyon Power Plant has partnered with an AI startup, Atomic Canyon, to implement generative AI for managing regulatory documentation, marking the first deployment of AI technology in a U.S. nuclear facility. The initiative uses specially trained models called FERMI, developed in collaboration with the U.S. Department of Energy, and comes amid a growing entanglement between the tech and nuclear power industries, as major tech companies increasingly turn to nuclear energy to support their extremely energy-intensive AI operations.
The use of AI in nuclear facility management reflects a broader trend of modernizing critical infrastructure. It is also notable for addressing one of the nuclear industry's most persistent challenges: managing the complex web of regulatory compliance and documentation. The timing is significant given the growing energy demands of AI development itself, creating a somewhat symbiotic relationship in which AI helps streamline nuclear operations while nuclear power enables further AI advancement. However, it is worth mentioning that this raises questions about safety and regulation when AI is used in such delicate (for lack of a better word) environments. Mistakes and errors in a nuclear plant could produce catastrophic damage. Should these plants rely too heavily on non-human systems, the risk of errors could increase, or at the very least, the human capability to understand how those errors came about could erode. A whole new host of regulations will likely be required to ensure the safe integration of AI into critical infrastructure like nuclear plants, and how we, as a planet (not just the U.S.), navigate these challenges will profoundly shape the future of both AI advancement and critical infrastructure development.
Andy Greenberg, WIRED. November 21st, 2024.
China's extensive state surveillance apparatus has unintentionally created a thriving black market for citizens' private data, with government insiders and contractors selling access to sensitive personal information online. SpyCloud researchers uncovered several Telegram-based services that offer searchable access to Chinese citizens' personal data, including phone records, bank details, and location data, for relatively small cryptocurrency payments. These services actively recruit employees of Chinese surveillance agencies and government contractors, offering daily payments for their cooperation and using cryptocurrency and mixing services to avoid detection, allowing these black market groups to profit from the CCP's surveillance state.
This investigation highlights an often-overlooked vulnerability in state surveillance systems: the human element. China's massive investment in surveillance has created the conditions for its own systematic undermining, as underpaid government workers exploit their access for profit and personal gain. Perhaps most ironically, the same data meant to maintain state control has become a commodity that can be used to compromise the very officials and security personnel tasked with maintaining and enforcing that control. The Chinese surveillance system has drawn heavy criticism on the global stage, especially regarding its treatment of ethnic and religious minorities, which makes this development all the more striking. Whether there is a meaningful difference between the CCP collecting and using personal data and genetic information and a shady black-market purchaser doing the same is debatable. Still, it remains an interesting point of conversation for the future of state surveillance systems. I, for one, would be curious to see whether similar corruption emerges in other countries that employ parallel systems.