
Articles of Interest - Week 11/25 - 12/1

  • Writer: Walker Robinson
  • Dec 1, 2024
  • 5 min read

Updated: Jan 27

Daria Impiombato, Dr Nathan Attrill, Albert Zhang, Fergus Ryan, Bethany Allen, Australian Strategic Policy Institute. November 26th, 2024.


This report from ASPI examines how China is developing and deploying "persuasive technologies." It defines these as "any digital system that shapes users' attitudes and behaviors by exploiting physiological and cognitive reactions or vulnerabilities." The study focuses on three key types of technologies: generative AI, neurotechnology (like brain-computer interfaces), and ambient/immersive technologies. Through detailed case studies of Chinese companies like Midu, Silicon Intelligence, Suishi, and Goertek, the report demonstrates how the CCP already uses these technologies for domestic control and international influence. These technologies are often developed through joint projects between the Chinese military and the private sector.


This research highlights a concerning connection between China's technological capabilities and its strategic ambitions. What makes these "persuasive technologies" particularly worrisome from a national security perspective is their ability to subconsciously influence behavior, making them difficult to detect or counter. The report's findings suggest that China's private-sector development of these technologies, combined with the CCP's willingness to weaponize them for political purposes, creates new threats of foreign interference. These tools will also likely allow China to conduct influence operations, both at home and abroad, with unprecedented precision and scale. Additionally, the complexity of this technology enables a greater degree of plausible deniability, making it harder for countries on the receiving end of these operations to attribute them to China or retaliate. The current situation around TikTok is a great example of how the U.S. government is growing more concerned over this type of subconscious foreign influence.




INHR, CNAS. December 1st, 2024.


A two-year dialogue between U.S., Indian, European, and Chinese experts has produced this white paper on a proposed framework for military AI testing. The framework includes a set of 22 practices designed to address the unique challenges of integrating AI into military systems. Rather than offering only technical specifications, the paper provides practical guidance on issues ranging from data collection challenges in combat environments to human-machine teaming. Importantly, it emphasizes a cautious approach to using AI in any military context, highlighting the need for continuous testing and for accounting for "edge cases" that could lead to catastrophic failures. If AI systems are deployed in the unpredictable conditions of modern warfare, threats and failures will likely emerge rapidly, and the speed at which this technology operates leaves humans little time to react and prevent them.


I find this paper fascinating as it attempts a delicate balancing act between promoting international cooperation on military AI and navigating each country's national security concerns. After all, no country will be completely transparent about its military capabilities, especially in an emerging field like AI. Yet one thing is obvious: AI-enabled military systems introduce incredible risks that no single nation can fully address alone. The paper's repeated emphasis on human control and the "precautionary principle" reflects a deep unease shared by Western and Eastern experts alike. There is immense potential for unintended consequences as militaries race to adopt these technologies. Establishing international norms for military AI will be a long and challenging process, let alone norms for AI as a whole. I believe that, for the good of humanity, the world's leading military powers will need to establish means of cooperation and transparency regarding this technology because, without them, the entire world is at risk. Major props should be given to this team of experts for recognizing the importance of addressing this issue before it is too late, especially because the countries represented are commonly considered adversaries.




Janet Egan, Michael Depp, Noah Greene and Caleb Withers, CNAS. November 26th, 2024.


CNAS has produced recommendations for the 2025 French AI Action Summit, emphasizing the need for countries to move beyond theoretical discussions and focus on practical implementation. The think tank's vision calls for concrete steps in AI governance, including establishing clear compute thresholds for AI development, creating an international network of AI evaluation centers, and implementing privacy-preserving verification mechanisms. The recommendations also advocate for creating platforms for international expert collaboration and methods for sharing computing resources across borders. Above all, CNAS urges the summit to avoid empty words without action.


CNAS's approach reflects growing concern about the gap between diplomatic rhetoric and actual, tangible AI governance. Its emphasis on compute thresholds as a metric for oversight is interesting, as it offers a measurable way to identify potentially risky AI development. Hopefully, this will provide adequate oversight without harming innovation or imposing excessive regulatory burdens. I do not fully agree with the recommendation to distribute global AI governance across existing international organizations rather than creating a single overarching authority. My concern lies mainly with the current lackluster ability of international organizations to enforce cooperation and oversight. I worry that relying too heavily on existing organizations to tackle global AI governance may leave us without any real power to shape AI development safely. That being said, there isn't an obvious answer to global AI governance, so taking the time to discuss various approaches benefits everyone involved. Finding a sufficient balance between safety and technological advancement will be a challenge not only for the countries leading the AI push but also for the world as a whole. This balancing act will likely remain a focal point of geopolitics for the next few decades.




Ryan Consaul, Gregory Smith, RAND. November 26th, 2024.


According to a new analysis conducted by RAND, the U.S. government is exploring how to leverage government contracts to ensure the safe development of AI. The report discusses how federal funding terms and conditions (the rules and requirements attached to government contracts that private companies must uphold) could be used to enforce safe and responsible AI development practices. The Office of Management and Budget recommended that government agencies require real-world performance testing and monitoring of the AI systems they procure from the private sector.


While the U.S. government has used contract terms to influence industry behavior in the past, AI presents unique challenges. The report cites past examples, such as restrictions on Chinese telecommunications equipment and cybersecurity reporting requirements, in which the U.S. government used this method of enforcement. However, AI technology evolves rapidly, and its capabilities are harder to define and measure. Should the controls be too rigid, innovative companies may avoid government work altogether (as evidenced by GAO's findings of companies declining contracts due to bureaucratic burden). Conversely, government agencies risk deploying unsafe or unsecured AI systems if the conditions are too loose. The paper proposes developing flexible but comprehensive terms that focus on ongoing evaluation and risk management rather than fixed technical specifications, which could help resolve this tension, although their effectiveness will need to be tested.

