
Articles of Interest: March 22 - 28, 2025

  • Writer: The Prometheus Security Project Team
  • Mar 28
  • 4 min read





Caiwei Chen, MIT Technology Review. March 27, 2025.



Over the past two years, China has built hundreds of new AI data centers to capitalize on the global AI boom. According to the author's sources, however, nearly 80% of that capacity now sits unused. Following ChatGPT's emergence in 2022, the CCP quickly designated AI infrastructure and development as a national priority, but the initial infatuation with AI has since cooled, leaving many of these data center projects stranded. DeepSeek's breakthrough model, R1, only worsened the situation by demonstrating that AI advancement does not require the vast amounts of computing power these hundreds of data centers were built to house. Even so, the CCP continues to view this infrastructure as a central component of its AI ambitions despite the financial losses the data centers represent.


China's AI data center predicament reveals deeper tensions in its technological competition with the U.S. The haste to build unneeded infrastructure reflects Beijing's top-down approach to innovation, in which political goals sometimes override financial logic. DeepSeek's highly efficient model likely marks an inflection point in AI development globally. By developing a model on par with U.S. AI capabilities at lower cost and with fewer computational resources, China has demonstrated adaptability in the face of U.S. export controls. This innovation path, doing more with less, may prove more sustainable than the brute-force computing investments many U.S. companies have made. China's continued commitment to AI infrastructure despite widespread misallocation indicates that CCP leadership views the AI competition with the U.S. through a long-term rather than a short-term, profit-driven perspective.







Maggie Miller and Dana Nickel, Politico. March 25, 2025.



When the editor-in-chief of The Atlantic was added to a Signal group chat with multiple White House and Intelligence Community officials, he was stunned to find battle plans for airstrikes against the Houthis being shared in the chat. The incident has prompted security experts to question how safe Signal really is from decryption by powers hostile to the US.


When National Security Advisor Mike Waltz created a Signal group of senior White House officials to coordinate messaging for an impending attack on the Houthi movement in Yemen, he opened a debate over whether Signal is safe for communicating classified information. The app is considered more secure than other messaging apps, but it is still not regarded as suitable for classified communications, in part because Signal data is often stored on desktop or laptop devices that can be easily hacked. Additionally, Signal is not accessible on most federal devices, meaning it is very possible that many members of Waltz’s chat were using personal devices while in the group. Further security lapses by administration officials could have a calamitous effect on the US government’s ability to keep its conversations about military planning classified.


While both Russia and Ukraine have employed unmanned systems, Ukraine has a clear advantage in the production of unmanned air, sea, and ground vehicles. By using these systems in a coordinated manner, Ukraine’s military gave a textbook example of employing a drone swarm in a combat setting. It also showed how a combatant that is numerically and materially inferior to its enemy can use unmanned systems to even the odds.



Miller, Maggie, and Dana Nickel. “‘It’s so Unbelievable’: Cyber World Stunned over War Planners Using Signal.” Politico, March 25, 2025. https://www.politico.com/news/2025/03/25/signal-cybersecurity-trump-war-planning-00246881.





Matt O’Brien and Sarah Parvini, The Associated Press. March 27, 2025.



ChatGPT’s new image generation tool produces art in styles inspired by Hayao Miyazaki, the Studio Ghibli co-founder and artist. Miyazaki has openly criticized the use of AI to create art, amid broader concerns about copyright and grotesque content creation.


There continues to be a lack of policy, or of enforcement of existing law, to protect artists’ work from being used to train AI programs. If an acclaimed studio such as Studio Ghibli can have its art style appropriated without OpenAI being held accountable, then smaller studios and rising artists face the same licensing and permission violations.


Beyond displacing the work of human artists, further use of AI to generate imagery or animation for media and film could prove visually harmful and emotionally damaging. After seeing an AI animation of “a writhing body dragging itself by its head,” Miyazaki recounted an anecdote about a friend with a disability and described how disrespectful he found the video. When AI is trained to animate scenes of gore or horror, the humanity of the portrayal, and the experience of the people whose stories are being told, is lost.



O’Brien, Matt, and Sarah Parvini. “ChatGPT’s Viral Studio Ghibli-Style Images Highlight AI Copyright Concerns.” The Associated Press, March 27, 2025.



