Articles of Interest: April 19-25, 2025
- The Prometheus Security Project Team
- Apr 25

Mohar Chatterjee, Politico. April 22, 2025.
Concerns about superintelligent AI risks are reemerging after months of being sidelined in Washington. The Trump administration has embraced an "accelerationist" approach, prioritizing rapid AI development over safety guardrails. Startups are developing self-coding AI systems aimed at achieving superintelligence with minimal human oversight. AI safety experts warn about the growing gap between accelerating AI capabilities and slow-moving government policy, with some predicting superintelligent systems could emerge by 2027. While California explores state-level regulations, federal momentum remains focused on deployment rather than oversight. As Chatterjee puts it, “the message from the top is simple: build fast, and worry later.”
The current approach of dismantling existing safety frameworks and policy directives has essentially created an unregulated testing ground for increasingly powerful systems. Since the Trump administration repealed the Biden administration's AI executive order, the gap left behind has yet to be filled. That said, this policy gap isn't just about differing regulatory philosophies. There is a deeper disagreement about whether superintelligent AI represents an existential threat or a competitive advantage over adversaries like China, and whether courting that possible threat is worth the possible advantage.
For ordinary people, this debate has implications far beyond abstract technological or far-off foreign-policy concerns. The development of self-improving AI systems without effective oversight frameworks could fundamentally reshape economic systems, national security dynamics, and global power structures before democratic institutions have time to respond. When academic experts warn that "humanity might be destroyed," the disconnect between their alarm and the current policy direction represents one of the most significant governance challenges of our time. Determining who will control AI systems that may surpass human capabilities, and ensuring that those systems are aligned with human interests, will likely be the defining question of the next few decades.
Chatterjee, Mohar. “Superintelligent AI fears: They’re baaa-ack.” Politico, April 22, 2025. https://www.politico.com/newsletters/digital-future-daily/2025/04/22/superintelligent-ai-fears-theyre-baaa-ack-00303613.
Billy Perrigo, TIME. April 22, 2025.
TIME published an exclusive article detailing a report, written by an AI firm that works on AI security for the US government, on the vulnerabilities of several AI datacenters. The report circulated within the White House before TIME obtained and published a redacted copy. It shows how vulnerable the datacenters used for AI development are to espionage by actors hostile to American interests.
To support the development of stronger AI models, several US tech companies have been building datacenters to house the massive amounts of computing power and data needed to train powerful new models. The effort has been likened to a modern-day Manhattan Project for AI. However, these datacenters have been identified as vulnerable to Chinese intelligence services, which seek to undermine America’s AI development. This is striking because it shows the extent to which Chinese intelligence gathering can affect the international AI development race. It is reminiscent of the Soviet intelligence effort to catch up with America’s atomic weapons program at the end of WWII. Unless America’s AI R&D becomes more secure, America’s rivals will likely close the gap in the AI race.
Perrigo, Billy. “Exclusive Report: Every AI Datacenter Is Vulnerable to China.” TIME, April 22, 2025. https://time.com/7279123/ai-datacenter-superintelligence-china-trump-report/.
Eduardo Baptista and Alessandro Diviggiano, Reuters. April 19, 2025.
On Saturday, April 19th, humanoid robots competed against human beings in Beijing’s Yizhuang half-marathon. Twenty-one robot teams raced alongside thousands of flesh-and-blood runners in what was billed as a historic first: robots in China have competed in foot races before, but never alongside humans.
The winning robot, Tiangong Ultra, completed the 13.1-mile course in 2 hours and 40 minutes, more than twice the time of the fastest humans (the men's winner finished in 1 hour and 2 minutes; the women's winner in 1 hour and 11 minutes).
Still, Tiangong Ultra finished less than half an hour behind the average human runner. Is this cause for alarm? Not yet, at least. Robots still have a great deal of progress to make if they hope to overtake the average runner, and even more to beat the fastest humans. The winning robot required three battery changes over the 13.1 miles, and many others fell over during the run.
It is slightly unnerving to watch non-living things run a half-marathon, but each robot required a team of engineers and handlers who sometimes had to physically assist it. I believe we are safe from running robots besting humanity in marathons…for now.
Baptista, Eduardo, and Alessandro Diviggiano. “Humanoid Robots Run Chinese Half-Marathon.” Reuters, April 19, 2025. https://www.reuters.com/world/china/china-pits-humanoid-robots-against-humans-half-marathon-2025-04-19/.
Associated Press, The Guardian. April 24, 2025.
The State Bar of California has acknowledged that some of the multiple-choice questions on its bar exam were created with the aid of artificial intelligence by an outside company that employs no lawyers. The admission has prompted both criticism and defense of AI’s use on the exam.
Considering that the legal field and policymakers have been pushing for more regulation of AI, having questions on the exam that certifies lawyers to practice written by AI rather than by licensed attorneys is rather contradictory. Not only were the questions not written by licensed attorneys, they were not reviewed or approved by any either. By allowing multiple-choice questions to be written by AI, the exam loses the anecdotal, real-life experience that practicing lawyers bring to question-writing, which could harm the generation of lawyers who took this exam. There are real concerns about using and depending on AI, and the comment by Andrew Perlman, dean of Suffolk University Law School and member of an American Bar Association advisory council, that “we will be worried in the future about the competence of lawyers who don’t use these tools,” seems unfair to lawyers who would prefer to avoid relying on AI. Further, if AI can be used to write the bar exam, the practice could spread to other career fields, or even to younger generations’ standardized tests.
Associated Press. “AI helped write bar exam questions, California state bar admits.” The Guardian, April 24, 2025. https://www.theguardian.com/us-news/2025/apr/24/california-bar-exam-ai.