US-China Relations AI News & Updates
Tech Leaders Warn Against AGI Manhattan Project in Policy Paper
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a US government "Manhattan Project for AGI." The authors warn that an aggressive US push for a superintelligent AI monopoly could provoke retaliation from China, and instead propose a defensive strategy built on deterrence rather than a race for AGI dominance.
Skynet Chance (-0.15%): Advocacy by prominent tech leaders against racing toward AGI, and for defensive strategies over rapid development, significantly reduces the likelihood of uncontrolled deployment of superintelligent systems. Their concept of "Mutual Assured AI Malfunction" signals awareness of catastrophic risks from misaligned superintelligence.
Skynet Date (+4 days): The paper's emphasis on deterrence over acceleration, and its warning against government-backed AGI races, would likely slow the pace of superintelligence development substantially if adopted. By explicitly rejecting the "Manhattan Project" approach, these influential leaders are advocating more measured, cautious development timelines.
AGI Progress (-0.1%): The paper represents a significant shift from aggressive AGI pursuit to defensive strategies, particularly notable coming from Schmidt, who previously advocated for faster AI development. Such a stance by influential tech leaders could substantially slow coordinated efforts toward superintelligence.
AGI Date (+3 days): The proposed shift from racing toward superintelligence to building defensive capabilities and international stability would likely extend AGI timelines considerably. Rejection of a Manhattan Project approach by these influential figures could also discourage government-sponsored acceleration of AGI development.
US and UK Decline to Sign Paris AI Summit Declaration as 61 Countries Commit to Ethical AI Development
At the Artificial Intelligence Action Summit in Paris, 61 countries, including China and India, signed a declaration committing to AI that is "open, inclusive, transparent, ethical, safe, secure and trustworthy," but the US and UK declined to sign. US Vice President JD Vance emphasized America's commitment to maintaining AI leadership and avoiding "ideological bias," while European Commission President Ursula von der Leyen defended the EU AI Act as providing unified safety rules, while acknowledging the need to reduce red tape.
Skynet Chance (+0.04%): The refusal of the US and UK to join a multilateral AI framework potentially weakens global coordination on AI safety and opens the door to less cautious development paths. This fragmented approach to governance increases the risk that competitive pressures override safety considerations.
Skynet Date (-2 days): Geopolitical polarization around AI regulation, combined with the US emphasis on maintaining supremacy, could accelerate unsafe AI deployment as countries compete rather than cooperate. This competitive dynamic may push capability advancement ahead of safety considerations, bringing dangerous AI scenarios forward in time.
AGI Progress (+0.01%): The summit's outcome indicates a shift toward prioritizing AI development and competitiveness over stringent safety measures, particularly in the US approach. This pro-innovation stance may slightly increase the overall momentum toward AGI by reducing potential regulatory barriers.
AGI Date (-2 days): The US focus on maintaining AI leadership and avoiding "overly precautionary" approaches suggests an accelerated AGI timeline as regulatory friction decreases. The competitive international environment could further incentivize faster development cycles and increased investment in advanced AI capabilities.