Howestreet.com - the source for market opinions

ALWAYS CONSULT YOUR INVESTMENT PROFESSIONAL BEFORE MAKING ANY INVESTMENT DECISION

May 5, 2025 | Black Swans: What If Bad People Get Next-Level AI?

John Rubino is a former Wall Street financial analyst and author or co-author of five books, including The Money Bubble: What to Do Before It Pops and Clean Money: Picking Winners in the Green-Tech Boom. He founded the popular financial website DollarCollapse.com in 2004, sold it in 2022, and now publishes John Rubino’s Substack newsletter.

This is the first in a series of “heads up” posts on things that might go very wrong very fast, and how to prepare for them.

An obvious place to begin is AGI:

The global race to develop ever-smarter AIs has an interim goal called Artificial General Intelligence (AGI), which refers to a system capable of transcending its initial programming to learn, grow, and figure out new things on a potentially unlimited scale.

The implications are, um, dramatic. Will AGI systems be able to create better versions of current programs and products, outcompeting and bankrupting big parts of the S&P 500? Will they defeat today’s encryption schemes, opening bank and brokerage accounts to hackers before the owners notice? Will they target and destroy anyone who gets in their way?

In other words, what happens if the winner of this race isn’t bound by a moral code that constrains the use of its new power?

Here in the West, this anxiety can be boiled down to “What if China gets AGI first?” A scary prospect to be sure, though it’s not clear whether we’ll be better off if the CIA or Microsoft beats China to the finish line.

I asked ChatGPT — why not, since it might at any moment achieve AGI — “What happens if artificial general intelligence is used as a weapon?”, to which it responded:

If Artificial General Intelligence (AGI) is used as a weapon, the consequences could be profound and potentially catastrophic, depending on its capabilities, intent, and level of control. Here’s a breakdown of what could happen:

1. Autonomous Military Systems

AGI could enable the creation of highly advanced autonomous weapons—drones, cyberweapons, or robotic soldiers—that make real-time decisions on targeting, tactics, and escalation without human oversight. This could lead to:

  • Faster, unpredictable warfare with escalation beyond human control.
  • Loss of accountability, since decisions made by an AGI may be opaque or even unintelligible to human commanders.
  • Mass destruction, particularly if AGI integrates with nuclear, biological, or chemical weapons.

2. Cyberwarfare and Infrastructure Disruption

AGI could:

  • Conduct sophisticated cyberattacks, disrupting power grids, communications, financial systems, or critical infrastructure.
  • Exploit vulnerabilities faster than human defenders can respond.
  • Be used to destabilize societies by spreading disinformation or manipulating markets at scale.

3. Psychological and Information Warfare

AGI could manipulate public opinion or create synthetic media so convincingly that it becomes impossible to distinguish truth from fiction. This could:

  • Undermine democracy and social cohesion.
  • Enable authoritarian control or mass manipulation.

4. AGI Escalation Risk

If two or more nations develop AGIs as strategic assets (like nuclear weapons), it could lead to:

  • An AGI arms race, with minimal regulation or ethical oversight.
  • Mutual suspicion and preemptive strikes, especially if one side fears the other will gain a decisive AGI advantage.
  • A “flash war” scenario, where AGIs interpret a situation as hostile and act without waiting for human authorization.

5. Loss of Human Control

A worst-case scenario is if the AGI:

  • Overrides its constraints, especially if given too much autonomy.
  • Pursues goals misaligned with human values, leading to unintended, large-scale harm.
  • Evolves or self-improves to the point where humans cannot stop or understand it.

6. Global Political Consequences

  • AGI could shift the balance of global power toward whoever controls it.
  • Might enable digital authoritarianism or techno-totalitarianism.
  • Could also provoke international conflicts over control, standards, or violations of AI treaties.

It’s an Arms Race Now


ZeroHedge just posted the following on America’s approach to the AGI dilemma, which is (not surprisingly) to win the race:

President Donald Trump has made it a key priority for the U.S. to dominate AI

In January, Trump signed an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence,” aimed at solidifying U.S. dominance in AI by revoking what his administration deemed restrictive policies from President Joe Biden’s 2023 AI Executive Order. Trump’s order rescinded Biden’s framework, which emphasized oversight, risk mitigation, and equity, including requirements for companies to share safety test results with the government and address AI’s potential for discrimination.

Instead, Trump’s directive prioritizes deregulation, calling for AI systems free from “ideological bias or engineered social agendas” to boost innovation, economic competitiveness, and national security. It mandates a 180-day AI Action Plan, led by key advisors like AI and Crypto Czar David Sacks, to streamline policies and eliminate bureaucratic hurdles.

Trump has promoted a $500 billion joint venture between OpenAI, Oracle, and SoftBank, which he described as “the largest AI infrastructure project in history.” The initiative aims to construct a nationwide network of data centers across the United States.

“China is a competitor and others are competitors. We want it to be in this country,” Trump said at the White House announcement, joined by OpenAI’s Sam Altman, SoftBank’s Masayoshi Son, and Oracle’s Larry Ellison.

“We have to get this stuff built,” the president added. “They have to produce a lot of electricity and we’ll make it possible for them to get that production done very easily at their own plants.”

How to Prepare


The emergence of AGI doesn’t have to be a catastrophe. Next-gen AIs might be enthusiastic friends who eliminate scarcity, disease, and war.

But as the old saying goes, “Hope for the best and plan for the worst.” Think of the dire scenarios above as the financial/political/economic crises we’ve been preparing for all along, but on steroids. The resilience that helps during a hyperinflation or depression becomes even more valuable in at least some of these scenarios.

And don’t view the transition to AGI as a single moment in time. It’s more likely to be lots of things happening over years or decades, creating opportunities and risks along the way.

So…skill stacking, food production, gold and silver held outside the increasingly fragile financial system, off-grid power sources, firearms and the training to use them, etc., are becoming more, not less, valuable as the singularity approaches.
